Updates from: 07/25/2023 01:13:12
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
If you don't already have a Facebook account, sign up at [https://www.facebook.c
1. Select **Save Changes**. 1. From the menu, select the **plus** sign or **Add Product** link next to **PRODUCTS**. Under **Add Products to Your App**, select **Set up** under **Facebook Login**. 1. From the menu, select **Facebook Login**, and then select **Settings**.
-1. In **Valid OAuth redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-id/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-id/oauth2/authresp`. Replace `your-tenant-id` with the id of your tenant, and `your-domain-name` with your custom domain.
+1. In **Valid OAuth redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-id.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-id.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-id` with the ID of your tenant, and `your-domain-name` with your custom domain.
1. Select **Save Changes** at the bottom of the page. 1. To make your Facebook application available to Azure AD B2C, select the Status selector at the top right of the page and turn it **On** to make the Application public, and then select **Switch Mode**. At this point, the Status should change from **Development** to **Live**. For more information, see [Facebook App Development](https://developers.facebook.com/docs/development/release).
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
If the sign-in process is successful, your browser is redirected to `https://jwt
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+## Known Issues
+* Azure AD B2C does not support JWE (JSON Web Encryption) for exchanging encrypted tokens with OpenID Connect identity providers.
+ ## Next steps For more information, see the [OpenId Connect technical profile](openid-connect-technical-profile.md) reference guide.
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
You need to store the secret key that you previously recorded for Twitter app in
1. For **Options**, choose `Manual`. 1. Enter a **Name** for the policy key. For example, `TwitterSecret`. The prefix `B2C_1A_` is added automatically to the name of your key. 1. For **Secret**, enter your *API key secret* value that you previously recorded.
-1. For **Key usage**, select `Encryption`.
+1. For **Key usage**, select `Signature`.
1. Click **Create**. ## Configure Twitter as an identity provider
You can define a Twitter account as a claims provider by adding it to the **Clai
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C. > [!TIP]
-> If you're facing `unauthorized` error while testing this identity provider, make sure you use the correct Twitter API Key and API Key Secret, or try to apply for [elevated](https://developer.twitter.com/en/portal/products/elevated) access. Also, we recommend you've a look at [Twitter's projects structure](https://developer.twitter.com/en/docs/projects/overview), if you registered your app before the feature was available.
+> If you're facing an `unauthorized` error while testing this identity provider, make sure you use the correct Twitter API Key and API Key Secret, or try to apply for [elevated](https://developer.twitter.com/en/portal/products/elevated) access. Also, we recommend you have a look at [Twitter's projects structure](https://developer.twitter.com/en/docs/projects/overview) if you registered your app before the feature was available.
active-directory Inbound Provisioning Api Curl Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-curl-tutorial.md
# Quickstart API-driven inbound provisioning with cURL (Public preview) ## Introduction
-[cURL](https://curl.se/) is a popular, free, open-source, command-line tool used by API developers, and it is [available by default on Windows 10/11](https://curl.se/windows/microsoft.html). This tutorial describes how you can quickly test [API-driven inbound provisioning](inbound-provisioning-api-concepts.md) with cURL.
+[cURL](https://curl.se/) is a popular, free, open-source, command-line tool used by API developers, and it's [available by default on Windows 10/11](https://curl.se/windows/microsoft.html). This tutorial describes how you can quickly test [API-driven inbound provisioning](inbound-provisioning-api-concepts.md) with cURL.
## Prerequisites
``` curl -v "[InboundProvisioningAPIEndpoint]" -d @scim-bulk-upload-users.json -H "Authorization: Bearer [AccessToken]" -H "Content-Type: application/scim+json" ```
-1. Upon successful upload, you will receive HTTP 202 Accepted response code.
+1. Upon successful upload, you'll receive an HTTP 202 Accepted response code.
1. The provisioning service starts processing the bulk request payload immediately and you can see the provisioning details by accessing the provisioning logs of the inbound provisioning app. ## Verify processing of the bulk request payload
[![Screenshot of provisioning logs in menu.](media/inbound-provisioning-api-curl-tutorial/access-provisioning-logs.png)](media/inbound-provisioning-api-curl-tutorial/access-provisioning-logs.png#lightbox)
-1. Click on any record in the provisioning logs to view additional processing details.
+1. Click on any record in the provisioning logs to view more processing details.
1. The provisioning log details screen displays all the steps executed for a specific user. [![Screenshot of provisioning logs details.](media/inbound-provisioning-api-curl-tutorial/provisioning-log-details.png)](media/inbound-provisioning-api-curl-tutorial/provisioning-log-details.png#lightbox) * Under the **Import from API** step, see details of user data extracted from the bulk request.
The bulk request shown below uses the SCIM standard Core User and Enterprise Use
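For orientation, here's a minimal sketch of what a SCIM bulk request payload of that shape can look like, written as a PowerShell here-string so it can be saved to the file the curl command posts. The attribute values, bulkId, and file path are illustrative placeholders, not the article's sample data.

```powershell
# Minimal illustrative SCIM BulkRequest payload (placeholder values, not the article's sample).
$bulkRequest = @'
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
  "Operations": [
    {
      "method": "POST",
      "bulkId": "701984",
      "path": "/Users",
      "data": {
        "schemas": [
          "urn:ietf:params:scim:schemas:core:2.0:User",
          "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
        ],
        "externalId": "701984",
        "userName": "bjensen@example.com",
        "name": { "givenName": "Barbara", "familyName": "Jensen" },
        "active": true,
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
          "employeeNumber": "701984",
          "department": "Tour Operations"
        }
      }
    }
  ],
  "failOnErrors": null
}
'@
# Save the payload to the file that the curl command above uploads with -d.
Set-Content -Path .\scim-bulk-upload-users.json -Value $bulkRequest
```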
## Next steps - [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)-- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md) - [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
+- [Quick start using PowerShell](inbound-provisioning-api-powershell.md)
+- [Quick start using Azure Logic Apps](inbound-provisioning-api-logic-apps.md)
active-directory Inbound Provisioning Api Grant Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-grant-access.md
This section describes how you can assign the necessary permissions to a managed
## Next steps-- [Invoke inbound provisioning API using cURL](inbound-provisioning-api-curl-tutorial.md)
+- [Quick start using cURL](inbound-provisioning-api-curl-tutorial.md)
+- [Quick start using Postman](inbound-provisioning-api-postman.md)
+- [Quick start using Graph Explorer](inbound-provisioning-api-graph-explorer.md)
- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
active-directory Inbound Provisioning Api Graph Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-graph-explorer.md
The bulk request shown below uses the SCIM standard Core User and Enterprise Use
``` ## Next steps - [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)-- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md) - [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
+- [Quick start using PowerShell](inbound-provisioning-api-powershell.md)
+- [Quick start using Azure Logic Apps](inbound-provisioning-api-logic-apps.md)
active-directory Inbound Provisioning Api Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-logic-apps.md
# API-driven inbound provisioning with Azure Logic Apps (Public preview)
-This tutorial describes how to use Azure Logic Apps workflow to implement Microsoft Entra ID [API-driven inbound provisioning](inbound-provisioning-api-concepts.md). Using the steps in this tutorial, you can convert a CSV file containing HR data into a bulk request payload and send it to the Microsoft Entra ID provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint.
+This tutorial describes how to use an Azure Logic Apps workflow to implement Microsoft Entra ID [API-driven inbound provisioning](inbound-provisioning-api-concepts.md). Using the steps in this tutorial, you can convert a CSV file containing HR data into a bulk request payload and send it to the Microsoft Entra ID provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint. The article also provides guidance on how the same integration pattern can be used with any system of record.
## Integration scenario
-This tutorial addresses the following integration scenario:
+### Business requirement
+
+Your system of record periodically generates CSV file exports containing worker data. You want to implement an integration that reads data from the CSV file and automatically provisions user accounts in your target directory (on-premises Active Directory for hybrid users and Microsoft Entra ID for cloud-only users).
+
+### Implementation requirement
+
+From an implementation perspective:
+
+* You want to use an Azure Logic Apps workflow to read data from the CSV file exports available in an Azure File Share and send it to the inbound provisioning API endpoint.
+* In your Azure Logic Apps workflow, you don't want to implement the complex logic of comparing identity data between your system of record and target directory.
+* You want to use Microsoft Entra ID provisioning service to apply your IT managed provisioning rules to automatically create/update/enable/disable accounts in the target directory (on-premises Active Directory or Microsoft Entra ID).
:::image type="content" source="media/inbound-provisioning-api-logic-apps/logic-apps-integration-overview.png" alt-text="Graphic of Azure Logic Apps-based integration." lightbox="media/inbound-provisioning-api-logic-apps/logic-apps-integration-overview.png":::
-* Your system of record generates periodic CSV file exports containing worker data which is available in an Azure File Share.
-* You want to use an Azure Logic Apps workflow to automatically provision records from the CSV file to your target directory (on-premises Active Directory or Microsoft Entra ID).
-* The Azure Logic Apps workflow simply reads data from the CSV file and uploads it to the provisioning API endpoint. The API-driven inbound provisioning app configured in Microsoft Entra ID performs the task of applying your IT managed provisioning rules to create/update/enable/disable accounts in the target directory.
+### Integration scenario variations
+
+While this tutorial uses a CSV file as a system of record, you can customize the sample Azure Logic Apps workflow to read data from any system of record. Azure Logic Apps provides a wide range of [built-in connectors](/azure/logic-apps/connectors/built-in/reference) and [managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) with pre-built triggers and actions that you can use in your integration workflow.
+
+Here's a list of enterprise integration scenario variations, where API-driven inbound provisioning can be implemented with a Logic Apps workflow.
-This tutorial uses the Logic Apps deployment template published in the [Microsoft Entra ID inbound provisioning GitHub repository](https://github.com/AzureAD/entra-id-inbound-provisioning/tree/main/LogicApps/CSV2SCIMBulkUpload). It has logic for handling large CSV files and chunking the bulk request to send 50 records in each request.
+|# |System of record |Integration guidance on using Logic Apps to read source data |
+||||
+| 1 | Files stored on SFTP server | Use either the [built-in SFTP connector](/azure/logic-apps/connectors/built-in/reference/sftp/) or [managed SFTP SSH connector](/azure/connectors/connectors-sftp-ssh) to read data from files stored on the SFTP server. |
+| 2 | Database table | If you're using an Azure SQL server or on-premises SQL Server, use the [SQL Server](/azure/connectors/connectors-create-api-sqlazure) connector to read your table data. <br> If you're using an Oracle database, use the [Oracle database](/azure/connectors/connectors-create-api-oracledatabase) connector to read your table data. |
+| 3 | On-premises and cloud-hosted SAP S/4 HANA or <br> Classic on-premises SAP systems, such as R/3 and ECC | Use the [SAP connector](/azure/logic-apps/logic-apps-using-sap-connector) to retrieve identity data from your SAP system. For examples on how to configure this connector, refer to [common SAP integration scenarios](/azure/logic-apps/sap-create-example-scenario-workflows) using Azure Logic Apps and the SAP connector. |
+| 4 | IBM MQ | Use the [IBM MQ connector](/azure/connectors/connectors-create-api-mq) to receive provisioning messages from the queue. |
+| 5 | Dynamics 365 Human Resources | Use the [Dataverse connector](/azure/connectors/connect-common-data-service) to read data from [Dataverse tables](/dynamics365/human-resources/hr-developer-entities) used by Microsoft Dynamics 365 Human Resources. |
| 6 | Any system that exposes REST APIs | If you don't find a connector for your system of record in the Logic Apps connector library, you can create your own [custom connector](/azure/logic-apps/logic-apps-create-api-app) to read data from your system of record. |
+
+After reading the source data, apply your pre-processing rules and convert the output from your system of record into a bulk request that can be sent to the Microsoft Entra ID provisioning [bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint.
+
+> [!IMPORTANT]
+> If you'd like to share your API-driven inbound provisioning + Logic Apps integration workflow with the community, create a [Logic app template](/azure/logic-apps/logic-apps-create-azure-resource-manager-templates), document steps on how to use it and submit a pull request for inclusion in the GitHub repository [Entra-ID-Inbound-Provisioning](https://github.com/AzureAD/entra-id-inbound-provisioning).
+
+## How to use this tutorial
+
+The Logic Apps deployment template published in the [Microsoft Entra ID inbound provisioning GitHub repository](https://github.com/AzureAD/entra-id-inbound-provisioning/tree/main/LogicApps/CSV2SCIMBulkUpload) automates several tasks. It also has logic for handling large CSV files and chunking the bulk request to send 50 records in each request. Here's how you can test it and customize it per your integration requirements.
> [!NOTE] > The sample Azure Logic Apps workflow is provided "as-is" for implementation reference. If you have questions related to it or if you'd like to enhance it, please use the [GitHub project repository](https://github.com/AzureAD/entra-id-inbound-provisioning).
+|# | Automation task | Implementation guidance |
+||||
+|1 | Read worker data from the CSV file. | The Logic Apps workflow uses an Azure Function to read the CSV file stored in an Azure File Share. The Azure Function converts CSV data into JSON format. If your CSV file format is different, update the workflow steps "Parse JSON" and "Construct SCIMUser". <br> If your system of record is different, check guidance provided in the section [Integration scenario variations](#integration-scenario-variations) on how you can customize the Logic Apps workflow by using an appropriate connector. |
+|2 | Pre-process and convert data to SCIM format. | By default, the Logic Apps workflow converts each record in the CSV file to a SCIM Core User + Enterprise User representation. If you plan to use custom SCIM schema extensions, you can update the step "Construct SCIMUser" to include your custom SCIM schema extensions. If you want to run C# code for advanced formatting and data validation, you can use [custom Azure Functions](../../logic-apps/logic-apps-azure-functions.md).|
+|3 | Use the right authentication method | You can either [use a service principal](inbound-provisioning-api-grant-access.md#configure-a-service-principal) or [use managed identity](inbound-provisioning-api-grant-access.md#configure-a-managed-identity) to access the inbound provisioning API. Update the step "Send SCIMBulkPayload to API endpoint" with the right authentication method. |
+|4 | Provision accounts in on-premises Active Directory or Microsoft Entra ID. | Configure [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md). This will generate a unique [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint. Update the step "Send SCIMBulkPayload to API endpoint" to use the right bulkUpload API endpoint. |
+|5 | Scan the provisioning logs and retry provisioning for failed records. | This automation is not yet implemented in the sample Logic Apps workflow. To implement it, refer to the [provisioning logs Graph API](/graph/api/resources/provisioningobjectsummary). |
+|6 | Deploy your Logic Apps based automation to production. | Once you have verified your API-driven provisioning flow and customized the Logic Apps workflow to meet your requirements, you can deploy the automation in your environment. |
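As a rough mental model for row 1, the CSV-to-JSON conversion performed by the Azure Function is similar to the following PowerShell sketch; the file names are illustrative, and the sample workflow's actual function code may differ.

```powershell
# Conceptual equivalent of the CSV-to-JSON conversion step (illustrative file names only).
Import-Csv -Path .\workers.csv |
    ConvertTo-Json -Depth 5 |
    Set-Content -Path .\workers.json
```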
++ ## Step 1: Create an Azure Storage account to host the CSV file The steps documented in this section are optional. If you already have an existing storage account or would like to read the CSV file from another source like SharePoint site or Blob storage, you can tweak the Logic App to use your connector of choice.
active-directory Inbound Provisioning Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-postman.md
The bulk request shown below uses the SCIM standard Core User and Enterprise Use
## Next steps - [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)-- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md) - [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
+- [Quick start using PowerShell](inbound-provisioning-api-powershell.md)
+- [Quick start using Azure Logic Apps](inbound-provisioning-api-logic-apps.md)
active-directory Inbound Provisioning Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-powershell.md
# API-driven inbound provisioning with PowerShell script (Public preview)
-This tutorial describes how to use a PowerShell script to implement Microsoft Entra ID [API-driven inbound provisioning](inbound-provisioning-api-concepts.md). Using the steps in this tutorial, you can convert a CSV file containing HR data into a bulk request payload and send it to the Microsoft Entra ID provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint.
+This tutorial describes how to use a PowerShell script to implement Microsoft Entra ID [API-driven inbound provisioning](inbound-provisioning-api-concepts.md). Using the steps in this tutorial, you can convert a CSV file containing HR data into a bulk request payload and send it to the Microsoft Entra ID provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint. The article also provides guidance on how the same integration pattern can be used with any system of record.
-## How to use this tutorial
+## Integration scenario
+
+### Business requirement
+
+Your system of record periodically generates CSV file exports containing worker data. You want to implement an integration that reads data from the CSV file and automatically provisions user accounts in your target directory (on-premises Active Directory for hybrid users and Microsoft Entra ID for cloud-only users).
+
+### Implementation requirement
+
+From an implementation perspective:
-This tutorial addresses the following integration scenario:
-* Your system of record generates periodic CSV file exports containing worker data.
-* You want to use an unattended PowerShell script to automatically provision records from the CSV file to your target directory (on-premises Active Directory or Microsoft Entra ID).
-* The PowerShell script simply reads data from the CSV file and uploads it to the provisioning API endpoint. The API-driven inbound provisioning app configured in Microsoft Entra ID performs the task of applying your IT managed provisioning rules to create/update/enable/disable accounts in the target directory.
+* You want to use an unattended PowerShell script to read data from the CSV file exports and send it to the inbound provisioning API endpoint.
+* In your PowerShell script, you don't want to implement the complex logic of comparing identity data between your system of record and target directory.
+* You want to use Microsoft Entra ID provisioning service to apply your IT managed provisioning rules to automatically create/update/enable/disable accounts in the target directory (on-premises Active Directory or Microsoft Entra ID).
:::image type="content" source="media/inbound-provisioning-api-powershell/powershell-integration-overview.png" alt-text="Graphic of PowerShell-based integration." lightbox="media/inbound-provisioning-api-powershell/powershell-integration-overview.png":::
-Here is a list of automation tasks associated with this integration scenario and how you can implement it by customizing the sample script published in the [Microsoft Entra ID inbound provisioning GitHub repository](https://github.com/AzureAD/entra-id-inbound-provisioning/tree/main/PowerShell/CSV2SCIM).
+### Integration scenario variations
+
+While this tutorial uses a CSV file as a system of record, you can customize the sample PowerShell script to read data from any system of record. Here's a list of enterprise integration scenario variations, where API-driven inbound provisioning can be implemented with a PowerShell script.
+
+|# |System of record |Integration guidance on using PowerShell to read source data |
+||||
+|1 | Database table | If you're using an Azure SQL database or an on-premises SQL Server, you can use the [Read-SqlTableData](/powershell/module/sqlserver/read-sqltabledata) cmdlet to read data stored in a table of a SQL database. You can use the [Invoke-SqlCmd](/powershell/module/sqlserver/invoke-sqlcmd) cmdlet to run Transact-SQL or XQuery scripts. <br> If you're using an Oracle / MySQL / Postgres database, you can find a PowerShell module either published by the vendor or available in the [PowerShell Gallery](https://www.powershellgallery.com/). Use the module to read data from your database table. |
+|2 | LDAP server | Use the `System.DirectoryServices.Protocols` .NET API or one of the LDAP modules available in the [PowerShell Gallery](https://www.powershellgallery.com/packages?q=ldap) to query your LDAP server. Understand the LDAP schema and hierarchy to retrieve user data from the LDAP server. |
+|3 | Any system that exposes REST APIs | To read data from a REST API endpoint using PowerShell, you can use the [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) cmdlet from the `Microsoft.PowerShell.Utility` module. Check the documentation of your REST API and find out what parameters and headers it expects, what format it returns, and what authentication method it uses. You can then adjust your `Invoke-RestMethod` command accordingly. |
+|4 | Any system that exposes SOAP APIs | To read data from a SOAP API endpoint using PowerShell, you can use the [New-WebServiceProxy](/powershell/module/microsoft.powershell.management/new-webserviceproxy) cmdlet from the `Microsoft.PowerShell.Management` module. Check the documentation of your SOAP API and find out what parameters and headers it expects, what format it returns, and what authentication method it uses. You can then adjust your `New-WebServiceProxy` command accordingly. |
+
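For example, the REST API variation (row 3) could start from a minimal `Invoke-RestMethod` call along these lines; the endpoint URL, token variable, and property names are hypothetical.

```powershell
# Hedged sketch: read worker records from a hypothetical HR REST endpoint (row 3 above).
$headers = @{ Authorization = "Bearer $hrApiToken" }   # $hrApiToken is a placeholder
$workers = Invoke-RestMethod -Method Get -Uri "https://hr.example.com/api/workers" -Headers $headers
$workers | Select-Object workerId, firstName, lastName, department
```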
+After reading the source data, apply your pre-processing rules and convert the output from your system of record into a bulk request that can be sent to the Microsoft Entra ID provisioning [bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint.
+
+> [!IMPORTANT]
+> If you'd like to share your PowerShell integration script with the community, publish it on [PowerShell Gallery](https://www.powershellgallery.com/) and notify us on the GitHub repository [Entra-ID-Inbound-Provisioning](https://github.com/AzureAD/entra-id-inbound-provisioning), so we can add a reference to it.
+
+## How to use this tutorial
+
+The PowerShell sample script published in the [Microsoft Entra ID inbound provisioning GitHub repository](https://github.com/AzureAD/entra-id-inbound-provisioning/tree/main/PowerShell/CSV2SCIM) automates several tasks. It has logic for handling large CSV files and chunking the bulk request to send 50 records in each request. Here's how you can test it and customize it per your integration requirements.
> [!NOTE] > The sample PowerShell script is provided "as-is" for implementation reference. If you have questions related to the script or if you'd like to enhance it, please use the [GitHub project repository](https://github.com/AzureAD/entra-id-inbound-provisioning). |# | Automation task | Implementation guidance | ||||
-|1 | Read worker data from the CSV file. | [Download the PowerShell script](#download-the-powershell-script). It has out-of-the-box logic to read data from any CSV file. Refer to [CSV2SCIM PowerShell usage details](#csv2scim-powershell-usage-details) to get familiar with the different execution modes of this script. |
+|1 | Read worker data from the CSV file. | [Download the PowerShell script](#download-the-powershell-script). It has out-of-the-box logic to read data from any CSV file. Refer to [CSV2SCIM PowerShell usage details](#csv2scim-powershell-usage-details) to get familiar with the different execution modes of this script. <br> If your system of record is different, check guidance provided in the section [Integration scenario variations](#integration-scenario-variations) on how you can customize the PowerShell script. |
|2 | Pre-process and convert data to SCIM format. | By default, the PowerShell script converts each record in the CSV file to a SCIM Core User + Enterprise User representation. Follow the steps in the section [Generate bulk request payload with standard schema](#generate-bulk-request-payload-with-standard-schema) to get familiar with this process. If your CSV file has different fields, tweak the [AttributeMapping.psd file](#attributemappingpsd-file) to generate a valid SCIM user. You can also [generate bulk request with custom SCIM schema](#generate-bulk-request-with-custom-scim-schema). Update the PowerShell script to include any custom CSV data validation logic. | |3 | Use a certificate for authentication to Entra ID. | [Create a service principal that can access](inbound-provisioning-api-grant-access.md) the inbound provisioning API. Refer to steps in the section [Configure client certificate for service principal authentication](#configure-client-certificate-for-service-principal-authentication) to learn how to use client certificate for authentication. If you'd like to use managed identity instead of a service principal for authentication, then review the use of `Connect-MgGraph` in the sample script and update it to use [managed identities](/powershell/microsoftgraph/authentication-commands#using-managed-identity). | |4 | Provision accounts in on-premises Active Directory or Microsoft Entra ID. | Configure [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md). This will generate a unique [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint. Refer to the steps in the section [Generate and upload bulk request payload as admin user](#generate-and-upload-bulk-request-payload-as-admin-user) to learn how to upload data to this endpoint. Once the data is uploaded, the provisioning service applies the attribute mapping rules to automatically provision accounts in your target directory. If you plan to [use bulk request with custom SCIM schema](#generate-bulk-request-with-custom-scim-schema), then [extend the provisioning app schema](#extending-provisioning-job-schema) to include your custom SCIM schema elements. Validate the attribute flow and customize the attribute mappings per your integration requirements. To run the script using a service principal with certificate-based authentication, refer to the steps in the section [Upload bulk request payload using client certificate authentication](#upload-bulk-request-payload-using-client-certificate-authentication) | |5 | Scan the provisioning logs and retry provisioning for failed records. | Refer to the steps in the section [Get provisioning logs of the latest sync cycles](#get-provisioning-logs-of-the-latest-sync-cycles) to learn how to fetch and analyze provisioning log data. Identify failed user records and include them in the next upload cycle. |
-|6 | Deploy your PowerShell based automation to production. | Once you have verified your API-driven provisioning flow and customized the PowerShell script to meet your requirements, you can deploy the automation as a [PowerShell Workflow runbook in Azure Automation](../../automation/learn/automation-tutorial-runbook-textual.md). |
+|6 | Deploy your PowerShell based automation to production. | Once you have verified your API-driven provisioning flow and customized the PowerShell script to meet your requirements, you can deploy the automation as a [PowerShell Workflow runbook in Azure Automation](../../automation/learn/automation-tutorial-runbook-textual.md) or as a server process [scheduled to run on a Windows server](/troubleshoot/windows-server/system-management-components/schedule-server-process). |
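As a sketch of the managed-identity alternative mentioned in row 3, combined with the upload to the endpoint from row 4, the call could look like this; `$bulkUploadEndpoint` stands for the /bulkUpload URL of your provisioning app (an assumption here), and the sample script's own upload logic may differ.

```powershell
# Hedged sketch: authenticate with a managed identity and post the bulk request payload.
Connect-MgGraph -Identity
Invoke-MgGraphRequest -Method POST -Uri $bulkUploadEndpoint `
    -ContentType "application/scim+json" `
    -Body (Get-Content -Raw -Path .\scim-bulk-upload-users.json)
```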
## Download the PowerShell script
To illustrate the procedure, let's use the CSV file `Samples/csv-with-2-records.
:::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/columns.png" alt-text="Screenshot of columns in Excel." lightbox="./media/inbound-provisioning-api-powershell/columns.png"::: 1. In Notepad++ or a source code editor like Visual Studio Code, open the PowerShell data file `Samples/AttributeMapping.psd1` that enables mapping of CSV file columns to SCIM standard schema attributes. The file that's shipped out-of-the-box already has pre-configured mapping of CSV file columns to corresponding SCIM schema attributes.
+ ```powershell
+ @{
+ externalId = 'WorkerID'
+ name = @{
+ familyName = 'LastName'
+ givenName = 'FirstName'
+ }
+ active = { $_.'WorkerStatus' -eq 'Active' }
+ userName = 'UserID'
+ displayName = 'FullName'
+ nickName = 'UserID'
+ userType = 'WorkerType'
+ title = 'JobTitle'
+ addresses = @(
+ @{
+ type = { 'work' }
+ streetAddress = 'StreetAddress'
+ locality = 'City'
+ postalCode = 'ZipCode'
+ country = 'CountryCode'
+ }
+ )
+ phoneNumbers = @(
+ @{
+ type = { 'work' }
+ value = 'OfficePhone'
+ }
+ )
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User" = @{
+ employeeNumber = 'WorkerID'
+ costCenter = 'CostCenter'
+ organization = 'Company'
+ division = 'Division'
+ department = 'Department'
+ manager = @{
+ value = 'ManagerID'
+ }
+ }
+ }
+ ```
1. Open PowerShell and change to the directory **CSV2SCIM\src**. 1. Run the following command to initialize the `AttributeMapping` variable.
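One way to initialize the variable is with `Import-PowerShellDataFile` (a sketch; the tutorial's exact command and relative path may differ).

```powershell
# Load the .psd1 mapping file into a hashtable; adjust the relative path to where the sample file sits.
$AttributeMapping = Import-PowerShellDataFile -Path '.\Samples\AttributeMapping.psd1'
```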
active-directory On Premises Powershell Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-powershell-connector.md
If you have already downloaded the provisioning agent and configured it for anot
## Configure the On-premises ECMA app
- 1. Sign in to the Azure portal as an administrator.
+ 1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator.
2. Go to **Enterprise applications** and select **New application**. 3. Search for the **On-premises ECMA app** application, give the app a name, and select **Create** to add it to your tenant. 4. Navigate to the **Provisioning** page of your application.
Follow these steps to confirm that the connector host has started and has identi
1. Return to the web browser window where you were configuring the application provisioning in the portal. >[!NOTE] >If the window had timed out, then you need to re-select the agent.
- 1. Sign in to the Azure portal.
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
2. Go to **Enterprise applications** and the **On-premises ECMA app** application. 3. Click on **Provisioning**. 4. If **Get started** appears, then change the mode to **Automatic**, on the **On-Premises Connectivity** section, select the agent that you just deployed and select **Assign Agent(s)**, and wait 10 minutes. Otherwise go to **Edit Provisioning**.
Return to the web browser window where you were configuring the application prov
>[!NOTE] >If the window had timed out, then you need to re-select the agent.
- 1. Sign in to the Azure portal.
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
2. Go to **Enterprise applications** and the **On-premises ECMA app** application. 3. Select on **Provisioning**. 4. If **Get started** appears, then change the mode to **Automatic**, on the **On-Premises Connectivity** section, select the agent that you deployed and select **Assign Agent(s)**. Otherwise go to **Edit Provisioning**.
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
It's recommended, but not required, that you support multiple secrets for easy r
#### How to set up OAuth code grant flow
-1. Sign in to the Azure portal, go to **Enterprise applications** > **Application** > **Provisioning** and select **Authorize**.
+1. Sign in to the [Azure portal](https://portal.azure.com), go to **Enterprise applications** > **Application** > **Provisioning** and select **Authorize**.
1. Azure portal redirects user to the Authorization URL (sign in page for the third party app).
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Get-AzureADUser -ObjectId 0ccf8df6-62f1-4175-9e55-73da9e742690 | Select -ExpandP
## Create an extension attribute using cloud sync Cloud sync will automatically discover your extensions in on-premises Active Directory when you go to add a new mapping. Use the steps below to auto-discover these attributes and set up a corresponding mapping to Azure AD.
-1. Sign in to the Azure portal with a hybrid administrator account.
+1. Sign in to the [Azure portal](https://portal.azure.com) with a hybrid administrator account.
2. Select Azure AD Connect. 3. Select **Manage Azure AD cloud sync**. 4. Select the configuration you wish to add the extension attribute and mapping.
active-directory Workday Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md
Let's say you want to retrieve the following data sets from Workday and use them
The above data sets aren't included by default. To retrieve these data sets:
-1. Sign in to the Azure portal and open your Workday to AD/Azure AD user provisioning app.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open your Workday to AD/Azure AD user provisioning app.
1. In the Provisioning blade, edit the mappings and open the Workday attribute list from the advanced section. 1. Add the following attributes definitions and mark them as "Required". These attributes aren't mapped to any attribute in AD or Azure AD. They serve as signals to the connector to retrieve the Cost Center, Cost Center Hierarchy and Pay Group information.
Use the steps to retrieve attributes associated with international job assignmen
* [Learn how to configure Workday to Active Directory provisioning](../saas-apps/workday-inbound-tutorial.md) * [Learn how to configure write back to Workday](../saas-apps/workday-writeback-tutorial.md) * [Learn more about supported Workday Attributes for inbound provisioning](workday-attribute-reference.md)-
active-directory Workday Retrieve Pronoun Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-retrieve-pronoun-information.md
Once you confirm that pronoun data is available in the *Get_Workers* response, g
To retrieve pronouns from Workday, update your Azure AD provisioning app to query Workday using v38.1 of the Workday Web Services. We recommend testing this configuration first in your test/sandbox environment before implementing the change in production.
-1. Sign in to the Azure portal as an administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator.
1. Open your *Workday to AD User provisioning* app OR *Workday to Azure AD User provisioning* app. 1. In the **Admin Credentials** section, update the **Tenant URL** to include the Workday Web Service version v38.1 as shown.
active-directory Application Proxy Qlik https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-qlik.md
To publish QlikSense, you will need to publish two applications in Azure.
Follow these steps to publish your app. For a more detailed walkthrough of steps 1-8, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md).
-1. Sign in to the Azure portal as a global administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
2. Select **Azure Active Directory** > **Enterprise applications**. 3. Select **Add** at the top of the blade. 4. Select **On-premises application**.
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
For the first test scenario, configure the authentication policy where the Issue
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/single-factor.png" alt-text="Screenshot of the Authentication policy configuration showing single-factor authentication required." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/single-factor.png":::
-1. Sign in to the Azure portal as the test user by using CBA. The authentication policy is set where Issuer subject rule satisfies single-factor authentication.
+1. Sign in to the [Azure portal](https://portal.azure.com) as the test user by using CBA. The authentication policy is set where Issuer subject rule satisfies single-factor authentication.
1. After the sign-in succeeds, click **Azure Active Directory** > **Sign-in logs**. Let's take a closer look at some of the entries you can find in the **Sign-in logs**.
For the next test scenario, configure the authentication policy where the **poli
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/multifactor.png" alt-text="Screenshot of the Authentication policy configuration showing multifactor authentication required." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/multifactor.png":::
-1. Sign in to the Azure portal using CBA. Since the policy was set to satisfy multifactor authentication, the user sign-in is successful without a second factor.
+1. Sign in to the [Azure portal](https://portal.azure.com) using CBA. Since the policy was set to satisfy multifactor authentication, the user sign-in is successful without a second factor.
1. Click **Azure Active Directory** > **Sign-ins**. You'll see several entries in the Sign-in logs, including an entry with **Interrupted** status.
For more information about how to enable **Trust multi-factor authentication fro
- [How to migrate federated users](concept-certificate-based-authentication-migration.md) - [FAQ](certificate-based-authentication-faq.yml) - [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)-
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
You can configure CAs by using the Azure portal or PowerShell.
To enable the certificate-based authentication and configure user bindings in the Azure portal, complete the following steps:
-1. Sign in to the Azure portal as a Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
1. Click **Azure Active Directory** > **Security**. :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/certificate-authorities.png" alt-text="Screenshot of certification authorities.":::
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
There are some optional settings on the **Configure** tab to help manage how sec
To remove a FIDO2 key associated with a user account, delete the key from the userΓÇÖs authentication method.
-1. Sign in to the Azure portal and search for the user account from which the FIDO key is to be removed.
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for the user account from which the FIDO key is to be removed.
1. Select **Authentication methods** > right-click **FIDO2 security key** and click **Delete**. ![View Authentication Method details](media/howto-authentication-passwordless-deployment/security-key-view-details.png)
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
These roles can perform the following actions related to a Temporary Access Pass
- Authentication Administrators can create, delete, and view a Temporary Access Pass on members (except themselves) - Global Reader can view the Temporary Access Pass details on the user (without reading the code itself).
-1. Sign in to the Azure portal by using one of the preceding roles.
+1. Sign in to the [Azure portal](https://portal.azure.com) by using one of the preceding roles.
1. Select **Azure Active Directory**, browse to Users, select a user, such as *Chris Green*, then choose **Authentication methods**. 1. If needed, select the option to **Try the new user authentication methods experience**. 1. Select the option to **Add authentication methods**.
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
If you encounter errors with the NPS extension for Azure AD Multi-Factor Authent
| **ESTS_TOKEN_ERROR** | Follow the instructions in [Troubleshooting the MFA NPS extension](howto-mfa-nps-extension.md#troubleshooting) to investigate client cert and security token problems. | | **HTTPS_COMMUNICATION_ERROR** | The NPS server is unable to receive responses from Azure AD MFA. Verify that your firewalls are open bidirectionally for traffic to and from `https://adnotifications.windowsazure.com` and that TLS 1.2 is enabled (default). If TLS 1.2 is disabled, user authentication will fail and event ID 36871 with source SChannel is entered in the System log in Event Viewer. To verify TLS 1.2 is enabled, see [TLS registry settings](/windows-server/security/tls/tls-registry-settings#tls-dtls-and-ssl-protocol-version-settings). | | **HTTP_CONNECT_ERROR** | On the server that runs the NPS extension, verify that you can reach `https://adnotifications.windowsazure.com` and `https://login.microsoftonline.com/`. If those sites don't load, troubleshoot connectivity on that server. |
-| **NPS Extension for Azure AD MFA:** <br> NPS Extension for Azure AD MFA only performs Secondary Auth for Radius requests in AccessAccept State. Request received for User username with response state AccessReject, ignoring request. | This error usually reflects an authentication failure in AD or that the NPS server is unable to receive responses from Azure AD. Verify that your firewalls are open bidirectionally for traffic to and from `https://adnotifications.windowsazure.com` and `https://login.microsoftonline.com` using ports 80 and 443. It is also important to check that on the DIAL-IN tab of Network Access Permissions, the setting is set to "control access through NPS Network Policy". This error can also trigger if the user is not assigned a license. |
+| **NPS Extension for Azure AD MFA (AccessReject):** <br> NPS Extension for Azure AD MFA only performs Secondary Auth for Radius requests in AccessAccept State. Request received for User username with response state AccessReject, ignoring request. | This error usually reflects an authentication failure in AD or that the NPS server is unable to receive responses from Azure AD. Verify that your firewalls are open bidirectionally for traffic to and from `https://adnotifications.windowsazure.com` and `https://login.microsoftonline.com` using ports 80 and 443. It is also important to check that on the DIAL-IN tab of Network Access Permissions, the setting is set to "control access through NPS Network Policy". This error can also trigger if the user is not assigned a license. |
+| **NPS Extension for Azure AD MFA (AccessChallenge):** <br> NPS Extension for Azure AD MFA only performs Secondary Auth for Radius requests in AccessAccept State. Request received for User username with response state AccessChallenge, ignoring request. | This response is used when additional information is required from the user to complete the authentication or authorization process. The NPS server sends a challenge to the user, requesting further credentials or information. It usually precedes an Access-Accept or Access-Reject response. |
| **REGISTRY_CONFIG_ERROR** | A key is missing in the registry for the application, which may be because the [PowerShell script](howto-mfa-nps-extension.md#install-the-nps-extension) wasn't run after installation. The error message should include the missing key. Make sure you have the key under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa. | | **REQUEST_FORMAT_ERROR** <br> Radius Request missing mandatory Radius userName\Identifier attribute.Verify that NPS is receiving RADIUS requests | This error usually reflects an installation issue. The NPS extension must be installed in NPS servers that can receive RADIUS requests. NPS servers that are installed as dependencies for services like RDG and RRAS don't receive radius requests. NPS Extension does not work when installed over such installations and errors out since it cannot read the details from the authentication request. | | **REQUEST_MISSING_CODE** | Make sure that the password encryption protocol between the NPS and NAS servers supports the secondary authentication method that you're using. **PAP** supports all the authentication methods of Azure AD MFA in the cloud: phone call, one-way text message, mobile app notification, and mobile app verification code. **CHAPV2** and **EAP** support phone call and mobile app notification. |
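To quickly verify reachability of those endpoints from the NPS server, a check along these lines can help (a sketch using the built-in `Test-NetConnection` cmdlet).

```powershell
# Confirm outbound TLS connectivity from the NPS server to the MFA endpoints listed above.
Test-NetConnection -ComputerName adnotifications.windowsazure.com -Port 443
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
```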
active-directory Tutorial Enable Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-azure-mfa.md
First, sign in to a resource that doesn't require MFA:
You configured the Conditional Access policy to require additional authentication for the Azure portal. Because of that configuration, you're prompted to use Azure AD Multi-Factor Authentication or to configure a method if you haven't yet done so. Test this new requirement by signing in to the Azure portal:
-1. Open a new browser window in InPrivate or incognito mode and browse to [https://portal.azure.com](https://portal.azure.com).
+1. Open a new browser window in InPrivate or incognito mode and sign in to the [Azure portal](https://portal.azure.com).
1. Sign in with your non-administrator test user, such as *testuser*. Be sure to include `@` and the domain name for the user account.
You configured the Conditional Access policy to require additional authenticatio
1. Complete the instructions on the screen to configure the method of multi-factor authentication that you've selected.
-1. Close the browser window, and log in again at [https://portal.azure.com](https://portal.azure.com) to test the authentication method that you configured. For example, if you configured a mobile app for authentication, you should see a prompt like the following.
+1. Close the browser window, and sign in to the [Azure portal](https://portal.azure.com) again to test the authentication method that you configured. For example, if you configured a mobile app for authentication, you should see a prompt like the following.
![To sign in, follow the prompts in your browser and then the prompt on the device that you registered for multi-factor authentication.](media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-browser-prompt.png)
active-directory Howto Reactivate Disabled Acs Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/howto-reactivate-disabled-acs-namespaces.md
Further extensions will no longer be automatically approved. If you need additio
### To request an extension
-1. Sign in to the Azure portal and create a [new support request](https://portal.azure.com/#create/Microsoft.Support).
+1. Sign in to the [Azure portal](https://portal.azure.com) and create a [new support request](https://portal.azure.com/#create/Microsoft.Support).
1. Fill in the new support request form as shown in the following example. | Support request field | Value |
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
If a user with a token with a one hour lifetime performs an interactive sign-in
Not all applications should validate tokens. Only in specific scenarios should applications validate a token: -- Web APIs must validate access tokens sent to them by a client. They must only accept tokens containing their `aud` claim.-- Confidential web applications like ASP.NET Core must validate ID tokens sent to them by using the user's browser in the hybrid flow, before allowing access to a user's data or establishing a session.
+- Web APIs must validate access tokens sent to them by a client. They must only accept tokens containing one of their AppId URIs as the `aud` claim.
+- Web apps must validate ID tokens sent to them by using the user's browser in the hybrid flow, before allowing access to a user's data or establishing a session.
-If none of the above scenarios apply, there's no need to validate the token, and may present a security and reliability risk when basing decisions on the validity of the token. Public clients like native or single-page applications don't benefit from validating tokens because the application communicates directly with the IDP where SSL protection ensures the tokens are valid.
+If none of the above scenarios apply, there's no need to validate the token; doing so may present a security and reliability risk when basing decisions on the validity of the token. Public clients like native, desktop, or single-page applications don't benefit from validating ID tokens because the application communicates directly with the IDP, where SSL protection ensures the ID tokens are valid. They shouldn't validate access tokens, as these are for the web API to validate, not the client.
APIs and web applications must only validate tokens that have an `aud` claim that matches the application. Other resources may have custom token validation rules. For example, you can't validate tokens for Microsoft Graph according to these rules due to their proprietary format. Validating and accepting tokens meant for another resource is an example of the [confused deputy](https://cwe.mitre.org/data/definitions/441.html) problem. If the application needs to validate an ID token or an access token, it should first validate the signature of the token and the issuer against the values in the OpenID discovery document.
-The Azure AD middleware has built-in capabilities for validating access tokens, see [samples](sample-v2-code.md) to find one in the appropriate language. There are also several third-party open-source libraries available for JWT validation. For more information about Azure AD authentication libraries and code samples, see the [authentication libraries](reference-v2-libraries.md).
+The Azure AD middleware has built-in capabilities for validating access tokens; see [samples](sample-v2-code.md) to find one in the appropriate language. There are also several third-party open-source libraries available for JWT validation. For more information about Azure AD authentication libraries and code samples, see the [authentication libraries](reference-v2-libraries.md). If your web app or web API is on ASP.NET or ASP.NET Core, use Microsoft.Identity.Web, which handles the validation for you.
-### Validate the issuer
-
-[OpenID Connect Core](https://openid.net/specs/openid-connect-core-1_0.html#IDTokenValidation) says "The Issuer Identifier \[...\] MUST exactly match the value of the iss (issuer) Claim." For applications which use a tenant-specific metadata endpoint (like [https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration) or [https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration)), this is all that is needed.
-Azure AD makes available a tenant-independent version of the document for multi-tenant apps at [https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration). This endpoint returns an issuer value `https://login.microsoftonline.com/{tenantid}/v2.0`. Applications may use this tenant-independent endpoint to validate tokens from every tenant with the following modifications:
- 1. Instead of expecting the issuer claim in the token to exactly match the issuer value from metadata, the application should replace the `{tenantid}` value in the issuer metadata with the tenant ID that is the target of the current request, and then check the exact match.
-
- 1. The application should use the `issuer` property returned from the keys endpoint to restrict the scope of keys.
- - Keys that have an issuer value like `https://login.microsoftonline.com/{tenantid}/v2.0` may be used with any matching token issuer.
- - Keys that have an issuer value like `https://login.microsoftonline.com/9188040d-6c67-4c5b-b112-36a304b66dad/v2.0` should only be used with exact match.
- Azure AD's tenant-independent key endpoint ([https://login.microsoftonline.com/common/discovery/v2.0/keys](https://login.microsoftonline.com/common/discovery/v2.0/keys)) returns a document like:
- ```
- {
- "keys":[
- {"kty":"RSA","use":"sig","kid":"jS1Xo1OWDj_52vbwGNgvQO2VzMc","x5t":"jS1Xo1OWDj_52vbwGNgvQO2VzMc","n":"spv...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/{tenantid}/v2.0"},
- {"kty":"RSA","use":"sig","kid":"2ZQpJ3UpbjAYXYGaXEJl8lV0TOI","x5t":"2ZQpJ3UpbjAYXYGaXEJl8lV0TOI","n":"wEM...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/{tenantid}/v2.0"},
- {"kty":"RSA","use":"sig","kid":"yreX2PsLi-qkbR8QDOmB_ySxp8Q","x5t":"yreX2PsLi-qkbR8QDOmB_ySxp8Q","n":"rv0...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/9188040d-6c67-4c5b-b112-36a304b66dad/v2.0"}
- ]
- }
- ```
+### v1.0 and v2.0 tokens
-1. Applications that use Azure AD's tenant ID (`tid`) claim as a trust boundary instead of the standard issuer claim should ensure that the tenant-id claim is a GUID and that the issuer and tenant ID match.
+- When your web app/API is validating a v1.0 token (`ver` claim = "1.0"), it needs to read the OpenID Connect metadata document from the v1.0 endpoint (`https://login.microsoftonline.com/{example-tenant-id}/.well-known/openid-configuration`), even if the authority configured for your web API is a v2.0 authority.
+- When your web app/API is validating a v2.0 token (`ver` claim = "2.0"), it needs to read the OpenID Connect metadata document from the v2.0 endpoint (`https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration`), even if the authority configured for your web API is a v1.0 authority.
-Using tenant-independent metadata is more efficient for applications which accept tokens from many tenants.
-> [!NOTE]
-> With Azure AD tenant-independent metadata, claims should be interpreted within the tenant, just as under standard OpenID Connect, claims are interpreted within the issuer. That is, `{"sub":"ABC123","iss":"https://login.microsoftonline.com/{example-tenant-id}/v2.0","tid":"{example-tenant-id}"}` and `{"sub":"ABC123","iss":"https://login.microsoftonline.com/{another-tenand-id}/v2.0","tid":"{another-tenant-id}"}` describe different users, even though the `sub` is the same, because claims like `sub` are interpreted within the context of the issuer/tenant.
+The examples below assume that your application is validating a v2.0 access token (and therefore reference the v2.0 versions of the OIDC metadata documents and keys). Remove "/v2.0" from the URLs if you validate v1.0 tokens.
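As a minimal illustration of this rule (a sketch only; PyJWT is assumed, and the unverified `ver` claim is read solely to pick the metadata URL):

```python
# Sketch: pick the OpenID Connect metadata document that matches the token version.
# Reading the unverified 'ver' claim is only used to select the metadata URL;
# full signature and claim validation must still happen afterwards.
import jwt

def metadata_url_for(token: str, tenant: str = "common") -> str:
    claims = jwt.decode(token, options={"verify_signature": False})
    if claims.get("ver") == "2.0":
        return f"https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration"
    return f"https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration"
```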
### Validate the signature
Doing signature validation is outside the scope of this document. There are many
If the application has custom signing keys as a result of using the [claims-mapping](active-directory-claims-mapping.md) feature, append an `appid` query parameter that contains the application ID. For validation, use `jwks_uri` that points to the signing key information of the application. For example: `https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` contains a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`.
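A small sketch of fetching the app-specific `jwks_uri` (assuming the `requests` library; the application ID is the example value from the paragraph above):

```python
# Sketch: fetch the app-specific jwks_uri when the app uses claims mapping (custom signing keys).
import requests

tenant = "common"  # or your tenant ID / domain
app_id = "6731de76-14a6-49ae-97bc-6eba6914391e"  # example application ID from the text above

config = requests.get(
    f"https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration?appid={app_id}",
    timeout=10,
).json()
jwks_uri = config["jwks_uri"]  # carries the appid parameter and points to the app's signing keys
```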
+### Validate the issuer
+
+Web apps validating ID tokens and web APIs validating access tokens need to validate the issuer of the token (`iss` claim) against:
+1. the issuer available in the OpenID Connect metadata document associated with the application configuration (authority). The metadata document to verify against depends on:
+ - the version of the token
+ - the accounts supported by your application.
+1. the tenant ID (`tid` claim) of the token.
+1. the issuer of the signing key.
+
+#### Single tenant applications
+
+[OpenID Connect Core](https://openid.net/specs/openid-connect-core-1_0.html#IDTokenValidation) says "The Issuer Identifier \[...\] MUST exactly match the value of the `iss` (issuer) Claim." For applications that use a tenant-specific metadata endpoint (like [https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration) or [https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration)), this is all that is needed.
+
+This applies to applications that support:
+- Accounts in one organizational directory (**example-tenant-id** only): `https://login.microsoftonline.com/{example-tenant-id}`
+- Personal Microsoft accounts only: `https://login.microsoftonline.com/consumers` (**consumers** being a nickname for the tenant 9188040d-6c67-4c5b-b112-36a304b66dad)
++
+#### Multi-tenant applications
+
+Azure AD also supports multi-tenant applications. These applications support:
+- Accounts in any organizational directory (any Azure AD directory): `https://login.microsoftonline.com/organizations`
+- Accounts in any organizational directory (any Azure AD directory) and personal Microsoft accounts (for example, Skype, Xbox): `https://login.microsoftonline.com/common`
+
+For these applications, Azure AD exposes tenant-independent versions of the OIDC document at [https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration) and [https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration) respectively. These endpoints return an issuer value that is a template parameterized by the tenant ID: `https://login.microsoftonline.com/{tenantid}/v2.0`. Applications may use these tenant-independent endpoints to validate tokens from every tenant with the following modification: instead of expecting the issuer claim in the token to exactly match the issuer value from metadata, the application should replace the `{tenantid}` placeholder in the issuer metadata with the tenant ID that is the target of the current request (the `tid` claim of the token), and then check for an exact match.
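A minimal sketch of this modification, assuming the token's signature has already been verified and `metadata_issuer` holds the templated issuer value from the tenant-independent metadata:

```python
# Sketch: validate the issuer of a token against tenant-independent (multi-tenant) metadata.
import uuid

def validate_multi_tenant_issuer(metadata_issuer: str, claims: dict) -> None:
    tid = claims["tid"]
    uuid.UUID(tid)  # the tenant ID must be a GUID; raises ValueError otherwise
    expected_issuer = metadata_issuer.replace("{tenantid}", tid)
    if claims["iss"] != expected_issuer:
        raise ValueError("issuer claim does not match the issuer expected for this tenant")
```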
+
+### Validate the signing key issuer
+
+In addition to the issuer of the token, applications using the v2.0 tenant-independent metadata need to validate the signing key issuer.
+
+#### Keys document and signing key issuer
+
+As discussed above, your application uses the OpenID Connect metadata document to find the keys used to sign the tokens. It gets the corresponding keys document from the URL exposed in the **jwks_uri** property of the OpenID Connect document.
+
+```json
+ "jwks_uri": "https://login.microsoftonline.com/{example-tenant-id}/discovery/v2.0/keys",
+```
+
+As above, `{example-tenant-id}` can be replaced by a GUID, a domain name, or by **common**, **organizations**, or **consumers**.
+
+The "keys" documents exposed by Azure AD v2.0 contains, for each key, the issuer that uses this signing key. See, for instance, the
+tenant-independent "common" key endpoint [https://login.microsoftonline.com/common/discovery/v2.0/keys](https://login.microsoftonline.com/common/discovery/v2.0/keys) returns a document like:
+
+ ```json
+ {
+ "keys":[
+ {"kty":"RSA","use":"sig","kid":"jS1Xo1OWDj_52vbwGNgvQO2VzMc","x5t":"jS1Xo1OWDj_52vbwGNgvQO2VzMc","n":"spv...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/{tenantid}/v2.0"},
+ {"kty":"RSA","use":"sig","kid":"2ZQpJ3UpbjAYXYGaXEJl8lV0TOI","x5t":"2ZQpJ3UpbjAYXYGaXEJl8lV0TOI","n":"wEM...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/{tenantid}/v2.0"},
+ {"kty":"RSA","use":"sig","kid":"yreX2PsLi-qkbR8QDOmB_ySxp8Q","x5t":"yreX2PsLi-qkbR8QDOmB_ySxp8Q","n":"rv0...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/9188040d-6c67-4c5b-b112-36a304b66dad/v2.0"}
+ ]
+ }
+ ```
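A short sketch of how an application might locate the key entry that signed a token in such a document (assuming PyJWT and `requests`); the entry's `issuer` property is then checked as described in the next section:

```python
# Sketch: find the keys-document entry that matches the token's 'kid' header.
import jwt       # PyJWT, used only to read the unverified token header
import requests

def signing_key_entry(token: str, jwks_uri: str) -> dict:
    kid = jwt.get_unverified_header(token)["kid"]
    keys = requests.get(jwks_uri, timeout=10).json()["keys"]
    return next(key for key in keys if key["kid"] == kid)  # raises StopIteration if no key matches
```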
+
+#### Validation of the signing key issuer
+
+The application should use the `issuer` property of the keys document, associated with the key used to sign the token, to restrict the scope of keys:
+- Keys that have an issuer value with a GUID like `https://login.microsoftonline.com/9188040d-6c67-4c5b-b112-36a304b66dad/v2.0` should only be used when the `iss` claim in the token matches the value exactly.
+- Keys that have a templated issuer value like `https://login.microsoftonline.com/{tenantid}/v2.0` require that:
+ - the `tid` claim is a GUID and the `iss` claim is of the form `https://login.microsoftonline.com/{tid}/v2.0`, where `{tid}` is the exact value of the `tid` claim. This ties the tenant back to the issuer and back to the scope of the signing key, creating a chain of trust.
+ - Multi-tenant applications must use the `tid` claim when they locate data associated with the subject of the token. In other words, the `tid` claim must be part of the key used to access the user's data.
+
+Using tenant-independent metadata is more efficient for applications that accept tokens from many tenants.
+> [!NOTE]
+> With Azure AD tenant-independent metadata, claims should be interpreted within the tenant, just as under standard OpenID Connect, claims are interpreted within the issuer. That is, `{"sub":"ABC123","iss":"https://login.microsoftonline.com/{example-tenant-id}/v2.0","tid":"{example-tenant-id}"}` and `{"sub":"ABC123","iss":"https://login.microsoftonline.com/{another-tenant-id}/v2.0","tid":"{another-tenant-id}"}` describe different users, even though the `sub` is the same, because claims like `sub` are interpreted within the context of the issuer/tenant.
++
+#### Recap
+
+Here's some pseudocode that recapitulates how to validate the issuer and the signing key issuer:
+
+1. Fetch the keys from the configured metadata URL.
+1. Check that the token is signed with one of the published keys; fail if not.
+1. Identify the key in the metadata based on the `kid` header of the token. Check the "issuer" property attached to the key in the metadata document:
+    ```c
+    var key = metadata.keys[token.header.kid];        // key that signed the token
+    var issuer = key.issuer;
+    if (issuer.Contains("{tenantid}", CaseInvariant)) issuer = issuer.Replace("{tenantid}", token["tid"], CaseInvariant);
+    if (issuer != token["iss"]) throw validationException;
+    if (configuration.allowedIssuer != "*" && configuration.allowedIssuer != issuer) throw validationException;
+    var issUri = new Uri(token["iss"]);
+    if (issUri.Segments.Count < 2) throw validationException;
+    if (issUri.Segments[1].TrimEnd('/') != token["tid"]) throw validationException;
+    ```
+ ## Token revocation Refresh tokens are invalidated or revoked at any time, for different reasons. The reasons fall into the categories of timeouts and revocations.
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
- # Microsoft Enterprise SSO plug-in for Apple devices
-The *Microsoft Enterprise SSO plug-in for Apple devices* provides single sign-on (SSO) for Azure Active Directory (Azure AD) accounts on macOS, iOS, and iPadOS across all applications that support Apple's [enterprise single sign-on](https://developer.apple.com/documentation/authenticationservices) feature. The plug-in provides SSO for even old applications that your business might depend on but that don't yet support the latest identity libraries or protocols. Microsoft worked closely with Apple to develop this plug-in to increase your application's usability while providing the best protection available.
+The **Microsoft Enterprise SSO plug-in for Apple devices** provides single sign-on (SSO) for Azure Active Directory (Azure AD) accounts on macOS, iOS, and iPadOS across all applications that support Apple's [enterprise single sign-on](https://developer.apple.com/documentation/authenticationservices) feature. The plug-in provides SSO for even old applications that your business might depend on but that don't yet support the latest identity libraries or protocols. Microsoft worked closely with Apple to develop this plug-in to increase your application's usability while providing the best protection available.
The Enterprise SSO plug-in is currently a built-in feature of the following apps:
Try this configuration only for applications that have unexpected sign-in failur
| `Enable_SSO_On_All_ManagedApps` | Integer | `1` to enable SSO for all managed apps, `0` to disable SSO for all managed apps. | | `AppAllowList` | String<br/>*(comma-delimited list)* | Bundle IDs of applications allowed to participate in SSO. | | `AppBlockList` | String<br/>*(comma-delimited list)* | Bundle IDs of applications not allowed to participate in SSO. |
-| `AppPrefixAllowList` | String<br/>*(comma-delimited list)* | Bundle ID prefixes of applications allowed to participate in SSO. For iOS, the default value would be set to `com.apple.` and that would enable SSO for all Apple apps. For macOS, the default value would be set to `com.apple.` and `com.microsoft.` and that would enable SSO for all Apple and Microsoft apps. Developers , Customers or Admins could override the default value or add apps to `AppBlockList` to prevent them from participating in SSO. |
+| `AppPrefixAllowList` | String<br/>*(comma-delimited list)* | Bundle ID prefixes of applications allowed to participate in SSO. For iOS, the default value would be set to `com.apple.` and that would enable SSO for all Apple apps. For macOS, the default value would be set to `com.apple.` and `com.microsoft.` and that would enable SSO for all Apple and Microsoft apps. Developers, Customers, or Admins could override the default value or add apps to `AppBlockList` to prevent them from participating in SSO. |
| `AppCookieSSOAllowList` | String<br/>*(comma-delimited list)* | Bundle ID prefixes of applications allowed to participate in SSO but that use special network settings and have trouble with SSO using the other settings. Apps you add to `AppCookieSSOAllowList` must also be added to `AppPrefixAllowList`. Please note that this key is to be used only for iOS apps and not for macOS apps. | #### Settings for common scenarios
The Microsoft Enterprise SSO plug-in relies on the [Apple Enterprise SSO framewo
Native applications can also implement custom operations and communicate directly with the SSO plug-in. For more information, see this [2019 Worldwide Developer Conference video from Apple](https://developer.apple.com/videos/play/tech-talks/301/).
+> [!TIP]
+> Learn more about how the SSO plug-in works and how to troubleshoot the Microsoft Enterprise SSO Extension with the [SSO troubleshooting guide for Apple devices](../devices/troubleshoot-mac-sso-extension-plugin.md).
+ ### Applications that use MSAL [MSAL for Apple devices](https://github.com/AzureAD/microsoft-authentication-library-for-objc) versions 1.1.0 and later supports the Microsoft Enterprise SSO plug-in for Apple devices natively for work and school accounts.
The end user sees the familiar experience and doesn't have to sign in again in e
## Next steps Learn about [Shared device mode for iOS devices](msal-ios-shared-devices.md).+
+Learn about [troubleshooting the Microsoft Enterprise SSO Extension](../devices/troubleshoot-mac-sso-extension-plugin.md).
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
For example, if the name of your tenant was `contoso.onmicrosoft.com` then a val
## Update your code to send requests to `/common`
-With a multi-tenant application, because the application can't immediately tell which tenant the user is from, requests can't be sent to a tenant's endpoint. Instead, requests are sent to an endpoint that multiplexes across all Azure AD tenants: `https://login.microsoftonline.com/common`.
+With a multi-tenant application, because the application can't immediately tell which tenant the user is from, requests can't be sent to a tenant's endpoint. Instead, requests are sent to an endpoint that multiplexes across all Azure AD tenants: `https://login.microsoftonline.com/common`.
Edit your code and change the value for your tenant to `/common`. It's important to note that this endpoint isn't a tenant or an issuer itself. When the Microsoft identity platform receives a request on the `/common` endpoint, it signs the user in, thereby discovering which tenant the user is from. This endpoint works with all of the authentication protocols supported by Azure AD (OpenID Connect, OAuth 2.0, SAML 2.0, WS-Federation). The sign-in response to the application then contains a token representing the user. The issuer value in the token tells an application what tenant the user is from. When a response returns from the `/common` endpoint, the issuer value in the token corresponds to the user's tenant.
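As an illustration of this change, the following sketch uses MSAL for Python; the client ID is a placeholder, and `/organizations` can be used instead of `/common` as described in the note that follows:

```python
# Sketch: point a client at the tenant-independent /common authority.
import msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",  # placeholder application (client) ID
    authority="https://login.microsoftonline.com/common",  # or /organizations
)
result = app.acquire_token_interactive(scopes=["User.Read"])
```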
-## Update your code to handle multiple issuer values
-
-Web applications and web APIs receive and validate tokens from the Microsoft identity platform. Native client applications don't validate access tokens and must treat them as opaque. They instead request and receive tokens from the Microsoft identity platform, and do so to send them to APIs, where they're then validated. Multi-tenant applications can't validate tokens by matching the issuer value in the metadata with the `issuer` value in the token. A multi-tenant application needs logic to decide which issuer values are valid and which aren't based on the tenant ID portion of the issuer value.
-
-For example, if a multi-tenant application only allows sign-in from specific tenants who have signed up for their service, then it must check either the `issuer` value or the `tid` claim value in the token to make sure that tenant is in their list of subscribers. If a multi-tenant application only deals with individuals and doesn't make any access decisions based on tenants, then it can ignore the issuer value altogether.
-
-In the [multi-tenant samples][AAD-Samples-MT], issuer validation is disabled to enable any Azure AD tenant to sign in. Because the `/common` endpoint doesn't correspond to a tenant and isn't an issuer, when you examine the issuer value in the metadata for `/common`, it has a templated URL instead of an actual value:
-
-```http
-https://sts.windows.net/{tenantid}/
-```
-To ensure your app can support multiple tenants, modify the relevant section of your code to ensure that your issuer value is set to `{tenantid}`.
-
-In contrast, single-tenant applications normally take endpoint values to construct metadata URLs such as:
+> [!NOTE]
+> There are, in reality, two authorities for multi-tenant applications:
+> - `https://login.microsoftonline.com/common` for applications processing accounts in any organizational directory (any Azure AD directory) and personal Microsoft accounts (for example, Skype, Xbox).
+> - `https://login.microsoftonline.com/organizations` for applications processing accounts in any organizational directory (any Azure AD directory).
+>
+> The explanations in this document use `common`, but you can replace it with `organizations` if your application doesn't support Microsoft personal accounts.
-```http
-https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration
-```
-
-to download two critical pieces of information that are used to validate tokens: the tenant's signing keys and issuer value.
-
-Each Azure AD tenant has a unique issuer value of the form:
-
-```http
-https://sts.windows.net/31537af4-6d77-4bb9-a681-d2394888ea26/
-```
+## Update your code to handle multiple issuer values
-...where the GUID value is the rename-safe version of the tenant ID of the tenant.
+Web applications and web APIs receive and validate tokens from the Microsoft identity platform. Native client applications don't validate access tokens and must treat them as opaque. They instead request and receive tokens from the Microsoft identity platform, and do so to send them to APIs, where they're then validated.
-When a single-tenant application validates a token, it checks the signature of the token against the signing keys from the metadata document. This test allows it to make sure the issuer value in the token matches the one that was found in the metadata document.
+Multi-tenant applications must perform additional checks when validating a token. A multi-tenant application is configured to consume keys metadata from the `/organizations` or `/common` keys URLs. The application must validate that the `issuer` property in the published metadata matches the `iss` claim in the token, in addition to the usual check that the `iss` claim in the token contains the tenant ID (`tid`) claim. For more information, see [Validate tokens](access-tokens.md#validate-tokens).
## Understand user and admin consent and make appropriate code changes
Multi-tenant applications can also get access tokens to call APIs that are prote
* [Integrating applications with Azure Active Directory][AAD-Integrating-Apps] * [Overview of the Consent Framework][AAD-Consent-Overview] * [Microsoft Graph API permission scopes][MSFT-Graph-permission-scopes]
+* [Access tokens](access-tokens.md)
## Next steps
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
The server can revoke refresh tokens because of a change in credentials, user ac
- [Access tokens in the Microsoft identity platform](access-tokens.md) - [ID tokens in the Microsoft identity platform](id-tokens.md)-- [Invalidate refresh token](/powershell/module/microsoft.graph.users.actions/invoke-mginvalidateuserrefreshtoken)
+- [Invalidate refresh token](https://learn.microsoft.com/powershell/module/microsoft.graph.beta.users.actions/invoke-mgbetainvalidateuserrefreshtoken?view=graph-powershell-beta.md)
- [Single sign-out](v2-protocols-oidc.md#single-sign-out) ## Next steps
active-directory Test Automate Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-automate-integration-testing.md
Replace *{tenant}* with your tenant ID, *{your_client_ID}* with the client ID of
Your tenant likely has a conditional access policy that [requires multifactor authentication (MFA) for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md), as recommended by Microsoft. MFA won't work with ROPC, so you'll need to exempt your test applications and test users from this requirement. To exclude user accounts:
-1. Sign in to the [Azure portal](https://portal.azure.com) and sign in to your tenant. Select **Azure Active Directory**. Select **Security** in the left navigation pane and then select **Conditional Access**.
+1. Sign in to the [Azure portal](https://portal.azure.com) to access your tenant. Select **Azure Active Directory**. Select **Security** in the left navigation pane and then select **Conditional Access**.
1. In **Policies**, select the conditional access policy that requires MFA. 1. Select **Users or workload identities**. 1. Select the **Exclude** tab and then the **Users and groups** checkbox.
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
You can enable Azure AD login for any of the [supported Linux distributions](#su
For example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Azure with Azure AD login:
-1. Sign in to the Azure portal by using an account that has access to create VMs, and then select **+ Create a resource**.
+1. Sign in to the [Azure portal](https://portal.azure.com) by using an account that has access to create VMs, and then select **+ Create a resource**.
1. Select **Create** under **Ubuntu Server 18.04 LTS** in the **Popular** view. 1. On the **Management** tab: 1. Select the **Login with Azure Active Directory** checkbox.
The application that appears in the Conditional Access policy is called *Azure L
If the Azure Linux VM Sign-In application is missing from Conditional Access, make sure the application isn't in the tenant:
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. Remove the filters to see all applications, and search for **Virtual Machine**. If you don't see Microsoft Azure Linux Virtual Machine Sign-In as a result, the service principal is missing from the tenant.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Share your feedback about this feature or report problems with using it on the [
If the Azure Windows VM Sign-In application is missing from Conditional Access, make sure that the application is in the tenant:
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. Remove the filters to see all applications, and search for **VM**. If you don't see **Azure Windows VM Sign-In** as a result, the service principal is missing from the tenant.
active-directory Troubleshoot Mac Sso Extension Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-mac-sso-extension-plugin.md
Previously updated : 02/02/2023 Last updated : 07/05/2023
Apple supports two types of SSO Extensions that are part of its framework: **Red
| Extension type | Best suited for | How it works | Key differences | ||||| | Redirect | Modern authentication methods such as OpenID Connect, OAUTH2, and SAML (Azure Active Directory)| Operating System intercepts the authentication request from the application to the Identity provider URLs defined in the extension MDM configuration profile. Redirect extensions receive: URLs, headers, and body.| Request credentials before requesting data. Uses URLs in MDM configuration profile. |
-| Credential | Challenge and response authentication types like **Kerberos** (on-premises Active Directory Domain Services)| Request is sent from the application to the authentication server (AD domain controller). Credential extensions are configured with HOSTS in the MDM configuration profile. If the authentication server returns a challenge that matches a host listed in the profile, the operating system will route the challenge to the extension. The extension has the choice of handling or rejecting the challenge. If handled, the extension returns the authorization headers to complete the request, and authentication server will return response to the caller. | Request data then get challenged for authentication. Use HOSTs in MDM configuration profile. |
+| Credential | Challenge and response authentication types like **Kerberos** (on-premises Active Directory Domain Services)| Request is sent from the application to the authentication server (AD domain controller). Credential extensions are configured with HOSTS in the MDM configuration profile. If the authentication server returns a challenge that matches a host listed in the profile, the operating system routes the challenge to the extension. The extension has the choice of handling or rejecting the challenge. If handled, the extension returns the authorization headers to complete the request, and the authentication server returns a response to the caller. | Request data then get challenged for authentication. Use HOSTs in MDM configuration profile. |
Microsoft has implementations for brokered authentication for the following client operating systems:
All Microsoft broker applications use a key artifact known as a Primary Refresh
## Troubleshooting model
-The following flowchart outlines a logical flow for approaching troubleshooting the SSO Extension. The rest of this article will go into detail on the steps depicted in this flowchart. The troubleshooting can be broken down into two separate focus areas: [Deployment](#deployment-troubleshooting) and [Application Auth Flow](#application-auth-flow-troubleshooting).
+The following flowchart outlines a logical flow for approaching troubleshooting the SSO Extension. The rest of this article goes into detail on the steps depicted in this flowchart. The troubleshooting can be broken down into two separate focus areas: [Deployment](#deployment-troubleshooting) and [Application Auth Flow](#application-auth-flow-troubleshooting).
+# [iOS](#tab/flowchart-ios)
++
+# [macOS](#tab/flowchart-macos)
+++ ## Deployment troubleshooting
-Most issues that customers encounter stem from either improper Mobile Device Management (MDM) configuration(s) of the SSO extension profile, or an inability for the Apple device to receive the configuration profile from the MDM. This section will cover the steps you can take to ensure that the MDM profile has been deployed to a Mac and that it has the correct configuration.
+Most issues that customers encounter stem from either improper Mobile Device Management (MDM) configuration(s) of the SSO extension profile, or an inability for the Apple device to receive the configuration profile from the MDM. This section covers the steps you can take to ensure that the MDM profile has been deployed to a Mac and that it has the correct configuration.
### Deployment requirements - macOS operating system: **version 10.15 (Catalina)** or greater. - iOS operating system: **version 13** or greater.-- Device is managed by any MDM vendor that supports [Apple macOS and/or iOS](https://support.apple.com/guide/deployment/dep1d7afa557/web) (MDM Enrollment).
+- Device managed by any MDM vendor that supports [Apple macOS and/or iOS](https://support.apple.com/guide/deployment/dep1d7afa557/web) (MDM Enrollment).
- Authentication Broker Software installed: [**Microsoft Intune Company Portal**](/mem/intune/apps/apps-company-portal-macos) or [**Microsoft Authenticator for iOS**](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a). #### Check macOS operating system version
-Use the following steps to check the operating system (OS) version on the macOS device. Apple SSO Extension profiles will only be deployed to devices running **macOS 10.15 (Catalina)** or greater. You can check the macOS version from either the [User Interface](#user-interface) or from the [Terminal](#terminal).
+Use the following steps to check the operating system (OS) version on the macOS device. Apple SSO Extension profiles are only deployed to devices running **macOS 10.15 (Catalina)** or greater. You can check the macOS version from either the [User Interface](#user-interface) or from the [Terminal](#terminal).
##### User interface
-1. From the macOS device, click on the Apple icon in the top left corner and select **About This Mac**.
+1. From the macOS device, select the Apple icon in the top left corner and select **About This Mac**.
-1. The Operating system version will be listed beside **macOS**.
+1. The Operating system version is listed beside **macOS**.
##### Terminal
-1. From the macOS device, open Terminal from the **Applications** -> **Utilities** folder.
+1. From the macOS device, double-click on the **Applications** folder, then double-click on the **Utilities** folder.
+1. Double-click on the **Terminal** application.
1. When the Terminal opens type **sw_vers** at the prompt, look for a result like the following: ```zsh
Use the following steps to check the operating system (OS) version on the macOS
BuildVersion: 22A400 ```
+#### Check iOS operating system version
+
+Use the following steps to check the operating system (OS) version on the iOS device. Apple SSO Extension profiles are only deployed to devices running **iOS 13** or greater. You can check the iOS version from the **Settings app**. Open the **Settings app**:
++
+Navigate to **General** and then **About**. This screen lists information about the device, including the iOS version number:
++ #### MDM deployment of SSO extension configuration profile Work with your MDM administrator (or Device Management team) to ensure that the extension configuration profile is deployed to the Apple devices. The extension profile can be deployed from any MDM that supports macOS or iOS devices.
Assuming the MDM administrator has followed the steps in the previous section [M
##### Locate SSO extension MDM configuration profile
-1. From the macOS device, click on the **spotlight icon**.
-1. When the **Spotlight Search** appears type **Profiles** and hit **return**.
-1. This action should bring up the **Profiles** panel within the **System Settings**.
+1. From the macOS device, open **System Settings**.
+1. When the **System Settings** appears type **Profiles** and hit **return**.
+1. This action should bring up the **Profiles** panel.
:::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/profiles-within-system-settings.png" alt-text="Screenshot showing configuration profiles."::: | Screenshot callout | Description | |::|| |**1**| Indicates that the device is under **MDM** Management. |
- |**2**| There will likely be multiple profiles to choose from. In this example, the Microsoft Enterprise SSO Extension Profile is called **Extensible Single Sign On Profile-32f37be3-302e-4549-a3e3-854d300e117a**. |
+ |**2**| There may be multiple profiles to choose from. In this example, the Microsoft Enterprise SSO Extension Profile is called **Extensible Single Sign On Profile-32f37be3-302e-4549-a3e3-854d300e117a**. |
> [!NOTE] > Depending on the type of MDM being used, there could be several profiles listed and their naming scheme is arbitrary depending on the MDM configuration. Select each one and inspect that the **Settings** row indicates that it is a **Single Sign On Extension**. 1. Double-click on the configuration profile that matches a **Settings** value of **Single Sign On Extension**.
- :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/sso-extension-config-profile.png" alt-text="Screenshot showing sso extension configuration profile.":::
+ :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/sso-extension-config-profile.png" alt-text="Screenshot showing SSO extension configuration profile.":::
| Screenshot callout | Configuration profile setting | Description | |::|:|| |**1**|**Signed**| Signing authority of the MDM provider. | |**2**|**Installed**| Date/Timestamp showing when the extension was installed (or updated). | |**3**|**Settings: Single Sign On Extension**|Indicates that this configuration profile is an **Apple SSO Extension** type.|
- |**4**|**Extension**| Identifier that maps to the **bundle ID** of the application that is running the **Microsoft Enterprise Extension Plugin**. The identifier must **always** be set to **`com.microsoft.CompanyPortalMac.ssoextension`** and the Team Identifier must appear as **(UBF8T346G9)** if the profile is installed on a macOS device. *Note: if any values differ, then the MDM won't invoke the extension correctly.*|
+ |**4**|**Extension**| Identifier that maps to the **bundle ID** of the application that is running the **Microsoft Enterprise Extension Plugin**. The identifier must **always** be set to **`com.microsoft.CompanyPortalMac.ssoextension`** and the Team Identifier must appear as **(UBF8T346G9)** if the profile is installed on a macOS device. If any values differ, then the MDM doesn't invoke the extension correctly.|
|**5**|**Type**| The **Microsoft Enterprise SSO Extension** must **always** be set to a **Redirect** extension type. For more information, see [Redirect vs Credential Extension Types](#extension-types). | |**6**|**URLs**| The login URLs belonging to the Identity Provider **(Azure AD)**. See list of [supported URLs](../develop/apple-sso-plugin.md#manual-configuration-for-other-mdm-services). |
If the SSO extension configuration profile doesn't appear in the **Profiles** li
###### Collect MDM specific console logs
-1. From the macOS device, click on the **spotlight icon**.
-1. When the **Spotlight Search** appears, type **Console** and hit **return**.
+1. From the macOS device, double-click on the **Applications** folder, then double-click on the **Utilities** folder.
+1. Double-click on the **Console** application.
1. Click the **Start** button to enable the Console trace logging. :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/console-window-start-button.png" alt-text="Screenshot showing the Console app and the start button being clicked.":::
Once deployed the **Microsoft Enterprise SSO Extension for Apple devices** suppo
| Application type | Interactive auth | Silent auth | Description | Examples | | | :: | :: | | :: |
-| [**Native MSAL App**](../develop/apple-sso-plugin.md#applications-that-use-msal) |X|X| MSAL (Microsoft Authentication Library) is an application developer framework tailored for building applications with the Microsoft Identity platform (Azure AD).<br>Apps built on **MSAL version 1.1 or greater** are able to integrate with the Microsoft Enterprise SSO Extension.<br>*If the application is SSO extension (broker) aware it will utilize the extension without any further configuration* for more information, see our [MSAL developer sample documentation](https://github.com/AzureAD/microsoft-authentication-library-for-objc). | Microsoft To Do |
+| [**Native MSAL App**](../develop/apple-sso-plugin.md#applications-that-use-msal) |X|X| MSAL (Microsoft Authentication Library) is an application developer framework tailored for building applications with the Microsoft Identity platform (Azure AD).<br>Apps built on **MSAL version 1.1 or greater** are able to integrate with the Microsoft Enterprise SSO Extension.<br>*If the application is SSO extension (broker) aware it utilizes the extension without any further configuration* for more information, see our [MSAL developer sample documentation](https://github.com/AzureAD/microsoft-authentication-library-for-objc). | Microsoft To Do |
| [**Non-MSAL Native/Browser SSO**](../develop/apple-sso-plugin.md#applications-that-dont-use-msal) ||X| Applications that use Apple networking technologies or webviews can be configured to obtain a shared credential from the SSO Extension<br>Feature flags must be configured to ensure that the bundle ID for each app is allowed to obtain the shared credential (PRT). | Microsoft Word<br>Safari<br>Microsoft Edge<br>Visual Studio | > [!IMPORTANT]
Once deployed the **Microsoft Enterprise SSO Extension for Apple devices** suppo
#### How to find the bundle ID for an application on macOS
-1. From the macOS device, click on the **spotlight icon**.
-1. When the **Spotlight Search** appears type **Terminal** and hit **return**.
+1. From the macOS device, double-click on the **Applications** folder, then double-click on the **Utilities** folder.
+1. Double-click on the **Terminal** application.
1. When the Terminal opens type **`osascript -e 'id of app "<appname>"'`** at the prompt. See some examples follow: ```zsh
Once deployed the **Microsoft Enterprise SSO Extension for Apple devices** suppo
### Bootstrapping
-By default, only MSAL apps invoke the SSO Extension, and then in turn the Extension acquires a shared credential (PRT) from Azure AD. However, the **Safari** browser application or other **Non-MSAL** applications can be configured to acquire the PRT. See [Allow users to sign in from applications that don't use MSAL and the Safari browser](../develop/apple-sso-plugin.md#allow-users-to-sign-in-from-applications-that-dont-use-msal-and-the-safari-browser). After the SSO extension acquires a PRT, it will store the credential in the user's login Keychain. Next, check to ensure that the PRT is present in the user's keychain:
+By default, only MSAL apps invoke the SSO Extension, and then in turn the Extension acquires a shared credential (PRT) from Azure AD. However, the **Safari** browser application or other **Non-MSAL** applications can be configured to acquire the PRT. See [Allow users to sign in from applications that don't use MSAL and the Safari browser](../develop/apple-sso-plugin.md#allow-users-to-sign-in-from-applications-that-dont-use-msal-and-the-safari-browser). After the SSO extension acquires a PRT, it will store the credential in the user login Keychain. Next, check to ensure that the PRT is present in the user's keychain:
#### Checking keychain access for PRT
-1. From the macOS device, click on the **spotlight icon**.
-1. When the **Spotlight Search** appears, type **Keychain Access** and hit **return**.
+1. From the macOS device, double-click on the **Applications** folder, then double-click on the **Utilities** folder.
+1. Double-click on the **Keychain Access** application.
1. Under **Default Keychains** select **Local Items (or iCloud)**. - Ensure that the **All Items** is selected.
- - In the search bar, on the right-hand side, type **primaryrefresh** (To filter).
+ - In the search bar, on the right-hand side, type `primaryrefresh` (To filter).
:::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/prt-located-in-keychain-access.png" alt-text="screenshot showing how to find the PRT in Keychain access app.":::
By default, only MSAL apps invoke the SSO Extension, and then in turn the Extens
|**3** |**Kind**|Refers to the type of credential. The Azure AD PRT credential is an **Application Password** credential type| |**4** |**Account**|Displays the Azure AD User Account, which owns the PRT in the format: **`UserObjectId.TenantId-login.windows.net`** | |**5** |**Where**|Displays the full name of the credential. The Azure AD PRT credential begins with the following format: **`primaryrefreshtoken-29d9ed98-a469-4536-ade2-f981bc1d605`** The **29d9ed98-a469-4536-ade2-f981bc1d605** is the Application ID for the **Microsoft Authentication Broker** service, responsible for handling PRT acquisition requests|
- |**6** |**Modified**|Shows when the credential was last updated. For the Azure AD PRT credential, anytime the credential is either bootstrapped or updated by an interactive sign-on event will update the date/timestamp|
- |**7** |**Keychain** |Indicates which Keychain the selected credential resides. The Azure AD PRT credential will either reside in the **Local Items** or **iCloud** Keychain. *Note: When iCloud is enabled on the macOS device, the **Local Items** Keychain will become the **iCloud** keychain*|
+ |**6** |**Modified**|Shows when the credential was last updated. For the Azure AD PRT credential, anytime the credential is bootstrapped or updated by an interactive sign-on event it updates the date/timestamp|
+ |**7** |**Keychain** |Indicates which Keychain the selected credential resides. The Azure AD PRT credential resides in the **Local Items** or **iCloud** Keychain. When iCloud is enabled on the macOS device, the **Local Items** Keychain will become the **iCloud** keychain|
1. If the PRT isn't found in Keychain Access, do the following based on the application type:
One of the most useful tools to troubleshoot various issues with the SSO extensi
#### Save SSO extension logs from Company Portal app
-1. From the macOS device, click on the **spotlight icon**.
-1. When the **Spotlight Search** appears, type **Company Portal** and hit **return**.
-1. When the **Company Portal** loads (Note: no need to Sign into the app), navigate to the top menu bar: **Help**->**Save diagnostic report**.
+1. From the macOS device, double-click on the **Applications** folder.
+1. Double-click on the **Company Portal** application.
+1. When the **Company Portal** loads, navigate to the top menu bar: **Help**->**Save diagnostic report**. There's no need to Sign into the app.
:::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/company-portal-help-save-diagnostic.png" alt-text="Screenshot showing how to navigate the Help top menu to Save the diagnostic report.":::
One of the most useful tools to troubleshoot various issues with the SSO extensi
> [!TIP] > A handy way to view the logs is using [**Visual Studio Code**](https://code.visualstudio.com/download) and installing the [**Log Viewer**](https://marketplace.visualstudio.com/items?itemName=berublan.vscode-log-viewer) extension.
-#### Tailing SSO extension logs with terminal
+#### Tailing SSO extension logs on macOS with terminal
During troubleshooting it may be useful to reproduce a problem while tailing the SSOExtension logs in real time:
-1. From the macOS device, click on the **spotlight icon**.
-1. When the **Spotlight Search** appears type: **Terminal** and hit **return**.
+1. From the macOS device, double-click on the **Applications** folder, then double-click on the **Utilities** folder.
+1. Double-click on the **Terminal** application.
1. When the Terminal opens type: ```zsh
During troubleshooting it may be useful to reproduce a problem while tailing the
1. As you reproduce the issue, keep the **Terminal** window open to observe the output from the tailed **SSOExtension** logs.
+#### Exporting SSO extension logs on iOS
+
+It isn't possible to view iOS SSO Extension logs in real time, as you can on macOS. The iOS SSO extension logs can be exported from the Microsoft Authenticator app, and then reviewed from another device:
+
+1. Open the Microsoft Authenticator app:
+
+ :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/auth-app.jpg" alt-text="Screenshot showing the icon of the Microsoft Authenticator app on iOS.":::
+
+1. Press the menu button in the upper left:
+
+ :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/auth-app-menu-button.png" alt-text="Screenshot showing the location of the menu button in the Microsoft Authenticator app.":::
+
+1. Choose the "Send feedback" option:
+
+ :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/auth-app-send-feedback.png" alt-text="Screenshot showing the location of the send feedback option in the Microsoft Authenticator app.":::
+
+1. Choose the "Having trouble" option:
+
+ :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/auth-app-having-trouble.png" alt-text="Screenshot showing the location of having trouble option in the Microsoft Authenticator app.":::
+
+1. Press the View diagnostic data option:
+
+ :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/auth-app-view-diagnostic-data.png" alt-text="Screenshot showing the view diagnostic data button in the Microsoft Authenticator app.":::
+
+ > [!TIP]
+ > If you are working with Microsoft Support, at this stage you can press the **Send** button to send the logs to support. This will provide you with an Incident ID, which you can provide to your Microsoft Support contact.
+
+1. Press the "Copy all" button to copy the logs to your iOS device's clipboard. You can then save the log files elsewhere for review or send them via email or other file sharing methods:
+
+ :::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/auth-app-copy-all-logs.png" alt-text="Screenshot showing the Copy all logs option in the Microsoft Authenticator app.":::
+ ### Understanding the SSO extension logs Analyzing the SSO extension logs is an excellent way to troubleshoot the authentication flow from applications sending authentication requests to Azure AD. Any time the SSO extension Broker is invoked, a series of logging activities results, and these activities are known as **Authorization Requests**. The logs contain the following useful information for troubleshooting:
The SSO extension logs are broken down into columns. The following screenshot s
#### Feature flag configuration
-During the MDM configuration of the Microsoft Enterprise SSO Extension, an optional extension specific data can be sent as instructions to change how the SSO extension behaves. These configuration specific instructions are known as **Feature Flags**. The Feature Flag configuration is especially important for Non-MSAL/Browser SSO authorization requests types, as the Bundle ID can determine if the Extension will be invoked or not. See [Feature Flag documentation](../develop/apple-sso-plugin.md#more-configuration-options). Every authorization request begins with a Feature Flag configuration report. The following screenshot will walk through an example feature flag configuration:
+During the MDM configuration of the Microsoft Enterprise SSO Extension, an optional extension specific data can be sent as instructions to change how the SSO extension behaves. These configuration specific instructions are known as **Feature Flags**. The Feature Flag configuration is especially important for Non-MSAL/Browser SSO authorization requests types, as the Bundle ID can determine if the Extension is invoked or not. See [Feature Flag documentation](../develop/apple-sso-plugin.md#more-configuration-options). Every authorization request begins with a Feature Flag configuration report. The following screenshot walks through an example feature flag configuration:
:::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/feature-flag-configuration.png" alt-text="Screenshot showing an example feature flag configuration of the Microsoft SSO Extension."::: | Callout | Feature flag | Description | |::|:|:| |**1**|**[browser_sso_interaction_enabled](../develop/apple-sso-plugin.md#allow-users-to-sign-in-from-applications-that-dont-use-msal-and-the-safari-browser)**|Non-MSAL or Safari browser can bootstrap a PRT |
-|**2**|**browser_sso_disable_mfa**|(Now deprecated) During bootstrapping of the PRT credential, by default MFA is required. Notice this configuration is set to **null** which means that the default configuration will be enforced|
+|**2**|**browser_sso_disable_mfa**|(Now deprecated) During bootstrapping of the PRT credential, by default MFA is required. Notice this configuration is set to **null** which means that the default configuration is enforced|
|**3**|**[disable_explicit_app_prompt](../develop/apple-sso-plugin.md#disable-oauth-2-application-prompts)**|Replaces **prompt=login** authentication requests from applications to reduce prompting| |**4**|**[AppPrefixAllowList](../develop/apple-sso-plugin.md#enable-sso-for-all-apps-with-a-specific-bundle-id-prefix)**|Any Non-MSAL application that has a Bundle ID that starts with **`com.micorosoft.`** can be intercepted and handled by the SSO extension broker |
During the MDM configuration of the Microsoft Enterprise SSO Extension, an optio
#### MSAL native application sign-in flow
-The following section will walk through how to examine the SSO extension logs for the Native MSAL Application auth flow. For this example, we're using the [MSAL macOS/iOS sample application](https://github.com/AzureAD/microsoft-authentication-library-for-objc) as the client application, and the application is making a call to the Microsoft Graph API to display the sign-in user's information.
+The following section walks through how to examine the SSO extension logs for the Native MSAL Application auth flow. For this example, we're using the [MSAL macOS/iOS sample application](https://github.com/AzureAD/microsoft-authentication-library-for-objc) as the client application, and the application is making a call to the Microsoft Graph API to display the sign-in user's information.
##### MSAL native: Interactive flow walkthrough The following actions should take place for a successful interactive sign-on:
-1. The User will sign-in to the MSAL macOS sample app.
-1. The Microsoft SSO Extension Broker will be invoked and handle the request.
-1. Microsoft SSO Extension Broker will undergo the bootstrapping process to acquire a PRT for the signed in user.
+1. The user signs in to the MSAL macOS sample app.
+1. The Microsoft SSO Extension Broker is invoked and handles the request.
+1. Microsoft SSO Extension Broker undergoes the bootstrapping process to acquire a PRT for the signed in user.
1. Store the PRT in the Keychain. 1. Check for the presence of a Device Registration object in Azure AD (WPJ). 1. Return an access token to the client application to access the Microsoft Graph with a scope of User.Read.
The following actions should take place for a successful interactive sign-on:
> [!IMPORTANT] > The sample log snippets that follows, have been annotated with comment headers // that are not seen in the logs. They are used to help illustrate a specific action being undertaken. We have documented the log snippets this way to assist with copy and paste operations. In addition, the log examples have been trimmed to only show lines of significance for troubleshooting.
-The User clicks on the **Call Microsoft Graph API** button to invoke the sign-in process.
+The user clicks on the **Call Microsoft Graph API** button to invoke the sign-in process.
:::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/msal-macos-example-click-call-microsoft-graph.png" alt-text="Screenshot showing MSAL example app for macOS launched with Call Microsoft Graph API button.":::
Finished SSO request.
At this point in the authentication/authorization flow, the PRT has been bootstrapped and it should be visible in the macOS keychain access. See [Checking Keychain Access for PRT](#checking-keychain-access-for-prt). The **MSAL macOS sample** application uses the access token received from the Microsoft SSO Extension Broker to display the user's information.
-Next, examine server-side [Azure AD Sign-in logs](../reports-monitoring/reference-basic-info-sign-in-logs.md#correlation-id) based on the correlation ID collected from the client-side SSO extension logs . For more information, see [Sign-in logs in Azure Active Directory](../reports-monitoring/concept-sign-ins.md).
+Next, examine server-side [Azure AD Sign-in logs](../reports-monitoring/reference-basic-info-sign-in-logs.md#correlation-id) based on the correlation ID collected from the client-side SSO extension logs. For more information, see [Sign-in logs in Azure Active Directory](../reports-monitoring/concept-sign-ins.md).
###### View Azure AD Sign-in logs by correlation ID filter
For the MSAL Interactive Login Flow, we expect to see an interactive sign-in for
:::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/msal-interactive-azure-ad-details-interactive.png" alt-text="Screenshot showing the interactive User Sign-ins from Azure AD showing an interactive sign into the Microsoft Authentication Broker Service.":::
-There will also be non-interactive sign-in events, due to the fact the PRT is used to acquire the access token for the client application's request. Follow the [View Azure AD Sign-in logs by Correlation ID Filter](#view-azure-ad-sign-in-logs-by-correlation-id-filter) but in step 2, select **User sign-ins (non-interactive)**.
+There are also non-interactive sign-in events, due to the fact the PRT is used to acquire the access token for the client application's request. Follow the [View Azure AD Sign-in logs by Correlation ID Filter](#view-azure-ad-sign-in-logs-by-correlation-id-filter) but in step 2, select **User sign-ins (non-interactive)**.
:::image type="content" source="media/troubleshoot-mac-sso-extension-plugin/msal-interactive-azure-ad-details-non-interactive-microsoft-graph.png" alt-text="Screenshot showing how the SSO extension uses the PRT to acquire an access token for the Microsoft Graph.":::
There will also be non-interactive sign-in events, due to the fact the PRT is us
##### MSAL Native: Silent flow walkthrough
-After a period of time, the access token will no longer be valid. So, if the user reclicks on the **Call Microsoft Graph API** button. The SSO extension will attempt to refresh the access token with the already acquired PRT.
+After a period of time, the access token is no longer valid. So, if the user clicks the **Call Microsoft Graph API** button again, the SSO extension attempts to refresh the access token with the already acquired PRT.
``` SSOExtensionLogs
The logging sample can be broken down into two segments:
|Segment |Description | |::|| |**`refresh`** | Broker handles the request for Azure AD:<br> - **Handling silent SSO request...**: Denotes a silent request<br> - **correlation_id**: Useful for cross referencing with the Azure AD server-side sign-in logs <br> - **scope**: **User.Read** API permission scope being requested from the Microsoft Graph<br> - **client_version**: version of MSAL that the application is running<br> - **redirect_uri**: MSAL apps use the format **`msauth.com.<Bundle ID>://auth`**<br><br>**Refresh** has notable differences to the request payload:<br> - **authority**: Contains the Azure AD tenant URL endpoint as opposed to the **common** endpoint<br> - **home_account_id**: Show the User account in the format **\<UserObjectId\>.\<TenantID\>**<br> - **username**: hashed UPN format **auth.placeholder-XXXXXXXX__domainname.com** |
-|**PRT Refresh and Acquire Access Token** | This operation will revalidate the PRT and refresh it if necessary, before returning the access token back to the calling client application. |
+|**PRT Refresh and Acquire Access Token** | This operation revalidates the PRT and refreshes it if necessary, before returning the access token back to the calling client application. |
We can again take the **correlation Id** obtained from the client-side **SSO Extension** logs and cross reference with the server-side Azure AD Sign-in logs.
The Azure AD Sign-in shows identical information to the Microsoft Graph resource
#### Non-MSAL/Browser SSO application login flow
-The following section will walk through how to examine the SSO extension logs for the Non-MSAL/Browser Application auth flow. For this example, we're using the Apple Safari browser as the client application, and the application is making a call to the Office.com (OfficeHome) web application.
+The following section walks through how to examine the SSO extension logs for the Non-MSAL/Browser Application auth flow. For this example, we're using the Apple Safari browser as the client application, and the application is making a call to the Office.com (OfficeHome) web application.
##### Non-MSAL/Browser SSO flow walkthrough

The following actions should take place for a successful sign-on:

1. Assume that a user who has already undergone the bootstrapping process has an existing PRT.
-1. On a device, with the **Microsoft SSO Extension Broker** deployed, the configured **feature flags** will be checked to ensure that the application can be handled by the SSO Extension.
-1. Since the Safari browser adheres to the **Apple Networking Stack**, the SSO extension will try to intercept the Azure AD auth request.
-1. The PRT will be used to acquire a token for the resource being requested.
-1. If the device is Azure AD Registered, it will pass the Device ID along with the request.
-1. The SSO extension will populate the header of the Browser request to sign-in to the resource.
+1. On a device with the **Microsoft SSO Extension Broker** deployed, the configured **feature flags** are checked to ensure that the application can be handled by the SSO Extension.
+1. Since the Safari browser adheres to the **Apple Networking Stack**, the SSO extension tries to intercept the Azure AD auth request.
+1. The PRT is used to acquire a token for the resource being requested.
+1. If the device is Azure AD Registered, it passes the Device ID along with the request.
+1. The SSO extension populates the header of the browser request to sign in to the resource.
The following client-side **SSO Extension** logs show the SSO extension broker transparently handling and fulfilling the request.
Next, use the correlation ID obtained from the Browser SSO extension logs to cr
|**Managed**| Indicates that the device is under management. |
|**Join Type**| macOS and iOS, if registered, can only be of type: **Azure AD Registered**. |
-#### Delete PRT using Company Portal
-The following steps can be used to remove a PRT of the device with the Company Portal:
-1. From the macOS device, select the spotlight icon.
-1. When the Spotlight Search appears, type "Company Portal" and press Return.
-1. When the Company Portal page loads, select the account logged in at the top right corner.
-1. On this page, select the **Remove account from this device** button.
-1. On the keychain access window, refresh the search and validate that the PRT has been removed.
+> [!TIP]
+> If you use Jamf Connect, it is recommended that you follow the [latest Jamf guidance on integrating Jamf Connect with Azure AD](https://learn.jamf.com/bundle/jamf-connect-documentation-current/page/Jamf_Connect_and_Microsoft_Conditional_Access.html). The recommended integration pattern ensures that Jamf Connect works properly with your Conditional Access policies and Azure AD Identity Protection.
## Next steps
active-directory Clean Up Stale Guest Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md
There are a few recommended patterns that are effective at monitoring and cleani
Use the following instructions to learn how to enhance monitoring of inactive guest accounts at scale and create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment. ## Monitor guest accounts at scale with inactive guest insights (Preview)
-1. Sign in to the Azure portal and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
2. Access the inactive guest account report by navigating to the "Guest access governance" card and clicking "View inactive guests".
Use the following instructions to learn how to enhance monitoring of inactive gu
Guest users who don't sign into the tenant for the number of days you configured are disabled for 30 days, then deleted. After deletion, you can restore guests for up to 30 days, after which a new invitation is
-needed.
+needed.
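As a hedged alternative to the portal report, this Microsoft Graph PowerShell sketch lists guest users together with their last sign-in date so you can spot stale accounts. It assumes the Microsoft Graph PowerShell SDK is installed and that the signed-in account has the User.Read.All and AuditLog.Read.All permissions; the 90-day threshold is an example value.

```powershell
# Minimal sketch: list guest users and their last sign-in time to identify stale accounts.
Connect-MgGraph -Scopes "User.Read.All", "AuditLog.Read.All"

$threshold = (Get-Date).AddDays(-90)   # example inactivity window

Get-MgUser -All -Filter "userType eq 'Guest'" -Property "displayName,userPrincipalName,signInActivity" |
    Where-Object { -not $_.SignInActivity.LastSignInDateTime -or $_.SignInActivity.LastSignInDateTime -lt $threshold } |
    Select-Object DisplayName, UserPrincipalName, @{ n = 'LastSignIn'; e = { $_.SignInActivity.LastSignInDateTime } }
```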
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
A few enterprise applications can't be deleted in the Azure portal and might blo
`Get-MsolServicePrincipal | Set-MsolServicePrincipal -AccountEnabled $false`
-9. Sign in to the Azure portal again, and remove any new admin account that you created in step 3.
+9. Sign in to the [Azure portal](https://portal.azure.com) again, and remove any new admin account that you created in step 3.
10. Retry tenant deletion from the Azure portal.
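The `Get-MsolServicePrincipal` one-liner shown above uses the legacy MSOnline module. Purely as a hedged sketch (parameter availability and required scopes are assumptions to verify; test in a non-production tenant first), the same disable step with the Microsoft Graph PowerShell SDK might look like this:

```powershell
# Hedged sketch: disable all service principals with the Microsoft Graph PowerShell SDK,
# equivalent in spirit to the MSOnline one-liner above.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

Get-MgServicePrincipal -All | ForEach-Object {
    Update-MgServicePrincipal -ServicePrincipalId $_.Id -AccountEnabled:$false
}
```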
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
This feature can be used in the Azure portal, Microsoft Graph, and in PowerShell
### Steps to create a memberOf dynamic group
-1. Sign in to the Azure portal with an account that has Global Administrator, Intune Administrator, or User Administrator role permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has Global Administrator, Intune Administrator, or User Administrator role permissions.
1. Select **Azure Active Directory** > **Groups**, and then select **New group**. 1. Fill in group details. The group type can be Security or Microsoft 365, and the membership type can be set to **Dynamic User** or **Dynamic Device**. 1. Select **Add dynamic query**.
active-directory Groups Dynamic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-tutorial.md
You're not required to assign licenses to the users for them to be members in dy
First, you'll create a group for your guest users who all are from a single partner company. They need special licensing, so it's often more efficient to create a group for this purpose.
-1. Sign in to the Azure portal (https://portal.azure.com) with an account that is the global administrator for your organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is the global administrator for your organization.
2. Select **Azure Active Directory** > **Groups** > **New group**. ![select command to start a new group](./media/groups-dynamic-tutorial/new-group.png) 3. On the **Group** blade:
In this tutorial, you learned how to:
Advance to the next article to learn more group-based licensing basics > [!div class="nextstepaction"] > [Group licensing basics](../fundamentals/active-directory-licensing-whatis-azure-portal.md)---
active-directory Users Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-delete.md
Next, you can check to see that the users you deleted exist in the Azure AD orga
## Verify deleted users in the Azure portal
-1. Sign in to the Azure portal with an account that is a User administrator in the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User administrator in the organization.
1. In the navigation pane, select **Azure Active Directory**. 1. Under **Manage**, select **Users**. 1. Under **Show**, select **All users** only and verify that the users you deleted are no longer listed.
active-directory Auditing And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/auditing-and-reporting.md
You can dive into each of these events to get the details. For example, let's lo
You can also export these logs from Azure AD and use the reporting tool of your choice to get customized reports.
+## Sponsor field for B2B users (preview)
+
+You can also manage and track your guest users in the organization using the sponsor feature (preview). The **Sponsor** field on the user account displays who is responsible for the guest user. A sponsor can be a user or a group. To learn more about the sponsor feature (preview), see [Add sponsors to a guest user](b2b-sponsors.md).
+ ### Next steps - [B2B collaboration user properties](user-properties.md)
active-directory B2b Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-fundamentals.md
This article contains recommendations and best practices for business-to-busines
| When developing an app, use UserType to determine guest user experience | If you're developing an application and you want to provide different experiences for tenant users and guest users, use the UserType property. The UserType claim isn't currently included in the token. Applications should use the Microsoft Graph API to query the directory for the user to get their UserType. | | Change the UserType property *only* if the userΓÇÖs relationship to the organization changes | Although itΓÇÖs possible to use PowerShell to convert the UserType property for a user from Member to Guest (and vice-versa), you should change this property only if the relationship of the user to your organization changes. See [Properties of a B2B guest user](user-properties.md).| | Find out if your environment will be affected by Azure AD directory limits | Azure AD B2B is subject to Azure AD service directory limits. For details about the number of directories a user can create and the number of directories to which a user or guest user can belong, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).|
| Manage the B2B account lifecycle with the Sponsor (preview) feature | A sponsor is a user or group responsible for their guest users. For more details about this new feature, see [Sponsor field for B2B users (preview)](b2b-sponsors.md).|
## Next steps
active-directory B2b Sponsors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-sponsors.md
+
+ Title: Add sponsors to a guest user in the Azure portal - Azure AD (preview)
+description: Shows how an admin can add sponsors to guest users in Azure Active Directory (Azure AD) B2B collaboration.
+++++ Last updated : 07/24/2023++++++
+# Customer intent: As a tenant administrator, I want to know how to add sponsors to guest users in Azure AD.
+
+# Sponsors field for B2B users (preview)
+
+To ensure proper governance of B2B users in their directory, organizations need to have a system in place for tracking who oversees each guest user. Currently, Entitlement Management provides this capability for guests within specified domains, but it doesn't extend to guests outside of these domains.
+By implementing the sponsor feature, you can identify a responsible individual or group for each guest user. This allows you to track who invited the guest user and helps with accountability.
+
+This article provides an overview of the sponsor feature and explains how to use it in B2B scenarios.
+
+## Sponsors field on the user object
+
+The **Sponsors** field on the user object refers to the person or group who invited the guest user to the organization. You can use this field to track who invited the guest user and to help with accountability.
+Being a sponsor doesn't grant administrative permissions to the sponsor user or group, but it can be used for approval processes in Entitlement Management. You can also use it for custom solutions, but it doesn't provide any other built-in directory permissions.
+
+## Who can be a sponsor?
+
+If you send an invitation to a guest user, you'll automatically become the sponsor of that guest user, unless you specify another user in the invite process as a sponsor. Your name will be added to the **Sponsors** field on the user object automatically. If you want to add a different sponsor, you can also specify the sponsor user or group when sending an invitation to a guest user.
+You can also assign multiple people or groups when inviting the guest user. You can assign a maximum of five sponsors to a single guest user.
+When a sponsor leaves the organization, as part of the offboarding process the tenant administrator can change the **Sponsors** field on the user object to a different person or group. With this transition, they can ensure that the guest user's account remains properly tracked and accounted for.
+
+## Other scenarios using the B2B sponsors feature
+
+The Azure Active Directory B2B collaboration sponsor feature serves as a foundation for other scenarios that aim to provide a full governance lifecycle for external partners. These scenarios aren't part of the sponsor feature but rely on it for managing guest users:
+
+- Administrators can transfer sponsorship to another user or group, if the guest user starts working on a different project.
+- When requesting new access packages, sponsors can be added as approvers to provide additional support in Entitlement Management, which can help reduce the workload on existing reviewers.
+
+## Add sponsors when inviting a new guest user
+
+You can add up to five sponsors when inviting a new guest user. If you don't specify a sponsor, the inviter will be added as a sponsor. To invite a guest user, you need to have the Global Administrator role or a limited administrator directory role such as Guest Inviter or User Administrator.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to **Azure Active Directory** > **Users**.
+1. Select **Invite external user** from the menu.
+1. Enter the details on the **Basics** tab and select **Next: Properties**.
+1. You can add sponsors under **Job information** on the **Properties** tab.
+ :::image type="content" source="media/b2b-sponsors/add-sponsors.png" alt-text="Screenshot showing the Add sponsor option.":::
+
+1. Select the **Review and invite** button to finalize the process.
+
+You can also add sponsors with the Microsoft Graph API invitation manager when inviting new guest users, by passing the sponsors in the request payload. If there are no sponsors in the payload, the inviter is stamped as the sponsor. To learn more about adding guest users with the Microsoft Graph API, see [Assign sponsors](/graph/api/user-post-sponsors).
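For illustration only, here's a hedged PowerShell sketch that assigns a sponsor to an existing guest user by calling the preview sponsors reference endpoint directly. The endpoint shape and the beta version reflect the preview at the time of writing and may change; the user IDs are placeholders.

```powershell
# Hedged sketch: add a sponsor to a guest user via the beta sponsors/$ref endpoint (preview).
Connect-MgGraph -Scopes "User.ReadWrite.All"

$guestId   = "<guest-user-object-id>"    # placeholder
$sponsorId = "<sponsor-user-object-id>"  # placeholder

$body = @{ "@odata.id" = "https://graph.microsoft.com/beta/users/$sponsorId" } | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/users/$guestId/sponsors/`$ref" `
    -Body $body -ContentType "application/json"
```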
+
+
+## Edit the Sponsors field
+
+When you invite a guest user, you become their sponsor by default. If you need to manually change the guest user's sponsor, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator.
+2. Search for and select **Azure Active Directory** from any page.
+3. Under **Manage**, select **Users**.
+4. In the list, select the user's name to open their user profile.
+5. Under **Properties** > **Job information** check the **Sponsors** field. If the guest user already has a sponsor, you can select **View** to see the sponsor's name.
+ :::image type="content" source="media/b2b-sponsors/sponsors-under-properties.png" alt-text="Screenshot of the sponsors field under the job information.":::
+
+6. Close the window with the sponsor name list, if you want to edit the **Sponsors** field.
+7. There are two ways to edit the **Sponsors** field. Either select the pencil icon next to **Job Information**, or select **Edit properties** from the top of the page and go to the **Job Information** tab.
+8. If the user has only one sponsor, you can see the sponsor's name:
+ :::image type="content" source="media/b2b-sponsors/single-sponsor.png" alt-text="Screenshot of the sponsor's name.":::
+
+ If the user has multiple sponsors, you can't see the individual names:
+ :::image type="content" source="media/b2b-sponsors/multiple-sponsors.png" alt-text="Screenshot of multiple sponsors option.":::
+
+ To add or remove sponsors, select **Edit**, select or remove the users or groups and select **Save** on the **Job Information** tab.
+
+9. If the guest user doesn't have a sponsor, select **Add sponsors**.
+ :::image type="content" source="media/b2b-sponsors/add-sponsors-existing-user.png" alt-text="Screenshot of adding a sponsor to an existing user.":::
+
+10. Once you've selected the sponsor users or groups, save the changes on the **Job Information** tab.
+
+## Next steps
+
+- [Add and invite guest users](add-users-administrator.md)
+- [Create a new access package](/azure/active-directory/governance/entitlement-management-access-package-create#approval)
+- [Manage user profile info](/azure/active-directory/fundamentals/how-to-manage-user-profile-info)
active-directory Tutorial Bulk Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md
If you use Azure Active Directory (Azure AD) B2B collaboration to work with exte
## Invite guest users in bulk
-1. Sign in to the Azure portal with an account that is a global administrator in the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a global administrator in the organization.
2. In the navigation pane, select **Azure Active Directory**. 3. Under **Manage**, select **All Users**. 4. Select **Bulk operations** > **Bulk invite**.
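The article also covers a PowerShell path for bulk invitations. As a minimal sketch (the CSV column names and the redirect URL are assumptions), you could loop over a CSV of email addresses and call `New-MgInvitation` for each row:

```powershell
# Minimal sketch: bulk-invite guests from a CSV file (columns "Email" and "Name" are hypothetical).
Connect-MgGraph -Scopes "User.Invite.All"

Import-Csv -Path .\guests.csv | ForEach-Object {
    New-MgInvitation -InvitedUserEmailAddress $_.Email `
        -InvitedUserDisplayName $_.Name `
        -InviteRedirectUrl "https://myapps.microsoft.com" `
        -SendInvitationMessage:$true
}
```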
Check to see that the guest users you added exist in the directory either in the
### View guest users in the Azure portal
-1. Sign in to the Azure portal with an account that is a User administrator in the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that is a User administrator in the organization.
2. In the navigation pane, select **Azure Active Directory**. 3. Under **Manage**, select **Users**. 4. Under **Show**, select **Guest users only** and verify the users you added are listed.
For example: `Remove-MgUser -UserId "lstokes_fabrikam.com#EXT#@contoso.onmicroso
- [Bulk invite guest users via PowerShell](bulk-invite-powershell.md) - [Learn about the Azure AD B2B collaboration invitation redemption process](redemption-experience.md) - [Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)-
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Now, let's see what an Azure AD B2B collaboration user looks like in Azure AD.
### Before invitation redemption
-B2B collaboration user accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesnΓÇÖt have any credentials associated with it because authentication is performed by the guest user's identity provider. The **Identities** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. In the portal, the invited userΓÇÖs profile will show an **External user state** of **PendingAcceptance**. Querying for `externalUserState` using the Microsoft Graph API will return `Pending Acceptance`.
+B2B collaboration user accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesn't have any credentials associated with it because authentication is performed by the guest user's identity provider. The **Identities** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. The user sending the invitation is added as a default value for the **Sponsor** (preview) attribute on the guest user account. In the portal, the invited user's profile will show an **External user state** of **PendingAcceptance**. Querying for `externalUserState` using the Microsoft Graph API will return `Pending Acceptance`.
![Screenshot of user profile before redemption.](media/user-properties/before-redemption.png)
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
Use a naming convention that clarifies policy purpose. External access examples
You can block external users from accessing resources with Conditional Access policies.
-1. Sign in to the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator.
2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 3. Select **New policy**. 4. Enter a policy name.
There are scenarios when it's necessary to allow access for a small, specific gr
Before you begin, we recommend you create a security group, which contains external users who access resources. See, [Quickstart: Create a group with members and view all groups and members in Azure AD](active-directory-groups-view-azure-portal.md).
-1. Sign in to the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator.
2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 3. Select **New policy**. 4. Enter a policy name.
active-directory Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-custom-domain.md
Before you can add a custom domain name, create your domain name with a domain r
## Create your directory in Azure AD
-After you get your domain name, you can create your first Azure AD directory. Sign in to the Azure portal for your directory, using an account with the **Owner** role for the subscription.
+After you get your domain name, you can create your first Azure AD directory. Sign in to the [Azure portal](https://portal.azure.com) for your directory, using an account with the **Owner** role for the subscription.
Create your new directory by following the steps in [Create a new tenant for your organization](active-directory-access-create-new-tenant.md#create-a-new-tenant-for-your-organization).
active-directory Properties Area https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/properties-area.md
You add your organization's privacy information in the **Properties** area of Az
### To access the Properties area and add your privacy information
-1. Sign in to the Azure portal as a tenant administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a tenant administrator.
2. On the left navbar, select **Azure Active Directory**, and then select **Properties**.
You add your organization's privacy information in the **Properties** area of Az
## Next steps - [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md)-- [Add or change profile information for a user in Azure Active Directory](active-directory-users-profile-azure-portal.md)
+- [Add or change profile information for a user in Azure Active Directory](active-directory-users-profile-azure-portal.md)
active-directory Complete Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/complete-access-review.md
For more information, see [License requirements](access-reviews-overview.md#lice
You can track the progress of access reviews as they're completed.
-1. Sign in to the Azure portal and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
1. In the left menu, select **Access reviews**.
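If you want to track review progress programmatically as well, the following Microsoft Graph PowerShell sketch lists access review definitions and the status of their instances. It's illustrative only and assumes the Microsoft.Graph.Identity.Governance module and the AccessReview.Read.All permission.

```powershell
# Minimal sketch: list access review definitions and the status of their instances.
Connect-MgGraph -Scopes "AccessReview.Read.All"

Get-MgIdentityGovernanceAccessReviewDefinition -All | ForEach-Object {
    $definition = $_
    Get-MgIdentityGovernanceAccessReviewDefinitionInstance -AccessReviewScheduleDefinitionId $definition.Id |
        Select-Object @{ n = 'Review'; e = { $definition.DisplayName } }, Status, StartDateTime, EndDateTime
}
```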
Denied B2B direct connect users and teams lose access to all shared channels in
- [Manage access reviews](manage-access-review.md) - [Create an access review of groups or applications](create-access-review.md) - [Create an access review of users in an Azure AD administrative role](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)-
active-directory Conditional Access Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/conditional-access-exclusion.md
Follow these steps to create a new Azure AD group and a Conditional Access polic
### Create an exclusion group
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the left navigation, select **Azure Active Directory** and then select **Groups**.
As an IT administrator, you know that managing exclusion groups to your policies
## Next steps - [Create an access review of groups or applications](create-access-review.md)-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Create Access Review Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review-pim-for-groups.md
For more information, see [License requirements](access-reviews-overview.md#lice
## Create a PIM for Groups access review ### Scope
-1. Sign in to the Azure portal and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
2. On the left menu, select **Access reviews**.
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
If you're reviewing access to an application, then before creating the review, s
## Create a single-stage access review ### Scope
-1. Sign in to the Azure portal and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
2. On the left menu, select **Access reviews**.
B2B direct connect users and teams are included in access reviews of the Teams-e
Use the following instructions to create an access review on a team with shared channels:
-1. Sign in to the Azure portal as a Global Administrator, User Admin or Identity Governance Admin.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator, User Admin or Identity Governance Admin.
1. Open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
Use the following instructions to create an access review on a team with shared
The prerequisite role is a Global or User administrator.
-1. Sign in to the Azure portal and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
1. On the menu on the left, under **Access reviews**, select **Settings**.
After one or more access reviews have started, you might want to modify or updat
- [Create an access review of PIM for Groups (preview)](create-access-review-pim-for-groups.md) - [Review access to groups or applications](perform-access-review.md) - [Review access for yourself to groups or applications](review-your-access.md)---
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure sub
**Prerequisite role**: Global Administrator
-1. Sign in to the Azure portal as a user who is a Global Administrator. Make sure you have access to the resource group containing the Azure Monitor workspace.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a user who is a Global Administrator. Make sure you have access to the resource group containing the Azure Monitor workspace.
1. Select **Azure Active Directory** then select **Diagnostic settings** under Monitoring in the left navigation menu. Check if there's already a setting to send the audit logs to that workspace.
$bResponse.Results |ft
``` ## Next steps-- [Create interactive reports with Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md)
+- [Create interactive reports with Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md)
active-directory Entitlement Management Ticketed Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md
After setting up custom extensibility in the catalog, administrators can create
With Azure, you're able to use [Azure Key Vault](/azure/key-vault/secrets/about-secrets) to store application secrets such as passwords. To register an application with secrets within the Azure portal, follow these steps:
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for and select Azure Active Directory.
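If you script the secret handling instead of using the portal, a minimal Az PowerShell sketch for storing an application secret in Key Vault might look like the following; the vault name and secret name are placeholders.

```powershell
# Minimal sketch: store an application secret in Azure Key Vault with Az PowerShell.
# The vault name and secret name below are placeholders.
Connect-AzAccount

$secretValue = Read-Host -Prompt "Enter the client secret" -AsSecureString

Set-AzKeyVaultSecret -VaultName "contoso-governance-kv" -Name "TicketedProvisioningClientSecret" -SecretValue $secretValue
```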
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md
Using Azure Automation requires you to have an Azure subscription.
**Prerequisite role**: Azure subscription or resource group owner
-1. Sign in to the Azure portal. Make sure you have access to the subscription or resource group where the Azure Automation account will be located.
+1. Sign in to the [Azure portal](https://portal.azure.com). Make sure you have access to the subscription or resource group where the Azure Automation account will be located.
1. Select the subscription or resource group, and select **Create**. Type **Automation**, select the **Automation** Azure service from Microsoft, then select **Create**.
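For a scripted alternative to the portal steps, a hedged Az PowerShell sketch for creating the Automation account could look like this; the resource group, account name, and region are placeholders.

```powershell
# Minimal sketch: create an Azure Automation account with Az PowerShell (placeholder names).
Connect-AzAccount

New-AzAutomationAccount -ResourceGroupName "governance-rg" -Name "governance-automation" -Location "eastus"
```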
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
The pre-hire scenario can be broken down into the following:
## Create a workflow using prehire template Use the following steps to create a pre-hire workflow that generates a TAP and sends it via email to the user's manager using the Azure portal.
- 1. Sign in to the Azure portal.
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the right, select **Azure Active Directory**. 3. Select **Identity Governance**. 4. Select **Lifecycle workflows**.
active-directory Tutorial Prepare Azure Ad User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md
Some of the attributes required for the pre-hire onboarding tutorial are exposed
For the tutorial, the **mail** attribute only needs to be set on the manager account and the **manager** attribute set on the employee account. Use the following steps:
- 1. Sign in to the Azure portal.
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the right, select **Azure Active Directory**. 3. Select **Users**. 4. Select **Melva Prince**.
In this scenario, we use this feature of Azure AD to generate a temporary access
To use this feature, it must be enabled on our Azure AD tenant. To do this, use the following steps.
-1. Sign in to the Azure portal as a Global Administrator and select **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator and select **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**
2. Select **Yes** to enable the policy and add Britta Simon and select which users have the policy applied, and any **General** settings. ## Additional steps for leaver scenario
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
The scheduled leaver scenario can be broken down into the following:
## Create a workflow using scheduled leaver template Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal.
- 1. Sign in to the Azure portal.
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
2. On the right, select **Azure Active Directory**. 3. Select **Identity Governance**. 4. Select **Lifecycle workflows**.
active-directory How To Install Pshell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-install-pshell.md
The Windows server must have TLS 1.2 enabled before you install the Azure AD Con
## Install the Azure AD Connect provisioning agent by using PowerShell cmdlets 1. Sign in to the server you use with enterprise admin permissions.
- 2. Sign in to the Azure portal, and then go to **Azure Active Directory**.
+ 2. Sign in to the [Azure portal](https://portal.azure.com), and then go to **Azure Active Directory**.
3. On the menu on the left, select **Azure AD Connect**. 4. Select **Manage cloud sync**. [![Screenshot that shows manage cloud sync](media/how-to-install/new-install-1.png)](media/how-to-install/new-install-1.png#lightbox)</br>
active-directory How To Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-troubleshoot.md
You can verify these items in the Azure portal and on the local server that's ru
To verify that Azure detects the agent, and that the agent is healthy, follow these steps:
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the left, select **Azure Active Directory** > **Azure AD Connect**. In the center, select **Manage sync**. 1. On the **Azure AD Connect cloud sync** screen, select **Review all agents**.
active-directory Tutorial Existing Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/tutorial-existing-forest.md
If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md
## Configure Azure AD Connect cloud sync Use the following steps to configure provisioning
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Select **Azure Active Directory** 3. Select **Azure AD Connect** 4. Select **Manage cloud sync**
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/tutorial-single-forest.md
If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md
Use the following steps to configure and start the provisioning:
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory** 1. Select **Azure AD Connect** 1. Select **Manage cloud sync**
active-directory How To Connect Post Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-post-installation.md
Now that your users have been synchronized to the cloud, you need to assign them
### To assign an Azure AD Premium or Enterprise Mobility Suite License
-1. Sign in to the Azure portal as an admin.
+1. Sign in to the [Azure portal](https://portal.azure.com) as an admin.
2. On the left, select **Active Directory**. 3. On the **Active Directory** page, double-click the directory that has the users you want to set up. 4. At the top of the directory page, select **Licenses**.
Now that your users have been synchronized to the cloud, you need to assign them
Use the Azure portal to check the status of a synchronization. ### To verify the scheduled synchronization task
-1. Sign in to the Azure portal as an admin.
+1. Sign in to the [Azure portal](https://portal.azure.com) as an admin.
2. On the left, select **Active Directory**. 3. On the left, select **Azure AD Connect** 4. At the top of the page, note the last synchronization.
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Further prompts can be expected in various scenarios:
To ensure the permissions granted for the application are up-to-date, you can compare the permissions that are being requested by the application with the permissions already granted in the tenant.
-1. Sign in to the Azure portal with an administrator account.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an administrator account.
2. Navigate to **Enterprise applications**. 3. Select the application in question from the list. 4. Under Security in the left-hand navigation, choose **Permissions**
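To compare what's already granted in the tenant without clicking through the portal, this hedged Microsoft Graph PowerShell sketch lists the delegated permission grants for a given enterprise application; the display name is a placeholder.

```powershell
# Minimal sketch: list delegated permission grants (OAuth2) for an enterprise application.
Connect-MgGraph -Scopes "Directory.Read.All"

# Placeholder app name; take the first matching service principal.
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso Demo App'" | Select-Object -First 1

Get-MgOauth2PermissionGrant -Filter "clientId eq '$($sp.Id)'" -All |
    Select-Object ConsentType, PrincipalId, Scope
```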
To ensure the permissions granted for the application are up-to-date, you can co
If the application requires assignment, individual users can't consent for themselves. To check if assignment is required for the application, do the following:
-1. Sign in to the Azure portal with an administrator account.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an administrator account.
2. Navigate to **Enterprise applications**. 3. Select the application in question from the list. 4. Under Manage in the left-hand navigation, choose **Properties**.
If the application requires assignment, individual users can't consent for thems
Whether an individual user can consent to an application is configured by each organization, and may differ from directory to directory. Even if every permission doesn't require admin consent by default, your organization may have disabled user consent entirely, preventing individual users from consenting for themselves to an application. To view your organization's user consent settings, do the following:
-1. Sign in to the Azure portal with an administrator account.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an administrator account.
2. Navigate to **Enterprise applications**. 3. Under Security in the left-hand navigation, choose **Consent and permissions**. 4. View the user consent settings. If set to *Do not allow user consent*, users will never be able to consent on behalf of themselves for an application.
active-directory Datawiza Sso Mfa Oracle Ebs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-sso-mfa-oracle-ebs.md
Configuration on the management console is complete. You're prompted to deploy D
To provide more security for sign-ins, you can enable Multi-Factor Authentication in the Azure portal:
-1. Sign in to the Azure portal as a Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
2. Select **Azure Active Directory** > **Manage** > **Properties**. 3. Under **Properties**, select **Manage security defaults**.
active-directory Datawiza Sso Oracle Jde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-sso-oracle-jde.md
To provide more security for sign-ins, you can enforce MFA for user sign-in.
See, [Tutorial: Secure user sign-in events with Azure AD MFA](../authentication/tutorial-enable-azure-mfa.md).
-1. Sign in to the Azure portal as a Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
2. Select **Azure Active Directory** > **Manage** > **Properties**. 3. Under **Properties**, select **Manage security defaults**. 4. Under **Enable Security defaults**, select **Yes**.
active-directory Datawiza Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-sso-oracle-peoplesoft.md
To provide more security for sign-ins, you can enforce Azure AD Multi-Factor Aut
Learn more: [Tutorial: Secure user sign-in events with Azure AD MFA](../authentication/tutorial-enable-azure-mfa.md)
-1. Sign in to the Azure portal as a Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
2. Select **Azure Active Directory** > **Manage** > **Properties**. 3. Under **Properties**, select **Manage security defaults**. 4. Under **Enable Security defaults**, select **Yes**
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
As an admin, you can choose to try out new app launcher features while they are
To enable or disable previews for your app launchers: -- Sign in to the Azure portal as a global administrator, application administrator or cloud application administrator for your directory.
+- Sign in to the [Azure portal](https://portal.azure.com) as a global administrator, application administrator or cloud application administrator for your directory.
- Search for and select **Azure Active Directory**, then select **Enterprise applications**. - On the left menu, select **App launchers**, then select **Settings**. - Under **Preview settings**, toggle the checkboxes for the previews you want to enable or disable. To opt into a preview, toggle the associated checkbox to the checked state. To opt out of a preview, toggle the associated checkbox to the unchecked state.
active-directory F5 Passwordless Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-passwordless-vpn.md
To improve the tutorial experience, you can learn industry-standard terminology
Set up a SAML federation trust between the BIG-IP and Azure AD to allow the BIG-IP to hand off the pre-authentication and [Conditional Access](../conditional-access/overview.md) to Azure AD, before it grants access to the published VPN service.
-1. Sign in to the Azure portal with application admin rights.
+1. Sign in to the [Azure portal](https://portal.azure.com) with application admin rights.
2. From the left navigation pane, select the **Azure Active Directory service**. 3. Go to **Enterprise Applications** and from the top ribbon select **New application**. 4. In the gallery, search for F5 and select **F5 BIG-IP APM Azure AD integration**.
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md
To access the audit log for a tenant, you must have one of the following roles:
- Global Reader - Global Administrator
-Sign in to the Azure portal and go to **Azure AD** and select **Audit log** from the **Monitoring** section.
+Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure AD** and select **Audit log** from the **Monitoring** section.
The audit activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the audit log through the [Microsoft Graph API](/graph/api/resources/azure-ad-auditlog-overview). See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. It will take a couple of days for the data to show up in Graph after you upgrade to a premium license with no data activities before the upgrade.
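As a small illustration of the Graph access mentioned above, the following Microsoft Graph PowerShell sketch pulls recent directory audit events; the seven-day window is an example value.

```powershell
# Minimal sketch: retrieve recent Azure AD audit log events through Microsoft Graph.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$since = (Get-Date).AddDays(-7).ToUniversalTime().ToString("o")

Get-MgAuditLogDirectoryAudit -Filter "activityDateTime ge $since" -All |
    Select-Object ActivityDateTime, ActivityDisplayName, Category, @{ n = 'InitiatedBy'; e = { $_.InitiatedBy.User.UserPrincipalName } }
```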
active-directory Whosoff Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whosoff-tutorial.md
+
+ Title: Azure Active Directory SSO integration with WhosOff
+description: Learn how to configure single sign-on between Azure Active Directory and WhosOff.
++++++++ Last updated : 07/14/2023++++
+# Azure Active Directory SSO integration with WhosOff
+
+In this article, you'll learn how to integrate WhosOff with Azure Active Directory (Azure AD). WhosOff is an online leave management platform. Azure's WhosOff integration allows customers to sign in to their WhosOff account using Azure as a single sign-on provider. When you integrate WhosOff with Azure AD, you can:
+
+* Control in Azure AD who has access to WhosOff.
+* Enable your users to be automatically signed-in to WhosOff with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for WhosOff in a test environment. WhosOff supports both **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with WhosOff, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* WhosOff single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the WhosOff application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add WhosOff from the Azure AD gallery
+
+Add WhosOff from the Azure AD application gallery to configure single sign-on with WhosOff. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **WhosOff** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already pre-integrated with Azure.
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://app.whosoff.com/int/<Integration_ID>/sso/azure/`
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Sign on URL. Contact [WhosOff support team](mailto:support@whosoff.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up WhosOff** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure WhosOff SSO
+
+To configure single sign-on on the **WhosOff** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [WhosOff support team](mailto:support@whosoff.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create WhosOff test user
+
+In this section, you create a user called Britta Simon in WhosOff. Work with the [WhosOff support team](mailto:support@whosoff.com) to add the users in the WhosOff SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the WhosOff sign-on URL where you can initiate the login flow.
+
+* Go to the WhosOff sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the WhosOff application for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the WhosOff tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you're automatically signed in to the WhosOff application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure WhosOff, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md
az identity federated-credential show --name $ficId --identity-name $uaId --reso
Run the [az identity federated-credential delete](/cli/azure/identity/federated-credential#az-identity-federated-credential-delete) command to delete a federated identity credential under an existing user assigned identity.
-```azure cli
+```azurecli
az login # Set variables
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/language-support.md
Use this article to learn which natural languages are supported by the PII and c
| Language | Language code | Starting with model version | Notes | |:-|:-:|:-:|::| | English | `en` | 2022-05-15-preview | |
-| French | `fr` | XXXX-XX-XX-preview | |
-| German | `de` | XXXX-XX-XX-preview | |
-| Spanish | `es` | XXXX-XX-XX-preview | |
ai-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/managed-identity.md
In the following sections, you'll use the Azure CLI to assign roles, and obtain
## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>-- Access granted to the Azure OpenAI service in the desired Azure subscription
+- Access granted to the Azure OpenAI Service in the desired Azure subscription
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - Azure CLI - [Installation Guide](/cli/azure/install-azure-cli)
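The steps in this article use the Azure CLI. Purely as a hedged, illustrative parallel (the role name, placeholder IDs, and token audience are assumptions to verify against the CLI steps), an Az PowerShell version of assigning the role and fetching a token might look like this:

```powershell
# Hedged sketch (Az PowerShell parallel to the CLI flow): assign the role and obtain a token.
Connect-AzAccount

# Placeholder values
$principalId = "<managed-identity-object-id>"
$resourceId  = "<azure-openai-resource-id>"

New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Cognitive Services User" -Scope $resourceId

# Acquire a token for the Cognitive Services endpoint (used as the bearer token for Azure OpenAI).
$token = Get-AzAccessToken -ResourceUrl "https://cognitiveservices.azure.com"
$token.Token
```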
ai-services Speech Synthesis Markup Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-structure.md
Attribute values must be enclosed by double or single quotation marks. For examp
## Speak root element
-The `speak` element is the root element that's required for all SSML documents. The `speak` element contains information such as version, language, and the markup vocabulary definition.
+The `speak` element contains information such as version, language, and the markup vocabulary definition. The `speak` element is the root element that's required for all SSML documents. You must specify the default language within the `speak` element, whether or not the language is adjusted elsewhere, such as within the [`lang`](speech-synthesis-markup-voice.md#adjust-speaking-languages) element.
Here's the syntax for the `speak` element:
ai-services Speech Synthesis Markup Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md
This example uses a custom voice named "my-custom-voice". The custom voice speak
By default, all neural voices are fluent in their own language and English without using the `<lang xml:lang>` element. For example, if the input text in English is "I'm excited to try text to speech" and you use the `es-ES-ElviraNeural` voice, the text is spoken in English with a Spanish accent. With most neural voices, setting a specific speaking language with `<lang xml:lang>` element at the sentence or word level is currently not supported.
-You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neural voice at the sentence level and word level by using the `<lang xml:lang>` element. The `en-US-JennyMultilingualNeural` neural voice is multilingual in 14 languages (For example: English, Spanish, and Chinese). The supported languages are provided in a table following the `<lang>` syntax and attribute definitions.
+The `<lang xml:lang>` element is primarily intended for multilingual neural voices. You can adjust the speaking language for the multilingual neural voice at the sentence level and word level. The supported languages for multilingual voices are [provided in a table](#multilingual-voices-with-the-lang-element) following the `<lang>` syntax and attribute definitions.
Usage of the `lang` element's attributes are described in the following table.
Usage of the `lang` element's attributes are described in the following table.
> [!NOTE] > The `<lang xml:lang>` element is incompatible with the `prosody` and `break` elements. You can't adjust pause and prosody like pitch, contour, rate, or volume in this element.
+### Multilingual voices with the lang element
+ Use this table to determine which speaking languages are supported for each neural voice. If the voice doesn't speak the language of the input text, the Speech service won't output synthesized audio.
-| Voice | Primary and default locale | Secondary locales |
-| - | - | - |
-| `en-US-JennyMultilingualNeural` | `en-US` | `de-DE`, `en-AU`, `en-CA`, `en-GB`, `es-ES`, `es-MX`, `fr-CA`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `zh-CN` |
+| Voice | Supported locales |
+| - | - |
+| `en-US-JennyMultilingualNeural`<sup>1</sup> | `de-DE`, `en-AU`, `en-CA`, `en-GB`, `es-ES`, `es-MX`, `fr-CA`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `zh-CN` |
+| `en-US-JennyMultilingualV2Neural`<sup>2</sup> | `ar-EG`, `ar-SA`, `ca-ES`, `cs-CZ`, `da-DK`, `de-AT`, `de-CH`, `de-DE`, `en-AU`, `en-CA`, `en-GB`, `en-HK`, `en-IE`, `en-IN`, `en-US`, `es-ES`, `es-MX`, `fi-FI`, `fr-BE`, `fr-CA`, `fr-CH`, `fr-FR`, `hi-IN`, `hu-HU`, `id-ID`, `it-IT`, `ja-JP`, `ko-KR`, `nb-NO`, `nl-BE`, `nl-NL`, `pl-PL`, `pt-BR`, `pt-PT`, `ru-RU`, `sv-SE`, `th-TH`, `tr-TR`, `zh-CN`, `zh-HK`, `zh-TW`. |
+| `en-US-RyanMultilingualNeural` | `ar-EG`, `ar-SA`, `ca-ES`, `cs-CZ`, `da-DK`, `de-AT`, `de-CH`, `de-DE`, `en-AU`, `en-CA`, `en-GB`, `en-HK`, `en-IE`, `en-IN`, `en-US`, `es-ES`, `es-MX`, `fi-FI`, `fr-BE`, `fr-CA`, `fr-CH`, `fr-FR`, `hi-IN`, `hu-HU`, `id-ID`, `it-IT`, `ja-JP`, `ko-KR`, `nb-NO`, `nl-BE`, `nl-NL`, `pl-PL`, `pt-BR`, `pt-PT`, `ru-RU`, `sv-SE`, `th-TH`, `tr-TR`, `zh-CN`, `zh-HK`, `zh-TW`. |
+
+<sup>1</sup> To speak in a language other than English, the current implementation of the `en-US-JennyMultilingualNeural` voice requires that you set the `<lang xml:lang>` element. We anticipate that during Q4 calendar year 2023, the `en-US-JennyMultilingualNeural` voice will be updated to speak in the language of the input text without the `<lang xml:lang>` element, bringing it to parity with the `en-US-JennyMultilingualV2Neural` voice.
+
+<sup>2</sup> The `en-US-JennyMultilingualV2Neural` voice is provided temporarily in public preview solely for evaluation purposes. It will be removed in the future.
+
+> [!NOTE]
+> Multilingual voices don't fully support certain SSML elements, such as break, emphasis, silence, and sub.
### Lang examples The supported values for attributes of the `lang` element were [described previously](#adjust-speaking-languages).
-The primary language for `en-US-JennyMultilingualNeural` is `en-US`. You must specify `en-US` as the default language within the `speak` element, whether or not the language is adjusted elsewhere.
+You must specify `en-US` as the default language within the `speak` element, whether or not the language is adjusted elsewhere. In this example, the primary language for `en-US-JennyMultilingualNeural` is `en-US`.
-This SSML snippet shows how to use the `lang` element (and `xml:lang` attribute) to speak `de-DE` with the `en-US-JennyMultilingualNeural` neural voice.
+This SSML snippet shows how to use `<lang xml:lang>` to speak `de-DE` with the `en-US-JennyMultilingualNeural` neural voice.
```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Use the following instructions to migrate your Ubuntu nodes to Azure Linux nodes
> [!NOTE] > When adding a new Azure Linux node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool.
-2. [Cordon the existing Ubuntu nodes][cordon-and-drain.md].
+2. [Cordon the existing Ubuntu nodes][cordon-and-drain].
3. [Drain the existing Ubuntu nodes][drain-nodes]. 4. Remove the existing Ubuntu nodes using the `az aks nodepool delete` command.
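A hedged sketch of the full sequence described above follows; the resource group, cluster, node pool, and node names are placeholders, not values from the article.

```bash
# Sketch of the migration flow; all names are placeholders.
# Add an Azure Linux node pool (at least one pool must use --mode System).
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name azurelinuxpool \
  --os-sku AzureLinux \
  --mode System

# Cordon and drain each existing Ubuntu node.
kubectl cordon aks-ubuntupool-12345678-vmss000000
kubectl drain aks-ubuntupool-12345678-vmss000000 --ignore-daemonsets --delete-emptydir-data

# Remove the Ubuntu node pool once workloads have moved.
az aks nodepool delete \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ubuntupool
```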
az group delete --name myResourceGroup2 --yes --no-wait
[vm-sizes]: ../virtual-machines/sizes.md [use-system-pool]: use-system-pools.md [reduce-latency-ppg]: reduce-latency-ppg.md
-[[use-tags]: use-tags.md
+[use-tags]: use-tags.md
[use-labels]: use-labels.md [cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes [internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet
api-management Graphql Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md
API Management helps you import, manage, protect, test, publish, and monitor Gra
* GraphQL APIs are supported in all API Management service tiers * Pass-through and synthetic GraphQL APIs currently aren't supported in a self-hosted gateway
+* Synthetic GraphQL APIs currently aren't supported in API Management [workspaces](workspaces-overview.md)
* Support for GraphQL subscriptions in synthetic GraphQL APIs is currently in preview and isn't available in the Consumption tier ## What is GraphQL?
For more information about setting up a resolver, see [Configure a GraphQL resol
## Next steps - [Import a GraphQL API](graphql-api.md)-- [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md)
+- [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md)
api-management Import Api From Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-odata.md
In this article, you learn how to:
> * Secure the OData API > [!NOTE]
-> Importing an OData service as an API is in preview. Currently, testing OData APIs isn't supported in the test console of the Azure portal or in the API Management developer portal.
+> Importing an OData service as an API from its metadata description is in preview. Currently, testing OData APIs isn't supported in the test console of the Azure portal or in the API Management developer portal.
## Prerequisites
In this article, you learn how to:
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]
-## Import OData metadata
-
-1. In the left menu, select **APIs** > **+ Add API**.
-1. Under **Create from definition**, select **OData**.
-
- :::image type="content" source="media/import-api-from-odata/odata-api.png" alt-text="Screenshot of creating an API from an OData description in the portal." :::
-1. Enter API settings. You can update your settings later by going to the **Settings** tab of the API.
-
- 1. In **OData specification**, enter a URL for an OData metadata endpoint, typically the URL to the service root, appended with `/$metadata`. Alternatively, select a local OData XML file to import.
-
- 1. Enter remaining settings to configure your API. These settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
-1. Select **Create**.
-
- The API is added to the **APIs** list. The entity sets and functions that are exposed in the OData metadata description appear on the API's **Schema** tab.
-
- :::image type="content" source="media/import-api-from-odata/odata-schema.png" alt-text="Screenshot of schema of OData API in the portal." :::
-
-## Update the OData schema
-
-You can access an editor in the portal to view your API's OData schema. If the API changes, you can also update the schema in API Management from a file or an OData service endpoint.
-
-1. In the [portal](https://portal.azure.com), navigate to your API Management instance.
-1. In the left menu, select **APIs** > your OData API.
-1. On the **Schema** tab, select the edit (**\</>**) icon.
-1. Review the schema. If you want to update it, select **Update from file** or **Update schema from endpoint**.
-
- :::image type="content" source="media/import-api-from-odata/odata-schema-update.png" alt-text="Screenshot of schema editor for OData API in the portal." :::
-
-## Secure your OData API
-
-Secure your OData API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and an [OData validation policy](validate-odata-request-policy.md) to protect against attacks through OData API requests.
-
-> [!TIP]
-> In the portal, configure policies for your OData API on the **API policies** tab.
[!INCLUDE [api-management-append-apis.md](../../includes/api-management-append-apis.md)] [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Transform and protect a published API](transform-api.md)
api-management Sap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sap-api.md
Title: Import an SAP API using the Azure portal | Microsoft Docs
-description: Learn how to import OData metadata from SAP as an API to Azure API Management
+description: Learn how to import OData metadata from SAP as an API to Azure API Management, either directly or by converting the metadata to an OpenAPI specification.
Previously updated : 06/06/2023 Last updated : 07/21/2023 # Import SAP OData metadata as an API
-This article shows how to import an OData service using its metadata description. In this article, [SAP Gateway](https://help.sap.com/viewer/product/SAP_GATEWAY) serves as an example. However, you can apply the approach to any OData-compliant service.
+This article shows how to import an OData service using its metadata description. In this article, [SAP Gateway Foundation](https://help.sap.com/viewer/product/SAP_GATEWAY) serves as an example.
In this article, you'll: > [!div class="checklist"]
-> * Convert OData metadata to an OpenAPI specification
-> * Import the OpenAPI specification to API Management
+> * Retrieve OData metadata from your SAP service
+> * Import OData metadata to API Management, either directly or after conversion to an OpenAPI specification
> * Complete API configuration > * Test the API in the Azure portal > [!NOTE]
-> In preview, API Management can now directly import an OData API from its metadata description, without requiring conversion to an OpenAPI specification. [Learn more](import-api-from-odata.md).
+> Importing an OData API to API Management from its metadata description is in preview. [Learn more](import-api-from-odata.md).
## Prerequisites
In this article, you'll:
> [!NOTE] > For production scenarios, use proper certificates for end-to-end SSL verification.
+## Retrieve OData metadata from your SAP service
-## Convert OData metadata to OpenAPI JSON
+Retrieve metadata XML from your SAP service, using one of the following methods. If you plan to convert the metadata XML to an OpenAPI specification, save the file locally.
+
+* Use the SAP Gateway Client (transaction `/IWFND/GW_CLIENT`), or
+* Make a direct HTTP call to retrieve the XML:
+`http://<OData server URL>:<port>/<path>/$metadata`
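As a hedged example of the direct HTTP call, the following curl sketch retrieves the metadata XML and saves it locally for import or conversion. The host, port, service path, and credentials are placeholders; the `/sap/opu/odata/sap/...` path is only a typical SAP Gateway layout, not taken from the article.

```bash
# Retrieve the OData metadata XML and save it locally.
# Host, port, service path, and credentials are placeholders.
curl --user "<sap-user>:<sap-password>" \
  "https://<odata-server>:<port>/sap/opu/odata/sap/<service-name>/\$metadata" \
  -o metadata.xml
```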
+++
+## Import API to API Management
+
+Choose one of the following methods to import your API to API Management: import the metadata XML as an OData API directly, or convert the metadata XML to an OpenAPI specification.
-1. Retrieve metadata XML from your SAP service. Use one of these methods:
+#### [OData metadata](#tab/odata)
- * Use the SAP Gateway Client (transaction `/IWFND/GW_CLIENT`), or
- * Make a direct HTTP call to retrieve the XML:
- `http://<OData server URL>:<port>/<path>/$metadata`.
+
+#### [OpenAPI specification](#tab/openapi)
+
+## Convert OData metadata to OpenAPI JSON
1. Convert the OData XML to OpenAPI JSON format. Use an OASIS open-source tool for [OData v2](https://github.com/oasis-tcs/odata-openapi/tree/main/tools) or [OData v4](https://github.com/oasis-tcs/odata-openapi/tree/main/lib), depending on your metadata XML.
In this article, you'll:
1. Save the `openapi-spec.json` file locally for import to API Management. - ## Import and publish backend API 1. From the side navigation menu, under the **APIs** section, select **APIs**.
Also, configure authentication to your backend using an appropriate method for y
1. View the response. To troubleshoot, [trace](api-management-howto-api-inspector.md) the call. 1. When testing is complete, exit the test console. ++ ## Production considerations * See an [example end-to-end scenario](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/) to integrate API Management with an SAP gateway.
-* Control access to an SAP backend using API Management policies. See policy snippets for [SAP principal propagation](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml) and [fetching an X-CSRF token](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Get%20X-CSRF%20token%20from%20SAP%20gateway%20using%20send%20request.policy.xml).
+* Control access to an SAP backend using API Management policies. For example, if the API is imported as an OData API, use the [validate OData request](validate-odata-request-policy.md) policy. See also policy snippets for [SAP principal propagation](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml) and [fetching an X-CSRF token](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Get%20X-CSRF%20token%20from%20SAP%20gateway%20using%20send%20request.policy.xml).
* For guidance to deploy, manage, and migrate APIs at scale, see: * [Automated API deployments with APIOps](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops) * [CI/CD for API Management using Azure Resource Manager templates](devops-api-development-templates.md). [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
-## Next steps
-> [!div class="nextstepaction"]
-> [Transform and protect a published API](transform-api.md)
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
If your certificate authority gives you multiple certificates in the certificate
Now, export your merged TLS/SSL certificate with the private key that was used to generate your certificate request. If you generated your certificate request using OpenSSL, then you created a private key file. > [!NOTE]
-> OpenSSL v3 changed default cipher from 3DES to AES256, but this can be overridden on the command line -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -machalg SHA1.
+> OpenSSL v3 changed the default cipher from 3DES to AES256, but this can be overridden on the command line with `-keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg SHA1`.
> OpenSSL v1 uses 3DES as default, so the PFX files generated are supported without any special modifications. 1. To export your certificate to a PFX file, run the following command, but replace the placeholders _&lt;private-key-file>_ and _&lt;merged-certificate-file>_ with the paths to your private key and your merged certificate file.
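The export step typically uses `openssl pkcs12`; the following is a hedged sketch with placeholder file names, not the article's exact command. The extra options from the note above are only needed with OpenSSL v3 when a 3DES-encrypted PFX is required.

```bash
# Hedged sketch: export the merged certificate and private key to a PFX file.
# File names are placeholders.
openssl pkcs12 -export \
  -out myserver.pfx \
  -inkey <private-key-file> \
  -in <merged-certificate-file>

# With OpenSSL v3, append the legacy options from the note above if a 3DES PFX is needed:
#   -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg SHA1
```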
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
description: Learn how to migrate your App Service Environment to App Service En
Previously updated : 12/5/2022 Last updated : 7/24/2023 zone_pivot_groups: app-service-cli-portal
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021
## 4. Update dependent resources with new IPs
-Using the new IPs, update any of your resources or networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. Don't migrate until you've completed this step.
+Using the new IPs, update any of your resources or networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3, including the port change for the Azure Load Balancer, which now uses port 80. Don't migrate until you've completed this step.
## 5. Delegate your App Service Environment subnet
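As a hedged sketch of this step, the subnet can be delegated to `Microsoft.Web/hostingEnvironments` with the Azure CLI; the resource group, virtual network, and subnet names are placeholders.

```bash
# Hedged sketch: delegate the App Service Environment subnet.
# Resource group, virtual network, and subnet names are placeholders.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myAseSubnet \
  --delegations Microsoft.Web/hostingEnvironments
```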
Under **Get new IP addresses**, confirm you understand the implications and star
## 3. Update dependent resources with new IPs
-When the previous step finishes, you'll be shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. Don't move on to the next step until you confirm that you have made these updates.
+When the previous step finishes, you'll be shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3, including the port change for the Azure Load Balancer, which now uses port 80. Don't move on to the next step until you confirm that you have made these updates.
:::image type="content" source="./media/migration/ip-sample.png" alt-text="Screenshot that shows sample IPs generated during pre-migration.":::
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
When completed, you'll be given the new IPs that your future App Service Environ
### Update dependent resources with new IPs
-Once the new IPs are created, you have the new default outbound to the internet public addresses. In preparation for the migration, you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs. For ELB App Service Environment, you also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
+Once the new IPs are created, you have the new default outbound to the internet public addresses. In preparation for the migration, you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs. For ELB App Service Environment, you also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3, including the port change for the Azure Load Balancer, which now uses port 80.
### Delegate your App Service Environment subnet
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 06/19/2023 Last updated : 07/24/2023 # Migrate to App Service Environment v3
If your App Service Environment [isn't supported for migration](migrate.md#migra
Scenario: An existing app running on an App Service Environment v1 or App Service Environment v2 and you need that app to run on an App Service Environment v3.
-For any migration method that doesn't use the [migration feature](migrate.md), you need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that involve new (and for internet-facing environments, additional) IP addresses. You need to update any infrastructure that relies on these IPs.
+For any migration method that doesn't use the [migration feature](migrate.md), you need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that involve new (and for internet-facing environments, additional) IP addresses. You need to update any infrastructure that relies on these IPs as well as account for inbound dependency changes such as the Azure Load Balancer port.
Multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There is application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
Title: 'App Service Environment version comparison' description: This article provides an overview of the App Service Environment versions and feature differences between them. Previously updated : 3/30/2023 Last updated : 7/24/2023
App Service Environment has three versions. App Service Environment v3 is the la
|Feature |[App Service Environment v1](app-service-app-service-environment-intro.md) |[App Service Environment v2](intro.md) |[App Service Environment v3](overview.md) | |||||
-|Networking dependencies |Must [manage all inbound and outbound traffic](app-service-app-service-environment-network-architecture-overview.md). Network security groups must allow management traffic. |Must [manage all inbound and outbound traffic](network-info.md). Network security groups must allow management traffic. |No [networking dependencies](networking.md) on the customer's virtual network |
+|Networking dependencies |Must [manage all inbound and outbound traffic](app-service-app-service-environment-network-architecture-overview.md). Network security groups must allow management traffic. |Must [manage all inbound and outbound traffic](network-info.md). Network security groups must allow management traffic. Ensure that [Azure Load Balancer is able to connect to the subnet on port 16001](network-info.md#inbound-dependencies). |No [networking dependencies](networking.md) on the customer's virtual network. Ensure that [Azure Load Balancer is able to connect to the subnet on port 80](networking.md#ports-and-network-restrictions). |
|Private endpoint support |No |No |Yes, [must be explicitly enabled](networking.md#private-endpoint) | |Reach apps in an internal-VIP App Service Environment across global peering |No |No |Yes | |SMTP traffic |Yes |Yes |Yes |
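Where the comparison table calls out allowing the Azure Load Balancer to reach the App Service Environment v3 subnet on port 80, a hedged network security group rule sketch looks like the following; all names and the priority are placeholders.

```bash
# Hedged sketch: allow the AzureLoadBalancer service tag to reach the
# App Service Environment v3 subnet on port 80. Names and priority are placeholders.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myAseNsg \
  --name AllowAzureLoadBalancerInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureLoadBalancer \
  --destination-port-ranges 80
```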
application-gateway How To Ssl Offloading Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-ingress-api.md
This document helps set up an example application that uses the _Ingress_ resour
# [ALB managed deployment](#tab/alb-managed) 1. Create an Ingress
- ```bash
+```bash
kubectl apply -f - <<EOF apiVersion: networking.k8s.io/v1 kind: Ingress
spec:
name: echo port: number: 80
- EOF
- ```
+EOF
+```
# [Bring your own (BYO) deployment](#tab/byo)
automation Automation Solution Vm Management Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-config.md
In an environment that includes two or more components on multiple VMs supportin
1. Preview the action and make any necessary changes before implementing against production VMs. When ready, manually execute the **monitoring-and-diagnostics/monitoring-action-groupsrunbook** with the parameter set to **False**. Alternatively, let the Automation schedules **Sequenced-StartVM** and **Sequenced-StopVM** run automatically following your prescribed schedule.
-## <a name="cpuutil"></a>Scenario 3: Start or stop automatically based on CPU utilization
+## <a name="cpuutil"></a>Scenario 3: Stop automatically based on CPU utilization
Start/Stop VMs during off-hours can help manage the cost of running Azure Resource Manager and classic VMs in your subscription by evaluating machines that aren't used during non-peak periods, such as after hours, and automatically shutting them down if processor utilization is less than a specified percentage.
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md
Last updated 05/05/2021+
App Configuration uses the [AACHttpRequest Table](/azure/azure-monitor/refere
## See Also * See [Monitoring Azure App Configuration](monitor-app-configuration.md) for a description of monitoring Azure App Configuration.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md
+ Last updated 05/05/2021
The following table lists common and recommended alert rules for App C
* See [Monitoring App Configuration data reference](./monitor-app-configuration-reference.md) for a reference of the metrics, logs, and other important values created by App Configuration.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
To prepare for this new offer, you need to plan and prepare to onboard your mach
We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and can enroll through the Azure portal or by using Azure Policy one month before Windows Server 2012 end of support. Billing for this service starts from October 2023, after Windows Server 2012 end of support.
+> [!NOTE]
+> In order to purchase ESUs, you must have Software Assurance through Volume Licensing Programs such as an Enterprise Agreement (EA), Enterprise Agreement Subscription (EAS), Enrollment for Education Solutions (EES), or Server and Cloud Enrollment (SCE).
+>
## Next steps
+* Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy).
+ * Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management). * Learn more about [Arc-enabled servers](overview.md) and how they work with Azure through the Azure Connected Machine agent. * Explore options for [onboarding your machines](plan-at-scale-deployment.md) to Azure Arc-enabled servers.
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
Use the following table to identify and resolve issues when configuring the Azur
| AZCM0019 | The path to the configuration file is incorrect | Ensure the path to the configuration file is correct and try again. | | AZCM0023 | The value provided for a parameter (argument) is invalid | Review the error message for more specific information. Refer to the syntax of the command (`azcmagent <command> --help`) for valid values or expected format for the arguments. | | AZCM0026 | There is an error in network configuration or some critical services are temporarily unavailable | Check if the required endpoints are reachable (for example, hostnames are resolvable, endpoints aren't blocked). If the network is configured for Private Link Scope, a Private Link Scope resource ID must be provided for onboarding using the `--private-link-scope` parameter. |
-| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created<sup>[1](#footnote3)</sup>.<br> For service principal logins, check the client ID and secret for correctness, the expiration date of the secret<sup>[2](#footnote4)</sup>, and that the service principal is from the same tenant where the server resource will be created<sup>[1](#footnote3)</sup>.<br> <a name="footnote3"></a><sup>1</sup>See [How to find your Azure Active Directory tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).<br> <a name="footnote4"></a><sup>2</sup>In Azure portal, open Azure Active Directory and select the App registration blade. Select the application to be used and the Certificates and secrets within it. Check whether the expiration data has passed. If it has, create new credentials with sufficient roles and try again. See [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). |
+| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created<sup>[1](#footnote3)</sup>.<br> For service principal logins, check the client ID and secret for correctness, the expiration date of the secret<sup>[2](#footnote4)</sup>, and that the service principal is from the same tenant where the server resource will be created<sup>[1](#footnote3)</sup>.<br> <a name="footnote3"></a><sup>1</sup>See [How to find your Azure Active Directory tenant ID](/azure/active-directory-b2c/tenant-management-read-tenant-name).<br> <a name="footnote4"></a><sup>2</sup>In Azure portal, open Azure Active Directory and select the App registration blade. Select the application to be used and the Certificates and secrets within it. Check whether the expiration data has passed. If it has, create new credentials with sufficient roles and try again. See [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). |
| AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Review the error message in the output to identify the cause of the failure to create resource and the suggested remediation. For permission issues, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions) for more information. | | AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has permissions to delete Azure Arc-enabled server/resources in the specified group ΓÇö see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions).<br> If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. | | AZCM0044 | A resource with the same name already exists | Specify a different name for the `--resource-name` parameter or delete the existing Azure Arc-enabled server in Azure and try again. |
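Because AZCM0041 and related errors often come down to how the agent was connected, here's a hedged sketch of a service principal onboarding call; every ID, name, and the location are placeholders, not values from the article.

```bash
# Hedged sketch: connect a machine to Azure Arc with a service principal.
# All IDs, names, and the location are placeholders.
azcmagent connect \
  --service-principal-id "<app-id>" \
  --service-principal-secret "<client-secret>" \
  --tenant-id "<tenant-id>" \
  --subscription-id "<subscription-id>" \
  --resource-group "myResourceGroup" \
  --location "eastus"
```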
If you don't see your problem here or you can't resolve your issue, try one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
+* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
azure-maps Schema Stateset Stylesobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/schema-stateset-stylesobject.md
# StylesObject Schema reference guide for dynamic Maps
- The `StylesObject` is a `StyleObject` array representing stateset styles. Use the Azure Maps Creator [Feature State service](/rest/api/maps/v2/feature-state) to apply your stateset styles to indoor map data features. Once you've created your stateset styles and associated them with indoor map features, you can use them to create dynamic indoor maps. For more information on creating dynamic indoor maps, see [Implement dynamic styling for Creator indoor maps](indoor-map-dynamic-styling.md).
+ The `StylesObject` is a `StyleObject` array representing stateset styles. Use the Azure Maps Creator [Feature State service] to apply your stateset styles to indoor map data features. Once you've created your stateset styles and associated them with indoor map features, you can use them to create dynamic indoor maps. For more information on creating dynamic indoor maps, see [Implement dynamic styling for Creator indoor maps].
## StyleObject A `StyleObject` is one of the following style rules:
-* [`BooleanTypeStyleRule`](#booleantypestylerule)
-* [`NumericTypeStyleRule`](#numerictypestylerule)
-* [`StringTypeStyleRule`](#stringtypestylerule)
+* [`BooleanTypeStyleRule`]
+* [`NumericTypeStyleRule`]
+* [`StringTypeStyleRule`]
-The JSON below shows example usage of each of the three style types. The `BooleanTypeStyleRule` is used to determine the dynamic style for features whose `occupied` property is true and false. The `NumericTypeStyleRule` is used to determine the style for features whose `temperature` property falls within a certain range. Finally, the `StringTypeStyleRule` is used to match specific styles to `meetingType`.
+The following JSON shows example usage of each of the three style types. The `BooleanTypeStyleRule` is used to determine the dynamic style for features whose `occupied` property is true and false. The `NumericTypeStyleRule` is used to determine the style for features whose `temperature` property falls within a certain range. Finally, the `StringTypeStyleRule` is used to match specific styles to `meetingType`.
```json "styles": [
The JSON below shows example usage of each of the three style types. The `Boole
## NumericTypeStyleRule
- A `NumericTypeStyleRule` is a [`StyleObject`](#styleobject) and consists of the following properties:
+ A `NumericTypeStyleRule` is a [`StyleObject`] and consists of the following properties:
| Property | Type | Description | Required | |--|-|-|-| | `keyName` | string | The *state* or dynamic property name. A `keyName` should be unique inside the `StyleObject` array.| Yes |
-| `type` | string | Value is "numeric". | Yes |
-| `rules` | [`NumberRuleObject`](#numberruleobject)[]| An array of numeric style ranges with associated colors. Each range defines a color that's to be used when the *state* value satisfies the range.| Yes |
+| `type` | string | Value is `numeric`. | Yes |
+| `rules` | [`NumberRuleObject`][]| An array of numeric style ranges with associated colors. Each range defines a color that's to be used when the *state* value satisfies the range.| Yes |
### NumberRuleObject
-A `NumberRuleObject` consists of a [`RangeObject`](#rangeobject) and a `color` property. If the *state* value falls into the range, its color for display will be the color specified in the `color` property.
+A `NumberRuleObject` consists of a [`RangeObject`](#rangeobject) and a `color` property. If the *state* value falls into the range, its color for display is the color specified in the `color` property.
If you define multiple overlapping ranges, the color chosen will be the color that's defined in the first range that is satisfied.
-In the following JSON sample, both ranges will hold true when the *state* value is between 50-60. However, the color that will be used is `#343deb` because it's the first range in the list that has been satisfied.
+In the following JSON sample, both ranges hold true when the *state* value is between 50-60. However, the color that is used is `#343deb` because it's the first range in the list that has been satisfied.
```json
In the following JSON sample, both ranges will hold true when the *state* value
| Property | Type | Description | Required | |--|-|-|-|
-| `range` | [RangeObject](#rangeobject) | The [RangeObject](#rangeobject) defines a set of logical range conditions, which, if `true`, change the display color of the *state* to the color specified in the `color` property. If `range` is unspecified, then the color defined in the `color` property will always be used. | No |
+| `range` | [RangeObject] | The [RangeObject] defines a set of logical range conditions, which, if `true`, change the display color of the *state* to the color specified in the `color` property. If `range` is unspecified, then the color defined in the `color` property is always used. | No |
| `color` | string | The color to use when state value falls into the range. The `color` property is a JSON string in any one of following formats: <ul><li> HTML-style hex values </li><li> RGB ("#ff0", "#ffff00", "rgb(255, 255, 0)")</li><li> RGBA ("rgba(255, 255, 0, 1)")</li><li> HSL("hsl(100, 50%, 50%)")</li><li> HSLA("hsla(100, 50%, 50%, 1)")</li><li> Predefined HTML colors names, like yellow, and blue.</li></ul> | Yes | ### RangeObject
-The `RangeObject` defines a numeric range value of a [`NumberRuleObject`](#numberruleobject). For the *state* value to fall into the range, all defined conditions must hold true.
+The `RangeObject` defines a numeric range value of a [`NumberRuleObject`]. For the *state* value to fall into the range, all defined conditions must hold true.
| Property | Type | Description | Required | |--|-|-|-|
The `RangeObject` defines a numeric range value of a [`NumberRuleObject`](#numbe
### Example of NumericTypeStyleRule
-The following JSON illustrates a `NumericTypeStyleRule` *state* named `temperature`. In this example, the [`NumberRuleObject`](#numberruleobject) contains two defined temperature ranges and their associated color styles. If the temperature range is 50-69, the display should use the color `#343deb`. If the temperature range is 31-70, the display should use the color `#eba834`.
+The following JSON illustrates a `NumericTypeStyleRule` *state* named `temperature`. In this example, the [`NumberRuleObject`] contains two defined temperature ranges and their associated color styles. If the temperature range is 50-69, the display should use the color `#343deb`. If the temperature range is 31-70, the display should use the color `#eba834`.
```json {
The following JSON illustrates a `NumericTypeStyleRule` *state* named `temperatu
## StringTypeStyleRule
-A `StringTypeStyleRule` is a [`StyleObject`](#styleobject) and consists of the following properties:
+A `StringTypeStyleRule` is a [`StyleObject`] and consists of the following properties:
| Property | Type | Description | Required | |--|-|-|-| | `keyName` | string | The *state* or dynamic property name. A `keyName` should be unique inside the `StyleObject` array.| Yes |
-| `type` | string |Value is "string". | Yes |
-| `rules` | [`StringRuleObject`](#stringruleobject)[]| An array of N number of *state* values.| Yes |
+| `type` | string |Value is `string`. | Yes |
+| `rules` | [`StringRuleObject`][]| An array of N number of *state* values.| Yes |
### StringRuleObject
A `StringRuleObject` consists of up to N number of state values that are the pos
The string value matching is case-sensitive.
-| Property | Type | Description | Required |
-|--|-|-|-|
-| `stateValue1` | string | The color when value string is stateValue1. | No |
+| Property | Type | Description | Required |
+||--|--|-|
+| `stateValue1` | string | The color when value string is stateValue1.| No |
| `stateValue2` | string | The color when value string is stateValue2. | No |
-| `stateValueN` | string | The color when value string is stateValueN. | No |
+| `stateValueN` | string | The color when value string is stateValueN.| No |
### Example of StringTypeStyleRule
The following JSON illustrates a `StringTypeStyleRule` that defines styles assoc
## BooleanTypeStyleRule
-A `BooleanTypeStyleRule` is a [`StyleObject`](#styleobject) and consists of the following properties:
+A `BooleanTypeStyleRule` is a [`StyleObject`] and consists of the following properties:
| Property | Type | Description | Required | |--|-|-|-| | `keyName` | string | The *state* or dynamic property name. A `keyName` should be unique inside the `StyleObject` array.| Yes |
-| `type` | string |Value is "boolean". | Yes |
-| `rules` | [`BooleanRuleObject`](#booleanruleobject)[1]| A boolean pair with colors for `true` and `false` *state* values.| Yes |
+| `type` | string |Value is `boolean`. | Yes |
+| `rules` | [`BooleanRuleObject`]| A boolean pair with colors for `true` and `false` *state* values.| Yes |
### BooleanRuleObject
A `BooleanRuleObject` defines colors for `true` and `false` values.
### Example of BooleanTypeStyleRule
-The following JSON illustrates a `BooleanTypeStyleRule` *state* named `occupied`. The [`BooleanRuleObject`](#booleanruleobject) defines colors for `true` and `false` values.
+The following JSON illustrates a `BooleanTypeStyleRule` *state* named `occupied`. The [`BooleanRuleObject`] defines colors for `true` and `false` values.
```json {
The following JSON illustrates a `BooleanTypeStyleRule` *state* named `occupied`
Learn more about Creator for indoor maps by reading: > [!div class="nextstepaction"]
-> [Creator for indoor maps](creator-indoor-maps.md)
+> [Creator for indoor maps]
+
+[`BooleanRuleObject`]: #booleanruleobject
+[`BooleanTypeStyleRule`]: #booleantypestylerule
+[`NumberRuleObject`]: #numberruleobject
+[`NumericTypeStyleRule`]: #numerictypestylerule
+[`StringRuleObject`]: #stringruleobject
+[`StringTypeStyleRule`]: #stringtypestylerule
+[`StyleObject`]: #styleobject
+[Creator for indoor maps]: creator-indoor-maps.md
+[Feature State service]: /rest/api/maps/v2/feature-state
+[Implement dynamic styling for Creator indoor maps]: indoor-map-dynamic-styling.md
+[RangeObject]: #rangeobject
azure-maps Set Android Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-android-map-styles.md
Title: Set a map style in Android maps | Microsoft Azure Maps
+ Title: Set a map style in Android maps
+ description: Learn two ways of setting the style of a map. See how to use the Azure Maps Android SDK in either the layout file or the activity class to adjust the style.
zone_pivot_groups: azure-maps-android
# Set map style (Android SDK)
-This article shows you two ways to set map styles using the Azure Maps Android SDK. Azure Maps has six different maps styles to choose from. For more information about supported map styles, see [supported map styles in Azure Maps](supported-map-styles.md).
+This article shows you two ways to set map styles using the Azure Maps Android SDK. Azure Maps has six different maps styles to choose from. For more information about supported map styles, see [supported map styles in Azure Maps].
## Prerequisites
-Be sure to complete the steps in the [Quickstart: Create an Android app](quick-android-map.md) document.
+Be sure to complete the steps in the Quickstart: [Create an Android app].
>[!IMPORTANT]
->The procedure in this section requires an Azure Maps account in Gen 1 or Gen 2 pricing tier. For more information on pricing tiers, see [Choose the right pricing tier in Azure Maps](choose-pricing-tier.md).
-
+>The procedure in this section requires an Azure Maps account in Gen 1 or Gen 2 pricing tier. For more information on pricing tiers, see [Choose the right pricing tier in Azure Maps].
## Set map style in the layout
map.setCamera(
::: zone-end
-The aspect ratio of a bounding box may not be the same as the aspect ratio of the map, as such the map will often show the full bounding box area, but will often only be tight vertically or horizontally.
+The aspect ratio of a bounding box may not be the same as the aspect ratio of the map. As such, the map often shows the full bounding box area but is only tight vertically or horizontally.
### Animate map view
When setting the camera options of the map, animation options can also be used t
| Option | Description | |--|-| | `animationDuration(Integer durationMs)` | Specifies how long the camera animates between the views in milliseconds (ms). |
-| `animationType(AnimationType animationType)` | Specifies the type of animation transition to perform.<br/><br/> - `JUMP` - an immediate change.<br/> - `EASE` - gradual change of the camera's settings.<br/> - `FLY` - gradual change of the camera's settings following an arc resembling flight. |
+| `animationType(AnimationType animationType)` | Specifies the type of animation transition to perform.<br><br> - `JUMP` - an immediate change.<br> - `EASE` - gradual change of the camera's settings.<br> - `FLY` - gradual change of the camera's settings that creates an arc resembling flight. |
-The following code shows how to animate the map view using a `FLY` animation over a duration of three seconds.
+This code shows how to animate the map view using a `FLY` animation over a duration of three seconds:
::: zone pivot="programming-language-java-android"
map.setCamera(
::: zone-end
-The following demonstrates the above code animating the map view from New York to Seattle.
+The above code demonstrates animating the map view from New York to Seattle:
![Map animating the camera from New York to Seattle](media/set-android-map-styles/android-animate-camera.gif)
The following demonstrates the above code animating the map view from New York t
See the following articles for more code samples to add to your maps: > [!div class="nextstepaction"]
-> [Add a symbol layer](how-to-add-symbol-to-android-map.md)
+> [Add a symbol layer]
> [!div class="nextstepaction"]
-> [Add a bubble layer](map-add-bubble-layer-android.md)
+> [Add a bubble layer]
+
+[Add a bubble layer]: map-add-bubble-layer-android.md
+[Add a symbol layer]: how-to-add-symbol-to-android-map.md
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+[Create an Android app]: quick-android-map.md
+[supported map styles in Azure Maps]: supported-map-styles.md
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md
Title: Drawing tools module
-description: In this article, you'll learn how to set drawing options data using the Microsoft Azure Maps Web SDK
+description: This article describes how to set drawing options data using the Microsoft Azure Maps Web SDK
Last updated 06/15/2023
# Use the drawing tools module
-The Azure Maps Web SDK provides a [drawing tools module]. This module makes it easy to draw and edit shapes on the map using an input device such as a mouse or touch screen. The core class of this module is the [drawing manager](/javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager#setoptions-drawingmanageroptions-). The drawing manager provides all the capabilities needed to draw and edit shapes on the map. It can be used directly, and it's integrated with a custom toolbar UI. You can also use the built-in [drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar) class.
+The Azure Maps Web SDK provides a [drawing tools module]. This module makes it easy to draw and edit shapes on the map using an input device such as a mouse or touch screen. The core class of this module is the [drawing manager]. The drawing manager provides all the capabilities needed to draw and edit shapes on the map. It can be used directly, and it's integrated with a custom toolbar UI. You can also use the built-in [drawing toolbar] class.
## Loading the drawing tools module in a webpage
-1. Create a new HTML file and [implement the map as usual](./how-to-use-map-control.md).
+1. Create a new HTML file and [implement the map as usual].
2. Load the Azure Maps drawing tools module. You can load it in one of two ways:
- - Use the globally hosted, Azure Content Delivery Network version of the Azure Maps services module. Add reference to the JavaScript and CSS Style Sheet in the `<head>` element of the file:
+ - Use the globally hosted, Azure Content Delivery Network version of the Azure Maps services module. Add reference to the JavaScript and CSS in the `<head>` element of the file:
```html <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/drawing/1/atlas-drawing.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/drawing/1/atlas-drawing.min.js"></script> ```
- - Or, you can load the drawing tools module for the Azure Maps Web SDK source code locally by using the [azure-maps-drawing-tools](https://www.npmjs.com/package/azure-maps-drawing-tools) npm package, and then host it with your app. This package also includes TypeScript definitions. Use this command:
+ - Or, you can load the drawing tools module for the Azure Maps Web SDK source code locally by using the [azure-maps-drawing-tools] npm package, and then host it with your app. This package also includes TypeScript definitions. Use this command:
`npm install azure-maps-drawing-tools`
The Azure Maps Web SDK provides a [drawing tools module]. This module makes it e
import * as drawing from "azure-maps-drawing-tools"; ```
- You would also need to embed the CSS Style Sheet for various controls to display correctly. If you're using a JavaScript bundler to bundle the dependencies and package your code, refer to your bundler's documentation on how it's done. For [Webpack], it's commonly done via a combination of `style-loader` and `css-loader` with documentation available at [style-loader].
+ You would also need to embed the CSS for various controls to display correctly. If you're using a JavaScript bundler to bundle the dependencies and package your code, refer to your bundler's documentation on how it's done. For [Webpack], it's commonly done via a combination of `style-loader` and `css-loader` with documentation available at [style-loader].
To begin, install style-loader and css-loader:
The Azure Maps Web SDK provides a [drawing tools module]. This module makes it e
## Use the drawing manager directly
-Once the drawing tools module is loaded in your application, you can enable drawing and editing capabilities using the [drawing manager](/javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager#setoptions-drawingmanageroptions-). You can specify options for the drawing manager while instantiating it or alternatively use the `drawingManager.setOptions()` function.
+Once the drawing tools module is loaded in your application, you can enable drawing and editing capabilities using the [drawing manager]. You can specify options for the drawing manager while instantiating it or alternatively use the `drawingManager.setOptions()` function.
### Set the drawing mode
The following image is an example of drawing mode of the `DrawingManager`. Selec
The drawing manager supports three different ways of interacting with the map to draw shapes. - `click` - Coordinates are added when the mouse or touch is clicked.-- `freehand` - Coordinates are added when the mouse or touch is dragged on the map.
+- `freehand` - Coordinates are added when the mouse or touch is dragged on the map.
- `hybrid` - Coordinates are added when the mouse or touch is clicked or dragged. The following code enables the polygon drawing mode and sets the type of drawing interaction that the drawing manager should adhere to `freehand`.
The following table lists the type of editing supported by different types of sh
## Next steps
-Learn how to use additional features of the drawing tools module:
+Learn how to use more features of the drawing tools module:
> [!div class="nextstepaction"]
-> [Add a drawing toolbar](map-add-drawing-toolbar.md)
+> [Add a drawing toolbar]
> [!div class="nextstepaction"]
-> [Get shape data](map-get-shape-data.md)
+> [Get shape data]
> [!div class="nextstepaction"]
-> [React to drawing events](drawing-tools-events.md)
+> [React to drawing events]
> [!div class="nextstepaction"]
-> [Interaction types and keyboard shortcuts](drawing-tools-interactions-keyboard-shortcuts.md)
+> [Interaction types and keyboard shortcuts]
Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [Map](/javascript/api/azure-maps-control/atlas.map)
+> [Map]
> [!div class="nextstepaction"]
-> [Drawing manager](/javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager)
+> [Drawing manager]
> [!div class="nextstepaction"]
-> [Drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar)
+> [drawing toolbar]
-[Drawing manager options]: https://samples.azuremaps.com/drawing-tools-module/drawing-manager-options
-[Webpack]: https://webpack.js.org/
-[style-loader]: https://webpack.js.org/loaders/style-loader/
+[Add a drawing toolbar]: map-add-drawing-toolbar.md
+[azure-maps-drawing-tools]: https://www.npmjs.com/package/azure-maps-drawing-tools
[Drawing manager options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Drawing%20manager%20options/Drawing%20manager%20options.html
+[Drawing manager options]: https://samples.azuremaps.com/drawing-tools-module/drawing-manager-options
+[drawing manager]: /javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager
+[drawing toolbar]: /javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar
[drawing tools module]: https://www.npmjs.com/package/azure-maps-drawing-tools
+[Get shape data]: map-get-shape-data.md
[How to use the Azure Maps map control npm package]: how-to-use-npm-package.md
+[implement the map as usual]: how-to-use-map-control.md
+[Interaction types and keyboard shortcuts]: drawing-tools-interactions-keyboard-shortcuts.md
+[Map]: /javascript/api/azure-maps-control/atlas.map
+[React to drawing events]: drawing-tools-events.md
+[style-loader]: https://webpack.js.org/loaders/style-loader/
+[Webpack]: https://webpack.js.org/
azure-maps Show Traffic Data Map Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/show-traffic-data-map-ios-sdk.md
Title: Show traffic data on iOS maps
-description: In this article you'll learn, how to display traffic data on a map using the Microsoft Azure Maps iOS SDK.
+description: This article describes how to display traffic data on a map using the Microsoft Azure Maps iOS SDK.
Previously updated : 11/18/2021 Last updated : 07/21/2023
Flow data and incidents data are the two types of traffic data that can be displ
## Prerequisites
-Be sure to complete the steps in the [Quickstart: Create an iOS app](quick-ios-app.md) document. Code blocks in this article can be inserted into the `viewDidLoad` function of `ViewController`.
+Complete the [Create an iOS app] quickstart. Code blocks in this article can be inserted into the `viewDidLoad` function of `ViewController`.
## Show traffic on the map
There are two types of traffic data available in Azure Maps:
- Incident data - consists of point and line-based data for things such as construction, road closures, and accidents. - Flow data - provides metrics on the flow of traffic on the roads. Often, traffic flow data is used to color the roads. The colors are based on how much traffic is slowing down the flow, relative to the speed limit, or another metric. There are four values that can be passed into the traffic `flow` option of the map.
- |Flow enum | Description|
- | :-- | :-- |
+ |Flow enum | Description |
+ | :-- | :-- |
| `TrafficFlow.none` | Doesn't display traffic data on the map | | `TrafficFlow.relative` | Shows traffic data that's relative to the free-flow speed of the road | | `TrafficFlow.relativeDelay` | Displays areas that are slower than the average expected delay |
The following screenshot shows the above code rendering real-time traffic inform
## Get traffic incident details
-Details about a traffic incident are available within the properties of the feature used to display the incident on the map. Traffic incidents are added to the map using the Azure Maps traffic incident vector tile service. The format of the data in these vector tiles can be found in the [Vector Incident Tiles](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-incidents/vector-incident-tiles) article on the TomTom site. The following code adds a delegate to the map which handles a click event, retrieves the traffic incident feature that was clicked and displays an alert with some of the details.
+Details about a traffic incident are available within the properties of the feature used to display the incident on the map. Traffic incidents are added to the map using the Azure Maps traffic incident vector tile service. The format of the data in these vector tiles can be found in the [Vector Incident Tiles] article on the TomTom site. The following code adds a delegate to the map. This delegate handles a click event, retrieves the traffic incident feature that was selected and displays an alert with some of the details.
```swift // Show traffic information on the map.
On a typical day in most major cities, there can be an overwhelming number of tr
The following table shows all the traffic incident categories that can be used within the `incidentCategoryFilter` option.
-| Category enum | Description |
-|--|-|
-| `IncidentCategory.unknown` | An incident that either doesn't fit any of the defined categories or hasn't yet been classified. |
+| Category enum | Description |
+|--|-|
+| `IncidentCategory.unknown` | An incident that either doesn't fit any of the defined categories or hasn't yet been classified. |
| `IncidentCategory.accident` | Traffic accident. |
-| `IncidentCategory.fog` | Fog that reduces visibility, likely reducing traffic flow, and possibly increasing the risk of an accident. |
+| `IncidentCategory.fog` | Fog that reduces visibility, likely reducing traffic flow, and possibly increasing the risk of an accident. |
| `IncidentCategory.dangerousConditions` | Dangerous situation on the road, such as an object on the road. | | `IncidentCategory.rain` | Heavy rain that may be reducing visibility, making driving conditions difficult, and possibly increasing the risk of an accident. |
-| `IncidentCategory.ice` | Icy road conditions that may make driving difficult or dangerous. |
-| `IncidentCategory.jam` | Traffic jam resulting in slower moving traffic. |
+| `IncidentCategory.ice` | Icy road conditions that may make driving difficult or dangerous. |
+| `IncidentCategory.jam` | Traffic jam resulting in slower moving traffic. |
| `IncidentCategory.laneClosed` | A road lane is closed. | | `IncidentCategory.roadClosed` | A road is closed. | | `IncidentCategory.roadWorks` | Road works/construction in this area. |
The following table shows all the traffic incident categories that can be used w
The following table shows all the traffic incident magnitudes that can be used within the `incidentMagnitudeFilter` option.
-| Magnitude enum | Description |
+| Magnitude enum | Description |
|--|-| | `IncidentMagnitude.unknown` | An incident whose magnitude hasn't yet been classified. | | `IncidentMagnitude.minor` | A minor traffic issue that is often just for information and has minimal impact to traffic flow. |
The following screenshot shows a map of moderate traffic jams and incidents with
## Additional information
-View the following guides to learn how to add more data to your map:
-
-* [Add a tile layer](add-tile-layer-map-ios.md)
-* [Add a symbol layer](add-symbol-layer-ios.md)
-* [Add a bubble layer](add-bubble-layer-map-ios.md)
-* [Add a line layer](add-line-layer-map-ios.md)
-* [Add a polygon layer](add-polygon-layer-map-ios.md)
+The following articles describe different ways to add data to your map:
+
+- [Add a tile layer]
+- [Add a symbol layer]
+- [Add a bubble layer]
+- [Add a line layer]
+- [Add a polygon layer]
+
+[Add a bubble layer]: add-bubble-layer-map-ios.md
+[Add a line layer]: add-line-layer-map-ios.md
+[Add a polygon layer]: add-polygon-layer-map-ios.md
+[Add a symbol layer]: add-symbol-layer-ios.md
+[Add a tile layer]: add-tile-layer-map-ios.md
+[Create an iOS app]: quick-ios-app.md
+[Vector Incident Tiles]: https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-incidents/vector-incident-tiles
azure-maps Spatial Io Add Simple Data Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md
The spatial IO module provides a `SimpleDataLayer` class. This class makes it easy to render styled features on the map. It can even render data sets that have style properties and data sets that contain mixed geometry types. The simple data layer achieves this functionality by wrapping multiple rendering layers and using style expressions. The style expressions search for common style properties of the features inside these wrapped layers. The `atlas.io.read` function and the `atlas.io.write` function use these properties to read and write styles into a supported file format. After adding the properties to a supported file format, the file can be used for various purposes. For example, the file can be used to display the styled features on the map.
-In addition to styling features, the `SimpleDataLayer` provides a built-in popup feature with a popup template. The popup displays when a feature is clicked. The default popup feature can be disabled, if desired. This layer also supports clustered data. When a cluster is clicked, the map will zoom into the cluster and expand it into individual points and subclusters.
+In addition to styling features, the `SimpleDataLayer` provides a built-in popup feature with a popup template. The popup displays when a feature is clicked. The default popup feature can be disabled, if desired. This layer also supports clustered data. When a cluster is clicked, the map zooms into the cluster and expands it into individual points and subclusters.
The `SimpleDataLayer` class is intended to be used on large data sets with many geometry types and many styles applied on the features. When used, this class adds an overhead of six layers containing style expressions. So, there are cases when it's more efficient to use the core rendering layers. For example, use a core layer to render a couple of geometry types and a few styles on a feature. ## Use a simple data layer
-The `SimpleDataLayer` class is used like the other rendering layers are used. The code below shows how to use a simple data layer in a map:
+The `SimpleDataLayer` class is used like the other rendering layers. The following code shows how to use a simple data layer in a map:
```javascript //Create a data source and add it to the map.
The real power of the simple data layer comes when:
- Features in the data set have several style properties individually set on them; or - You're not sure what the data set exactly contains.
-For example when parsing XML data feeds, you may not know the exact styles and geometry types of the features. The [Simple data layer options] sample shows the power of the simple data layer by rendering the features of a KML file. It also demonstrates various options that the simple data layer class provides. For the source code for this sample, see [Simple data layer options source code].
+For example, when parsing XML data feeds, you may not know the exact styles and geometry types of the features. The [Simple data layer options] sample shows the power of the simple data layer by rendering the features of a KML file. It also demonstrates various options that the simple data layer class provides. For the source code for this sample, see [Simple data layer options.html] in the Azure Maps code samples in GitHub.
:::image type="content" source="./media/spatial-io-add-simple-data-layer/simple-data-layer-options.png"alt-text="A screenshot of map with a panel on the left showing the different simple data layer options.":::
For example when parsing XML data feeds, you may not know the exact styles and g
> > [!NOTE]
-> This simple data layer uses the [popup template](map-add-popup.md#add-popup-templates-to-the-map) class to display KML balloons or feature properties as a table. By default, all content rendered in the popup will be sandboxed inside of an iframe as a security feature. However, there are limitations:
+> This simple data layer uses the [popup template] class to display KML balloons or feature properties as a table. By default, all content rendered in the popup will be sandboxed inside of an iframe as a security feature. However, there are limitations:
> > - All scripts, forms, pointer lock and top navigation functionality is disabled. Links are allowed to open up in a new tab when clicked. > - Older browsers that don't support the `srcdoc` parameter on iframes will be limited to rendering a small amount of content.
For example when parsing XML data feeds, you may not know the exact styles and g
As mentioned earlier, the simple data layer wraps several of the core rendering layers: bubble, symbol, line, polygon, and extruded polygon. It then uses expressions to search for valid style properties on individual features.
-Azure Maps and GitHub style properties are the two main sets of supported property names. Most property names of the different Azure maps layer options are supported as style properties of features in the simple data layer. Expressions have been added to some layer options to support style property names that are commonly used by GitHub. These property names are defined by [GitHub's GeoJSON map support](https://help.github.com/en/github/managing-files-in-a-repository/mapping-geojson-files-on-github), and they're used to style GeoJSON files that are stored and rendered within the platform. All GitHub's styling properties are supported in the simple data layer, except the `marker-symbol` styling properties.
+Azure Maps and GitHub style properties are the two main sets of supported property names. Most property names of the different Azure Maps layer options are supported as style properties of features in the simple data layer. Expressions have been added to some layer options to support style property names that are commonly used by GitHub. [GitHub's GeoJSON map support] defines these property names, and they're used to style GeoJSON files that are stored and rendered within the platform. All of GitHub's styling properties are supported in the simple data layer, except the `marker-symbol` styling properties.
-If the reader comes across a less common style property, it will convert it to the closest Azure Maps style property. Additionally, the default style expressions can be overridden by using the `getLayers` function of the simple data layer and updating the options on any of the layers.
+If the reader comes across a less common style property, it converts it to the closest Azure Maps style property. Additionally, the default style expressions can be overridden by using the `getLayers` function of the simple data layer and updating the options on any of the layers.
-The following sections provide details on the default style properties that are supported by the simple data layer. The order of the supported property name is also the priority of the property. If two style properties are defined for the same layer option, then the first one in the list has higher precedence. Colors can be any CSS3 color value; HEX, RGB, RGBA, HSL, HSLA, or named color value.
+The following sections provide details on the default style properties supported by the simple data layer. The order of the supported property name is also the priority of the property. If two style properties are defined for the same layer option, then the first one in the list has higher precedence. Colors can be any CSS3 color value; HEX, RGB, RGBA, HSL, HSLA, or named color value.
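For illustration, the following sketch adds a point feature that carries some of the style properties listed in the tables that follow. It assumes the Azure Maps Web SDK and the Spatial IO module scripts are already loaded and that `map` is an existing `atlas.Map` instance; the coordinates and property values are placeholders.

```javascript
//Create a data source and add it to the map.
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);

//Wrap the data source in a simple data layer so per-feature style properties are honored.
map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

//A point feature with common style properties. The simple data layer reads the
//"color" and "size" properties (see the bubble layer table below) to style it.
datasource.add(new atlas.data.Feature(new atlas.data.Point([-122.33, 47.6]), {
    color: '#FF7F50',  //Bubble fill color.
    size: 1.5          //Scalar value; multiplied by the default radius.
}));
```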
### Bubble layer style properties
-If a feature is a `Point` or a `MultiPoint`, and the feature doesn't have an `image` property that would be used as a custom icon to render the point as a symbol, then the feature will be rendered with a `BubbleLayer`.
+If a feature is a `Point` or a `MultiPoint`, and the feature doesn't have an `image` property that would be used as a custom icon to render the point as a symbol, then the feature is rendered with a `BubbleLayer`.
| Layer option | Supported property name(s) | Default value | |--|-||
If a feature is a `Point` or a `MultiPoint`, and the feature doesn't have an `im
| `radius` | `size`<sup>1</sup>, `marker-size`<sup>2</sup>, `scale`<sup>1</sup> | `8` | | `strokeColor` | `strokeColor`, `stroke` | `'#FFFFFF'` |
-\[1\] The `size` and `scale` values are considered scalar values, and they'll be multiplied by `8`
+\[1\] The `size` and `scale` values are considered scalar values, and are multiplied by `8`.
-\[2\] If the GitHub `marker-size` option is specified, then the following values will be used for the radius.
+\[2\] If the GitHub `marker-size` option is specified, then the following values are used for the radius.
| Marker size | Radius | |-|--|
If a feature is a `Point` or a `MultiPoint`, and the feature doesn't have an `im
| `medium` | `8` | | `large` | `12` |
-Clusters are also rendered using the bubble layer. By default the radius of a cluster is set to `16`. The color of the cluster varies depending on the number of points in the cluster, as defined below:
+Clusters are also rendered using the bubble layer. By default the radius of a cluster is set to `16`. The color of the cluster varies depending on the number of points in the cluster, as defined in the following table:
| # of points | Color | |-|-|
Clusters are also rendered using the bubble layer. By default the radius of a cl
### Symbol style properties
-If a feature is a `Point` or a `MultiPoint`, and the feature and has an `image` property that would be used as a custom icon to render the point as a symbol, then the feature will be rendered with a `SymbolLayer`.
+If a feature is a `Point` or a `MultiPoint`, and the feature has an `image` property that would be used as a custom icon to render the point as a symbol, then the feature is rendered with a `SymbolLayer`.
| Layer option | Supported property name(s) | Default value | |--|-||
-| `image` | `image` | ``none`` |
-| `size` | `size`, `marker-size`<sup>1</sup> | `1` |
-| `rotation` | `rotation` | `0` |
-| `offset` | `offset` | `[0, 0]` |
-| `anchor` | `anchor` | `'bottom'` |
+| `image` | `image` | ``none`` |
+| `size` | `size`, `marker-size`<sup>1</sup> | `1` |
+| `rotation` | `rotation` | `0` |
+| `offset` | `offset` | `[0, 0]` |
+| `anchor` | `anchor` | `'bottom'` |
-\[1\] If the GitHub `marker-size` option is specified, then the following values will be used for the icon size option.
+\[1\] If the GitHub `marker-size` option is specified, then the following values are used for the icon size option.
| Marker size | Symbol size | |-|-|
If a feature is a `Point` or a `MultiPoint`, and the feature and has an `image`
| `medium` | `1` | | `large` | `2` |
-If the point feature is a cluster, the `point_count_abbreviated` property will be rendered as a text label. No image will be rendered.
+If the point feature is a cluster, the `point_count_abbreviated` property is rendered as a text label. No image is rendered.
### Line style properties
-If the feature is a `LineString`, `MultiLineString`, `Polygon`, or `MultiPolygon`, then the feature will be rendered with a `LineLayer`.
+If the feature is a `LineString`, `MultiLineString`, `Polygon`, or `MultiPolygon`, then the feature is rendered with a `LineLayer`.
-| Layer option | Supported property name(s) | Default value |
-|--|-||
-| `strokeColor` | `strokeColor`, `stroke` | `'#1E90FF'` |
-| `strokeWidth` | `strokeWidth`, `stroke-width`, `stroke-thickness` | `3` |
-| `strokeOpacity` | `strokeOpacity`, `stroke-opacity` | `1` |
+| Layer option | Supported property name(s) | Default value |
+|--|-||
+| `strokeColor` | `strokeColor`, `stroke` | `'#1E90FF'` |
+| `strokeWidth` | `strokeWidth`, `stroke-width`, `stroke-thickness` | `3` |
+| `strokeOpacity` | `strokeOpacity`, `stroke-opacity` | `1` |
### Polygon style properties
-If the feature is a `Polygon` or a `MultiPolygon`, and the feature either doesn't have a `height` property or the `height` property is zero, then the feature will be rendered with a `PolygonLayer`.
+If the feature is a `Polygon` or a `MultiPolygon`, and the feature either doesn't have a `height` property or the `height` property is zero, then the feature is rendered with a `PolygonLayer`.
| Layer option | Supported property name(s) | Default value | |--|-||
-| `fillColor` | `fillColor`, `fill` | `'#1E90FF'` |
-| `fillOpacity` | `fillOpacity`, '`fill-opacity` | `0.5` |
+| `fillColor` | `fillColor`, `fill` | `'#1E90FF'` |
+| `fillOpacity` | `fillOpacity`, `fill-opacity` | `0.5` |
### Extruded polygon style properties
-If the feature is a `Polygon` or a `MultiPolygon`, and has a `height` property with a value greater than 0, the feature will be rendered with an `PolygonExtrusionLayer`.
+If the feature is a `Polygon` or a `MultiPolygon`, and has a `height` property with a value greater than zero, the feature is rendered with a `PolygonExtrusionLayer`.
| Layer option | Supported property name(s) | Default value | |--|-||
-| `base` | `base` | `0` |
-| `fillColor` | `fillColor`, `fill` | `'#1E90FF'` |
-| `height` | `height` | `0` |
+| `base` | `base` | `0` |
+| `fillColor` | `fillColor`, `fill` | `'#1E90FF'` |
+| `height` | `height` | `0` |
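As a hedged illustration of the table above, the following sketch adds a polygon with a nonzero `height` property so the simple data layer renders it as an extruded polygon. It assumes `map` is an existing `atlas.Map` instance with the Spatial IO module loaded; the coordinates and values are placeholders.

```javascript
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);
map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

//A polygon whose nonzero "height" property causes it to be extruded.
datasource.add(new atlas.data.Feature(new atlas.data.Polygon([[
    [-122.34, 47.60], [-122.34, 47.61], [-122.33, 47.61], [-122.33, 47.60], [-122.34, 47.60]
]]), {
    fillColor: '#1E90FF',
    base: 0,
    height: 500  //A value greater than zero triggers the extruded polygon layer.
}));
```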
## Next steps Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [SimpleDataLayer](/javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer)
+> [SimpleDataLayer]
> [!div class="nextstepaction"]
-> [SimpleDataLayerOptions](/javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions)
+> [SimpleDataLayerOptions]
See the following articles for more code samples to add to your maps: > [!div class="nextstepaction"]
-> [Read and write spatial data](spatial-io-read-write-spatial-data.md)
+> [Read and write spatial data]
> [!div class="nextstepaction"]
-> [Add an OGC map layer](spatial-io-add-ogc-map-layer.md)
+> [Add an OGC map layer]
> [!div class="nextstepaction"]
-> [Connect to a WFS service](spatial-io-connect-wfs-service.md)
+> [Connect to a WFS service]
> [!div class="nextstepaction"]
-> [Leverage core operations](spatial-io-core-operations.md)
+> [Leverage core operations]
> [!div class="nextstepaction"]
-> [Supported data format details](spatial-io-supported-data-format-details.md)
-
+> [Supported data format details]
+
+[Add an OGC map layer]: spatial-io-add-ogc-map-layer.md
+[Connect to a WFS service]: spatial-io-connect-wfs-service.md
+[GitHub's GeoJSON map support]: https://docs.github.com/en/repositories/working-with-files/using-files/working-with-non-code-files#mapping-geojsontopojson-files-on-github
+[Leverage core operations]: spatial-io-core-operations.md
+[popup template]: map-add-popup.md#add-popup-templates-to-the-map
+[Read and write spatial data]: spatial-io-read-write-spatial-data.md
+[Simple data layer options.html]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Simple%20data%20layer%20options/Simple%20data%20layer%20options.html
[Simple data layer options]: https://samples.azuremaps.com/spatial-io-module/simple-data-layer-options
-[Simple data layer options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Simple%20data%20layer%20options/Simple%20data%20layer%20options.html
+[SimpleDataLayer]: /javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer
+[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions
+[Supported data format details]: spatial-io-supported-data-format-details.md
azure-maps Spatial Io Connect Wfs Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md
# Connect to a WFS service
-A Web Feature Service (WFS) is a web service for querying spatial data that has a standardized API that is defined by the Open Geospatial Consortium (OGC). The `WfsClient` class in the spatial IO module lets developers connect to a WFS service and query data from the service.
+A Web Feature Service (WFS) is a web service for querying spatial data that has a standardized API defined by the Open Geospatial Consortium (OGC). The `WfsClient` class in the spatial IO module lets developers connect to a WFS service and query data from the service.
-The following features are supported by the `WfsClient` class:
+The `WfsClient` class supports the following features:
- Supported versions: `1.0.0`, `1.1.0`, and `2.0.0` - Supported filter operators: binary comparisons, logic, math, value, and `bbox`.
The [Simple WFS example] sample shows how to easily query a Web Feature Service
## Supported filters
-The specification for the WFS standard makes use of OGC filters. The filters below are supported by the WFS client, assuming that the service being called also supports these filters. Custom filter strings can be passed into the `CustomFilter` class.
+The specification for the WFS standard makes use of OGC filters. The WFS client supports the following filters, assuming that the service being called also supports these filters. Custom filter strings can be passed into the `CustomFilter` class.
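As a rough sketch only, the following shows connecting to a hypothetical WFS endpoint and requesting features. The `getFeatures` call and the shape of its request object (`typeNames`) are assumptions for illustration, not a verified contract; check the WfsClient reference for the exact API.

```javascript
//Create the WFS client pointing at a hypothetical endpoint.
var client = new atlas.io.ogc.WfsClient({
    url: 'https://example.com/wfs'  //Placeholder WFS service URL.
});

//Request features; the "typeNames" value is a placeholder for a real feature type.
client.getFeatures({
    typeNames: 'myNamespace:myFeatureType'
}).then(function (featureCollection) {
    //The result is GeoJSON that can be added to a data source for rendering.
    console.log(featureCollection);
});
```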
**Logical operators**
The [WFS service explorer] sample is a simple tool for exploring WFS services on
</iframe> -->
-To access WFS services hosted on non-CORS enabled endpoints, a CORS enabled proxy service can be passed into the `proxyService` option of the WFS client as shown below.
+To access WFS services hosted on non-CORS enabled endpoints, a CORS enabled proxy service can be passed into the `proxyService` option of the WFS client as shown in the following example.
```JavaScript //Create the WFS client to access the service and use the proxy service settings
client = new atlas.io.ogc.WfsClient({
Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [WfsClient](/JavaScript/api/azure-maps-spatial-io/atlas.io.ogc.wfsclient)
+> [WfsClient]
> [!div class="nextstepaction"]
-> [WfsServiceOptions](/JavaScript/api/azure-maps-spatial-io/atlas.wfsserviceoptions)
+> [WfsServiceOptions]
See the following articles for more code samples to add to your maps: > [!div class="nextstepaction"]
-> [Leverage core operations](spatial-io-core-operations.md)
+> [Leverage core operations]
> [!div class="nextstepaction"]
-> [Supported data format details](spatial-io-supported-data-format-details.md)
-
-[Simple WFS example]: https://samples.azuremaps.com/spatial-io-module/simple-wfs-example
-[WFS filter example]: https://samples.azuremaps.com/spatial-io-module/wfs-filter-examples
-[WFS service explorer]: https://samples.azuremaps.com/spatial-io-module/wfs-service-explorer
+> [Supported data format details]
+[Leverage core operations]: spatial-io-core-operations.md
[Simple WFS example source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Simple%20WFS%20example/Simple%20WFS%20example.html
+[Simple WFS example]: https://samples.azuremaps.com/spatial-io-module/simple-wfs-example
+[Supported data format details]: spatial-io-supported-data-format-details.md
[WFS filter example source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/WFS%20filter%20examples/WFS%20filter%20examples.html
+[WFS filter example]: https://samples.azuremaps.com/spatial-io-module/wfs-filter-examples
[WFS service explorer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/WFS%20service%20explorer/WFS%20service%20explorer.html
+[WFS service explorer]: https://samples.azuremaps.com/spatial-io-module/wfs-service-explorer
+[WfsClient]: /JavaScript/api/azure-maps-spatial-io/atlas.io.ogc.wfsclient
+[WfsServiceOptions]: /JavaScript/api/azure-maps-spatial-io/atlas.wfsserviceoptions
azure-maps Spatial Io Core Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-core-operations.md
Title: Core IO operations | Microsoft Azure Maps
+ Title: Core IO operations
+ description: Learn how to efficiently read and write XML and delimited data using core libraries from the spatial IO module.
Last updated 03/03/2020
- # Core IO operations In addition to providing tools to read spatial data files, the spatial IO module exposes core underlying libraries to read and write XML and delimited data fast and efficiently.
-The `atlas.io.core` namespace contains two low-level classes that can quickly read and write CSV and XML data. These base classes power the spatial data readers and writers in the Spatial IO module. Feel free to use them to add additional reading and writing support for CSV or XML files.
-
+The `atlas.io.core` namespace contains two low-level classes that can quickly read and write CSV and XML data. These base classes power the spatial data readers and writers in the Spatial IO module. Feel free to use them to add more reading and writing support for CSV or XML files.
+ ## Read delimited files The `atlas.io.core.CsvReader` class reads strings that contain delimited data sets. This class provides two methods for reading data: -- The `read` function will read the full data set and return a two-dimensional array of strings representing all cells of the delimited data set.
+- The `read` function reads the full data set and returns a two-dimensional array of strings representing all cells of the delimited data set.
- The `getNextRow` function reads each line of text in a delimited data set and returns an array of strings representing all cells in that line of the data set. The user can process the row and dispose of any unneeded memory from that row before processing the next row, so this function is more memory efficient.
-By default, the reader will use the comma character as the delimiter. However, the delimiter can be changed to any single character or set to `'auto'`. When set to `'auto'`, the reader will analyze the first line of text in the string. Then, it will select the most common character from the table below to use as the delimiter.
+By default, the reader uses the comma character as the delimiter. However, the delimiter can be changed to any single character or set to `'auto'`. When set to `'auto'`, the reader analyzes the first line of text in the string. Then, it selects the most common character from the following table to use as the delimiter.
| Delimiter | Character | | :-- | :-- |
This reader also supports text qualifiers that are used to handle cells that con
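A minimal sketch of reading a small delimited string with the `CsvReader`. The constructor arguments shown (the raw text plus an optional delimiter) are an assumption for illustration; the `read` and `getNextRow` methods are the ones described above.

```javascript
//A small comma-delimited data set with a header row.
var csv = 'name,lat,lon\nSpace Needle,47.6205,-122.3493';

//Create the reader; the delimiter argument is assumed optional and defaults to a comma.
var reader = new atlas.io.core.CsvReader(csv, ',');

//Read the full data set as a two-dimensional array of strings.
var rows = reader.read();
console.log(rows);

//Alternatively, call reader.getNextRow() in a loop to process one row at a time
//and keep memory usage low on large files.
```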
The `atlas.io.core.CsvWriter` writes an array of objects as a delimited string. Any single character can be used as a delimiter or a text qualifier. The default delimiter is comma (`','`) and the default text qualifier is the quote (`'"'`) character.
-To use this class, follow the steps below:
+Follow these steps to use this class:
- Create an instance of the class and optionally set a custom delimiter or text qualifier. - Write data to the class using the `write` function or the `writeRow` function. For the `write` function, pass a two-dimensional array of objects representing multiple rows and cells. To use the `writeRow` function, pass an array of objects representing a row of data with multiple columns.-- Call the `toString` function to retrieve the delimited string.
+- Call the `toString` function to retrieve the delimited string.
- Optionally, call the `clear` method to make the writer reusable and reduce its resource allocation, or call the `delete` method to dispose of the writer instance. > [!NOTE]
To use this class, follow the steps below:
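A minimal sketch of those steps, using the default comma delimiter and quote qualifier; the column names and values are placeholders.

```javascript
//Create a writer with the default delimiter (',') and text qualifier ('"').
var writer = new atlas.io.core.CsvWriter();

//Write a header row and a data row.
writer.writeRow(['name', 'lat', 'lon']);
writer.writeRow(['Space Needle', 47.6205, -122.3493]);

//Retrieve the delimited string, then release the writer's resources.
var csv = writer.toString();
writer.delete();
console.log(csv);
```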
## Read XML files
-The `atlas.io.core.SimpleXmlReader` class is faster at parsing XML files than `DOMParser`. However, the `atlas.io.core.SimpleXmlReader` class requires XML files to be well formatted. XML files that aren't well formatted, for example missing closing tags, will likely result in an error.
+The `atlas.io.core.SimpleXmlReader` class is faster at parsing XML files than `DOMParser`. However, the `atlas.io.core.SimpleXmlReader` class requires XML files to be well formatted. XML files that aren't well formatted, for example missing closing tags, may result in an error.
The following code demonstrates how to use the `SimpleXmlReader` class to parse an XML string into a JSON object and serialize it into a desired format.
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md
# Read and write spatial data
-The table below lists the spatial file formats that are supported for reading and writing operations with the Spatial IO module.
+The following table lists the spatial file formats that are supported for reading and writing operations with the Spatial IO module.
| Data Format | Read | Write | |-||-|
These next sections outline all the different tools for reading and writing spat
The `atlas.io.read` function is the main function used to read common spatial data formats such as KML, GPX, GeoRSS, GeoJSON, and CSV files with spatial data. This function can also read compressed versions of these formats, as a zip file or a KMZ file. The KMZ file format is a compressed version of KML that can also include assets such as images. Alternatively, the read function can take in a URL that points to a file in any of these formats. URLs should be hosted on a CORS enabled endpoint, or a proxy service should be provided in the read options. The proxy service is used to load resources on domains that aren't CORS enabled. The read function returns a promise to add the image icons to the map, and processes data asynchronously to minimize impact to the UI thread.
-When reading a compressed file, either as a zip or a KMZ, it will be unzipped and scanned for the first valid file. For example, doc.kml, or a file with other valid extension, such as: .kml, .xml, geojson, .json, .csv, .tsv, or .txt. Then, images referenced in KML and GeoRSS files are preloaded to ensure they're accessible. Inaccessible image data may load an alternative fallback image or will be removed from the styles. Images extracted from KMZ files will be converted to data URIs.
+When reading a compressed file, either as a zip or a KMZ, it's unzipped and scanned for the first valid file. For example, doc.kml, or a file with other valid extension, such as: .kml, .xml, geojson, .json, .csv, .tsv, or .txt. Then, images referenced in KML and GeoRSS files are preloaded to ensure they're accessible. Inaccessible image data may load an alternative fallback image or be removed from the styles. Images extracted from KMZ files are converted to data URIs.
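As a hedged sketch of a basic read, the following loads a hypothetical KMZ file from a CORS-enabled URL and renders it with a simple data layer. It assumes `map` is an existing `atlas.Map` instance with the Spatial IO module loaded; the URL is a placeholder.

```javascript
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);
map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

//Read the file; the promise resolves with a SpatialDataSet.
atlas.io.read('https://example.com/data/route.kmz').then(function (result) {
    if (result) {
        //A SpatialDataSet extends a GeoJSON FeatureCollection, so it can be
        //added to the data source as-is.
        datasource.add(result);
    }
});
```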
The result from the read function is a `SpatialDataSet` object. This object extends the GeoJSON FeatureCollection class. It can easily be passed into a `DataSource` as-is to render its features on a map. The `SpatialDataSet` not only contains feature information, but it may also include KML ground overlays, processing metrics, and other details as outlined in the following table.
The result from the read function is a `SpatialDataSet` object. This object exte
## Examples of reading spatial data
-The [Load spatial data] sample shows how to read a spatial data set, and render it on the map using the `SimpleDataLayer` class. The code uses a GPX file pointed to by a URL. For the source code of this sample, see [Load spatial data source code].
+The [Load spatial data] sample shows how to read a spatial data set and render it on the map using the `SimpleDataLayer` class. The code uses a GPX file pointed to by a URL. For the source code of this sample, see [Load spatial data source code].
:::image type="content" source="./media/spatial-io-read-write-spatial-data/load-spatial-data.png"alt-text="A screenshot that shows the snap grid on map.":::
The [Load spatial data] sample shows how to read a spatial data set, and render
<iframe height='500' scrolling='no' title='Load Spatial Data Simple' src='//codepen.io/azuremaps/embed/yLNXrZx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/yLNXrZx/'>Load Spatial Data Simple</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe> >
-The next code demo shows how to read and load KML, or KMZ, to the map. KML can contain ground overlays, which will be in the form of an `ImageLyaer` or `OgcMapLayer`. These overlays must be added on the map separately from the features. Additionally, if the data set has custom icons, those icons need to be loaded to the maps resources before the features are loaded.
+The next code demo shows how to read and load KML, or KMZ, to the map. KML can contain ground overlays, which take the form of an `ImageLayer` or `OgcMapLayer`. These overlays must be added on the map separately from the features. Additionally, if the data set has custom icons, those icons need to be loaded to the map's resources before the features are loaded.
The [Load KML onto map] sample shows how to load KML or KMZ files onto the map. For the source code of this sample, see [Load KML onto map source code].
The [Load KML onto map] sample shows how to load KML or KMZ files onto the map.
<iframe height='500' scrolling='no' title='Load KML Onto Map' src='//codepen.io/azuremaps/embed/XWbgwxX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/XWbgwxX/'>Load KML Onto Map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe> >
-You may optionally provide a proxy service for accessing cross domain assets that may not have CORS enabled. The read function will try to access files on another domain using CORS first. After the first time it fails to access any resource on another domain using CORS it will only request additional files if a proxy service has been provided. The read function appends the file URL to the end of the proxy URL provided. This snippet of code shows how to pass a proxy service into the read function:
+You may optionally provide a proxy service for accessing cross domain assets that may not have CORS enabled. The read function tries to access files on another domain using CORS first. After the first time it fails to access any resource on another domain using CORS, it only requests additional files if a proxy service has been provided. The read function appends the file URL to the end of the proxy URL provided. This snippet of code shows how to pass a proxy service into the read function:
```javascript //Read a file from a URL or pass in a raw data as a string.
atlas.io.read('https://nonCorsDomain.example.com/mySuperCoolData.xml', {
```
-The following code snippet shows how to read a delimited file and render it on the map. In this case, the code uses a CSV file that has spatial data columns. Note that you must add a reference to the Azure Maps Spatial IO module.
+The following code snippet shows how to read a delimited file and render it on the map. In this case, the code uses a CSV file that has spatial data columns. You must add a reference to the Azure Maps Spatial IO module.
```javascript
function InitMap()
## Write spatial data
-There are two main write functions in the spatial IO module. The `atlas.io.write` function generates a string, while the `atlas.io.writeCompressed` function generates a compressed zip file. The compressed zip file would contain a text-based file with the spatial data in it. Both of these functions return a promise to add the data to the file. And, they both can write any of the following data: `SpatialDataSet`, `DataSource`, `ImageLayer`, `OgcMapLayer`, feature collection, feature, geometry, or an array of any combination of these data types. When writing using either functions, you can specify the wanted file format. If the file format isn't specified, then the data will be written as KML.
+There are two main write functions in the spatial IO module. The `atlas.io.write` function generates a string, while the `atlas.io.writeCompressed` function generates a compressed zip file. The compressed zip file would contain a text-based file with the spatial data in it. Both of these functions return a promise to add the data to the file. And, they both can write any of the following data: `SpatialDataSet`, `DataSource`, `ImageLayer`, `OgcMapLayer`, feature collection, feature, geometry, or an array of any combination of these data types. When writing with either function, you can specify the desired file format. If the file format isn't specified, then the data is written as KML.
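A hedged sketch of writing a data source out as a string. The `format` option name is an assumption here based on the write options sample referenced below, and `datasource` is assumed to be an existing `atlas.source.DataSource`.

```javascript
//Serialize the data source; when no format is given the output is KML.
atlas.io.write(datasource, { format: 'GeoJSON' }).then(function (output) {
    //"output" is a string in the requested format, ready to download or store.
    console.log(output);
});
```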
-The [Spatial data write options] sample is a tool that demonstrates the majority of the write options that can be used with the `atlas.io.write` function. For the source code of this sample, see [Spatial data write options source code].
+The [Spatial data write options] sample is a tool that demonstrates most of the write options that can be used with the `atlas.io.write` function. For the source code of this sample, see [Spatial data write options source code].
:::image type="content" source="./media/spatial-io-read-write-spatial-data/spatial-data-write-options.png"alt-text="A screenshot that shows The Spatial data write options sample that demonstrates most of the write options used with the atlas.io.write function.":::
atlas.io.read(data, {
## Read and write Well-Known Text (WKT)
-[Well-Known Text](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) (WKT) is an Open Geospatial Consortium (OGC) standard for representing spatial geometries as text. Many geospatial systems support WKT, such as Azure SQL and Azure PostgreSQL using the PostGIS plugin. Like most OGC standards, coordinates are formatted as "longitude latitude" to align with the "x y" convention. As an example, a point at longitude -110 and latitude 45 can be written as `POINT(-110 45)` using the WKT format.
+[Well-Known Text] (WKT) is an Open Geospatial Consortium (OGC) standard for representing spatial geometries as text. Many geospatial systems support WKT, such as Azure SQL and Azure PostgreSQL using the PostGIS plugin. Like most OGC standards, coordinates are formatted as "longitude latitude" to align with the "x y" convention. As an example, a point at longitude -110 and latitude 45 can be written as `POINT(-110 45)` using the WKT format.
Well-known text can be read using the `atlas.io.ogc.WKT.read` function, and written using the `atlas.io.ogc.WKT.write` function.
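A minimal round-trip sketch using the two functions named above; the exact return shape of `read` (a single geometry versus an array) isn't asserted here, so the result is only logged.

```javascript
//Parse Well-Known Text into GeoJSON geometry.
var parsed = atlas.io.ogc.WKT.read('POINT(-110 45)');
console.log(parsed);

//Write a GeoJSON point back out as Well-Known Text.
var wkt = atlas.io.ogc.WKT.write(new atlas.data.Point([-110, 45]));
console.log(wkt);  //Expected to be along the lines of "POINT (-110 45)".
```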
The [Read and write Well Known Text] sample demonstrates how to read and write W
GML is a spatial XML file specification that's often used as an extension to other XML specifications. GeoJSON data can be written as XML with GML tags using the `atlas.io.core.GmlWriter.write` function. The XML that contains GML can be read using the `atlas.io.core.GmlReader.read` function. The read function has two options: -- The `isAxisOrderLonLat` option - The axis order of coordinates "latitude, longitude" or "longitude, latitude" can vary between data sets, and it isn't always well defined. By default the GML reader reads the coordinate data as "latitude, longitude", but setting this option to true will read it as "longitude, latitude".-- The `propertyTypes` option - This option is a key value lookup table where the key is the name of a property in the data set. The value is the object type to cast the value to when parsing. The supported type values are: `string`, `number`, `boolean`, and `date`. If a property isn't in the lookup table or the type isn't defined, the property will be parsed as a string.
+- The `isAxisOrderLonLat` option - The axis order of coordinates "latitude, longitude" or "longitude, latitude" can vary between data sets, and it isn't always well defined. By default the GML reader reads the coordinate data as "latitude, longitude", but setting this option to `true` reads it as "longitude, latitude".
+- The `propertyTypes` option - This option is a key value lookup table where the key is the name of a property in the data set. The value is the object type to cast the value to when parsing. The supported type values are: `string`, `number`, `boolean`, and `date`. If a property isn't in the lookup table or the type isn't defined, the property is parsed as a string.
-The `atlas.io.read` function will default to the `atlas.io.core.GmlReader.read` function when it detects that the input data is XML, but the data isn't one of the other support spatial XML formats.
+The `atlas.io.read` function defaults to the `atlas.io.core.GmlReader.read` function when it detects that the input data is XML, but the data isn't one of the other supported spatial XML formats.
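A sketch only: the way the two read options are passed below (as positional arguments) and the sample property cast are assumptions for illustration, not a verified signature; see the module reference for the exact API.

```javascript
//An XML string containing GML geometry (contents elided here).
var gml = '...';

//Read the GML, treating coordinates as "longitude, latitude" and casting a
//hypothetical "population" property to a number.
var parsed = atlas.io.core.GmlReader.read(gml, true, {
    population: 'number'
});
console.log(parsed);

//Write a GeoJSON geometry back out as XML with GML tags.
var xml = atlas.io.core.GmlWriter.write(new atlas.data.Point([-110, 45]));
console.log(xml);
```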
-The `GmlReader` will parse coordinates that has one of the following SRIDs:
+The `GmlReader` parses coordinates that have one of the following SRIDs:
- EPSG:4326 (Preferred) - EPSG:4269, EPSG:4283, EPSG:4258, EPSG:4308, EPSG:4230, EPSG:4272, EPSG:4271, EPSG:4267, EPSG:4608, EPSG:4674 possibly with a small margin of error.
See the following articles for more code samples to add to your maps:
[Add an OGC map layer](spatial-io-add-ogc-map-layer.md)
-[Load spatial data]: https://samples.azuremaps.com/spatial-io-module/load-spatial-data-(simple)
-[Load spatial data source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20spatial%20data%20(simple)/Load%20spatial%20data%20(simple).html
-[Load KML onto map]: https://samples.azuremaps.com/spatial-io-module/load-kml-onto-map
-[Load KML onto map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20KML%20onto%20map/Load%20KML%20onto%20map.html
-[Spatial data write options]: https://samples.azuremaps.com/spatial-io-module/spatial-data-write-options
-[Spatial data write options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Spatial%20data%20write%20options/Spatial%20data%20write%20options.html
-[Drag and drop spatial files onto map]: https://samples.azuremaps.com/spatial-io-module/drag-and-drop-spatial-files-onto-map
[Drag and drop spatial files onto map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Drag%20and%20drop%20spatial%20files%20onto%20map/Drag%20and%20drop%20spatial%20files%20onto%20map.html
-[Read Well Known Text]: https://samples.azuremaps.com/spatial-io-module/read-well-known-text
-[Read Well Known Text source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20Well%20Known%20Text/Read%20Well%20Known%20Text.html
-[Read and write Well Known Text]: https://samples.azuremaps.com/spatial-io-module/read-and-write-well-known-text
---
+[Drag and drop spatial files onto map]: https://samples.azuremaps.com/spatial-io-module/drag-and-drop-spatial-files-onto-map
+[Load KML onto map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20KML%20onto%20map/Load%20KML%20onto%20map.html
+[Load KML onto map]: https://samples.azuremaps.com/spatial-io-module/load-kml-onto-map
+[Load spatial data source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20spatial%20data%20(simple)/Load%20spatial%20data%20(simple).html
+[Load spatial data]: https://samples.azuremaps.com/spatial-io-module/load-spatial-data-(simple)
[Read and write Well Known Text source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20and%20write%20Well%20Known%20Text/Read%20and%20write%20Well%20Known%20Text.html
+[Read and write Well Known Text]: https://samples.azuremaps.com/spatial-io-module/read-and-write-well-known-text
+[Read Well Known Text source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20Well%20Known%20Text/Read%20Well%20Known%20Text.html
+[Read Well Known Text]: https://samples.azuremaps.com/spatial-io-module/read-well-known-text
+[Spatial data write options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Spatial%20data%20write%20options/Spatial%20data%20write%20options.html
+[Spatial data write options]: https://samples.azuremaps.com/spatial-io-module/spatial-data-write-options
+[Well-Known Text]: https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry
azure-maps Spatial Io Supported Data Format Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-supported-data-format-details.md
The spatial IO module supports XML tags from the following namespaces.
## Supported XML elements
-The spatial IO module supports the following XML elements. Any XML tags that aren't supported will be converted into a JSON object. Then, each tag will be added as a property in the `properties` field of the parent shape or layer.
+The spatial IO module supports the following XML elements. Any XML tags that aren't supported are converted into a JSON object. Then, each tag is added as a property in the `properties` field of the parent shape or layer.
### KML elements
The spatial IO module supports the following KML elements.
| `east` | yes | yes | | | `end` | yes | yes | | | `ExtendedData` | yes | yes | Supports untyped `Data`, `SimpleData` or `Schema`, and entity replacements of the form `$[dataName]`. |
-| `extrude` | partial | partial | Only supported for polygons. MultiGeometry that have polygons of different heights will be broken out into individual features. Line styles aren't supported. Polygons with an altitude of 0 will be rendered as a flat polygon. When reading, the altitude of the first coordinate in the exterior ring will be added as a height property of the polygon. Then, the altitude of the first coordinate will be used to render the polygon on the map. |
+| `extrude` | partial | partial | Only supported for polygons. MultiGeometry that have polygons of different heights are broken out into individual features. Line styles aren't supported. Polygons with an altitude of 0 are rendered as a flat polygon. When reading, the altitude of the first coordinate in the exterior ring is added as a height property of the polygon. Then, the altitude of the first coordinate is used to render the polygon on the map. |
| `fill` | yes | yes | | | `Folder` | yes | yes | | | `GroundOverlay` | yes | yes | `color` isn't supported |
The spatial IO module supports the following KML elements.
| `hotSpot` | yes | partial | Only writes if data is stored in the property of the shape. Units are outputted as "pixels" only. | | `href` | yes | yes | | | `Icon` | partial | partial | Parsed but not rendered by `SimpleDataLayer`. Only writes the icon property of the shape if it contains a URI data. Only `href` is supported. |
-| `IconStyle` | partial | partial | `icon`, `heading`, `colorMode`, and `hotspots` values are parsed, but they aren't rendered by `SimpleDataLayer` |
+| `IconStyle` | partial | partial | `icon`, `heading`, `colorMode`, and `hotspots` values are parsed, but not rendered by `SimpleDataLayer` |
| `innerBoundaryIs` | yes | yes | | | `kml` | yes | yes | | | `LabelStyle` | no | no | |
The spatial IO module supports the following KML elements.
| `TimeSpan` | yes | yes | | | `TimeStamp` | yes | yes | | | `value` | yes | yes | |
-| `viewRefreshMode` | partial | no | If pointing to a WMS service, then only `onStop` is supported for ground overlays. Will append `BBOX={bboxWest},{bboxSouth},{bboxEast},{bboxNorth}` to the URL and update as the map moves. |
+| `viewRefreshMode` | partial | no | If pointing to a WMS service, then only `onStop` is supported for ground overlays. Appends `BBOX={bboxWest},{bboxSouth},{bboxEast},{bboxNorth}` to the URL and updates it as the map moves. |
| `visibility` | yes | yes | | | `west` | yes | yes | | | `when` | yes | yes | |
The spatial IO module supports the following GeoRSS elements.
| `georss:where` | yes | yes | | | `geourl:latitude` | yes | no | Written as a `georss:point`. | | `geourl:longitude` | yes | no | Written as a `georss:point`. |
-| `position` | yes | no | Some XML feeds will wrap GML with a position tag instead of wrapping it with a `georss:where` tag. Will read this tag, but will write using a `georss:where` tag. |
+| `position` | yes | no | Some XML feeds wrap GML with a position tag instead of wrapping it with a `georss:where` tag. This tag is read, but it's written out as a `georss:where` tag. |
| `rss` | yes | no | GeoRSS written in ATOM format. | | `rss:author` | yes | partial | Written as an `atom:author`. | | `rss:category` | yes | partial | Written as an `atom:category`. |
The spatial IO module supports the following GML elements.
| `gml:posList` | yes | yes | | | `gml:surfaceMember` | yes | yes | |
-#### additional notes
+#### More notes
-- Member elements will be searched for a geometry that may be buried within child elements. This search operation is necessary as many XML formats that extend from GML may not place a geometry as a direct child of a member element.-- `srsName` is partially supported for WGS84 coordinates and the following codes:[EPSG:4326](https://epsg.io/4326)), and web Mercator ([EPSG:3857](https://epsg.io/3857) or one of its alternative codes. Any other coordinate system will be parsed as WGS84 as-is.
+- Member elements are searched for a geometry that may be buried within child elements. This search operation is necessary as many XML formats that extend from GML may not place a geometry as a direct child of a member element.
+- `srsName` is partially supported for WGS84 coordinates and the following codes: [EPSG:4326], and web Mercator ([EPSG:3857] or one of its alternative codes). Any other coordinate system is parsed as WGS84 as-is.
- Unless specified when reading an XML feed, the axis order is determined based on hints in the XML feed. A preference is given for the "latitude, longitude" axis order.-- Unless a custom GML namespace is specified for the properties when writing to a GML file, additional property information will not be added.
+- Unless a custom GML namespace is specified for the properties when writing to a GML file, other property information isn't added.
### GPX elements
The spatial IO module supports the following GPX elements.
| `gpx:desc` | yes | yes | Copied into a description property when read to align with other XML formats. | | `gpx:dgpsid` | yes | yes | | | `gpx:ele` | yes | yes | |
-| `gpx:extensions` | partial | partial | When read, style information is extracted. All other extensions will be flattened into a simple JSON object. Only shape style information is written. |
+| `gpx:extensions` | partial | partial | When read, style information is extracted. All other extensions are flattened into a simple JSON object. Only shape style information is written. |
| `gpx:geoidheight` | yes | yes | | | `gpx:gpx` | yes | yes | | | `gpx:hdop` | yes | yes | |
The spatial IO module supports the following GPX elements.
| `gpx_style:line` | partial | partial | `color`, `opacity`, `width`, `lineCap` are supported. | | `gpx_style:opacity` | yes | yes | | | `gpx_style:width` | yes | yes | |
-| `gpxx:DisplayColor` | yes | no | Used to specify the color of a shape. When writing, `gpx_style:line` color will be used instead.|
+| `gpxx:DisplayColor` | yes | no | Used to specify the color of a shape. When writing, the `gpx_style:line` color is used instead.|
| `gpxx:RouteExtension` | partial | no | All properties are read into `properties`. Only `DisplayColor` is used. | | `gpxx:TrackExtension` | partial | no | All properties are read into `properties`. Only `DisplayColor` is used. | | `gpxx:WaypointExtension` | partial | no | All properties are read into `properties`. Only `DisplayColor` is used. | | `gpx:keywords` | yes | yes | | | `gpx:fix` | yes | yes | |
-#### additional notes
+#### More notes
When writing; -- MultiPoints will be broken up into individual waypoints.-- Polygons and MultiPolygons will be written as tracks.
+- MultiPoints are broken up into individual waypoints.
+- Polygons and MultiPolygons are written as tracks.
## Supported Well-Known Text geometry types
Delimited spatial data, such as comma-separated value files (CSV), often have co
### Spatial data column detection
-When reading a delimited file that contains spatial data, the header will be analyzed to determine which columns contain location fields. If the header contains type information, it will be used to cast the cell values to the appropriate type. If no header is specified, the first row will be analyzed and used to generate a header. When analyzing the first row, a check is executed to match column names with the following names in a case-insensitive way. The order of the names is the priority, in case two or more names exist in a file.
+When reading a delimited file that contains spatial data, the header is analyzed to determine which columns contain location fields. If the header contains type information, it's used to cast the cell values to the appropriate type. If no header is specified, the first row is analyzed to generate a header. When analyzing the first row, a check is executed to match column names with the following names in a case-insensitive way. The order of the names is the priority, in case two or more names exist in a file.
#### Latitude
When reading a delimited file that contains spatial data, the header will be ana
#### Geography
-The first row of data will be scanned for strings that are in Well-Known Text format.
+The first row of data is scanned for strings that are in Well-Known Text format.
### Delimited data column types
-When scanning the header row, any type information that is in the column name will be extracted and used to cast the cells in that column. Here is an example of a column name that has a type value: "ColumnName (typeName)". The following case-insensitive type names are supported:
+When scanning the header row, any type information that is in the column name is extracted and used to cast the cells in that column. Here's an example of a column name that has a type value: "ColumnName (typeName)". The following case-insensitive type names are supported:
#### Numbers
When scanning the header row, any type information that is in the column name wi
- text - string
-If no type information can be extracted from the header, and the dynamic typing option is enabled when reading, then each cell will be individually analyzed to determine what data type it is best suited to be cast as.
+If no type information can be extracted from the header, and the dynamic typing option is enabled when reading, then each cell is individually analyzed to determine what data type it's best suited to be cast as.
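To illustrate the conventions above, the following sketch reads a small in-memory delimited string whose header carries a typed column. The column names and values are placeholders, and only the documented `atlas.io.read` call is used.

```javascript
//Header names "lat"/"lon" identify the location columns; "(number)" casts the
//population column when the header is analyzed.
var csv = 'name (string),population (number),lat,lon\n' +
          'Seattle,733919,47.6062,-122.3321';

atlas.io.read(csv).then(function (result) {
    //The population property is cast to a number based on the header type hint.
    console.log(result.features[0].properties);
});
```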
## Next steps See the following articles for more code samples to add to your maps:
-[Read and write spatial data](spatial-io-read-write-spatial-data.md)
+[Read and write spatial data]
+
+[EPSG:4326]: https://epsg.io/4326
+[EPSG:3857]: https://epsg.io/3857
+[Read and write spatial data]: spatial-io-read-write-spatial-data.md
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend always updating to the latest version, or opting in to the
| Release Date | Release notes | Windows | Linux | |:|:|:|:| | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0| Coming Soon|
-| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>Support Arc-Enabled Servers proxy configuration file</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncomliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li></ul></li></ul>|1.17.0 |1.27.2|
+| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li></ul></li></ul>|1.17.0 |1.27.2|
| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription became invalid and would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li><li>Fix to prevent events folder growing too large and filling the disk</li><li>hot fix (1.26.3) for Syslog</li></ul></li></ul> | 1.16.0.0 | 1.26.2 1.26.3<sup>Hotfix</sup>| | Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0| Coming soon| | Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | Coming soon |
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
This article shows you how to configure the connection between a ServiceNow instance and the IT Service Management Connector (ITSMC) in Log Analytics, so you can centrally manage your IT Service Management (ITSM) work items.
+> [!NOTE]
+> As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow.
+ ## Prerequisites Ensure that you meet the following prerequisites for the connection.
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
To create an action group:
1. In the **Work Item** type field, select the type of work item. > [!NOTE]
- > As of September 2022, we are starting the 3-year process of deprecating support of using ITSM actions to send alerts and events to ServiceNow.
+ > As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow.
1. In the last section of the interface for creating an ITSM action group, if the alert is a log alert, you can define how many work items will be created for each alert. For all other alert types, one work item is created per alert.
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
Azure Monitor supports connections with the following ITSM tools:
- ServiceNow ITSM or IT Operations Management (ITOM) - BMC
-For information about legal terms and the privacy policy, see the [Microsoft privacy statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9).
- ## ITSM integration workflow Depending on your integration, start connecting to your ITSM tool with these steps:
Depending on your integration, start connecting to your ITSM tool with these ste
- For ServiceNow ITSM, use the ITSM action:
+ > [!NOTE]
+ > As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow. For information about legal terms and the privacy policy, see the [Microsoft privacy statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9).
++ 1. Connect to your ITSM. For more information, see the [ServiceNow connection instructions](./itsmc-connections-servicenow.md). 1. (Optional) Set up the IP ranges. To list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, list the whole public IP range of an Azure region where the Log Analytics workspace belongs. For more information, see the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/WUS2/US South Central, the customer can list the ActionGroup network tag only. 1. [Configure your Azure ITSM solution and create the ITSM connection](./itsmc-definition.md#install-it-service-management-connector).
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
To enable telemetry collection with Application Insights, only the application s
|App setting name | Definition | Value | |--|:|-:|
-|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` for Windows or `~3` for Linux |
+|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~3` |
|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled to ensure optimal performance. | `disabled` or `recommended`. | |XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with the Application Insights SDK. Loads the extension side by side with the SDK and uses it to send telemetry. (Disables the Application Insights SDK.) |`1`|
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-troubleshoot.md
The metrics addon can be configured to run in debug mode by changing the configm
When enabled, all Prometheus metrics that are scraped are hosted at port 9090. Run the following command: ```
-kubectl port-forward <ama-metrics pod name> -n kube-system 9091
+kubectl port-forward <ama-metrics pod name> -n kube-system 9090
```
-Go to `127.0.0.1:9091/metrics` in a browser to see if the metrics were scraped by the OpenTelemetry Collector. This user interface can be accessed for every `ama-metrics-*` pod. If metrics aren't there, there could be an issue with the metric or label name lengths or the number of labels. Also check for exceeding the ingestion quota for Prometheus metrics as specified in this article.
+Go to `127.0.0.1:9090/metrics` in a browser to see if the metrics were scraped by the OpenTelemetry Collector. This user interface can be accessed for every `ama-metrics-*` pod. If metrics aren't there, there could be an issue with the metric or label name lengths or the number of labels. Also check for exceeding the ingestion quota for Prometheus metrics as specified in this article.
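As a quick sanity check, here's a minimal sketch (Node.js 18+, which provides a built-in `fetch`; the metric name is only an example) that queries the forwarded debug endpoint and looks for a metric by name:

```typescript
// Assumes the kubectl port-forward command above is running, so the debug
// endpoint is reachable at 127.0.0.1:9090. "up" is only an example metric name.
async function hasMetric(name: string): Promise<boolean> {
  const response = await fetch("http://127.0.0.1:9090/metrics");
  const body = await response.text();
  return body.split("\n").some(line => line.startsWith(name));
}

hasMetric("up").then(found =>
  console.log(found
    ? "Metric found on the debug endpoint."
    : "Metric not found; check name/label lengths, label counts, and the ingestion quota."));
```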
## Metric names, label names & label values
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> | secrets | **Yes** | **Yes** | No | > | volumes | **Yes** | **Yes** | No |
+## Microsoft.ServiceNetworking
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Resource group | Subscription | Region move |
+> | - | -- | - | -- |
+> | trafficcontrollers | No | No | No |
+> | associations | No | No | No |
+> | frontends | No | No | No |
+ ## Microsoft.Services > [!div class="mx-tableFixed"]
batch Batch Account Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-account-create-portal.md
Title: Create a Batch account in the Azure portal description: Learn how to use the Azure portal to create and manage an Azure Batch account for running large-scale parallel workloads in the cloud. Previously updated : 04/03/2023 Last updated : 07/18/2023
For detailed steps, see [Assign Azure roles by using the Azure portal](../role-b
### Create a key vault
-User subscription mode requires [Azure Key Vault](/azure/key-vault/general/overview). The key vault must be in the same subscription and region as the Batch account.
+User subscription mode requires [Azure Key Vault](/azure/key-vault/general/overview). The key vault must be in the same subscription and region as the Batch account and use a [Vault Access Policy](/azure/key-vault/general/assign-access-policy).
To create a new key vault: 1. Search for and select **key vaults** from the Azure Search box, and then select **Create** on the **Key vaults** page. 1. On the **Create a key vault** page, enter a name for the key vault, and choose an existing resource group or create a new one in the same region as your Batch account.
+1. On the **Access configuration** tab, select **Vault access policy** under **Permission model**.
1. Leave the remaining settings at default values, select **Review + create**, and then select **Create**. ### Create a Batch account in user subscription mode
communication-services European Union Data Boundary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/european-union-data-boundary.md
Title: European Union Data Boundary compliance for Azure Communication Services description: Learn about how Azure Communication Services meets European Union data handling compliance laws-+
communication-services Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/government.md
Title: Azure Communication Services in Azure Government description: Learn about using Azure Communication Services in US Government regions-+
communication-services Exception Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/exception-policy.md
administration_client.create_exception_policy(
```java administrationClient.createExceptionPolicy(new CreateExceptionPolicyOptions("policy1",
- Map.of("rule1", new ExceptionRule()
- .setTrigger(new QueueLengthExceptionTrigger().setThreshold(1))
- .setActions(Map.of("cancelAction", new CancelExceptionAction())))
+ Map.of("rule1", new ExceptionRule(
+ new QueueLengthExceptionTrigger().setThreshold(1),
+ Map.of("cancelAction", new CancelExceptionAction())))
).setName("Max Queue Length Policy")); ```
administration_client.create_exception_policy(
```java administrationClient.createExceptionPolicy(new CreateExceptionPolicyOptions("policy2", Map.of(
- "rule1", new ExceptionRule()
- .setTrigger(new WaitTimeExceptionTrigger().setThresholdSeconds(60))
- .setActions(Map.of("increasePriority", new ManualReclassifyExceptionAction().setPriority(10))),
- "rule2", new ExceptionRule()
- .setTrigger(new WaitTimeExceptionTrigger().setThresholdSeconds(300))
- .setActions(Map.of("changeQueue", new ManualReclassifyExceptionAction().setQueueId("queue2"))))
+ "rule1", new ExceptionRule(
+ new WaitTimeExceptionTrigger(60),
+ Map.of("increasePriority", new ManualReclassifyExceptionAction().setPriority(10))),
+ "rule2", new ExceptionRule(
+ new WaitTimeExceptionTrigger(300),
+ Map.of("changeQueue", new ManualReclassifyExceptionAction().setQueueId("queue2"))))
).setName("Escalation Policy")); ```
communication-services Matching Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/matching-concepts.md
await client.createWorker("worker-1", {
client.create_worker(worker_id = "worker-1", router_worker = RouterWorker( total_capacity = 2, queue_assignments = {
- "queue2": QueueAssignment()
+ "queue2": RouterQueueAssignment()
}, channel_configurations = { "voice": ChannelConfiguration(capacity_cost_per_job = 2),
client.create_worker(worker_id = "worker-1", router_worker = RouterWorker(
```java client.createWorker(new CreateWorkerOptions("worker-1", 2) .setQueueAssignments(Map.of(
- "queue1", new QueueAssignment(),
- "queue2", new QueueAssignment()))
+ "queue1", new RouterQueueAssignment(),
+ "queue2", new RouterQueueAssignment()))
.setChannelConfigurations(Map.of(
- "voice", new ChannelConfiguration().setCapacityCostPerJob(2),
- "chat", new ChannelConfiguration().setCapacityCostPerJob(1)))
+ "voice", new ChannelConfiguration(2),
+ "chat", new ChannelConfiguration(1)))
.setLabels(Map.of( "Skill", new LabelValue(11), "English", new LabelValue(true),
client.create_job(job_id = "job1", router_job = RouterJob(
```java client.createJob(new CreateJobOptions("job1", "chat", "queue1") .setRequestedWorkerSelectors(List.of(
- new RouterWorkerSelector()
- .setKey("English")
- .setLabelOperator(LabelOperator.EQUAL)
- .setValue(new LabelValue(true)),
- new RouterWorkerSelector()
- .setKey("Skill")
- .setLabelOperator(LabelOperator.GREATER_THAN)
- .setValue(new LabelValue(10))))
+ new RouterWorkerSelector("English", LabelOperator.EQUAL, new LabelValue(true)),
+ new RouterWorkerSelector("Skill", LabelOperator.GREATER_THAN, new LabelValue(10))))
.setLabels(Map.of("name", new LabelValue("John")))); ```
communication-services Worker Capacity Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/worker-capacity-concepts.md
client.create_worker(worker_id = "worker1", router_worker = RouterWorker(
client.createWorker(new CreateWorkerOptions("worker1", 100) .setQueueAssignments(Map.of("queue1", new RouterQueueAssignment())) .setChannelConfigurations(Map.of(
- "voice", new ChannelConfiguration().setCapacityCostPerJob(100),
- "chat", new ChannelConfiguration().setCapacityCostPerJob(20))))
+ "voice", new ChannelConfiguration(100),
+ "chat", new ChannelConfiguration(20))));
``` ::: zone-end
client.create_worker(worker_id = "worker1", router_worker = RouterWorker(
client.createWorker(new CreateWorkerOptions("worker1", 100) .setQueueAssignments(Map.of("queue1", new RouterQueueAssignment())) .setChannelConfigurations(Map.of(
- "voice", new ChannelConfiguration().setCapacityCostPerJob(60),
- "chat", new ChannelConfiguration().setCapacityCostPerJob(10).setMaxNumberOfJobs(2),
- "email", new ChannelConfiguration().setCapacityCostPerJob(10).setMaxNumberOfJobs(2))))
+ "voice", new ChannelConfiguration(60),
+ "chat", new ChannelConfiguration(10).setMaxNumberOfJobs(2),
+ "email", new ChannelConfiguration(10).setMaxNumberOfJobs(2))));
``` ::: zone-end
communication-services Escalate Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/escalate-job.md
var classificationPolicy = await administrationClient.CreateClassificationPolicy
{ new ConditionalQueueSelectorAttachment( condition: new ExpressionRouterRule("job.Escalated = true"),
- labelSelectors: new List<RouterQueueSelector>
+ queueSelectors: new List<RouterQueueSelector>
{ new (key: "Id", labelOperator: LabelOperator.Equal, value: new LabelValue("XBOX_Escalation_Queue")) })
var classificationPolicy = await administrationClient.createClassificationPolicy
kind: "expression-rule", expression: 'job.Escalated = true' },
- labelSelectors: [{
+ queueSelectors: [{
key: "Id", labelOperator: "equal", value: "XBOX_Escalation_Queue"
classification_policy: ClassificationPolicy = administration_client.create_class
queue_selectors = [ ConditionalQueueSelectorAttachment( condition = ExpressionRouterRule(expression = 'job.Escalated = true'),
- label_selectors = [
+ queue_selectors = [
RouterQueueSelector(key = "Id", label_operator = LabelOperator.EQUAL, value = "XBOX_Escalation_Queue") ] )
classification_policy: ClassificationPolicy = administration_client.create_class
ClassificationPolicy classificationPolicy = administrationClient.createClassificationPolicy( new CreateClassificationPolicyOptions("Classify_XBOX_Voice_Jobs") .setName("Classify XBOX Voice Jobs")
- .setQueueSelectors(List.of(new ConditionalQueueSelectorAttachment()
- .setCondition(new ExpressionRouterRule().setExpression("job.Escalated = true"))
- .setLabelSelectors(List.of(
- new RouterQueueSelector().setKey("Id").setLabelOperator(LabelOperator.EQUAL).setValue("XBOX_Escalation_Queue"))
- )))
- .setPrioritizationRule(new ExpressionRouterRule().setExpression("If(job.Escalated = true, 10, 1)")));
+ .setQueueSelectors(List.of(new ConditionalQueueSelectorAttachment(
+ new ExpressionRouterRule("job.Escalated = true"),
+ List.of(new RouterQueueSelector("Id", LabelOperator.EQUAL, new LabelValue("XBOX_Escalation_Queue"))))))
+ .setPrioritizationRule(new ExpressionRouterRule("If(job.Escalated = true, 10, 1)")));
``` ::: zone-end
administration_client.create_exception_policy(
```java administrationClient.createExceptionPolicy(new CreateExceptionPolicyOptions("Escalate_XBOX_Policy",
- Map.of("Escalated_Rule", new ExceptionRule()
- .setTrigger(new WaitTimeExceptionTrigger().setThresholdSeconds(5 * 60))
- .setActions(Map.of("EscalateReclassifyExceptionAction", new ReclassifyExceptionAction()
+ Map.of("Escalated_Rule", new ExceptionRule(new WaitTimeExceptionTrigger(5 * 60),
+ Map.of("EscalateReclassifyExceptionAction", new ReclassifyExceptionAction()
.setClassificationPolicyId(classificationPolicy.getId()) .setLabelsToUpsert(Map.of("Escalated", new LabelValue(true)))))) ).setName("Add escalated label and reclassify XBOX Job requests after 5 minutes"));
When you submit the Job, it is added to the queue `XBOX_Queue` with the `voice`
```csharp await client.CreateJobAsync(new CreateJobOptions(jobId: "job1", channelId: "voice", queueId: defaultQueue.Value.Id) {
- RequestedWorkerSelectors = new List<RouterWorkerSelector>
+ RequestedWorkerSelectors =
{
- new(key: "XBOX_Hardware", labelOperator: LabelOperator.GreaterThanEqual, value: new LabelValue(7))
+ new RouterWorkerSelector(key: "XBOX_Hardware", labelOperator: LabelOperator.GreaterThanEqual, value: new LabelValue(7))
} }); ```
administration_client.create_job(
```java administrationClient.createJob(new CreateJobOptions("job1", "voice", defaultQueue.getId())
- .setRequestedWorkerSelectors(List.of(new RouterWorkerSelector()
- .setKey("XBOX_Hardware")
- .setLabelOperator(LabelOperator.GREATER_THAN_EQUAL)
- .setValue(new LabelValue(7)))));
+ .setRequestedWorkerSelectors(List.of(
+ new RouterWorkerSelector("XBOX_Hardware", LabelOperator.GREATER_THAN_EQUAL, new LabelValue(7)))));
``` ::: zone-end
communication-services Job Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/job-classification.md
classification_policy: ClassificationPolicy = administration_client.create_class
ClassificationPolicy classificationPolicy = administrationClient.createClassificationPolicy( new CreateClassificationPolicyOptions("XBOX_NA_QUEUE_Priority_1_10") .setName("Select XBOX Queue and set priority to 1 or 10")
- .setQueueSelectors(List.of(new ConditionalQueueSelectorAttachment()
- .setCondition(new ExpressionRouterRule().setExpression("job.Region = \"NA\""))
- .setQueueSelectors(List.of(
- new RouterQueueSelector().setKey("Id").setLabelOperator(LabelOperator.EQUAL).setValue("XBOX_NA_QUEUE"))
- )))
+ .setQueueSelectors(List.of(new ConditionalQueueSelectorAttachment(
+ new ExpressionRouterRule("job.Region = \"NA\""),
+ List.of(new RouterQueueSelector("Id", LabelOperator.EQUAL, new LabelValue("XBOX_NA_QUEUE"))))))
.setFallbackQueueId("XBOX_DEFAULT_QUEUE") .setPrioritizationRule(new ExpressionRouterRule().setExpression("If(job.Hardware_VIP = true, 10, 1)"))); ```
administration_client.create_classification_policy(
```java administrationClient.createClassificationPolicy(new CreateClassificationPolicyOptions("policy-1")
- .setWorkerSelectors(List.of(new StaticWorkerSelectorAttachment()
- .setWorkerSelector(new RouterWorkerSelector().setKey("Foo").setLabelOperator(LabelOperator.EQUAL).setValue("Bar")))));
+ .setWorkerSelectors(List.of(
+ new StaticWorkerSelectorAttachment(new RouterWorkerSelector("Foo", LabelOperator.EQUAL, new LabelValue("Bar"))))));
``` ::: zone-end
administration_client.create_classification_policy(
```java administrationClient.createClassificationPolicy(new CreateClassificationPolicyOptions("policy-1")
- .setWorkerSelectors(List.of(new ConditionalRouterWorkerSelectorAttachment()
- .setCondition(new ExpressionRouterRule().setExpression("job.Urgent = true"))
- .setWorkerSelectors(List.of(new RouterWorkerSelector().setKey("Foo").setLabelOperator(LabelOperator.EQUAL).setValue("Bar"))))));
+ .setWorkerSelectors(List.of(new ConditionalRouterWorkerSelectorAttachment(
+ new ExpressionRouterRule("job.Urgent = true"),
+ List.of(new RouterWorkerSelector("Foo", LabelOperator.EQUAL, new LabelValue("Bar")))))));
``` ::: zone-end
administration_client.create_classification_policy(
```java administrationClient.createClassificationPolicy(new CreateClassificationPolicyOptions("policy-1")
- .setWorkerSelectors(List.of(new PassThroughWorkerSelectorAttachment()
- .setKey("Foo").setLabelOperator(LabelOperator.EQUAL))));
+ .setWorkerSelectors(List.of(new PassThroughWorkerSelectorAttachment("Foo", LabelOperator.EQUAL))));
``` ::: zone-end
administration_client.create_classification_policy(
```java administrationClient.createClassificationPolicy(new CreateClassificationPolicyOptions("policy-1")
- .setWorkerSelectors(List.of(new WeightedAllocationWorkerSelectorAttachment()
- .setAllocations(List.of(new WorkerWeightedAllocation().setWeight(0.3).setWorkerSelectors(List.of(
- new RouterWorkerSelector().setKey("Vendor").setLabelOperator(LabelOperator.EQUAL).setValue("A"),
- new RouterWorkerSelector().setKey("Vendor").setLabelOperator(LabelOperator.EQUAL).setValue("B")
+ .setWorkerSelectors(List.of(new WeightedAllocationWorkerSelectorAttachment(
+ List.of(new WorkerWeightedAllocation(0.3, List.of(
+ new RouterWorkerSelector("Vendor", LabelOperator.EQUAL, new LabelValue("A")),
+ new RouterWorkerSelector("Vendor", LabelOperator.EQUAL, new LabelValue("B"))
))))))); ```
communication-services Preferred Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/preferred-worker.md
client.create_job(job_id = "job1", router_job = RouterJob(
```java client.createJob(new CreateJobOptions("job1", "Xbox_Chat_Channel", queue.getId())
- .setRequestedWorkerSelectors(List.of(new RouterWorkerSelector().setKey("Id")
- .setLabelOperator(LabelOperator.EQUAL)
- .setValue(new LabelValue("<preferred_worker_id>"))
- .setExpireAfterSeconds(45.0)
- .setExpedite(true))));
-```
+ .setRequestedWorkerSelectors(List.of(
+ new RouterWorkerSelector("Id", LabelOperator.EQUAL, new LabelValue("<preferred_worker_id>"))
+ .setExpireAfterSeconds(45.0)
+ .setExpedite(true))));
+```
::: zone-end
communication-services Scheduled Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/scheduled-jobs.md
In the following example, a job is created that will be scheduled 3 minutes from
```csharp await client.CreateJobAsync(new CreateJobOptions(jobId: "job1", channelId: "Voice", queueId: "Callback") {
- MatchingMode = new ScheduleAndSuspendMode(scheduleAt: DateTimeOffset.UtcNow.Add(TimeSpan.FromMinutes(3)))
+ MatchingMode = new JobMatchingMode(
+ new ScheduleAndSuspendMode(scheduleAt: DateTimeOffset.UtcNow.Add(TimeSpan.FromMinutes(3))))
}); ```
await client.createJob("job1", {
channelId: "Voice", queueId: "Callback", matchingMode: {
- modeType: "scheduleAndSuspendMode",
scheduleAndSuspendMode: { scheduleAt: new Date(Date.now() + 3 * 60000) }
client.create_job(job_id = "job1", router_job = RouterJob(
channel_id = "Voice", queue_id = "Callback", matching_mode = JobMatchingMode(
- mode_type = "scheduleAndSuspendMode",
schedule_and_suspend_mode = ScheduleAndSuspendMode(scheduled_at = datetime.utcnow() + timedelta(minutes = 3))))) ```
client.create_job(job_id = "job1", router_job = RouterJob(
```java client.createJob(new CreateJobOptions("job1", "Voice", "Callback")
- .setMatchingMode(new ScheduleAndSuspendMode(OffsetDateTime.now().plusMinutes(3))));
+ .setMatchingMode(new JobMatchingMode(new ScheduleAndSuspendMode(OffsetDateTime.now().plusMinutes(3)))));
``` ::: zone-end
if (eventGridEvent.EventType == "Microsoft.Communication.RouterJobWaitingForActi
await client.UpdateJobAsync(new UpdateJobOptions(jobId: eventGridEvent.Data.JobId) {
- MatchingMode = new QueueAndMatchMode(),
+ MatchingMode = new JobMatchingMode(new QueueAndMatchMode()),
Priority = 100 }); }
if (eventGridEvent.EventType == "Microsoft.Communication.RouterJobWaitingForActi
// Perform required actions here await client.updateJob(eventGridEvent.data.jobId, {
- matchingMode: { modeType: "queueAndMatchMode", queueAndMatchMode: {} },
+ matchingMode: { queueAndMatchMode: {} },
priority: 100 }); }
if (eventGridEvent.event_type == "Microsoft.Communication.RouterJobWaitingForAct
# Perform required actions here client.update_job(job_id = eventGridEvent.data.job_id,
- matching_mode = JobMatchingMode(mode_type = queueAndMatchMode, queue_and_match_mode = {}),
+ matching_mode = JobMatchingMode(queue_and_match_mode = {}),
priority = 100) } ```
if (eventGridEvent.EventType == "Microsoft.Communication.RouterJobWaitingForActi
// Perform required actions here client.updateJob(new UpdateJobOptions(eventGridEvent.Data.JobId)
- .setMatchingMode(new QueueAndMatchMode())
+ .setMatchingMode(new JobMatchingMode(new QueueAndMatchMode()))
.setPriority(100)); } ```
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
ms.suite: integration Previously updated : 06/27/2023 Last updated : 07/24/2023 tags: connectors ## As a developer, I want to access my SQL database from my logic app workflow.
For more information, review the [SQL Server managed connector reference](/conne
`Server={your-server-address};Database={your-database-name};User Id={your-user-name};Password={your-password};`
+* In Standard workflows, to use the SQL built-in triggers, you must enable change tracking in the table where you want to use the trigger. For more information, see [Enable and disable change tracking](/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server).
+ * The logic app workflow where you want to access your SQL database. To start your workflow with a SQL Server trigger, you have to start with a blank workflow. To use a SQL Server action, start your workflow with any trigger. <a name="multi-tenant-or-ise"></a>
container-apps Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/samples.md
Previously updated : 05/11/2022 Last updated : 07/24/2022
Refer to the following samples to learn how to use Azure Container Apps in diffe
| [Deploy a shopping cart Orleans app to Container Apps](https://github.com/Azure-Samples/orleans-blazor-server-shopping-cart-on-container-apps) | An end-to-end example shopping cart app built in ASP.NET Core Blazor Server with Orleans deployed to Azure Container Apps. | | [ASP.NET Core front-end with two back-end APIs on Azure Container Apps](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-on-Azure-Container-Apps )<br /> | This sample demonstrates ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps. | | [ASP.NET Core front-end with two back-end APIs on Azure Container Apps (with Dapr)](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-with-DAPR-on-Azure-Container-Apps )<br /> | Demonstrates how ASP.NET Core 6.0 is used to build a cloud-native application hosted in Azure Container Apps using Dapr. |
+| [Deploy Drupal on Azure Container Apps](https://github.com/Azure-Samples/drupal-on-azure-container-apps) | Demonstrates how to deploy a Drupal site to Azure Container Apps, with Azure Database for MariaDB, and Azure Files to store static assets. |
cosmos-db Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md
Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format. T
In an [upgrade scenario](upgrade-version.md), documents written prior to the upgrade to version 4.0+ won't benefit from the enhanced performance until they're updated via a write operation through the 4.0+ endpoint.
-16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with the Azure Synapse Link feature and/or Continuous Backup.
+16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit only applies to collections created after this feature has been enabled. Once this feature is enabled for your database account, it can't be disabled.
Enabling 16 MB can be done in the features tab in the Azure portal or programmatically by [adding the "EnableMongo16MBDocumentSupport" capability](how-to-configure-capabilities.md).
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
Azure Cosmos DB for MongoDB supports documents that are encoded in MongoDB BSON
In an [upgrade scenario](upgrade-version.md), documents that were written prior to the upgrade to version 4.0+ won't benefit from the enhanced performance until they're updated via a write operation through the 4.0+ endpoint.
-16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit applies only to collections that are created after this feature is enabled. When this feature is enabled for your database account, it can't be disabled. This feature isn't compatible with Azure Synapse Link for Azure Cosmos DB or with continuous backup.
+16-MB document support raises the size limit for your documents from 2 MB to 16 MB. This limit applies only to collections that are created after this feature is enabled. When this feature is enabled for your database account, it can't be disabled.
To enable 16-MB document support, change the setting on the **Features** tab for the resource in the Azure portal or programmatically [add the `EnableMongo16MBDocumentSupport` capability](how-to-configure-capabilities.md).
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
Capabilities are features that can be added or removed to your API for MongoDB a
| `EnableUniqueCompoundNestedDocs` | Enables support for compound and unique indexes on nested fields if the nested field isn't an array. | No | | `EnableTtlOnCustomPath` | Provides the ability to set a custom Time to Live (TTL) on any one field in a collection. | No | | `EnablePartialUniqueIndex` | Enables support for a unique partial index, so you have more flexibility to specify exactly which fields in documents you'd like to index. | No |
+| `EnableUniqueIndexReIndex` | Enables support for unique index re-indexing for Cosmos DB for MongoDB RU. | No |
## Enable a capability
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
You can also create wildcard indexes using the Data Explorer in the Azure portal
Documents with many fields may have a high Request Unit (RU) charge for writes and updates. Therefore, if you have a write-heavy workload, you should opt to individually index paths as opposed to using wildcard indexes.
+> [!NOTE]
+> Support for unique index on existing collections with data is available in preview. This feature can be enabled for your database account by enabling the ['EnableUniqueIndexReIndex' capability](./how-to-configure-capabilities.md#available-capabilities).
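For context, here's a minimal sketch of what the preview enables, using the MongoDB Node.js driver. The connection string, database, collection, and field names are placeholders, and the `EnableUniqueIndexReIndex` capability is assumed to already be enabled on the account.

```typescript
import { MongoClient } from "mongodb";

// Placeholder connection string for an Azure Cosmos DB for MongoDB account.
const client = new MongoClient(process.env.COSMOS_MONGO_CONNECTION_STRING ?? "");

async function createUniqueIndexOnExistingCollection(): Promise<void> {
  await client.connect();
  // Placeholder database/collection; the collection already contains documents.
  const collection = client.db("mydb").collection("people");

  // With the preview capability enabled, a unique index can be created on the
  // existing (non-empty) collection.
  await collection.createIndex({ email: 1 }, { unique: true });

  await client.close();
}

createUniqueIndexOnExistingCollection().catch(console.error);
```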
+ ### Limitations Wildcard indexes do not support any of the following index types or properties:
cosmos-db Objecttoarray https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/objecttoarray.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
An array of elements with two fields, either `k` and `v` or custom-named fields.
This example demonstrates converting a static object to an array of field/value pairs using the default `k` and `v` identifiers.
-```sql
-SELECT VALUE
- ObjectToArray({
- "a": "12345",
- "b": "67890"
- })
-```
-```json
-[
- [
- {
- "k": "a",
- "v": "12345"
- },
- {
- "k": "b",
- "v": "67890"
- }
- ]
-]
-```
In this example, the field name is updated to use the `name` identifier.
-```sql
-SELECT VALUE
- ObjectToArray({
- "a": "12345",
- "b": "67890"
- }, "name")
-```
-```json
-[
- [
- {
- "name": "a",
- "v": "12345"
- },
- {
- "name": "b",
- "v": "67890"
- }
- ]
-]
-```
In this example, the value name is updated to use the `value` identifier and the field name uses the `key` identifier.
-```sql
-SELECT VALUE
- ObjectToArray({
- "a": "12345",
- "b": "67890"
- }, "key", "value")
-```
-```json
-[
- [
- {
- "key": "a",
- "value": "12345"
- },
- {
- "key": "b",
- "value": "67890"
- }
- ]
-]
-```
This final example uses an item within an existing container that stores data using fields within a JSON object. In this example, the function is used to break up the object into an array item for each field/value pair. ## Remarks
-If the input value isn't a valid Object, the result is Undefined\.
+- If the input value isn't a valid object, the result is `undefined`.
## See also
cosmos-db Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/pi.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
Returns a numeric expression.
## Examples
-The following example returns the constant value of Pi.
+The following example returns the constant value of Pi.
-```sql
-SELECT VALUE
- PI()
-```
-
-```json
-[
- 3.141592653589793
-]
-```
+ ## Next steps
cosmos-db Power https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/power.md
Previously updated : 07/01/2023 Last updated : 07/20/2023
Returns a numeric expression.
The following example demonstrates raising a number to various powers.
-```sql
-SELECT VALUE {
- oneFirstPower: POWER(1, 1),
- twoSquared: POWER(2, 2),
- threeCubed: POWER(3, 3),
- fourFourthPower: POWER(4, 4),
- fiveFithPower: POWER(5, 5),
- zeroSquared: POWER(0, 2),
- nullCubed: POWER(null, 3),
- twoNullPower: POWER(2, null)
-}
-```
-```json
-[
- {
- "oneFirstPower": 1,
- "twoSquared": 4,
- "threeCubed": 27,
- "fourFourthPower": 256,
- "fiveFithPower": 3125,
- "zeroSquared": 0
- }
-]
-```
## Remarks -- This system function doesn't utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`SQRT`](sqrt.md)
cosmos-db Rand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/rand.md
Title: RAND in Azure Cosmos DB query language
-description: Learn about SQL system function RAND in Azure Cosmos DB.
-
+ Title: RAND
+
+description: An Azure Cosmos DB for NoSQL system function that returns a randomly generated numeric value from zero to one.
+++ - Previously updated : 09/16/2019--+ Last updated : 07/21/2023+
-# RAND (Azure Cosmos DB)
+
+# RAND (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a randomly generated numeric value from [0,1).
-
+Returns a randomly generated numeric value from zero to one.
+ ## Syntax
-
+ ```sql
-RAND ()
-```
+RAND()
+```
## Return types
- Returns a numeric expression.
+Returns a numeric expression.
-## Remarks
+## Examples
- `RAND` is a nondeterministic function. Repetitive calls of `RAND` do not return the same results. This system function will not utilize the index.
+The following example returns randomly generated numeric values.
-## Examples
-
- The following example returns a randomly generated numeric value.
-
-```sql
-SELECT RAND() AS rand
-```
-
- Here is the result set.
-
-```json
-[{"rand": 0.87860053195618093}]
-```
+
+## Remarks
+
+- This function doesn't use the index.
+- This function is nondeterministic. Repetitive calls of this function don't return the same results.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`IS_NUMBER`](is-number.md)
cosmos-db Regexmatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/regexmatch.md
Title: RegexMatch in Azure Cosmos DB query language
-description: Learn about the RegexMatch SQL system function in Azure Cosmos DB
-
+ Title: RegexMatch
+
+description: An Azure Cosmos DB for NoSQL system function that provides regular expression capabilities.
+++ - Previously updated : 08/12/2021---+ Last updated : 07/24/2023+
-# REGEXMATCH (Azure Cosmos DB)
+
+# RegexMatch (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Provides regular expression capabilities. Regular expressions are a concise and flexible notation for finding patterns of text. Azure Cosmos DB uses [PERL compatible regular expressions (PCRE)](http://www.pcre.org/).
+This function provides regular expression capabilities. Regular expressions are a concise and flexible notation for finding patterns of text.
+
+> [!NOTE]
+> Azure Cosmos DB for NoSQL uses [PERL compatible regular expressions (PCRE)](https://pcre2project.github.io/pcre2/).
## Syntax
-
+ ```sql
-RegexMatch(<str_expr1>, <str_expr2>, [, <str_expr3>])
+RegexMatch(<string_expr_1>, <string_expr_2> [, <string_expr_3>])
```
-
+ ## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the regular expression.
-*str_expr3*
- Is the string of selected modifiers to use with the regular expression. This string value is optional. If you'd like to run RegexMatch with no modifiers, you can either add an empty string or omit entirely.
+| | Description |
+| | |
+| **`string_expr_1`** | A string expression to be searched. |
+| **`string_expr_2`** | A string expression with a regular expression defined to use when searching `string_expr_1`. |
+| **`string_expr_3`** *(Optional)* | An optional string expression with the selected modifiers to use with the regular expression (`string_expr_2`). If not provided, the default is to run the regular expression match with no modifiers. |
-You can learn about [syntax for creating regular expressions in Perl](https://perldoc.perl.org/perlre).
+> [!NOTE]
+> Providing an empty string for `string_expr_3` is functionally equivalent to omitting the argument.
-Azure Cosmos DB supports the following four modifiers:
+## Return types
-| Modifier | Description |
-| | -- |
-| `m` | Treat the string expression to be searched as multiple lines. Without this option, "^" and "$" will match at the beginning or end of the string and not each individual line. |
-| `s` | Allow "." to match any character, including a newline character. |
-| `i` | Ignore case when pattern matching. |
-| `x` | Ignore all whitespace characters. |
+Returns a boolean expression.
-## Return types
-
- Returns a Boolean expression. Returns undefined if the string expression to be searched, the regular expression, or the selected modifiers are invalid.
-
## Examples
-
-The following simple RegexMatch example checks the string "abcd" for regular expression match using a few different modifiers.
-
-```sql
-SELECT RegexMatch ("abcd", "ABC", "") AS NoModifiers,
-RegexMatch ("abcd", "ABC", "i") AS CaseInsensitive,
-RegexMatch ("abcd", "ab.", "") AS WildcardCharacter,
-RegexMatch ("abcd", "ab c", "x") AS IgnoreWhiteSpace,
-RegexMatch ("abcd", "aB c", "ix") AS CaseInsensitiveAndIgnoreWhiteSpace
-```
-
- Here is the result set.
-
-```json
-[
- {
- "NoModifiers": false,
- "CaseInsensitive": true,
- "WildcardCharacter": true,
- "IgnoreWhiteSpace": true,
- "CaseInsensitiveAndIgnoreWhiteSpace": true
- }
-]
-```
-
-With RegexMatch, you can use metacharacters to do more complex string searches that wouldn't otherwise be possible with the StartsWith, EndsWith, Contains, or StringEquals system functions. Here are some additional examples:
-
-> [!NOTE]
-> If you need to use a metacharacter in a regular expression and don't want it to have special meaning, you should escape the metacharacter using `\`.
-
-**Check items that have a description that contains the word "salt" exactly once:**
-```sql
-SELECT *
-FROM c
-WHERE RegexMatch (c.description, "salt{1}","")
-```
+The following example illustrates regular expression matches using a few different modifiers.
-**Check items that have a description that contain a number between 0 and 99:**
-```sql
-SELECT *
-FROM c
-WHERE RegexMatch (c.description, "[0-99]","")
-```
-**Check items that have a description that contain four letter words starting with "S" or "s":**
+The next example assumes that you have a container with items including a `name` field.
-```sql
-SELECT *
-FROM c
-WHERE RegexMatch (c.description, " s... ","i")
-```
+
+This example uses a regular expression match as a filter to return a subset of items.
++ ## Remarks
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy) if the regular expression can be broken down into either StartsWith, EndsWith, Contains, or StringEquals system functions.
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy) only if the regular expression can be broken down into either `StartsWith`, `EndsWith`, `Contains`, or `StringEquals` equivalent system functions.
+- Returns `undefined` if the string expression to be searched (`string_expr_1`), the regular expression (`string_expr_2`), or the selected modifiers (`string_expr_3`) are invalid.
+- This function supports the following four modifiers:
+ | | Format | Description |
+ | | | |
+ | **Multiple lines** | `m` | Treat the string expression to be searched as multiple lines. Without this option, the characters `^` and `$` match at the beginning or end of the string and not each individual line. |
+ | **Match any string** | `s` | Allow "." to match any character, including a newline character. |
+ | **Ignore case** | `i` | Ignore case when pattern matching. |
+ | **Ignore whitespace** | `x` | Ignore all whitespace characters. |
+- If you'd like to use a metacharacter in a regular expression and don't want it to have special meaning, escape it using `\`.
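To tie the modifiers and remarks together, here's a minimal sketch using the JavaScript SDK (`@azure/cosmos`). The endpoint, key, database, container, and field names are placeholders, and the pattern is only an example.

```typescript
import { CosmosClient } from "@azure/cosmos";

// Placeholders: supply your own endpoint, key, database, and container names.
const client = new CosmosClient({
  endpoint: "https://<account>.documents.azure.com:443/",
  key: "<key>"
});
const container = client.database("cosmicworks").container("products");

async function findMatches(): Promise<void> {
  // Case-insensitive match (the "i" modifier) against a hypothetical 'name' field.
  const querySpec = {
    query: "SELECT p.name FROM products p WHERE RegexMatch(p.name, @pattern, 'i')",
    parameters: [{ name: "@pattern", value: "^winter" }]
  };
  const { resources } = await container.items.query(querySpec).fetchAll();
  console.log(resources);
}

findMatches().catch(console.error);
```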
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`IS_STRING`](is-string.md)
cosmos-db Replace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/replace.md
Title: REPLACE in Azure Cosmos DB query language
-description: Learn about SQL system function REPLACE in Azure Cosmos DB.
-
+ Title: REPLACE
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string with all occurrences of a specified string replaced.
+++ - Previously updated : 09/13/2019--+ Last updated : 07/24/2023+
-# REPLACE (Azure Cosmos DB)
+
+# REPLACE (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Replaces all occurrences of a specified string value with another string value.
-
+Replaces all occurrences of a specified string value with another string value.
+ ## Syntax
-
+ ```sql
-REPLACE(<str_expr1>, <str_expr2>, <str_expr3>)
-```
-
+REPLACE(<string_expr_1>, <string_expr_2>, <string_expr_3>)
+```
+ ## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the string expression to be found.
-
-*str_expr3*
- Is the string expression to replace occurrences of *str_expr2* in *str_expr1*.
-
+
+| | Description |
+| | |
+| **`string_expr_1`** | A string expression to be searched. |
+| **`string_expr_2`** | A string expression to be found within `string_expr_1`. |
+| **`string_expr_3`** | A string expression with the text to replace all occurrences of `string_expr_2` within `string_expr_1`. |
+ ## Return types
-
- Returns a string expression.
-
+
+Returns a string expression.
+ ## Examples
-
- The following example shows how to use `REPLACE` in a query.
-
-```sql
-SELECT REPLACE("This is a Test", "Test", "desk") AS replace
-```
-
- Here is the result set.
-
-```json
-[{"replace": "This is a desk"}]
-```
+
+The following example shows how to use this function to replace static values.
++ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`SUBSTRING`](substring.md)
cosmos-db Replicate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/replicate.md
Title: REPLICATE in Azure Cosmos DB query language
-description: Learn about SQL system function REPLICATE in Azure Cosmos DB.
-
+ Title: REPLICATE
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string value repeated a specific number of times.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/24/2023+
-# REPLICATE (Azure Cosmos DB)
+
+# REPLICATE (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Repeats a string value a specified number of times.
-
+Repeats a string value a specified number of times.
+ ## Syntax
-
+ ```sql
-REPLICATE(<str_expr>, <num_expr>)
+REPLICATE(<string_expr>, <numeric_expr>)
```
-
+ ## Arguments
-
-*str_expr*
- Is a string expression.
-
-*num_expr*
- Is a numeric expression. If *num_expr* is negative or non-finite, the result is undefined.
-
+
+| | Description |
+| | |
+| **`string_expr`** | A string expression. |
+| **`numeric_expr`** | A numeric expression. |
+ ## Return types
-
- Returns a string expression.
-
-## Remarks
- The maximum length of the result is 10,000 characters i.e. (length(*str_expr*) * *num_expr*) <= 10,000. This system function will not utilize the index.
+Returns a string expression.
## Examples
-
- The following example shows how to use `REPLICATE` in a query.
-
-```sql
-SELECT REPLICATE("a", 3) AS replicate
-```
-
- Here is the result set.
-
-```json
-[{"replicate": "aaa"}]
-```
+
+The following example shows how to use this function to build a repeating string.
+++
+## Remarks
+
+- This function doesn't use the index.
+- The maximum length of the result is **10,000** characters.
+ - `(length(string_expr) * numeric_expr) <= 10,000`
+- If `numeric_expr` is *negative* or *nonfinite*, the result is `undefined`.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`REPLACE`](replace.md)
cosmos-db Reverse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/reverse.md
Title: REVERSE in Azure Cosmos DB query language
-description: Learn about SQL system function REVERSE in Azure Cosmos DB.
-
+ Title: REVERSE
+
+description: An Azure Cosmos DB for NoSQL system function that returns a reversed string.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/24/2023+
-# REVERSE (Azure Cosmos DB)
+
+# REVERSE (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns the reverse order of a string value.
-
+Returns the reverse order of a string value.
+ ## Syntax
-
+ ```sql
-REVERSE(<str_expr>)
-```
-
+REVERSE(<string_expr>)
+```
+ ## Arguments
-
-*str_expr*
- Is a string expression.
-
+
+| | Description |
+| | |
+| **`string_expr`** | A string expression. |
+ ## Return types
-
- Returns a string expression.
-
+
+Returns a string expression.
+ ## Examples
-
- The following example shows how to use `REVERSE` in a query.
-
-```sql
-SELECT REVERSE("Abc") AS reverse
-```
-
- Here is the result set.
-
-```json
-[{"reverse": "cbA"}]
-```
+
+The following example shows how to use this function to reverse multiple strings.
++ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`LENGTH`](length.md)
cosmos-db Round https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/round.md
Title: ROUND in Azure Cosmos DB query language
-description: Learn about SQL system function ROUND in Azure Cosmos DB.
-
+ Title: ROUND
+
+description: An Azure Cosmos DB for NoSQL system function that returns the number rounded to the closest integer.
+++ - Previously updated : 09/13/2019--+ Last updated : 07/24/2023+
-# ROUND (Azure Cosmos DB)
+
+# ROUND (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a numeric value, rounded to the closest integer value.
-
+Returns a numeric value, rounded to the closest integer value.
+ ## Syntax
-
+ ```sql ROUND(<numeric_expr>)
-```
-
+```
+ ## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
+
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
+ ## Return types
-
- Returns a numeric expression.
-
-## Remarks
-
-The rounding operation performed follows midpoint rounding away from zero. If the input is a numeric expression which falls exactly between two integers then the result will be the closest integer value away from zero. This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
-
-|<numeric_expr>|Rounded|
-|-|-|
-|-6.5000|-7|
-|-0.5|-1|
-|0.5|1|
-|6.5000|7|
-
+
+Returns a numeric expression.
+ ## Examples
-
-The following example rounds the following positive and negative numbers to the nearest integer.
-
-```sql
-SELECT ROUND(2.4) AS r1, ROUND(2.6) AS r2, ROUND(2.5) AS r3, ROUND(-2.4) AS r4, ROUND(-2.6) AS r5
-```
-
-Here is the result set.
-
-```json
-[{r1: 2, r2: 3, r3: 3, r4: -2, r5: -3}]
-```
+
+The following example rounds positive and negative numbers to the nearest integer.
+++
+## Remarks
+
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
+- The rounding operation follows midpoint rounding away from zero. If the input is a numeric expression that falls exactly between two integers, the result is the closest integer value away from `0`. Examples are provided here:
+ | | Rounded |
+ | | |
+ | **`-6.5000`** | `-7` |
+ | **`-0.5`** | `-1` |
+ | **`0.5`** | `1` |
+ | **`6.5000`** | `7` |
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`POWER`](power.md)
cosmos-db Rtrim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/rtrim.md
Previously updated : 07/01/2023 Last updated : 07/24/2023
Returns a string expression after it removes trailing whitespace or specified ch
```sql RTRIM(<string_expr_1> [, <string_expr_2>])
-```
+```
## Arguments
Returns a string expression.
The following example shows how to use this function with various parameters inside a query.
-```sql
-SELECT VALUE {
- whitespaceStart: RTRIM(" AdventureWorks"),
- whitespaceStartEnd: RTRIM(" AdventureWorks "),
- whitespaceEnd: RTRIM("AdventureWorks "),
- noWhitespace: RTRIM("AdventureWorks"),
- trimSuffix: RTRIM("AdventureWorks", "Works"),
- trimPrefix: RTRIM("AdventureWorks", "Adventure"),
- trimEntireTerm: RTRIM("AdventureWorks", "AdventureWorks"),
- trimEmptyString: RTRIM("AdventureWorks", "")
-}
-```
-
-```json
-[
- {
- "whitespaceStart": " AdventureWorks",
- "whitespaceStartEnd": " AdventureWorks",
- "whitespaceEnd": "AdventureWorks",
- "noWhitespace": "AdventureWorks",
- "trimSuffix": "Adventure",
- "trimPrefix": "AdventureWorks",
- "trimEntireTerm": "",
- "trimEmptyString": "AdventureWorks"
- }
-]
-```
+ ## Remarks -- This system function doesn't use the index.
+- This function doesn't use the index.
## Next steps
cosmos-db Setintersect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/setintersect.md
Previously updated : 07/01/2023 Last updated : 07/24/2023
Returns an array of expressions.
This first example uses the function with static arrays to demonstrate the intersect functionality.
-```sql
-SELECT VALUE {
- simpleIntersect: SetIntersect([1, 2, 3, 4], [3, 4, 5, 6]),
- emptyIntersect: SetIntersect([1, 2, 3, 4], []),
- duplicatesIntersect: SetIntersect([1, 2, 3, 4], [1, 1, 1, 1]),
- noMatchesIntersect: SetIntersect([1, 2, 3, 4], ["A", "B"]),
- unorderedIntersect: SetIntersect([1, 2, "A", "B"], ["A", 1])
-}
-```
-```json
-[
- {
- "simpleIntersect": [3, 4],
- "emptyIntersect": [],
- "duplicatesIntersect": [1],
- "noMatchesIntersect": [],
- "unorderedIntersect": ["A", 1]
- }
-]
-```
-This last example uses two items in a container that share values within an array property.
-
-```json
-[
- {
- "name": "Snowilla Women's Vest",
- "inStockColors": [
- "Rhino",
- "Finch"
- ],
- "colors": [
- "Finch",
- "Mine Shaft",
- "Rhino"
- ]
- }
-]
-```
+This last example uses a single item with two array properties that share some values.
-```sql
-SELECT
- p.name,
- SetIntersect(p.colors, p.inStockColors) AS availableColors
-FROM
- products p
-```
-```json
-[
- {
- "name": "Snowilla Women's Vest",
- "availableColors": [
- "Rhino",
- "Finch"
- ]
- }
-]
-```
+The query selects the appropriate fields from the item in the container.
++ ## Remarks - This function doesn't return duplicates.-- This function doesn't utilize the index.
+- This function doesn't use the index.
## See also
cosmos-db Setunion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/setunion.md
Previously updated : 07/01/2023 Last updated : 07/24/2023
Returns an array of expressions.
This first example uses the function with static arrays to demonstrate the union functionality.
-```sql
-SELECT VALUE {
- simpleUnion: SetUnion([1, 2, 3, 4], [3, 4, 5, 6]),
- emptyUnion: SetUnion([1, 2, 3, 4], []),
- duplicatesUnion: SetUnion([1, 2, 3, 4], [1, 1, 1, 1]),
- unorderedUnion: SetUnion([1, 2, "A", "B"], ["A", 1])
-}
-```
-```json
-[
- {
- "simpleUnion": [1, 2, 3, 4, 5, 6],
- "emptyUnion": [1, 2, 3, 4],
- "duplicatesUnion": [1, 2, 3, 4],
- "unorderedUnion": [1, 2, "A", "B"]
- }
-]
-```
-This last example uses two items in a container that share values within an array property.
-
-```json
-[
- {
- "name": "Yarbeck Men's Coat",
- "colors": [
- {
- "season": "Winter",
- "values": [
- "Cutty Sark",
- "Horizon",
- "Russet",
- "Fuscous"
- ]
- },
- {
- "season": "Summer",
- "values": [
- "Fuscous",
- "Horizon",
- "Tacha"
- ]
- }
- ]
- }
-]
-```
+This last example uses an item with multiple array properties that share some values.
-```sql
-SELECT
- p.name,
- SetUnion(p.colors[0].values, p.colors[1].values) AS allColors
-FROM
- products p
-```
-```json
-[
- {
- "name": "Yarbeck Men's Coat",
- "allColors": [
- "Cutty Sark",
- "Horizon",
- "Russet",
- "Fuscous",
- "Tacha"
- ]
- }
-]
-```
+The query returns the union of the two arrays as a new property.
++ ## Remarks - This function doesn't return duplicates.-- This function doesn't utilize the index.
+- This function doesn't use the index.
## See also
cosmos-db Sign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sign.md
Title: SIGN in Azure Cosmos DB query language
-description: Learn about SQL system function SIGN in Azure Cosmos DB.
-
+ Title: SIGN
+
+description: An Azure Cosmos DB for NoSQL system function that returns the sign of the specified number.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/24/2023+
-# SIGN (Azure Cosmos DB)
+
+# SIGN (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns the positive (+1), zero (0), or negative (-1) sign of the specified numeric expression.
-
+Returns the positive (+1), zero (0), or negative (-1) sign of the specified numeric expression.
+ ## Syntax
-
+ ```sql SIGN(<numeric_expr>)
-```
-
+```
+ ## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
+
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
+ ## Return types
-
- Returns a numeric expression.
-
+
+Returns a numeric expression.
+ ## Examples
-
- The following example returns the `SIGN` values of numbers from -2 to 2.
-
-```sql
-SELECT SIGN(-2) AS s1, SIGN(-1) AS s2, SIGN(0) AS s3, SIGN(1) AS s4, SIGN(2) AS s5
-```
-
- Here is the result set.
-
-```json
-[{s1: -1, s2: -1, s3: 0, s4: 1, s5: 1}]
-```
+
+The following example returns the sign of various numbers from -2 to 2.
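A minimal sketch of such a query in the `SELECT VALUE` style used elsewhere in these articles; the property names are illustrative:

```sql
SELECT VALUE {
    signNegativeTwo: SIGN(-2),
    signNegativeOne: SIGN(-1),
    signZero: SIGN(0),
    signPositiveOne: SIGN(1),
    signPositiveTwo: SIGN(2)
}
```

Consistent with the removed result set, this returns `-1`, `-1`, `0`, `1`, and `1` respectively.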
++ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`ABS`](abs.md)
cosmos-db Sin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sin.md
Previously updated : 07/01/2023 Last updated : 07/24/2023
Returns the trigonometric sine of the specified angle in radians.
```sql SIN(<numeric_expr>)
-```
+```
## Arguments
Returns a numeric expression.
The following example calculates the sine of the specified angle using the function.
-```sql
-SELECT VALUE {
- sine: SIN(45.175643)
-}
-```
-
-```json
-[
- {
- "sine": 0.929607286611012
- }
-]
-```
+ ## Remarks -- This system function doesn't utilize the index.
+- This function doesn't use the index.
## Next steps
cosmos-db Sqrt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sqrt.md
Returns a numeric expression.
The following example returns the square roots of various numeric values.
-```sql
-SELECT VALUE {
- sqrtZero: SQRT(0),
- sqrtOne: SQRT(1),
- sqrtFour: SQRT(4),
- sqrtPrime: SQRT(17),
- sqrtTwentyFive: SQRT(25)
-}
-```
-
-```json
-[
- {
- "sqrtZero": 0,
- "sqrtOne": 1,
- "sqrtFour": 2,
- "sqrtPrime": 4.123105625617661,
- "sqrtTwentyFive": 5
- }
-]
-```
+ ## Remarks -- This system function doesn't utilize the index.
+- This function doesn't use the index.
- If you attempt to find the square root value that results in an imaginary number, you get an error that the value can't be represented in JSON. For example, `SQRT(-25)` gives this error. ## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`POWER`](power.md)
cosmos-db Square https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/square.md
Previously updated : 07/01/2023 Last updated : 07/24/2023
Returns a numeric expression.
The following example returns the squares of various numbers.
-```sql
-SELECT VALUE {
- squareZero: SQUARE(0),
- squareOne: SQUARE(1),
- squareTwo: SQUARE(2),
- squareThree: SQUARE(3),
- squareNull: SQUARE(null)
-}
-```
-```json
-[
- {
- "squareZero": 0,
- "squareOne": 1,
- "squareTwo": 4,
- "squareThree": 9
- }
-]
-```
## Remarks -- This system function doesn't utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`SQRT`](sqrt.md)
+- [`POWER`](power.md)
cosmos-db St Area https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-area.md
Title: ST_AREA in Azure Cosmos DB query language
-description: Learn about SQL system function ST_AREA in Azure Cosmos DB.
-
+ Title: ST_AREA
+
+description: An Azure Cosmos DB for NoSQL system function that returns the total area of a GeoJSON polygon or multi-polygon.
+++ - Previously updated : 10/21/2022--+ Last updated : 07/24/2023+
-# ST_AREA (Azure Cosmos DB)
+# ST_AREA (NoSQL query)
[!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns the total area of a GeoJSON Polygon or MultiPolygon expression. To learn more, see the [Geospatial and GeoJSON location data](geospatial-intro.md) article.
-
+Returns the total area of a GeoJSON **Polygon** or **MultiPolygon** expression.
+
+> [!NOTE]
+> For more information, see [Geospatial and GeoJSON location data](geospatial-intro.md).
+ ## Syntax
-
+ ```sql
-ST_AREA (<spatial_expr>)
+ST_AREA(<spatial_expr>)
```
-
+ ## Arguments
-
-*spatial_expr*
- Is any valid GeoJSON Polygon or MultiPolygon object expression.
-
+
+| | Description |
+| | |
+| **`spatial_expr`** | Any valid GeoJSON **Polygon** or **MultiPolygon** expression. |
+ ## Return types
-
- Returns the total area of a set of points. This is expressed in square meters for the default reference system.
-
+
+Returns a numeric expression that represents the total area of a set of points.
+ ## Examples
-
- The following example shows how to return the area of a polygon using the `ST_AREA` built-in function.
-
-```sql
-SELECT ST_AREA({
- "type":"Polygon",
- "coordinates":[ [
- [ 31.8, -5 ],
- [ 32, -5 ],
- [ 32, -4.7 ],
- [ 31.8, -4.7 ],
- [ 31.8, -5 ]
- ] ]
-}) as Area
-```
-Here is the result set.
+The following example shows how to return the area of a polygon.
-```json
-[
- {
- "Area": 735970283.0522614
- }
-]
-```
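A minimal sketch of the polygon-area example described above, using the coordinates from the removed snippet:

```sql
SELECT ST_AREA({
    "type": "Polygon",
    "coordinates": [[
        [31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]
    ]]
}) AS area
```

Per the removed result set, the area is approximately `735970283.05` square meters.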
-## Remarks
-Using the ST_AREA function to calculate the area of zero or one-dimensional figures like GeoJSON Points and LineStrings will result in an area of 0.
+## Remarks
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+- The result is expressed in square meters for the default reference system.
+- Using this function to calculate the area of zero or one-dimensional figures like GeoJSON **Points** and **LineStrings** results in an area of `0`.
+- The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`ST_WITHIN`](st-within.md)
cosmos-db St Distance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-distance.md
Title: ST_DISTANCE in Azure Cosmos DB query language
-description: Learn about SQL system function ST_DISTANCE in Azure Cosmos DB.
-
+ Title: ST_DISTANCE
+
+description: An Azure Cosmos DB for NoSQL system function that returns the distance between two GeoJSON Point, Polygon, MultiPolygon, or LineString expressions.
+++ - Previously updated : 02/17/2021--+ Last updated : 07/24/2023+
-# ST_DISTANCE (Azure Cosmos DB)
+
+# ST_DISTANCE (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns the distance between the two GeoJSON Point, Polygon, MultiPolygon or LineString expressions. To learn more, see the [Geospatial and GeoJSON location data](geospatial-intro.md) article.
-
+Returns the distance between two GeoJSON Point, Polygon, MultiPolygon or LineString expressions.
+
+> [!NOTE]
+> For more information, see [Geospatial and GeoJSON location data](geospatial-intro.md).
+ ## Syntax
-
+ ```sql
-ST_DISTANCE (<spatial_expr>, <spatial_expr>)
-```
-
+ST_DISTANCE(<spatial_expr_1>, <spatial_expr_2>)
+```
+ ## Arguments
-
-*spatial_expr*
- Is any valid GeoJSON Point, Polygon, or LineString object expression.
-
+
+| | Description |
+| | |
+| **`spatial_expr_1`** | Any valid GeoJSON **Point**, **Polygon**, **MultiPolygon** or **LineString** expression. |
+| **`spatial_expr_2`** | Any valid GeoJSON **Point**, **Polygon**, **MultiPolygon** or **LineString** expression. |
+ ## Return types
-
- Returns a numeric expression containing the distance. This is expressed in meters for the default reference system.
-
+
+Returns a numeric expression that represents the distance between the two expressions.
+ ## Examples
-
- The following example shows how to return all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function.
-
-```sql
-SELECT f.id
-FROM Families f
-WHERE ST_DISTANCE(f.location, {'type': 'Point', 'coordinates':[31.9, -4.8]}) < 30000
-```
-
- Here is the result set.
-
-```json
-[{
- "id": "WakefieldFamily"
-}]
-```
-## Remarks
+The following example assumes a container exists with two items.
-This system function will benefit from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+The example shows how to use the function as a filter to return items within a specified distance.
+++
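A minimal sketch of such a filter query, adapted from the removed snippet above (the `Families` container and `location` property come from that snippet; 30,000 meters is 30 km):

```sql
SELECT f.id
FROM Families f
WHERE ST_DISTANCE(f.location, {
    "type": "Point",
    "coordinates": [31.9, -4.8]
}) < 30000
```

In the removed sample output, this returned the item with an `id` of `WakefieldFamily`.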
+## Remarks
+
+- The result is expressed in meters for the default reference system.
+- This function benefits from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
+- The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`ST_INTERSECTS`](st-intersects.md)
cosmos-db St Intersects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-intersects.md
Title: ST_INTERSECTS in Azure Cosmos DB query language
-description: Learn about SQL system function ST_INTERSECTS in Azure Cosmos DB.
-
+ Title: ST_INTERSECTS
+
+description: An Azure Cosmos DB for NoSQL system function that returns whether two GeoJSON objects intersect.
+++ - Previously updated : 09/21/2021--+ Last updated : 07/24/2023+
-# ST_INTERSECTS (Azure Cosmos DB)
+
+# ST_INTERSECTS (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a Boolean expression indicating whether the GeoJSON object (Point, Polygon, MultiPolygon, or LineString) specified in the first argument intersects the GeoJSON (Point, Polygon, MultiPolygon, or LineString) in the second argument.
-
+Returns a boolean indicating whether the GeoJSON object (**Point**, **Polygon**, **MultiPolygon**, or **LineString**) specified in the first argument intersects the GeoJSON object in the second argument.
+ ## Syntax
-
+ ```sql
-ST_INTERSECTS (<spatial_expr>, <spatial_expr>)
+ST_INTERSECTS(<spatial_expr_1>, <spatial_expr_2>)
```
-
+ ## Arguments
-
-*spatial_expr*
- Is a GeoJSON Point, Polygon, or LineString object expression.
-
+
+| | Description |
+| | |
+| **`spatial_expr_1`** | Any valid GeoJSON **Point**, **Polygon**, **MultiPolygon** or **LineString** expression. |
+| **`spatial_expr_2`** | Any valid GeoJSON **Point**, **Polygon**, **MultiPolygon** or **LineString** expression. |
+ ## Return types
-
- Returns a Boolean value.
-
+
+Returns a boolean value.
+ ## Examples
-
- The following example shows how to find all areas that intersect with the given polygon.
-
-```sql
-SELECT a.id
-FROM Areas a
-WHERE ST_INTERSECTS(a.location, {
- 'type':'Polygon',
- 'coordinates': [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
-})
-```
-
- Here is the result set.
-
-```json
-[{ "id": "IntersectingPolygon" }]
-```
-## Remarks
+The following example shows how to find if two polygons intersect.
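A minimal sketch with two polygon literals; the first polygon comes from the removed snippet, and the second is an overlapping region assumed for illustration:

```sql
SELECT ST_INTERSECTS({
    "type": "Polygon",
    "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
}, {
    "type": "Polygon",
    "coordinates": [[[31.9, -4.9], [32.1, -4.9], [32.1, -4.6], [31.9, -4.6], [31.9, -4.9]]]
}) AS intersects
```

Because the two regions overlap, `intersects` should evaluate to `true`.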
+
-This system function will benefit from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
+
+## Remarks
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+- This function benefits from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
+- The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`ST_WITHIN`](st-within.md)
cosmos-db St Isvalid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-isvalid.md
Title: ST_ISVALID in Azure Cosmos DB query language
-description: Learn about SQL system function ST_ISVALID in Azure Cosmos DB.
-
+ Title: ST_ISVALID
+
+description: An Azure Cosmos DB for NoSQL system function that returns if a GeoJSON object is valid.
+++ - Previously updated : 09/21/2021--+ Last updated : 07/24/2023+
-# ST_ISVALID (Azure Cosmos DB)
+
+# ST_ISVALID (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a Boolean value indicating whether the specified GeoJSON Point, Polygon, MultiPolygon, or LineString expression is valid.
-
+Returns a boolean value indicating whether the specified GeoJSON **Point**, **Polygon**, **MultiPolygon**, or **LineString** expression is valid.
+ ## Syntax
-
+ ```sql ST_ISVALID(<spatial_expr>)
-```
-
+```
+ ## Arguments
-
-*spatial_expr*
- Is a GeoJSON Point, Polygon, or LineString expression.
-
+
+| | Description |
+| | |
+| **`spatial_expr`** | Any valid GeoJSON **Point**, **Polygon**, **MultiPolygon**, or **LineString** expression. |
+ ## Return types
-
- Returns a Boolean expression.
-
+
+Returns a boolean value.
+ ## Examples
-
- The following example shows how to check if a point is valid using ST_VALID.
-
- For example, this point has a latitude value that's not in the valid range of values [-90, 90], so the query returns false.
-
- For polygons, the GeoJSON specification requires that the last coordinate pair provided should be the same as the first, to create a closed shape. Points within a polygon must be specified in counter-clockwise order. A polygon specified in clockwise order represents the inverse of the region within it.
-
-```sql
-SELECT ST_ISVALID({ "type": "Point", "coordinates": [31.9, -132.8] }) AS b
-```
-
- Here is the result set.
-
-```json
-[{ "b": false }]
-```
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+The following example shows how to check the validity of multiple objects.
+++
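A minimal sketch checking two points; the out-of-range latitude comes from the removed snippet, and the aliases are illustrative:

```sql
SELECT
    ST_ISVALID({ "type": "Point", "coordinates": [31.9, -4.8] }) AS validPoint,
    ST_ISVALID({ "type": "Point", "coordinates": [31.9, -132.8] }) AS invalidPoint
```

The second point has a latitude outside the valid range of `[-90, 90]`, so `invalidPoint` evaluates to `false` while `validPoint` evaluates to `true`.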
+## Remarks
+
+- The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`ST_ISVALIDDETAILED`](st-isvaliddetailed.md)
cosmos-db St Isvaliddetailed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-isvaliddetailed.md
Title: ST_ISVALIDDETAILED in Azure Cosmos DB query language
-description: Learn about SQL system function ST_ISVALIDDETAILED in Azure Cosmos DB.
-
+ Title: ST_ISVALIDDETAILED
+
+description: An Azure Cosmos DB for NoSQL system function that returns if a GeoJSON object is valid along with the reason.
+++ - Previously updated : 09/21/2021--+ Last updated : 07/24/2023+
-# ST_ISVALIDDETAILED (Azure Cosmos DB)
+
+# ST_ISVALIDDETAILED (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a JSON value containing a Boolean value if the specified GeoJSON Point, Polygon, or LineString expression is valid, and if invalid, additionally the reason as a string value.
-
+Returns a JSON value indicating whether the specified GeoJSON **Point**, **Polygon**, or **LineString** expression is valid and, if it's invalid, the reason.
+ ## Syntax
-
+ ```sql ST_ISVALIDDETAILED(<spatial_expr>)
-```
-
+```
+ ## Arguments
-
-*spatial_expr*
- Is a GeoJSON point or polygon expression.
-
+
+| | Description |
+| | |
+| **`spatial_expr`** | Any valid GeoJSON **Point**, **Polygon**, or **LineString** expression. |
+ ## Return types
-
- Returns a JSON value containing a Boolean value if the specified GeoJSON point or polygon expression is valid, and if invalid, additionally the reason as a string value.
-
+
+Returns a JSON object containing a boolean value indicating if the specified GeoJSON point or polygon expression is valid. If invalid, the object additionally contains the reason as a string value.
+ ## Examples
-
- The following example how to check validity (with details) using `ST_ISVALIDDETAILED`.
-
-```sql
-SELECT ST_ISVALIDDETAILED({
- "type": "Polygon",
- "coordinates": [[ [ 31.8, -5 ], [ 31.8, -4.7 ], [ 32, -4.7 ], [ 32, -5 ] ]]
-}) AS b
-```
-
- Here is the result set.
-
-```json
-[{
- "b": {
- "valid": false,
- "reason": "The Polygon input is not valid because the start and end points of the ring number 1 are not the same. Each ring of a polygon must have the same start and end points."
- }
-}]
-```
-
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+The following example shows how to check the validity of multiple objects.
+++
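A minimal sketch using the unclosed polygon ring from the removed snippet:

```sql
SELECT ST_ISVALIDDETAILED({
    "type": "Polygon",
    "coordinates": [[ [31.8, -5], [31.8, -4.7], [32, -4.7], [32, -5] ]]
}) AS validity
```

Per the removed result set, the returned object reports `valid: false` with a reason explaining that each ring of a polygon must have the same start and end points.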
+## Remarks
+
+- The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`ST_ISVALID`](st-isvalid.md)
cosmos-db St Within https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-within.md
Title: ST_WITHIN in Azure Cosmos DB query language
-description: Learn about SQL system function ST_WITHIN in Azure Cosmos DB.
-
+ Title: ST_WITHIN
+
+description: An Azure Cosmos DB for NoSQL system function that returns if one GeoJSON object is within another.
+++ - Previously updated : 09/21/2021--+ Last updated : 07/24/2023+
-# ST_WITHIN (Azure Cosmos DB)
+
+# ST_WITHIN (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a Boolean expression indicating whether the GeoJSON object (Point, Polygon, MultiPolygon, or LineString) specified in the first argument is within the GeoJSON (Point, Polygon, MultiPolygon, or LineString) in the second argument.
-
+Returns a boolean expression indicating whether the GeoJSON object (**Point**, **Polygon**, **MultiPolygon**, or **LineString**) specified in the first argument is within the GeoJSON object in the second argument.
+ ## Syntax
-
+ ```sql
-ST_WITHIN (<spatial_expr>, <spatial_expr>)
-```
-
+ST_WITHIN(<spatial_expr_1>, <spatial_expr_2>)
+```
+ ## Arguments
-
-*spatial_expr*
- Is a GeoJSON Point, Polygon, or LineString object expression.
-
+
+| | Description |
+| | |
+| **`spatial_expr_1`** | Any valid GeoJSON **Point**, **Polygon**, **MultiPolygon** or **LineString** expression. |
+| **`spatial_expr_2`** | Any valid GeoJSON **Point**, **Polygon**, **MultiPolygon** or **LineString** expression. |
+ ## Return types
-
- Returns a Boolean value.
-
+
+Returns a boolean value.
+ ## Examples
-
- The following example shows how to find all family documents within a polygon using `ST_WITHIN`.
-
-```sql
-SELECT f.id
-FROM Families f
-WHERE ST_WITHIN(f.location, {
- 'type':'Polygon',
- 'coordinates': [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
-})
-```
-
- Here is the result set.
-
-```json
-[{ "id": "WakefieldFamily" }]
-```
-## Remarks
+The following example shows how to find if a **Point** is within a **Polygon**.
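A minimal sketch with point and polygon literals; the polygon comes from the removed snippet, and the point is assumed for illustration:

```sql
SELECT ST_WITHIN({
    "type": "Point",
    "coordinates": [31.9, -4.8]
}, {
    "type": "Polygon",
    "coordinates": [[[31.8, -5], [32, -5], [32, -4.7], [31.8, -4.7], [31.8, -5]]]
}) AS isWithin
```

Because the point lies inside the polygon, `isWithin` should evaluate to `true`.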
+
-This system function will benefit from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
-> [!NOTE]
-> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+## Remarks
+- This function benefits from a [geospatial index](../../index-policy.md#spatial-indexes) except in queries with aggregates.
+- The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`ST_INTERSECT`](st-intersects.md)
cosmos-db Startswith https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/startswith.md
Previously updated : 07/01/2023 Last updated : 07/24/2023
Returns a boolean value indicating whether the first string expression starts wi
## Syntax ```sql
-ENDSWITH(<str_expr_1>, <str_expr_2> [, <bool_expr>])
+STARTSWITH(<string_expr_1>, <string_expr_2> [, <bool_expr>])
``` ## Arguments | | Description | | | |
-| **`str_expr_1`** | A string expression. |
-| **`str_expr_2`** | A string expression to be compared to the beginning of `str_expr_1`. |
-| **`bool_expr`** *(Optional)* | Optional value for ignoring case. When set to `true`, `ENDSWITH` does a case-insensitive search. When unspecified, this default value is `false`. |
+| **`string_expr_1`** | A string expression. |
+| **`string_expr_2`** | A string expression to be compared to the beginning of `string_expr_1`. |
+| **`bool_expr`** *(Optional)* | Optional value for ignoring case. When set to `true`, `STARTSWITH` does a case-insensitive search. When unspecified, this default value is `false`. |
## Return types
Returns a boolean expression.
The following example checks if the string `abc` starts with `b` or `ab`.
-```sql
-SELECT VALUE {
- endsWithWrongPrefix: STARTSWITH("abc", "b"),
- endsWithCorrectPrefix: STARTSWITH("abc", "ab"),
- endsWithPrefixWrongCase: STARTSWITH("abc", "Ab"),
- endsWithPrefixCaseInsensitive: STARTSWITH("abc", "Ab", true)
-}
-```
-
-```json
-[
- {
- "endsWithWrongPrefix": false,
- "endsWithCorrectPrefix": true,
- "endsWithPrefixWrongCase": false,
- "endsWithPrefixCaseInsensitive": true
- }
-]
-```
+ ## Remarks
cosmos-db Stringequals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringequals.md
Title: StringEquals in Azure Cosmos DB query language
-description: Learn about how the StringEquals SQL system function in Azure Cosmos DB returns a Boolean indicating whether the first string expression matches the second
-
+ Title: StringEquals
+
+description: An Azure Cosmos DB for NoSQL system function that returns a boolean indicating whether two strings are equivalent.
+++ - Previously updated : 05/20/2020---+ Last updated : 07/24/2023+
-# STRINGEQUALS (Azure Cosmos DB)
+
+# StringEquals (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a Boolean indicating whether the first string expression matches the second.
-
+Returns a boolean indicating whether the first string expression matches the second.
+ ## Syntax
-
+ ```sql
-STRINGEQUALS(<str_expr1>, <str_expr2> [, <bool_expr>])
-```
-
+STRINGEQUALS(<string_expr_1>, <string_expr_2> [, <boolean_expr>])
+```
+ ## Arguments
-
-*str_expr1*
- Is the first string expression to compare.
-
-*str_expr2*
- Is the second string expression to compare.
-
-*bool_expr*
- Optional value for ignoring case. When set to true, StringEquals does a case-insensitive search. When unspecified, this value is false.
-
+
+| | Description |
+| | |
+| **`string_expr_1`** | The first string expression to compare. |
+| **`string_expr_2`** | The second string expression to compare. |
+| **`boolean_expr` *(Optional)*** | An optional boolean expression for ignoring case. When set to `true`, this function performs a case-insensitive search. If not specified, the default value is `false`. |
+ ## Return types
-
- Returns a Boolean expression.
-
+
+Returns a boolean expression.
+ ## Examples
-
- The following example checks if "abc" matches "abc" and if "abc" matches "ABC."
-
-```sql
-SELECT STRINGEQUALS("abc", "abc", false) AS c1, STRINGEQUALS("abc", "ABC", false) AS c2, STRINGEQUALS("abc", "ABC", true) AS c3
-```
-
- Here's the result set.
-
-```json
-[
- {
- "c1": true,
- "c2": false,
- "c3": true
- }
-]
-```
+
+The following example checks if "abc" matches "abc" and if "abc" matches "ABC."
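A minimal sketch of that comparison, reproduced from the removed snippet above:

```sql
SELECT
    STRINGEQUALS("abc", "abc", false) AS c1,
    STRINGEQUALS("abc", "ABC", false) AS c2,
    STRINGEQUALS("abc", "ABC", true) AS c3
```

Per the removed result set, `c1` is `true`, `c2` is `false`, and `c3` is `true`.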
++ ## Remarks
SELECT STRINGEQUALS("abc", "abc", false) AS c1, STRINGEQUALS("abc", "ABC", false
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`SUBSTRING`](substring.md)
cosmos-db Stringtoarray https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoarray.md
Title: StringToArray in Azure Cosmos DB query language
-description: Learn about SQL system function StringToArray in Azure Cosmos DB.
-
+ Title: StringToArray
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string expression converted to an array.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/24/2023+
-# StringToArray (Azure Cosmos DB)
+
+# StringToArray (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns expression translated to an Array. If expression can't be translated, returns undefined.
-
+Converts a string expression to an array.
+ ## Syntax
-
+ ```sql
-StringToArray(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a JSON Array expression.
-
-## Return types
-
- Returns an array expression or undefined.
-
-## Remarks
- Nested string values must be written with double quotes to be valid JSON. For details on the JSON format, see [json.org](https://json.org/). This system function won't utilize the index.
-
-## Examples
-
- The following example shows how `StringToArray` behaves across different types.
-
- The following are examples with valid input.
-
-```sql
-SELECT
- StringToArray('[]') AS a1,
- StringToArray("[1,2,3]") AS a2,
- StringToArray("[\"str\",2,3]") AS a3,
- StringToArray('[["5","6","7"],["8"],["9"]]') AS a4,
- StringToArray('[1,2,3, "[4,5,6]",[7,8]]') AS a5
+StringToArray(<string_expr>)
```
-Here's the result set.
+## Arguments
-```json
-[{"a1": [], "a2": [1,2,3], "a3": ["str",2,3], "a4": [["5","6","7"],["8"],["9"]], "a5": [1,2,3,"[4,5,6]",[7,8]]}]
-```
+| | Description |
+| | |
+| **`string_expr`** | A string expression. |
-The following examples illustrate invalid input:
-
-- Single quotes within the array aren't valid JSON.-- Even though they're valid within a query, they won't parse to valid arrays. -- Strings within the array string must either be escaped "[\\"\\"]" or the surrounding quote must be single '[""]'.
+## Return types
-```sql
-SELECT
- StringToArray("['5','6','7']")
-```
+Returns an array.
-Here's the result set.
+## Examples
+
+The following example illustrates how this function works with various inputs.
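A minimal sketch drawn from the removed snippets above; the aliases are illustrative:

```sql
SELECT
    StringToArray("[]") AS emptyArray,
    StringToArray("[1,2,3]") AS numberArray,
    StringToArray('[["5","6","7"],["8"],["9"]]') AS nestedArray,
    StringToArray("['5','6','7']") AS singleQuotedArray
```

Per the removed result sets, the first three aliases return the parsed arrays, while `singleQuotedArray` is `undefined` because single quotes aren't valid JSON inside the array string.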
-```json
-[{}]
-```
-The following are examples of invalid input.
-
- The expression passed will be parsed as a JSON array; the following don't evaluate to type array and thus return undefined.
-
-```sql
-SELECT
- StringToArray("["),
- StringToArray("1"),
- StringToArray(NaN),
- StringToArray(false),
- StringToArray(undefined)
-```
-Here's the result set.
+## Remarks
-```json
-[{}]
-```
+- This function doesn't use the index.
+- If the expression can't be converted, the function returns `undefined`.
+- Nested string values must be written with double quotes to be valid.
+- Single quotes within the array aren't valid JSON. Even though single quotes are valid within a query, they don't parse to valid arrays. Strings within the array string must either be escaped `\"` or the surrounding quote must be a single quote.
+
+> [!NOTE]
+> For more information on the JSON format, see [https://json.org](https://json.org/).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`StringToObject`](stringtoobject.md)
cosmos-db Stringtoboolean https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoboolean.md
Title: StringToBoolean in Azure Cosmos DB query language
-description: Learn about SQL system function StringToBoolean in Azure Cosmos DB.
-
+ Title: StringToBoolean
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string expression converted to a boolean.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/24/2023+
-# StringToBoolean (Azure Cosmos DB)
+
+# StringToBoolean (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns expression translated to a Boolean. If expression can't be translated, returns undefined.
+Converts a string expression to a boolean.
## Syntax ```sql
-StringToBoolean(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a Boolean expression.
-
-## Return types
-
- Returns a Boolean expression or undefined.
-
-## Examples
-
- The following example shows how `StringToBoolean` behaves across different types.
-
- The following are examples with valid input.
-
-Whitespace is allowed only before or after `true`/`false`.
-
-```sql
-SELECT
- StringToBoolean("true") AS b1,
- StringToBoolean(" false") AS b2,
- StringToBoolean("false ") AS b3
-```
-
- Here's the result set.
-
-```json
-[{"b1": true, "b2": false, "b3": false}]
+StringToBoolean(<string_expr>)
```
-The following are examples with invalid input.
+## Arguments
- Booleans are case sensitive and must be written with all lowercase characters such as `true` and `false`.
+| | Description |
+| | |
+| **`string_expr`** | A string expression. |
-```sql
-SELECT
- StringToBoolean("TRUE"),
- StringToBoolean("False")
-```
+## Return types
-Here's the result set.
+Returns a boolean value.
-```json
-[{}]
-```
-
-The expression passed will be parsed as a Boolean expression; these inputs don't evaluate to type Boolean and thus return undefined.
+## Examples
+
+The following example illustrates how this function works with various data types.
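A minimal sketch drawn from the removed snippets above; the aliases are illustrative:

```sql
SELECT
    StringToBoolean("true") AS b1,
    StringToBoolean(" false") AS b2,
    StringToBoolean("TRUE") AS b3,
    StringToBoolean("False") AS b4
```

Per the removed result sets, `b1` is `true` and `b2` is `false`, while `b3` and `b4` are `undefined` because booleans must be written in all lowercase.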
-```sql
-SELECT
- StringToBoolean("null"),
- StringToBoolean(undefined),
- StringToBoolean(NaN),
- StringToBoolean(false),
- StringToBoolean(true)
-```
-Here's the result set.
-
-```json
-[{}]
-```
## Remarks
-This system function won't utilize the index.
+- This function doesn't use the index.
+- If the expression can't be converted, the function returns `undefined`.
+
+> [!NOTE]
+> For more information on the JSON format, see [https://json.org](https://json.org/).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`StringToNumber`](stringtonumber.md)
cosmos-db Stringtonull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtonull.md
Title: StringToNull in Azure Cosmos DB query language
-description: Learn about SQL system function StringToNull in Azure Cosmos DB.
-
+ Title: StringToNull
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string expression converted to null.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/24/2023+
-# StringToNull (Azure Cosmos DB)
+
+# StringToNull (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns expression translated to null. If expression can't be translated, returns undefined.
+Converts a string expression to `null`.
## Syntax
-
-```sql
-StringToNull(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a null expression.
-
-## Return types
-
- Returns a null expression or undefined.
-
-## Examples
-
- The following example shows how `StringToNull` behaves across different types.
-The following are examples with valid input.
+```sql
+StringToNull(<string_expr>)
+```
- Whitespace is allowed only before or after "null".
+## Arguments
-```sql
-SELECT
- StringToNull("null") AS n1,
- StringToNull(" null ") AS n2,
- IS_NULL(StringToNull("null ")) AS n3
-```
-
- Here's the result set.
-
-```json
-[{"n1": null, "n2": null, "n3": true}]
-```
+| | Description |
+| | |
+| **`string_expr`** | A string expression. |
-The following are examples with invalid input.
+## Return types
-Null is case sensitive and must be written with all lowercase characters such as `null`.
+Returns a `null` value.
-```sql
-SELECT
- StringToNull("NULL"),
- StringToNull("Null")
-```
-
- Here's the result set.
+## Examples
-```json
-[{}]
-```
+The following example illustrates how this function works with various data types.
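A minimal sketch drawn from the removed snippets above; the aliases are illustrative:

```sql
SELECT
    StringToNull("null") AS n1,
    IS_NULL(StringToNull(" null ")) AS n2,
    StringToNull("NULL") AS n3
```

Per the removed result sets, `n1` is `null` and `n2` is `true`, while `n3` is `undefined` because `null` must be written in all lowercase.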
-The expression passed will be parsed as a null expression; these inputs don't evaluate to type null and thus return undefined.
-```sql
-SELECT
- StringToNull("true"),
- StringToNull(false),
- StringToNull(undefined),
- StringToNull(NaN)
-```
-
- Here's the result set.
-
-```json
-[{}]
-```
## Remarks
-This system function won't utilize the index.
+- This function doesn't use the index.
+- If the expression can't be converted, the function returns `undefined`.
+
+> [!NOTE]
+> For more information on the JSON format, see [https://json.org](https://json.org/).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`StringToBoolean`](stringtoboolean.md)
cosmos-db Stringtonumber https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtonumber.md
Title: StringToNumber in Azure Cosmos DB query language
-description: Learn about SQL system function StringToNumber in Azure Cosmos DB.
-
+ Title: StringToNumber
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string expression converted to a number.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/24/2023+
-# StringToNumber (Azure Cosmos DB)
+
+# StringToNumber (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns expression translated to a Number. If expression cannot be translated, returns undefined.
-
+Converts a string expression to a number.
+ ## Syntax
-
+ ```sql
-StringToNumber(<str_expr>)
+StringToNumber(<string_expr>)
```
-
+ ## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a JSON Number expression. Numbers in JSON must be an integer or a floating point. For details on the JSON format, see [json.org](https://json.org/)
-
-## Return types
-
- Returns a Number expression or undefined.
-
-## Examples
-
- The following example shows how `StringToNumber` behaves across different types.
-Whitespace is allowed only before or after the Number.
+| | Description |
+| | |
+| **`string_expr`** | A string expression. |
-```sql
-SELECT
- StringToNumber("1.000000") AS num1,
- StringToNumber("3.14") AS num2,
- StringToNumber(" 60 ") AS num3,
- StringToNumber("-1.79769e+308") AS num4
-```
-
- Here is the result set.
-
-```json
-{{"num1": 1, "num2": 3.14, "num3": 60, "num4": -1.79769e+308}}
-```
+## Return types
-In JSON a valid Number must be either be an integer or a floating point number.
+Returns a numeric value.
-```sql
-SELECT
- StringToNumber("0xF")
-```
-
- Here is the result set.
-
-```json
-{{}}
-```
+## Examples
-The expression passed will be parsed as a Number expression; these inputs do not evaluate to type Number and thus return undefined.
+The following example illustrates how this function works with various data types.
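A minimal sketch drawn from the removed snippets above; the aliases are illustrative:

```sql
SELECT
    StringToNumber("1.000000") AS num1,
    StringToNumber("3.14") AS num2,
    StringToNumber(" 60 ") AS num3,
    StringToNumber("0xF") AS num4
```

Per the removed result sets, `num1` is `1`, `num2` is `3.14`, and `num3` is `60`, while `num4` is `undefined` because hexadecimal literals aren't valid JSON numbers.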
-```sql
-SELECT
- StringToNumber("99 54"),
- StringToNumber(undefined),
- StringToNumber("false"),
- StringToNumber(false),
- StringToNumber(" "),
- StringToNumber(NaN)
-```
-
- Here is the result set.
-
-```json
-{{}}
-```
+ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
+- String expressions are parsed as a JSON number expression.
+- Numbers in JSON must be an integer or a floating point.
+- If the expression can't be converted, the function returns `undefined`.
+
+> [!NOTE]
+> For more information on the JSON format, see [https://json.org](https://json.org/).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`StringToBoolean`](stringtoboolean.md)
cosmos-db Stringtoobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/stringtoobject.md
Title: StringToObject in Azure Cosmos DB query language
-description: Learn about SQL system function StringToObject in Azure Cosmos DB.
-
+ Title: StringToObject
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string expression converted to an object.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/24/2023+
-# StringToObject (Azure Cosmos DB)
- Returns expression translated to an Object. If expression can't be translated, returns undefined.
-
-## Syntax
-
-```sql
-StringToObject(<str_expr>)
-```
-
-## Arguments
-
-*str_expr*
- Is a string expression to be parsed as a JSON object expression. Nested string values must be written with double quotes to be valid. For details on the JSON format, see [json.org](https://json.org/)
-
-## Return types
-
- Returns an object expression or undefined.
-
-## Examples
-
- The following example shows how `StringToObject` behaves across different types.
-
- The following are examples with valid input.
+# StringToObject (NoSQL query)
-```sql
-SELECT
- StringToObject("{}") AS obj1,
- StringToObject('{"A":[1,2,3]}') AS obj2,
- StringToObject('{"B":[{"b1":[5,6,7]},{"b2":8},{"b3":9}]}') AS obj3,
- StringToObject("{\"C\":[{\"c1\":[5,6,7]},{\"c2\":8},{\"c3\":9}]}") AS obj4
-```
-
-Here's the result set.
-
-```json
-[{"obj1": {},
- "obj2": {"A": [1,2,3]},
- "obj3": {"B":[{"b1":[5,6,7]},{"b2":8},{"b3":9}]},
- "obj4": {"C":[{"c1":[5,6,7]},{"c2":8},{"c3":9}]}}]
-```
- The following are examples with invalid input.
-Even though they're valid within a query, they won't parse to valid objects.
- Strings within the string of object must either be escaped "{\\"a\\":\\"str\\"}" or the surrounding quote must be single
- '{"a": "str"}'.
+Converts a string expression to an object.
-Single quotes surrounding property names aren't valid JSON.
+## Syntax
```sql
-SELECT
- StringToObject("{'a':[1,2,3]}")
+StringToObject(<string_expr>)
```
-Here's the result set.
-
-```json
-[{}]
-```
+## Arguments
-Property names without surrounding quotes aren't valid JSON.
+| | Description |
+| | |
+| **`string_expr`** | A string expression. |
-```sql
-SELECT
- StringToObject("{a:[1,2,3]}")
-```
+## Return types
-Here's the result set.
+Returns an object.
-```json
-[{}]
-```
+## Examples
+
+The following example illustrates how this function works with various inputs.
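A minimal sketch drawn from the removed snippets above; the aliases are illustrative:

```sql
SELECT
    StringToObject("{}") AS obj1,
    StringToObject('{"A":[1,2,3]}') AS obj2,
    StringToObject("{'a':[1,2,3]}") AS obj3
```

Per the removed result sets, `obj1` is `{}` and `obj2` is `{"A": [1,2,3]}`, while `obj3` is `undefined` because single-quoted property names aren't valid JSON.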
-The following are examples with invalid input.
- The expression passed will be parsed as a JSON object; these inputs don't evaluate to type object and thus return undefined.
-
-```sql
-SELECT
- StringToObject("}"),
- StringToObject("{"),
- StringToObject("1"),
- StringToObject(NaN),
- StringToObject(false),
- StringToObject(undefined)
-```
-
- Here's the result set.
-
-```json
-[{}]
-```
## Remarks
-This system function won't utilize the index.
+- This function doesn't use the index.
+- If the expression can't be converted, the function returns `undefined`.
+- Nested string values must be written with double quotes to be valid.
+
+> [!NOTE]
+> For more information on the JSON format, see [https://json.org](https://json.org/).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`StringToArray`](stringtoarray.md)
cosmos-db Substring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/substring.md
Title: SUBSTRING in Azure Cosmos DB query language
-description: Learn about SQL system function SUBSTRING in Azure Cosmos DB.
-
+ Title: SUBSTRING
+
+description: An Azure Cosmos DB for NoSQL system function that returns a portion of a string using a starting position and length.
+++ - Previously updated : 09/13/2019--+ Last updated : 07/24/2023+
-# SUBSTRING (Azure Cosmos DB)
+
+# SUBSTRING (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns part of a string expression starting at the specified character zero-based position and continues to the specified length, or to the end of the string.
-
+Returns part of a string expression starting at the specified position and of the specified length, or to the end of the string.
+ ## Syntax
-
+ ```sql
-SUBSTRING(<str_expr>, <num_expr1>, <num_expr2>)
-```
-
+SUBSTRING(<string_expr>, <numeric_expr_1>, <numeric_expr_2>)
+```
+ ## Arguments
-
-*str_expr*
- Is a string expression.
-
-*num_expr1*
- Is a numeric expression to denote the start character. A value of 0 is the first character of *str_expr*.
-
-*num_expr2*
- Is a numeric expression to denote the maximum number of characters of *str_expr* to be returned. A value of 0 or less results in empty string.
+
+| | Description |
+| | |
+| **`string_expr`** | A string expression. |
+| **`numeric_expr_1`** | A numeric expression to denote the start character. |
+| **`numeric_expr_2`** | A numeric expression to denote the maximum number of characters of `string_expr` to be returned. |
## Return types
-
- Returns a string expression.
-
+
+Returns a string expression.
+ ## Examples
-
- The following example returns the substring of "abc" starting at 1 and for a length of 1 character.
-
-```sql
-SELECT SUBSTRING("abc", 1, 1) AS substring
-```
-
- Here is the result set.
-
-```json
-[{"substring": "b"}]
-```
+
+The following example returns substrings with various lengths and starting positions.
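A minimal sketch of such a query; the `AdventureWorks` string and the aliases are illustrative, and the third example comes from the removed snippet:

```sql
SELECT
    SUBSTRING("AdventureWorks", 0, 9) AS firstWord,
    SUBSTRING("AdventureWorks", 9, 5) AS secondWord,
    SUBSTRING("abc", 1, 1) AS middleCharacter
```

This returns `Adventure`, `Works`, and `b` respectively, since start positions are zero-based.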
++ ## Remarks
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy) if the starting position is `0`.
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy) if the starting position is `0`.
+- `numeric_expr_1` positions are zero-based, therefore a value of `0` starts from the first character of `string_expr`.
+- A value of `0` or less for `numeric_expr_2` results in an empty string.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`StringEquals`](stringequals.md)
cosmos-db Tan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tan.md
Previously updated : 07/01/2023 Last updated : 07/24/2023
Returns the trigonometric tangent of the specified angle in radians.
```sql TAN(<numeric_expr>)
-```
+```
## Arguments
Returns a numeric expression.
The following example calculates the tangent of the specified angle using the function.
-```sql
-SELECT VALUE {
- tangentSquareRootPi: TAN(PI()/2),
- tangentArbitraryNumber: TAN(124.1332)
-}
-```
-
-```json
-[
- {
- "tangentSquareRootPi": 16331239353195370,
- "tangentArbitraryNumber": -24.80651023035602
- }
-]
-```
+ ## Remarks -- This system function doesn't use the index.
+- This function doesn't use the index.
## Next steps
cosmos-db Tickstodatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tickstodatetime.md
Title: TicksToDateTime in Azure Cosmos DB query language
-description: Learn about SQL system function TicksToDateTime in Azure Cosmos DB.
-
+ Title: TicksToDateTime
+
+description: An Azure Cosmos DB for NoSQL system function that returns the number of ticks as a date and time value.
+++ - Previously updated : 08/18/2020---+ Last updated : 07/24/2023+
-# TicksToDateTime (Azure Cosmos DB)
+
+# TicksToDateTime (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Converts the specified ticks value to a DateTime.
-
+Converts the specified number of ticks to a date and time value.
+ ## Syntax
-
+ ```sql
-TicksToDateTime (<Ticks>)
+TicksToDateTime(<numeric_expr>)
``` ## Arguments
-*Ticks*
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
-A signed numeric value, the current number of 100 nanosecond ticks that have elapsed since the Unix epoch. In other words, it is the number of 100 nanosecond ticks that have elapsed since 00:00:00 Thursday, 1 January 1970.
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601).
## Return types
-Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
- For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+Returns a UTC date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`.
-## Remarks
+## Examples
-TicksToDateTime will return `undefined` if the ticks value specified is invalid.
+The following example converts the ticks to a date and time value.
-## Examples
-
-The following example converts the ticks to a DateTime:
-```sql
-SELECT TicksToDateTime(15943368134575530) AS DateTime
-```
+
+## Remarks
-```json
-[
- {
- "DateTime": "2020-07-09T23:20:13.4575530Z"
- }
-]
-```
+- This function returns `undefined` if the ticks value specified is invalid.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`TimestampToDateTime`](timestamptodatetime.md)
cosmos-db Timestamptodatetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/timestamptodatetime.md
Title: TimestampToDateTime in Azure Cosmos DB query language
-description: Learn about SQL system function TimestampToDateTime in Azure Cosmos DB.
-
+ Title: TimestampToDateTime
+
+description: An Azure Cosmos DB for NoSQL system function that returns the timestamp as a date and time value.
+++ - Previously updated : 10/27/2022---+ Last updated : 07/24/2023+
-# TimestampToDateTime (Azure Cosmos DB)
+# TimestampToDateTime (NoSQL query)
[!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Converts the specified timestamp value to a DateTime.
-
+Converts the specified timestamp to a date and time value.
+ ## Syntax
-
+ ```sql
-TimestampToDateTime (<Timestamp>)
+TimestampToDateTime(<numeric_expr>)
``` ## Arguments
-### Timestamp
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
-A signed numeric value, the current number of milliseconds that have elapsed since the Unix epoch. In other words, the number of milliseconds that have elapsed since 00:00:00 Thursday, 1 January 1970.
+> [!NOTE]
+> For more information on the ISO 8601 format, see [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601).
## Return types
-Returns the UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
-
-|Format|Description|
-|-|-|
-|YYYY|four-digit year|
-|MM|two-digit month (01 = January, etc.)|
-|DD|two-digit day of month (01 through 31)|
-|T|signifier for beginning of time elements|
-|hh|two-digit hour (00 through 23)|
-|mm|two-digit minutes (00 through 59)|
-|ss|two-digit seconds (00 through 59)|
-|.fffffff|seven-digit fractional seconds|
-|Z|UTC (Coordinated Universal Time) designator|
-
-For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
-
-## Remarks
-
-TimestampToDateTime will return `undefined` if the timestamp value specified is invalid.
+Returns a UTC date and time string in the ISO 8601 format `YYYY-MM-DDThh:mm:ss.fffffffZ`.
## Examples
-
-The following example converts the value `1,594,227,912,345` from milliseconds to a date and time of **July 8, 2020, 5:05 PM UTC**.
-```sql
-SELECT TimestampToDateTime(1594227912345) AS DateTime
-```
+The following example converts the timestamp to a date and time value.
-```json
-[
- {
- "DateTime": "2020-07-08T17:05:12.3450000Z"
- }
-]
-```
-This next example uses the timestamp from an existing item in a container. The item's timestamp is expressed in seconds.
-```json
-{
- "id": "8cc56bd4-5b8d-450b-a576-449836171398",
- "type": "reading",
- "data": "temperature",
- "value": 35.726545156,
- "_ts": 1605862991
-}
-```
-
-To use the `_ts` value, you must multiply the value by 1,000 since the timestamp is expressed in seconds.
-
-```sql
-SELECT
- TimestampToDateTime(r._ts * 1000) AS timestamp,
- r.id
-FROM
- readings r
-```
+## Remarks
-```json
-[
- {
- "timestamp": "2020-11-20T09:03:11.0000000Z",
- "id": "8cc56bd4-5b8d-450b-a576-449836171398"
- }
-]
-```
+- This function returns `undefined` if the timestamp value specified is invalid.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`TicksToDateTime`](tickstodatetime.md)
cosmos-db Tostring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/tostring.md
Title: ToString in Azure Cosmos DB query language
-description: Learn about SQL system function ToString in Azure Cosmos DB.
-
+ Title: ToString
+
+description: An Azure Cosmos DB for NoSQL system function that returns a value converted to a string.
+++ - Previously updated : 03/04/2020--+ Last updated : 07/24/2023+
-# ToString (Azure Cosmos DB)
+
+# ToString (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a string representation of scalar expression.
-
+Returns a string representation of a value.
+ ## Syntax
-
+ ```sql ToString(<expr>)
-```
-
+```
+ ## Arguments
-
-*expr*
- Is any scalar expression.
-
+
+| | Description |
+| | |
+| **`expr`** | Any expression. |
+ ## Return types
-
- Returns a string expression.
-
+
+Returns a string expression.
+ ## Examples
-
- The following example shows how `ToString` behaves across different types.
-
-```sql
-SELECT
- ToString(1.0000) AS str1,
- ToString("Hello World") AS str2,
- ToString(NaN) AS str3,
- ToString(Infinity) AS str4,
- ToString(IS_STRING(ToString(undefined))) AS str5,
- ToString(0.1234) AS str6,
- ToString(false) AS str7,
- ToString(undefined) AS str8
-```
-
- Here is the result set.
-
-```json
-[{"str1": "1", "str2": "Hello World", "str3": "NaN", "str4": "Infinity", "str5": "false", "str6": "0.1234", "str7": "false"}]
-```
- Given the following input:
-```json
-{"Products":[{"ProductID":1,"Weight":4,"WeightUnits":"lb"},{"ProductID":2,"Weight":32,"WeightUnits":"kg"},{"ProductID":3,"Weight":400,"WeightUnits":"g"},{"ProductID":4,"Weight":8999,"WeightUnits":"mg"}]}
-```
- The following example shows how `ToString` can be used with other string functions like `CONCAT`.
-
-```sql
-SELECT
-CONCAT(ToString(p.Weight), p.WeightUnits)
-FROM p in c.Products
-```
-
-Here is the result set.
-
-```json
-[{"$1":"4lb" },
-{"$1":"32kg"},
-{"$1":"400g" },
-{"$1":"8999mg" }]
-
-```
-Given the following input.
-```json
-{"id":"08259","description":"Cereals ready-to-eat, KELLOGG, KELLOGG'S CRISPIX","nutrients":[{"id":"305","description":"Caffeine","units":"mg"},{"id":"306","description":"Cholesterol, HDL","nutritionValue":30,"units":"mg"},{"id":"307","description":"Sodium, NA","nutritionValue":612,"units":"mg"},{"id":"308","description":"Protein, ABP","nutritionValue":60,"units":"mg"},{"id":"309","description":"Zinc, ZN","nutritionValue":null,"units":"mg"}]}
-```
-The following example shows how `ToString` can be used with other string functions like `REPLACE`.
-```sql
-SELECT
- n.id AS nutrientID,
- REPLACE(ToString(n.nutritionValue), "6", "9") AS nutritionVal
-FROM food
-JOIN n IN food.nutrients
-```
-Here is the result set:
-```json
-[{"nutrientID":"305"},
-{"nutrientID":"306","nutritionVal":"30"},
-{"nutrientID":"307","nutritionVal":"912"},
-{"nutrientID":"308","nutritionVal":"90"},
-{"nutrientID":"309","nutritionVal":"null"}]
-```
+This example converts multiple scalar and object values to a string.
++ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`IS_OBJECT`](is-object.md)
cosmos-db Trim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/trim.md
Title: TRIM in Azure Cosmos DB query language
-description: Learn about SQL system function TRIM in Azure Cosmos DB.
-
+ Title: TRIM
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string with leading or trailing whitespace trimmed.
+++ - Previously updated : 09/14/2021--+ Last updated : 07/24/2023+
-# TRIM (Azure Cosmos DB)
+
+# TRIM (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Returns a string expression after it removes leading and trailing whitespace or specified characters.
-
+Returns a string expression after it removes leading and trailing whitespace or custom characters.
+ ## Syntax
-
+ ```sql
-TRIM(<str_expr1>[, <str_expr2>])
-```
-
+TRIM(<string_expr_1> [, <string_expr_2>])
+```
+ ## Arguments
-
-*str_expr1*
- Is a string expression
-*str_expr2*
- Is an optional string expression to be trimmed from str_expr1. If not set, the default is whitespace.
+| | Description |
+| | |
+| **`string_expr_1`** | A string expression. |
+| **`string_expr_2` *(Optional)*** | An optional string expression with a string to trim from `string_expr_1`. If not specified, the default is to trim whitespace. |
## Return types
-
- Returns a string expression.
-
+
+Returns a string expression.
+ ## Examples
-
- The following example shows how to use `TRIM` inside a query.
-
-```sql
-SELECT TRIM(" abc") AS t1,
-TRIM(" abc ") AS t2,
-TRIM("abc ") AS t3,
-TRIM("abc") AS t4,
-TRIM("abc", "ab") AS t5,
-TRIM("abc", "abc") AS t6
-```
-
- Here is the result set.
-
-```json
-[
- {
- "t1": "abc",
- "t2": "abc",
- "t3": "abc",
- "t4": "abc",
- "t5": "c",
- "t6": ""
- }
-]
-```
+
+This example illustrates various ways to trim a string expression.
++ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`TRUNC`](trunc.md)
cosmos-db Trunc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/trunc.md
Title: TRUNC in Azure Cosmos DB query language
-description: Learn about SQL system function TRUNC in Azure Cosmos DB.
-
+ Title: TRUNC
+
+description: An Azure Cosmos DB for NoSQL system function that returns a truncated numeric value.
+++ - Previously updated : 06/22/2021--+ Last updated : 07/24/2023+
-# TRUNC (Azure Cosmos DB)
+
+# TRUNC (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a numeric value, truncated to the closest integer value.
-
+Returns a numeric value truncated to the closest integer value.
+ ## Syntax
-
+ ```sql
-TRUNC(<numeric_expr>)
-```
-
+TRUNC(<numeric_expr>)
+```
+ ## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
+
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
+ ## Return types
-
- Returns a numeric expression.
-
+
+Returns a numeric expression.
+ ## Examples
-
- The following example truncates the following positive and negative numbers to the nearest integer value.
-
-```sql
-SELECT TRUNC(2.4) AS t1, TRUNC(2.6) AS t2, TRUNC(2.5) AS t3, TRUNC(-2.4) AS t4, TRUNC(-2.6) AS t5
-```
-
- Here is the result set.
-
-```json
-[{t1: 2, t2: 2, t3: 2, t4: -2, t5: -2}]
-```
+
+This example illustrates various ways to truncate a number to the closest integer.
++ ## Remarks
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy).
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`TRIM`](trim.md)
cosmos-db Upper https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/upper.md
Previously updated : 07/01/2023 Last updated : 07/24/2023
Returns a string expression after converting lowercase character data to upperca
## Syntax ```sql
-UPPER(<string_expr>)
-```
+UPPER(<string_expr>)
+```
## Arguments
Returns a string expression.
The following example shows how to use the function to modify various strings.
-```sql
-SELECT VALUE {
- lowercase: UPPER("adventureworks"),
- uppercase: UPPER("ADVENTUREWORKS"),
- camelCase: UPPER("adventureWorks"),
- pascalCase: UPPER("AdventureWorks"),
- upperSnakeCase: UPPER("ADVENTURE_WORKS")
-}
-```
-
-```json
-[
- {
- "lowercase": "ADVENTUREWORKS",
- "uppercase": "ADVENTUREWORKS",
- "camelCase": "ADVENTUREWORKS",
- "pascalCase": "ADVENTUREWORKS",
- "upperSnakeCase": "ADVENTURE_WORKS"
- }
-]
-```
+ ## Remarks -- This system function doesn't use the index.
+- This function doesn't use the index.
- If you plan to do frequent case insensitive comparisons, this function may consume a significant number of RUs. Consider normalizing the casing of strings when ingesting your data. Then a query like `SELECT * FROM c WHERE UPPER(c.name) = 'USERNAME'` is simplified to `SELECT * FROM c WHERE c.name = 'USERNAME'`. ## Next steps
cost-management-billing Capabilities Analysis Showback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-analysis-showback.md
As a starting point, we focus on tools available in the Azure portal and Microso
- [Analyze resource usage metrics in Azure Monitor](../../azure-monitor/essentials/tutorial-metrics.md). - [Review resource configuration changes in Azure Resource Graph](../../governance/resource-graph/how-to/get-resource-changes.md). - If you need to build more advanced reports or merge cost data with other cloud or business data, [connect to Cost Management data in Power BI](/power-bi/connect-data/desktop-connect-azure-cost-management).
+ - If you're getting started with cost reporting in Power BI, consider using these [Power BI sample reports](https://github.com/flanakin/cost-management-powerbi).
## Building on the basics
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
Next, specify a staging linked service and staging folder in Azure Data Lake Gen
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-staging-folder.png" alt-text="Screenshot of specify staging folder in data flow activity.":::
+The **Checkpoint Key** is used by the SAP CDC runtime to store status information about the change data capture process. This, for example, allows SAP CDC mapping data flows to automatically recover from error situations, or to determine whether a change data capture process for a given data flow has already been established. It's therefore important to use a unique **Checkpoint Key** for each source. Otherwise, the status information of one source is overwritten by another source.
+
+>[!NOTE]
 > - To avoid conflicts, a unique ID is generated as the **Checkpoint Key** by default.
 > - When you use parameters to reuse the same data flow for multiple sources, make sure to parametrize the **Checkpoint Key** with unique values per source.
 > - The **Checkpoint Key** property isn't shown if the **Run mode** within the SAP CDC source is set to **Full on every run** (see next section), because in this case no change data capture process is established.
++ ### Mapping data flow properties To create a mapping data flow using the SAP CDC connector as a source, complete the following steps:
data-factory Sap Change Data Capture Advanced Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-advanced-topics.md
+
+ Title: SAP CDC advanced topics
+
+description: Learn about advanced features and best practices for SAP change data capture in Azure Data Factory.
+++++ Last updated : 07/04/2023+++
+# SAP CDC advanced topics
++
+Learn about advanced topics for the SAP CDC connector like metadata driven data integration, debugging, and more.
+
+## Parametrizing an SAP CDC mapping data flow
+
+One of the key strengths of pipelines and mapping data flows in Azure Data Factory and Azure Synapse Analytics is the support for metadata-driven data integration. With this feature, it's possible to design a single parametrized pipeline (or a few of them) that can handle the integration of potentially hundreds or even thousands of sources.
+The SAP CDC connector has been designed with this principle in mind: all relevant properties, whether it's the source object, run mode, or key columns, can be provided via parameters to maximize the flexibility and reuse potential of SAP CDC mapping data flows.
+
+To understand the basic concepts of parametrizing mapping data flows, read [Parameterizing mapping data flows](parameters-data-flow.md).
+
+In the template gallery of Azure Data Factory and Azure Synapse Analytics, you find a [template pipeline and data flow](solution-template-replicate-multiple-objects-sap-cdc.md) which shows how to parametrize SAP CDC data ingestion.
+
+### Parametrizing source and run mode
+
+Mapping data flows don't necessarily require a Dataset artifact: both source and sink transformations offer a **Source type** (or **Sink type**) **Inline**. In this case, all source properties otherwise defined in an ADF dataset can be configured in the **Source options** of the source transformation (or **Settings** tab of the sink transformation). Using an inline dataset provides a better overview and simplifies parametrizing a mapping data flow since the complete source (or sink) configuration is maintained in one place.
+
+For SAP CDC, the properties that are most commonly set via parameters are found in the tabs **Source options** and **Optimize**.
+When **Source type** is **Inline**, the following properties can be parametrized in **Source options**.
+
+- **ODP context**: valid parameter values are
+ - **ABAP_CDS** for ABAP Core Data Services Views
+ - **BW** for SAP BW or SAP BW/4HANA InfoProviders
+ - **HANA** for SAP HANA Information Views
+ - **SAPI** for SAP DataSources/Extractors
+ - when SAP Landscape Transformation Replication Server (SLT) is used as a source, the ODP context name is SLT~\<Queue Alias\>. The **Queue Alias** value can be found under **Administration Data** in the SLT configuration in the SLT cockpit (SAP transaction LTRC).
+ - **ODP_SELF** and **RANDOM** are ODP contexts used for technical validation and testing, and are typically not relevant.
+- **ODP name**: provide the ODP name you want to extract data from.
+- **Run mode**: valid parameter values are
+ - **fullAndIncrementalLoad** for **Full on the first run, then incremental**, which initiates a change data capture process and extracts a current full data snapshot.
+ - **fullLoad** for **Full on every run**, which extracts a current full data snapshot without initiating a change data capture process.
+ - **incrementalLoad** for **Incremental changes only**, which initiates a change data capture process without extracting a current full snapshot.
+- **Key columns**: key columns are provided as an array of (double-quoted) strings. For example, when working with SAP table **VBAP** (sales order items), the key definition would have to be \["VBELN", "POSNR"\] (or \["MANDT","VBELN","POSNR"\] in case the client field is taken into account as well).
+
+### Parametrizing the filter conditions for source partitioning
+
+In the **Optimize** tab, a source partitioning scheme (see [optimizing performance for full or initial loads](connector-sap-change-data-capture.md#optimizing-performance-of-full-or-initial-loads-with-source-partitioning)) can be defined via parameters. Typically, two steps are required:
+1. Define the source partitioning scheme.
+2. Ingest the partitioning parameter into the mapping data flow.
+
+#### Define a source partitioning scheme
+
+The format in step 1 follows the JSON standard, consisting of an array of partition definitions, each of which itself is an array of individual filter conditions. These conditions themselves are JSON objects with a structure aligned with so-called **selection options** in SAP. In fact, the format required by the SAP ODP framework is basically the same as dynamic DTP filters in SAP BW:
+
+```json
+{ "fieldName": <>, "sign": <>, "option": <>, "low": <>, "high": <> }
+```
+
+For example
+
+```json
+{ "fieldName": "VBELN", "sign": "I", "option": "EQ", "low": "0000001000" }
+```
+
+corresponds to a SQL WHERE clause ... **WHERE "VBELN" = '0000001000'**, or
+
+```json
+{ "fieldName": "VBELN", "sign": "I", "option": "BT", "low": "0000000000", "high": "0000001000" }
+```
+
+corresponds to a SQL WHERE clause ... **WHERE "VBELN" BETWEEN '0000000000' AND '0000001000'**
+
+A JSON definition of a partitioning scheme containing two partitions thus looks as follows
+
+```json
+[
+ [
+ { "fieldName": "GJAHR", "sign": "I", "option": "BT", "low": "2011", "high": "2015" }
+ ],
+ [
+ { "fieldName": "GJAHR", "sign": "I", "option": "BT", "low": "2016", "high": "2020" }
+ ]
+]
+```
+
+where the first partition contains fiscal years (GJAHR) 2011 through 2015, and the second partition contains fiscal years 2016 through 2020.
+
+>[!NOTE]
 > Azure Data Factory doesn't perform any checks on these conditions. For example, it's the user's responsibility to ensure that partition conditions don't overlap.
+
+Partition conditions can be more complex, consisting of multiple elementary filter conditions themselves. There are no logical conjunctions that explicitly define how to combine multiple elementary conditions within one partition. The implicit definition in SAP is as follows:
+1. **including** conditions ("sign": "I") for the same field name are combined with **OR** (mentally, put brackets around the resulting condition)
+2. **excluding** conditions ("sign": "E") for the same field name are combined with **OR** (again, mentally, put brackets around the resulting condition)
+3. the resulting conditions of steps 1 and 2 are
+ - combined with **AND** for **including** conditions,
+ - combined with **AND NOT** for **excluding** conditions.
+
+As an example, the partition condition
+
+```json
+ [
+ { "fieldName": "BUKRS", "sign": "I", "option": "EQ", "low": "1000" },
+ { "fieldName": "BUKRS", "sign": "I", "option": "EQ", "low": "1010" },
+ { "fieldName": "GJAHR", "sign": "I", "option": "BT", "low": "2010", "high": "2025" },
+ { "fieldName": "GJAHR", "sign": "E", "option": "EQ", "low": "2023" },
+ { "fieldName": "GJAHR", "sign": "E", "option": "EQ", "low": "2021" }
+ ]
+```
corresponds to a SQL WHERE clause ... **WHERE ("BUKRS" = '1000' OR "BUKRS" = '1010') AND ("GJAHR" BETWEEN '2010' AND '2025') AND NOT ("GJAHR" = '2021' OR "GJAHR" = '2023')**
+
+>[!NOTE]
+ > Make sure to use the SAP internal format for the low and high values, include leading zeroes, and express calendar dates as an eight character string with the format \"YYYYMMDD\".
+
+#### Ingesting the partitioning parameter into mapping data flow
+
+To ingest the partitioning scheme into a mapping data flow, create a data flow parameter (for example, "sapPartitions"). To pass the JSON format to this parameter, it has to be converted to a string using the **@string()** function:
++
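For example, assuming a hypothetical pipeline parameter that is also named `sapPartitions`, the dynamic content passed to the data flow parameter would be an expression along the lines of `@string(pipeline().parameters.sapPartitions)`.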
+Finally, in the **optimize** tab of the source transformation in your mapping data flow, select **Partition type** "Source", and enter the data flow parameter in the **Partition conditions** property.
++
+### Parametrizing the Checkpoint Key
+
+When using a parametrized data flow to extract data from multiple SAP CDC sources, it's important to parametrize the **Checkpoint Key** in the data flow activity of your pipeline. Azure Data Factory uses the checkpoint key to manage the status of a change data capture process. To prevent the status of one CDC process from overwriting the status of another, make sure that the checkpoint key values are unique for each parameter set used in a data flow.
+
+>[!NOTE]
+ > A best practice to ensure uniqueness of the **Checkpoint Key** is to add the checkpoint key value to the set of parameters for your dataflow.
+
+For more information on the checkpoint key, see [Transform data with the SAP CDC connector](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector).
+
+## Debugging
+
+Azure Data Factory pipelines can be executed via **triggered** or **debug runs**. A fundamental difference between these two options is that debug runs execute the data flow and pipeline based on the current version modeled in the user interface, while triggered runs execute the last published version of the data flow and pipeline.
+
+For SAP CDC, there's one more aspect that needs to be understood: to avoid debug runs affecting an existing change data capture process, debug runs use a different "subscriber process" value (see [Monitor SAP CDC data flows](sap-change-data-capture-management.md#monitor-sap-cdc-data-flows)) than triggered runs. Thus, they create separate subscriptions (that is, change data capture processes) within the SAP system. In addition, the "subscriber process" value for debug runs has a lifetime that is limited to the browser UI session.
+
+>[!NOTE]
 > To test the stability of a change data capture process with SAP CDC over a longer period of time (say, multiple days), the data flow and pipeline need to be published, and **triggered** runs need to be executed.
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
This article provides a high-level architecture of the SAP CDC capabilities in A
## How to use the SAP CDC capabilities
-At the core of the SAP CDC capabilities is the new SAP CDC connector. It can connect to all SAP systems that support ODP. This includes SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. It doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC connector extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view.
+The SAP CDC connector is the core of the SAP CDC capabilities. It can connect to all SAP systems that support ODP, which includes SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. It doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC connector extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view.
Use the SAP CDC connector with Data Factory features like mapping data flow activities, and tumbling window triggers for a low-latency SAP CDC replication solution in a self-managed pipeline.
Use the SAP CDC connector with Data Factory features like mapping data flow acti
The SAP CDC solution in Azure Data Factory is a connector between SAP and Azure. The SAP side includes the SAP ODP connector that invokes the ODP API over standard Remote Function Call (RFC) modules to extract full and delta raw SAP data.
-The Azure side includes the Data Factory mapping data flow that can transform and load the SAP data into any data sink supported by mapping data flows. This includes storage destinations like Azure Data Lake Storage Gen2 or databases like Azure SQL Database or Azure Synapse Analytics. The Data Factory data flow activity also can load the results in Data Lake Storage Gen2 in delta format. You can use the Delta Lake Time Travel feature to produce snapshots of SAP data for a specific period. You can run your pipeline and mapping data flows frequently by using a Data Factory tumbling window trigger to replicate SAP data in Azure with low latency and without using watermarking.
+The Azure side includes the mapping data flow that can transform and load the SAP data into any data sink supported by mapping data flows. Some of these options are storage destinations like Azure Data Lake Storage Gen2 or databases like Azure SQL Database or Azure Synapse Analytics. The mapping data flow activity also can load the results in Data Lake Storage Gen2 in delta format. You can use the Delta Lake Time Travel feature to produce snapshots of SAP data for a specific period. You can run your pipeline and mapping data flows frequently by using a Data Factory tumbling window trigger to replicate SAP data in Azure with low latency and without using watermarking.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-architecture-diagram.png" border="false" alt-text="Diagram of the architecture of the SAP CDC solution.":::
-To get started, create a Data Factory SAP CDC linked service, an SAP CDC source dataset, and a pipeline with a mapping data flow activity in which you use the SAP CDC source dataset. To extract the data from SAP, a self-hosted integration runtime is required that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to your SLT server. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime. A staging storage is required to be configured in data flow activity to make your self-hosted integration runtime work seamlessly with Data Flow integration runtime.
+To get started, create an SAP CDC linked service, an SAP CDC source dataset, and a pipeline with a mapping data flow activity in which you use the SAP CDC source dataset. To extract the data from SAP, a self-hosted integration runtime is required that you install on an on-premises computer or on a virtual machine (VM) that has a line of sight to your SAP source systems or your SLT server. The mapping data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime. Staging storage must be configured in the mapping data flow activity so that your self-hosted integration runtime works seamlessly with the mapping data flow integration runtime.
The SAP CDC connector uses the SAP ODP framework to extract various data source types, including:
The SAP CDC connector uses the SAP ODP framework to extract various data source
- InfoProviders and InfoObjects datasets in SAP BW and SAP BW/4HANA - SAP application tables, when you use an SAP LT replication server (SLT) as a proxy
-In this process, the SAP data sources are *providers*. The providers run on SAP systems to produce either full or incremental data in an operational delta queue (ODQ). The Data Factory mapping data flow source is a *subscriber* of the ODQ.
+In this process, the SAP data sources are *providers*. The providers run on SAP systems to produce either full or incremental data in an operational delta queue (ODQ). The mapping data flow source is a *subscriber* of the ODQ.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-shir-architecture-diagram.png" border="false" alt-text="Diagram of the architecture of the SAP ODP framework through a self-hosted integration runtime.":::
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
Previously updated : 11/17/2022 Last updated : 07/24/2023
To set up your SAP systems to use the SAP ODP framework, follow the guidelines t
### SAP system requirements
-The ODP framework is part of many SAP systems, including SAP ECC and SAP S/4HANA. It is also contained in SAP BW and SAP BW/4HANA. To ensure that your SAP releases have ODP, see the following SAP documentation or support notes. Even though the guidance primarily refers to SAP BW and SAP Data Services, the information also applies to Data Factory.
+The ODP framework is part of many SAP systems, including SAP ECC and SAP S/4HANA. It's also contained in SAP BW and SAP BW/4HANA. To ensure that your SAP releases have ODP, see the following SAP documentation or support notes. Even though the guidance primarily refers to SAP BW and SAP Data Services, the information also applies to Data Factory.
- To support ODP, run your SAP systems on SAP NetWeaver 7.0 SPS 24 or later. For more information, see [Transferring Data from SAP Source Systems via ODP (Extractors)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/327833022dcf42159a5bec552663dc51.html). - To support SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) full extractions via ODP, run your SAP systems on NetWeaver 7.4 SPS 08 or later. To support SAP ABAP CDS delta extractions, run your SAP systems on NetWeaver 7.5 SPS 05 or later. For more information, see [Transferring Data from SAP Systems via ODP (ABAP CDS Views)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/af11a5cb6d2e4d4f90d344f58fa0fb1d.html).
Data extractions via ODP require a properly configured user on SAP systems. The
ODP offers various data extraction contexts or *source object types*. Although most data source objects are ready to extract, some require more configuration. In an SAPI context, the objects to extract are called DataSources or *extractors*. To extract DataSources, be sure to meet the following requirements: -- Ensure that DataSources are activated on your SAP source systems. This requirement applies only to DataSources that are delivered by SAP or its partners. DataSources that are created by customers are automatically activated. If DataSources have been or are being extracted by SAP BW or BW/4HANA, the DataSources have already been activated. For more information about DataSources and their activations, see [Installing BW Content DataSources](https://help.sap.com/saphelp_nw73/helpdata/en/4a/1be8b7aece044fe10000000a421937/frameset.htm).
+- Ensure that DataSources are activated on your SAP source systems. This requirement applies only to DataSources that SAP or its partners deliver out of the box. Customer-created DataSources are automatically active. If you already use a certain DataSource with SAP BW or BW/4HANA, it's already activated. For more information about DataSources and their activation, see [Installing BW Content DataSources](https://help.sap.com/saphelp_nw73/helpdata/en/4a/1be8b7aece044fe10000000a421937/frameset.htm).
-- Make sure that DataSources are released for extraction via ODP. This requirement applies to DataSources that customers create as well as DataSources created by SAP in older releases of SAP ECC. For more information, see the following SAP support note [2232584 - To release SAP extractors for ODP API](https://launchpad.support.sap.com/#/notes/2232584).
+- Make sure that DataSources are released for extraction via ODP. This requirement applies to DataSources that customers create and DataSources created by SAP in older releases of SAP ECC. For more information, see the following SAP support note [2232584 - To release SAP extractors for ODP API](https://launchpad.support.sap.com/#/notes/2232584).
-### Set up the SAP Landscape Transformation Replication Server
+### Set up the SAP Landscape Transformation Replication Server (optional)
-SAP Landscape Transformation Replication Server (SLT) is a database trigger-enabled CDC solution that can replicate SAP application tables and simple views in near real time. SLT replicates from SAP source systems to various targets, including the operational delta queue (ODQ). You can use SLT as a proxy in data extraction ODP. You can install SLT on an SAP source system as an SAP Data Migration Server (DMIS) add-on or use it on a standalone replication server. To use SLT as a proxy, complete the following steps:
+SAP Landscape Transformation Replication Server (SLT) is a database trigger-enabled CDC solution that can replicate SAP application tables and simple views in near real time. SLT replicates from SAP source systems to various targets, including the operational delta queue (ODQ).
+
+>[!NOTE]
+ > SAP Landscape Transformation Replication Server (SLT) is only required if you want to replicate data from SAP tables with the SAP CDC connector. All other sources work out-of-the-box without SLT.
+
+You can use SLT as a proxy for ODP data extraction. You can install SLT on an SAP source system as an SAP Data Migration Server (DMIS) add-on or use it on a standalone replication server. To use SLT as a proxy, complete the following steps:
1. Install NetWeaver 7.4 SPS 04 or later and the DMIS 2011 SP 05 add-on on your replication server. For more information, see [Transferring Data from SLT Using Operational Data Provisioning](https://help.sap.com/docs/SAP_NETWEAVER_750/ccc9cdbdc6cd4eceaf1e5485b1bf8f4b/6ca2eb9870c049159de25831d3269f3f.html).
To validate your SAP system configurations for ODP, you can run the RODPS_REPL_T
The following SAP support notes resolve known issues on SAP systems: - [1660374 - To extend timeout when fetching large data sets via ODP](https://launchpad.support.sap.com/#/notes/1660374)-- [2321589 - To resolve non-existing Business Add-In (BAdI) for RSODP_ODATA subscriber type](https://launchpad.support.sap.com/#/notes/2321589)
+- [2321589 - To resolve missing Business add-in (BAdI) implementation for RSODP_ODATA subscriber type](https://launchpad.support.sap.com/#/notes/2321589)
- [2636663 - To resolve inconsistent database trigger status in SLT when extracting and replicating the same SAP application table](https://launchpad.support.sap.com/#/notes/2636663) - [3038236 - To resolve CDS view extractions that fail to populate ODQ](https://launchpad.support.sap.com/#/notes/3038236) - [3076927 - To remove unsupported callbacks when extracting from SAP BW or BW/4HANA](https://launchpad.support.sap.com/#/notes/3076927)
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project description: Defend your GCP resources by using Microsoft Defender for Cloud. Previously updated : 06/28/2023 Last updated : 07/24/2023 # Connect your GCP project to Microsoft Defender for Cloud
To connect your GCP project to Defender for Cloud by using a native connector:
:::image type="content" source="media/quickstart-onboard-gcp/create-connector.png" alt-text="Screenshot of the pane for creating a GCP connector." lightbox="media/quickstart-onboard-gcp/create-connector.png":::
- Optionally, if you select **Organization**, a management project and an organization custom role are created on your GCP project for the onboarding process. Auto-provisioning is enabled for the onboarding of new projects.
+ Optionally, if you select **Organization**, a management project and an organization custom role are created on your GCP project for the onboarding process. Autoprovisioning is enabled for the onboarding of new projects.
1. Select **Next: Select plans**.
To connect your GCP project to Defender for Cloud by using a native connector:
| CSPM | Defender for Containers| |--|--|
- | CSPM service account reader role <br><br> Microsoft Defender for Cloud identity federation <br><br> CSPM identity pool <br><br>Microsoft Defender for Servers service account (when the servers plan is enabled) <br><br>*Azure Arc for servers onboarding* service account (when Azure Arc for servers auto-provisioning is enabled) | Microsoft Defender for Containers service account role <br><br> Microsoft Defender Data Collector service account role <br><br> Microsoft Defender for Cloud identity pool |
+ | CSPM service account reader role <br><br> Microsoft Defender for Cloud identity federation <br><br> CSPM identity pool <br><br>Microsoft Defender for Servers service account (when the servers plan is enabled) <br><br>*Azure Arc for servers onboarding* service account (when Azure Arc for servers autoprovisioning is enabled) | Microsoft Defender for Containers service account role <br><br> Microsoft Defender Data Collector service account role <br><br> Microsoft Defender for Cloud identity pool |
-After you create the connector, a scan starts on your GCP environment. New recommendations appear in Defender for Cloud after up to 6 hours. If you enabled auto-provisioning, Azure Arc and any enabled extensions are installed automatically for each newly detected resource.
+After you create the connector, a scan starts on your GCP environment. New recommendations appear in Defender for Cloud after up to 6 hours. If you enabled autoprovisioning, Azure Arc and any enabled extensions are installed automatically for each newly detected resource.
## Optional: Configure selected plans
Microsoft Defender for Servers brings threat detection and advanced defenses to
- Azure Arc for servers installed on your VM instances.
-We recommend that you use the auto-provisioning process to install Azure Arc on your VM instances. Auto-provisioning is enabled by default in the onboarding process and requires **Owner** permissions on the subscription. The Azure Arc auto-provisioning process uses the OS Config agent on the GCP end. [Learn more about the availability of the OS Config agent on GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager).
+We recommend that you use the autoprovisioning process to install Azure Arc on your VM instances. Autoprovisioning is enabled by default in the onboarding process and requires **Owner** permissions on the subscription. The Azure Arc autoprovisioning process uses the OS Config agent on the GCP end. [Learn more about the availability of the OS Config agent on GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager).
-The Azure Arc auto-provisioning process uses the VM manager on GCP to enforce policies on your VMs through the OS Config agent. A VM that has an [active OS Config agent](https://cloud.google.com/compute/docs/manage-os#agent-state) incurs a cost according to GCP. To see how this cost might affect your account, refer to the [GCP technical documentation](https://cloud.google.com/compute/docs/vm-manager#pricing).
+The Azure Arc autoprovisioning process uses the VM manager on GCP to enforce policies on your VMs through the OS Config agent. A VM that has an [active OS Config agent](https://cloud.google.com/compute/docs/manage-os#agent-state) incurs a cost according to GCP. To see how this cost might affect your account, refer to the [GCP technical documentation](https://cloud.google.com/compute/docs/vm-manager#pricing).
-Microsoft Defender for Servers does not install the OS Config agent to a VM that doesn't have it installed. However, Microsoft Defender for Servers enables communication between the OS Config agent and the OS Config service if the agent is already installed but not communicating with the service. This communication can change the OS Config agent from `inactive` to `active` and lead to more costs.
+Microsoft Defender for Servers doesn't install the OS Config agent to a VM that doesn't have it installed. However, Microsoft Defender for Servers enables communication between the OS Config agent and the OS Config service if the agent is already installed but not communicating with the service. This communication can change the OS Config agent from `inactive` to `active` and lead to more costs.
Alternatively, you can manually connect your VM instances to Azure Arc for servers. Instances in projects with the Defender for Servers plan enabled that aren't connected to Azure Arc are surfaced by the recommendation **GCP VM instances should be connected to Azure Arc**. Select the **Fix** option in the recommendation to install Azure Arc on the selected machines.
-The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of [Disconnected or Expired](/azure/azure-arc/servers/overview)) are removed after 7 days. This process removes irrelevant Azure Arc entities to ensure that only Azure Arc servers related to existing instances are displayed.
+The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of [Disconnected or Expired](/azure/azure-arc/servers/overview)) are removed after seven days. This process removes irrelevant Azure Arc entities to ensure that only Azure Arc servers related to existing instances are displayed.
Ensure that you fulfill the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud).
Enable these other extensions on the Azure Arc-connected machines:
Make sure the selected Log Analytics workspace has a security solution installed. The Log Analytics agent and the Azure Monitor agent are currently configured at the *subscription* level. All the multicloud accounts and projects (from both AWS and GCP) under the same subscription inherit the subscription settings for the Log Analytics agent and the Azure Monitor agent. [Learn more about monitoring components for Defender for Servers](monitoring-components.md).
-Defender for Servers assigns tags to your GCP resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Servers can manage your resources: `Cloud`, `InstanceName`, `MDFCSecurityConnector`, `MachineId`, `ProjectId`, and `ProjectNumber`.
+Defender for Servers assigns tags to your GCP resources to manage the autoprovisioning process. You must have these tags properly assigned to your resources so that Defender for Servers can manage your resources: `Cloud`, `InstanceName`, `MDFCSecurityConnector`, `MachineId`, `ProjectId`, and `ProjectNumber`.
To configure the Defender for Servers plan:
To configure the Defender for Databases plan:
Microsoft Defender for Containers brings threat detection and advanced defenses to your GCP Google Kubernetes Engine (GKE) Standard clusters. To get the full security value out of Defender for Containers and to fully protect GCP clusters, ensure that you meet the following requirements. > [!NOTE]
-> If you choose to disable the available configuration options, no agents or components will be deployed to your clusters. [Learn more about feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+>
+> - If you choose to disable the available configuration options, no agents or components will be deployed to your clusters. [Learn more about feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+> - Defender for Containers, when deployed on GCP, may incur external costs such as [logging costs](https://cloud.google.com/stackdriver/pricing), [pub/sub costs](https://cloud.google.com/pubsub/pricing), and [egress costs](https://cloud.google.com/vpc/network-pricing#:~:text=Platform%20SKUs%20apply.-%2cInternet%20egress%20rates%2c-Premium%20Tier%20pricing).
- **Kubernetes audit logs to Defender for Cloud**: Enabled by default. This configuration is available at the GCP project level only. It provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud back end for further analysis. - **Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension**: Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in three ways:
- - Enable Defender for Containers auto-provisioning at the project level, as explained in the instructions in this section. We recommend this method.
+ - Enable Defender for Containers autoprovisioning at the project level, as explained in the instructions in this section. We recommend this method.
- Use Defender for Cloud recommendations for per-cluster installation. They appear on the Microsoft Defender for Cloud recommendations page. [Learn how to deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters). - Manually install [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md) and [extensions](../azure-arc/kubernetes/extensions.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | July 2023 | | [General availability release of agentless container posture in Defender CSPM](#general-availability-ga-release-of-agentless-container-posture-in-defender-cspm) | July 2023 | | [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | August 2023 |
+| [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | August 2023 |
| [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | September 2023 | ### Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"
Existing customers of Defender for Key-Vault, Defender for Azure Resource Manage
For more information on all of these plans, check out the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h)
+### Preview alerts for DNS servers to be deprecated
+
+**Estimated date for change: August 2023**
+
+As part of a quality improvement process, security alerts for DNS servers are set to be deprecated in August 2023. For cloud resources, use [Azure DNS](defender-for-dns-introduction.md) to receive the same security value.
+
+The following table lists the alerts to be deprecated:
+
+| AlertDisplayName | AlertType |
+|--|--|
+| Communication with suspicious random domain name (Preview) | DNS_RandomizedDomain |
+| Communication with suspicious domain identified by threat intelligence (Preview) | DNS_ThreatIntelSuspectDomain |
+| Digital currency mining activity (Preview) | DNS_CurrencyMining |
+| Network intrusion detection signature activation (Preview) | DNS_SuspiciousDomain |
+| Attempted communication with suspicious sinkholed domain (Preview) | DNS_SinkholedDomain |
+| Communication with possible phishing domain (Preview) | DNS_PhishingDomain|
+| Possible data transfer via DNS tunnel (Preview) | DNS_DataObfuscation |
+| Possible data exfiltration via DNS tunnel (Preview) | DNS_DataExfiltration |
+| Communication with suspicious algorithmically generated domain (Preview) | DNS_DomainGenerationAlgorithm |
+| Possible data download via DNS tunnel (Preview) | DNS_DataInfiltration |
+| Anonymity network activity (Preview) | DNS_DarkWeb |
+| Anonymity network activity using web proxy (Preview) | DNS_DarkWebProxy |
+ ### Change to the Log Analytics daily cap Azure monitor offers the capability to [set a daily cap](../azure-monitor/logs/daily-cap.md) on the data that is ingested on your Log analytics workspaces. However, Defender for Cloud security events are currently not supported in those exclusions.
event-grid Concepts Pull Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/concepts-pull-delivery.md
Last updated 05/24/2023
-# Azure Event Grid's pull delivery - Concepts
+# Azure Event Grid's pull delivery (Preview) - Concepts
This article describes the main concepts related to the new resource model that uses namespaces.
event-grid Create View Manage Event Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-event-subscriptions.md
Last updated 05/24/2023
-# Create, view, and manage event subscriptions in namespace topics
+# Create, view, and manage event subscriptions in namespace topics (Preview)
This article shows you how to create, view, and manage event subscriptions to namespace topics in Azure Event Grid. [!INCLUDE [pull-preview-note](./includes/pull-preview-note.md)]
event-grid Create View Manage Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-namespace-topics.md
Last updated 05/23/2023
-# Create, view, and manage namespace topics
+# Create, view, and manage namespace topics (Preview)
This article shows you how to create, view, and manage namespace topics in Azure Event Grid. [!INCLUDE [pull-preview-note](./includes/pull-preview-note.md)]
event-grid Create View Manage Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-namespaces.md
Title: Create, view, and manage Azure Event Grid namespaces
+ Title: Create, view, and manage Azure Event Grid namespaces (Preview)
description: This article describes how to create, view and manage namespaces
Last updated 05/23/2023
-# Create, view, and manage namespaces
+# Create, view, and manage namespaces (Preview)
A namespace in Azure Event Grid is a logical container for one or more topics, clients, client groups, topic spaces, and permission bindings. It provides a unique namespace, allowing you to have multiple resources in the same Azure region. With an Azure Event Grid namespace, you can now group related resources together and manage them as a single unit in your Azure subscription.
+> [!IMPORTANT]
+> The Namespace resource is currently in PREVIEW.
This article shows you how to use the Azure portal to create, view and manage an Azure Event Grid namespace.
event-grid Monitor Mqtt Delivery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-mqtt-delivery-reference.md
Last updated 05/23/2023
-# Monitor data reference for Azure Event Grid's MQTT delivery
+# Monitor data reference for Azure Event Grid's MQTT delivery (Preview)
This article provides a reference of log and metric data collected to analyze the performance and availability of Azure Event Grid's MQTT delivery. [!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
event-grid Monitor Pull Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-pull-reference.md
Last updated 04/28/2023
-# Monitor data reference for Azure Event Grid's pull event delivery
+# Monitor data reference for Azure Event Grid's pull event delivery (Preview)
This article provides a reference of log and metric data collected to analyze the performance and availability of Azure Event Grid's pull delivery. [!INCLUDE [pull-preview-note](./includes/pull-preview-note.md)]
event-grid Mqtt Publish And Subscribe Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-cli.md
-# Quickstart: Publish and subscribe to MQTT messages on Event Grid Namespace with Azure CLI
+# Quickstart: Publish and subscribe to MQTT messages on Event Grid Namespace with Azure CLI (Preview)
Azure Event Grid supports messaging using the MQTT protocol. Clients (both devices and cloud applications) can publish and subscribe MQTT messages over flexible hierarchical topics for scenarios such as high scale broadcast, and command & control.
event-grid Mqtt Publish And Subscribe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-portal.md
-# Quickstart: Publish and subscribe to MQTT messages on Event Grid Namespace with Azure portal
+# Quickstart: Publish and subscribe to MQTT messages on Event Grid Namespace with Azure portal (Preview)
In this article, you use the Azure portal to do the following tasks:
If you don't already have a certificate, you can create a sample certificate usi
1. Once you installed Step, in Windows PowerShell, run the command to create root and intermediate certificates. ```powershell
- .\step ca init --deployment-type standalone --name MqttAppSamplesCA --dns localhost --address 127.0.0.1:443 --provisioner MqttAppSamplesCAProvisioner
+ step ca init --deployment-type standalone --name MqttAppSamplesCA --dns localhost --address 127.0.0.1:443 --provisioner MqttAppSamplesCAProvisioner
```
-2. Using the CA files generated to create certificate for the client.
+2. Use the CA files generated in step 1 to create a certificate for the client.
```powershell
- .\step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h
+ step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h
``` 3. To view the thumbprint, run the Step command. ```powershell
- .\step certificate fingerprint client1-authnID.pem
+ step certificate fingerprint client1-authnID.pem
``` ## Create a Namespace
event-grid Mqtt Routing To Event Hubs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-to-event-hubs-cli.md
-# Tutorial: Route MQTT messages to Azure Event Hubs from Azure Event Grid with Azure CLI
+# Tutorial: Route MQTT messages to Azure Event Hubs from Azure Event Grid with Azure CLI (Preview)
Use message routing in Azure Event Grid to send data from your MQTT clients to Azure services such as storage queues, and Event Hubs.
event-grid Mqtt Routing To Event Hubs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-to-event-hubs-portal.md
-# Tutorial: Route MQTT messages to Azure Event Hubs from Azure Event Grid with Azure portal
+# Tutorial: Route MQTT messages to Azure Event Hubs from Azure Event Grid with Azure portal (Preview)
Use message routing in Azure Event Grid to send data from your MQTT clients to Azure services such as storage queues, and Event Hubs. In this tutorial, you perform the following tasks:
event-grid Publish Events Using Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-using-namespace-topics.md
Title: Publish and consume events or messages using namespace topics
+ Title: Publish and consume events or messages using namespace topics (Preview)
description: Describes the steps to publish and consume events or messages using namespace topics.
Last updated 05/24/2023
-# Publish to namespace topics and consume events
+# Publish to namespace topics and consume events (Preview)
This article describes the steps to publish and consume events using the [CloudEvents](https://github.com/cloudevents/spec) with [JSON format](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) using namespace topics and event subscriptions.
event-hubs Explore Captured Avro Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/explore-captured-avro-files.md
You can verify that captured files were created in the Azure Storage account usi
An easy way to explore Avro files is by using the [Avro Tools][Avro Tools] jar from Apache. You can also use [Apache Drill][Apache Drill] for a lightweight SQL-driven experience or [Apache Spark][Apache Spark] to perform complex distributed processing on the ingested data.
-## Use Apache Drill
-[Apache Drill][Apache Drill] is an "open-source SQL query engine for Big Data exploration" that can query structured and semi-structured data wherever it is. The engine can run as a standalone node or as a huge cluster for great performance.
-
-A native support to Azure Blob storage is available, which makes it easy to query data in an Avro file, as described in the documentation:
-
-[Apache Drill: Azure Blob Storage Plugin][Apache Drill: Azure Blob Storage Plugin]
-
-To easily query captured files, you can create and execute a VM with Apache Drill enabled via a container to access Azure Blob storage. See the following sample: [Streaming at Scale with Event Hubs Capture](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-capture-databricks-delta).
## Use Apache Spark [Apache Spark][Apache Spark] is a "unified analytics engine for large-scale data processing." It supports different languages, including SQL, and can easily access Azure Blob storage. There are a few options to run Apache Spark in Azure, and each provides easy access to Azure Blob storage: - [HDInsight: Address files in Azure storage][HDInsight: Address files in Azure storage]-- [Azure Databricks: Azure Blob storage][Azure Databricks: Azure Blob Storage]
+- [Azure Databricks: Azure Blob storage][Azure Databricks: Azure Blob Storage]. See the following sample: [Streaming at Scale with Event Hubs Capture](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-capture-databricks-delta).
- [Azure Kubernetes Service](../aks/spark-job.md) ## Use Avro Tools
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
Supported bandwidth offers:
50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, 10 Gbps
+> [!NOTE]
+> ExpressRoute supports a redundant pair of cross connections. If you exceed the configured bandwidth of your ExpressRoute circuit in a cross connection, your traffic is subject to rate limiting within that cross connection.
+>
+ ### What's the maximum MTU supported? ExpressRoute and other hybrid networking services--VPN and vWAN--supports a maximum MTU of 1400 bytes.
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-quickstart-template-samples.md
Previously updated : 03/10/2022 Last updated : 07/25/2023 zone_pivot_groups: front-door-tiers
The following table includes links to Bicep and Azure Resource Manager deploymen
| [App Service with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-app-service-private-link) | Creates an App Service app with a private endpoint, and a Front Door profile. | |**Azure Functions origins**| **Description** | | [Azure Functions](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-function-public/) | Creates an Azure Functions app with a public endpoint, and a Front Door profile. |
-| [Azure Functions with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-function-private-link) | Creates an Azure Functions app with a private endpoint, and a Front Door profile. |
|**API Management origins**| **Description** | | [API Management (external)](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-api-management-external) | Creates an API Management instance with external VNet integration, and a Front Door profile. | |**Storage origins**| **Description** |
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
The response to this request looks like the following example:
} ```
+#### Enrichments
+
+There are three types of enrichment that you can add to an export: custom strings, system properties, and custom properties.
+
+The following example shows how to use the `enrichments` node to add a custom string to the outgoing message:
+
+```json
+"enrichments": {
+ "My custom string": {
+ "value": "My value"
+ },
+ //...
+}
+```
+
+The following example shows how to use the `enrichments` node to add a system property to the outgoing message:
+
+```json
+"enrichments": {
+ "Device template": {
+ "path": "$templateDisplayName"
+ },
+ //...
+}
+```
+
+You can add the following system properties:
+
+| Property | Description |
+| -- | -- |
+| `$enabled` | Is the device enabled? |
+| `$displayName` | The device name. |
+| `$templateDisplayName` | The device template name. |
+| `$organizations` | The organizations the device belongs to. |
+| `$provisioned` | Is the device provisioned? |
+| `$simulated` | Is the device simulated? |
+
+The following example shows how to use the `enrichments` node to add a custom property to the outgoing message. Custom properties are properties defined in the device template that the device is associated with:
+
+```json
+"enrichments": {
+ "Device model": {
+ "target": "dtmi:azure:DeviceManagement:DeviceInformation;1",
+ "path": "model"
+ },
+ //...
+}
+```
+
+#### Filters
+
+You can filter the exported messages based on telemetry or property values.
+
+The following example shows how to use the `filter` field to export only messages where the accelerometer-X telemetry value is greater than 0:
+
+```json
+{
+ "id": "export-001",
+ "displayName": "Enriched Export",
+ "enabled": true,
+ "source": "telemetry",
+ "filter": "SELECT * FROM dtmi:azurertos:devkit:gsgmxchip;1 WHERE accelerometerX > 0",
+ "destinations": [
+ {
+ "id": "dest-001"
+ }
+ ],
+ "status": "healthy"
+}
+```
+
+The following example shows how to use the `filter` field to export only messages where the `temperature` telemetry value is greater than the `targetTemperature` property:
+
+```json
+{
+ "id": "export-001",
+ "displayName": "Enriched Export",
+ "enabled": true,
+ "source": "telemetry",
+ "filter": "SELECT * FROM dtmi:azurertos:devkit:gsgmxchip;1 AS A, dtmi:contoso:Thermostat;1 WHERE A.temperature > targetTemperature",
+ "destinations": [
+ {
+ "id": "dest-001"
+ }
+ ],
+ "status": "healthy"
+}
+```
+ ### Get an export by ID Use the following request to retrieve details of an export definition from your application:
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
Previously updated : 06/14/2023 Last updated : 07/24/2023
You can use Azure IoT Central to manage your connected devices at scale through jobs. Jobs let you do bulk updates to device and cloud properties and run commands. This article shows you how to use jobs in your own application and how to use the import and export features.
-To learn how to manage jobs by using the IoT Central REST API, see [How to use the IoT Central REST API to manage devices.](../core/howto-manage-jobs-with-rest-api.md).
+To learn how to manage jobs by using the IoT Central REST API, see [How to use the IoT Central REST API to manage devices](../core/howto-manage-jobs-with-rest-api.md).
+
+> [!TIP]
+> When you create a recurring job, sign in to your application using a Microsoft account or Azure Active Directory account. If you sign in using an Azure Active Directory group, it's possible that the Azure Active Directory token associated with the group will expire at some point in the future and cause the job to fail.
## Create and run a job
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
To view the health of your public standard load balancer resources:
A generic description of a resource health status is available in the [resource health documentation](../service-health/resource-health-overview.md).
+### Resource health alerts
+
+Azure Resource Health alerts can notify you in near real time when the health state of your Load balancer resource changes. We recommend that you set resource health alerts to notify you when your Load balancer resource is in a **Degraded** or **Unavailable** state.
+
+When you create Azure resource health alerts for Load balancer, Azure sends resource health notifications to your Azure subscription. You can create and customize alerts based on:
+* The subscription affected
+* The resource group affected
+* The resource type affected (Load balancer)
+* The specific resource (any Load balancer resource you choose to set up an alert for)
+* The event status of the Load balancer resource affected
+* The current status of the Load balancer resource affected
+* The previous status of the Load balancer resource affected
+* The reason type of the Load balancer resource affected
+
+You can also configure who the alert should be sent to:
+* A new action group (that can be used for future alerts)
+* An existing action group
+
+For more information on how to set up these resource health alerts, see:
+* [Resource health alerts using Azure portal](/azure/service-health/resource-health-alert-monitor-guide#create-a-resource-health-alert-rule-in-the-azure-portal)
+* [Resource health alerts using Resource Manager templates](/azure/service-health/resource-health-alert-arm-template-guide)
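As a scripted counterpart to the portal and template options above, the following Python sketch creates an activity log alert rule with a `ResourceHealth` condition by calling the Azure Resource Manager REST API. It's an illustration under assumptions: it requires the `azure-identity` and `requests` packages, the resource IDs, alert name, and API version are placeholders, and you should confirm the condition fields against the Resource Manager template guide linked above.

```python
# Minimal sketch: create an activity log alert that fires on Resource Health events
# for a specific load balancer. Resource IDs, names, and the API version are
# placeholders (assumptions), not values from this article.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
LOAD_BALANCER_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.Network/loadBalancers/<load-balancer-name>"
)
ACTION_GROUP_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/microsoft.insights/actionGroups/<action-group-name>"
)

alert_rule = {
    "location": "Global",
    "properties": {
        "enabled": True,
        "scopes": [LOAD_BALANCER_ID],
        "condition": {
            "allOf": [
                {"field": "category", "equals": "ResourceHealth"},
                # Fire when the resource moves into a Degraded or Unavailable state.
                {"anyOf": [
                    {"field": "properties.currentHealthStatus", "equals": "Degraded"},
                    {"field": "properties.currentHealthStatus", "equals": "Unavailable"},
                ]},
            ]
        },
        "actions": {"actionGroups": [{"actionGroupId": ACTION_GROUP_ID}]},
    },
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Insights"
    "/activityLogAlerts/lb-resource-health-alert?api-version=2020-10-01"
)
resp = requests.put(url, json=alert_rule, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```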
+ ## Next steps - Learn about [Network Analytics](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics).
machine-learning Dsvm Tools Productivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-productivity.md
Last updated 05/12/2021
In addition to the data science and programming tools, the DSVM contains productivity tools to help you capture and share insights with your colleagues. Microsoft 365 is the most productive and most secure Office experience for enterprises, allowing your teams to work together seamlessly from anywhere, anytime. With Power BI Desktop you can go from data to insight to action. And the Microsoft Edge browser is a modern, fast, and secure Web browser.
-| Tool | Windows 2019 Server DSVM | Linux DSVM | Usage notes |
-|--|:-:|:-:|:-|
-| [Microsoft 365](https://www.microsoft.com/microsoft-365) (Word, Excel, PowerPoint) | <span class='green-check'>&#9989; </span> | <span class='red-x'>&#10060; </span> | |
-| [Microsoft Teams](https://www.microsoft.com/microsoft-teams) | <span class='green-check'>&#9989; </span> | <span class='red-x'>&#10060; </span> | |
-| [Power BI Desktop](https://powerbi.microsoft.com/) | <span class='green-check'>&#9989; </span>| <span class='red-x'>&#10060; </span> | |
-| [Microsoft Edge Browser](https://www.microsoft.com/edge) | <span class='green-check'>&#9989; </span> | <span class='red-x'>&#10060; </span> | |
+> [!IMPORTANT]
+> This feature is currently in public preview.
+> This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+| Tool | Windows 2019 Server DSVM | Windows 2022 Server DSVM (Preview) | Linux DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-|
+| [Microsoft 365](https://www.microsoft.com/microsoft-365) (Word, Excel, PowerPoint) | <span class='green-check'>&#9989; </span> | <span class='green-check'>&#9989; </span> | <span class='red-x'>&#10060; </span> | |
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams) | <span class='green-check'>&#9989; </span> | <span class='green-check'>&#9989; </span> | <span class='red-x'>&#10060; </span> | |
+| [Power BI Desktop](https://powerbi.microsoft.com/) | <span class='green-check'>&#9989; </span> | <span class='green-check'>&#9989; </span>| <span class='red-x'>&#10060; </span> | |
+| [Microsoft Edge Browser](https://www.microsoft.com/edge) | <span class='green-check'>&#9989; </span> | <span class='green-check'>&#9989; </span> | <span class='red-x'>&#10060; </span> | |
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
Last updated 06/23/2022
# What is the Azure Data Science Virtual Machine for Linux and Windows?
+> [!IMPORTANT]
+> This feature is currently in public preview.
+> This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ The Data Science Virtual Machine (DSVM) is a customized VM image on the Azure cloud platform built specifically for doing data science. It has many popular data science tools preinstalled and preconfigured to jump-start building intelligent applications for advanced analytics. The DSVM is available on: + Windows Server 2019++ Windows Server 2022 (Preview) + Ubuntu 20.04 LTS Additionally, we're excited to offer Azure DSVM for PyTorch, which is an Ubuntu 20.04 image from Azure Marketplace that is optimized for large, distributed deep learning workloads. It comes preinstalled and validated with the latest PyTorch version to reduce setup costs and accelerate time to value. It comes packaged with various optimization functionalities (ONNX Runtime​, DeepSpeed​, MSCCL​, ORTMoE​, Fairscale​, Nvidia Apex​), and an up-to-date stack with the latest compatible versions of Ubuntu, Python, PyTorch, CUDA.
The DSVM is a customized VM image for Data Science but [Azure Machine Learning](
Key differences between these:
-|Feature |Data Science<br>VM |AzureML<br>Compute Instance |
+|Feature |Data Science<br>VM |Azure Machine Learning<br>Compute Instance |
|||| | Fully Managed | No | Yes | |Language Support | Python, R, Julia, SQL, C#,<br> Java, Node.js, F# | Python and R |
Key differences between these:
|Built-in<br>Hosted Notebooks | No<br>(requires additional configuration) | Yes | |Built-in SSO | No <br>(requires additional configuration) | Yes | |Built-in Collaboration | No | Yes |
-|Preinstalled Tools | Jupyter(lab), VSCode,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab) |
+|Preinstalled Tools | Jupyter(lab), VS Code,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab) |
## Sample use cases
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
The Data Science Virtual Machine is an easy way to explore data and do machine learning in the cloud. The Data Science Virtual Machines are pre-configured with the complete operating system, security patches, drivers, and popular data science and development software. You can choose the hardware environment, ranging from lower-cost CPU-centric machines to very powerful machines with multiple GPUs, NVMe storage, and large amounts of memory. For machines with GPUs, all drivers are installed, all machine learning frameworks are version-matched for GPU compatibility, and acceleration is enabled in all application software that supports GPUs.
-The Data Science Virtual Machine comes with the most useful data-science tools pre-installed.
+The Data Science Virtual Machine comes with the most useful data-science tools pre-installed.
++
+> [!IMPORTANT]
+> This feature is currently in public preview.
+> This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Build deep learning and machine learning solutions
-| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| [CUDA, cuDNN, NVIDIA Driver](https://developer.nvidia.com/cuda-toolkit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [CUDA, cuDNN, NVIDIA Driver on the DSVM](./dsvm-tools-deep-learning-frameworks.md#cuda-cudnn-nvidia-driver) |
-| [Horovod](https://github.com/horovod/horovod) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | [Horovod on the DSVM](./dsvm-tools-deep-learning-frameworks.md#horovod) |
-| [NVidia System Management Interface (nvidia-smi)](https://developer.nvidia.com/nvidia-system-management-interface) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [nvidia-smi on the DSVM](./dsvm-tools-deep-learning-frameworks.md#nvidia-system-management-interface-nvidia-smi) |
-| [PyTorch](https://pytorch.org) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [PyTorch on the DSVM](./dsvm-tools-deep-learning-frameworks.md#pytorch) |
-| [TensorFlow](https://www.tensorflow.org) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | [TensorFlow on the DSVM](./dsvm-tools-deep-learning-frameworks.md#tensorflow) |
-| Integration with [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) (Python) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | [Azure Machine Learning SDK](./dsvm-tools-data-science.md#azure-machine-learning-sdk-for-python) |
-| [XGBoost](https://github.com/dmlc/xgboost) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | [XGBoost on the DSVM](./dsvm-tools-data-science.md#xgboost) |
-| [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [Vowpal Wabbit on the DSVM](./dsvm-tools-data-science.md#vowpal-wabbit) |
-| [Weka](https://www.cs.waikato.ac.nz/ml/weka/) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
-| LightGBM | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> (GPU, MPI support) | |
-| H2O | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| CatBoost | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Intel MKL | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| OpenCV | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Dlib | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Docker | <span class='green-check'>&#9989;</span> <br/> (Windows containers only) | <span class='green-check'>&#9989;</span> | |
-| Nccl | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Rattle | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
-| PostgreSQL | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| ONNX Runtime | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| Tool | Windows Server 2019 DSVM | Windows Server 2022 DSVM (Preview) | Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| [CUDA, cuDNN, NVIDIA Driver](https://developer.nvidia.com/cuda-toolkit) | <span class='green-check'>&#9989;</span> |<span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [CUDA, cuDNN, NVIDIA Driver on the DSVM](./dsvm-tools-deep-learning-frameworks.md#cuda-cudnn-nvidia-driver) |
+| [Horovod](https://github.com/horovod/horovod) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | [Horovod on the DSVM](./dsvm-tools-deep-learning-frameworks.md#horovod) |
+| [NVidia System Management Interface (nvidia-smi)](https://developer.nvidia.com/nvidia-system-management-interface) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [nvidia-smi on the DSVM](./dsvm-tools-deep-learning-frameworks.md#nvidia-system-management-interface-nvidia-smi) |
+| [PyTorch](https://pytorch.org) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [PyTorch on the DSVM](./dsvm-tools-deep-learning-frameworks.md#pytorch) |
+| [TensorFlow](https://www.tensorflow.org) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | [TensorFlow on the DSVM](./dsvm-tools-deep-learning-frameworks.md#tensorflow) |
+| Integration with [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) (Python) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK, CLI, samples) | [Azure Machine Learning SDK](./dsvm-tools-data-science.md#azure-machine-learning-sdk-for-python) |
+| [XGBoost](https://github.com/dmlc/xgboost) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | [XGBoost on the DSVM](./dsvm-tools-data-science.md#xgboost) |
+| [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [Vowpal Wabbit on the DSVM](./dsvm-tools-data-science.md#vowpal-wabbit) |
+| [Weka](https://www.cs.waikato.ac.nz/ml/weka/) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
+| LightGBM | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> (GPU, MPI support) | |
+| H2O | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| CatBoost | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| Intel MKL | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| OpenCV | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| Dlib | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| Docker | <span class='green-check'>&#9989;</span><br/> (Windows containers only) | <span class='green-check'>&#9989;</span> <br/> (Windows containers only) | <span class='green-check'>&#9989;</span> | |
+| Nccl | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| Rattle | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
+| PostgreSQL | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| ONNX Runtime | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
## Store, retrieve, and manipulate data
-| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| Relational databases | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server on the DSVM](./dsvm-tools-data-platforms.md#sql-server-developer-edition) |
-| Database tools | SQL Server Management Studio<br/> SQL Server Integration Services<br/> [bcp, sqlcmd](/sql/tools/command-prompt-utility-reference-database-engine) | [SQuirreL SQL](http://squirrel-sql.sourceforge.net/) (querying tool), <br /> bcp, sqlcmd <br /> ODBC/JDBC drivers | |
-| [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) | <span class='green-check'>&#9989;</span></br> | |
-| [Azure CLI](/cli/azure) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
-| [AzCopy](../../storage/common/storage-use-azcopy-v10.md) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | [AzCopy on the DSVM](./dsvm-tools-ingestion.md#azcopy) |
-| [Blob FUSE driver](https://github.com/Azure/azure-storage-fuse) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span></br> | [blobfuse on the DSVM](./dsvm-tools-ingestion.md#blobfuse) |
-| [Azure Cosmos DB Data Migration Tool](../../cosmos-db/import-data.md) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | [Azure Cosmos DB on the DSVM](./dsvm-tools-ingestion.md#azure-cosmos-db-data-migration-tool) |
-| Unix/Linux command-line tools | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Apache Spark 3.1 (standalone) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | |
+| Tool | Windows Server 2019 DSVM | Windows Server 2022 DSVM (Preview)| Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| Relational databases | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server on the DSVM](./dsvm-tools-data-platforms.md#sql-server-developer-edition) |
+| Database tools | SQL Server Management Studio<br/> SQL Server Integration Services<br/> [bcp, sqlcmd](/sql/tools/command-prompt-utility-reference-database-engine) | SQL Server Management Studio<br/> SQL Server Integration Services<br/> [bcp, sqlcmd](/sql/tools/command-prompt-utility-reference-database-engine) | [SQuirreL SQL](http://squirrel-sql.sourceforge.net/) (querying tool), <br /> bcp, sqlcmd <br /> ODBC/JDBC drivers | |
+| [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| [Azure CLI](/cli/azure) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| [AzCopy](../../storage/common/storage-use-azcopy-v10.md) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | [AzCopy on the DSVM](./dsvm-tools-ingestion.md#azcopy) |
+| [Blob FUSE driver](https://github.com/Azure/azure-storage-fuse) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span></br> | [blobfuse on the DSVM](./dsvm-tools-ingestion.md#blobfuse) |
+| [Azure Cosmos DB Data Migration Tool](../../cosmos-db/import-data.md) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | [Azure Cosmos DB on the DSVM](./dsvm-tools-ingestion.md#azure-cosmos-db-data-migration-tool) |
+| Unix/Linux command-line tools | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| Apache Spark 3.1 (standalone) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | |
## Program in Python, R, Julia, and Node.js
-| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| [CRAN-R](https://cran.r-project.org/) with popular packages pre-installed | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
-| [Anaconda Python](https://www.continuum.io/) with popular packages pre-installed | <span class='green-check'>&#9989;</span><br/> (Miniconda) | <span class='green-check'>&#9989;</span></br> (Miniconda) | |
-| [Julia (Julialang)](https://julialang.org/) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
-| JupyterHub (multiuser notebook server) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| JupyterLab (multiuser notebook server) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
-| Node.js | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
-| [Jupyter Notebook Server](https://jupyter.org/) with the following kernels: | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | [Jupyter Notebook samples](./dsvm-samples-and-walkthroughs.md) |
-| &nbsp;&nbsp;&nbsp;&nbsp; R | | | [R Jupyter Samples](./dsvm-samples-and-walkthroughs.md#r-language) |
-| &nbsp;&nbsp;&nbsp;&nbsp; Python | | | [Python Jupyter Samples](./dsvm-samples-and-walkthroughs.md#python-language) |
-| &nbsp;&nbsp;&nbsp;&nbsp; Julia | | | [Julia Jupyter Samples](./dsvm-samples-and-walkthroughs.md#julia-language) |
-| &nbsp;&nbsp;&nbsp;&nbsp; PySpark | | | [pySpark Jupyter Samples](./dsvm-samples-and-walkthroughs.md#sparkml) |
-
-**Ubuntu 20.04 DSVM and Windows Server 2019 DSVM** have the following Jupyter Kernels:-</br>
+| Tool | Windows Server 2019 DSVM | Windows Server 2022 DSVM (Preview) | Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| [CRAN-R](https://cran.r-project.org/) with popular packages pre-installed | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| [Anaconda Python](https://www.continuum.io/) with popular packages pre-installed | <span class='green-check'>&#9989;</span> |<span class='green-check'>&#9989;</span><br/> (Miniconda) | <span class='green-check'>&#9989;</span></br> (Miniconda) | |
+| [Julia (Julialang)](https://julialang.org/) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| JupyterHub (multiuser notebook server) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| JupyterLab (multiuser notebook server) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Node.js | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| [Jupyter Notebook Server](https://jupyter.org/) with the following kernels: | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | [Jupyter Notebook samples](./dsvm-samples-and-walkthroughs.md) |
+| &nbsp;&nbsp;&nbsp;&nbsp; R | | | | [R Jupyter Samples](./dsvm-samples-and-walkthroughs.md#r-language) |
+| &nbsp;&nbsp;&nbsp;&nbsp; Python | | | | [Python Jupyter Samples](./dsvm-samples-and-walkthroughs.md#python-language) |
+| &nbsp;&nbsp;&nbsp;&nbsp; Julia | | | | [Julia Jupyter Samples](./dsvm-samples-and-walkthroughs.md#julia-language) |
+| &nbsp;&nbsp;&nbsp;&nbsp; PySpark | | | | [pySpark Jupyter Samples](./dsvm-samples-and-walkthroughs.md#sparkml) |
+
+**Ubuntu 20.04 DSVM, Windows Server 2019 DSVM, and Windows Server 2022 DSVM (Preview)** have the following Jupyter kernels:</br>
* Python3.8-default</br> * Python3.8-Tensorflow-Pytorch</br> * Python3.8-AzureML</br>
The Data Science Virtual Machine comes with the most useful data-science tools p
* Scala Spark – HDInsight</br> * Python 3 Spark – HDInsight</br>
-**Ubuntu 20.04 DSVM and Windows Server 2019 DSVM** have the following conda environments:-</br>
+**Ubuntu 20.04 DSVM, Windows Server 2019 DSVM, and Windows Server 2022 DSVM (Preview)** have the following conda environments:</br>
* Python3.8-default </br> * Python3.8-Tensorflow-Pytorch </br> * Python3.8-AzureML </br> ## Use your preferred editor or IDE
-| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| [Notepad++](https://notepad-plus-plus.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | |
-| [Nano](https://www.nano-editor.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | |
-| [Visual Studio 2019 Community Edition](https://www.visualstudio.com/community/) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | [Visual Studio on the DSVM](dsvm-tools-development.md#visual-studio-community-edition) |
-| [Visual Studio Code](https://code.visualstudio.com/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [Visual Studio Code on the DSVM](./dsvm-tools-development.md#visual-studio-code) |
-| [PyCharm Community Edition](https://www.jetbrains.com/pycharm/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [PyCharm on the DSVM](./dsvm-tools-development.md#pycharm) |
-| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| [Vim](https://www.vim.org) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | |
-| [Emacs](https://www.gnu.org/software/emacs) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | |
-| [Git](https://git-scm.com/) and Git Bash | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
-| [OpenJDK](https://openjdk.java.net) 11 | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
-| .NET Framework | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | |
-| Azure SDK | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Tool | Windows Server 2019 DSVM | Windows Server 2022 DSVM (Preview)| Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| [Notepad++](https://notepad-plus-plus.org/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | |
+| [Nano](https://www.nano-editor.org/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | |
+| [Visual Studio 2019 Community Edition](https://www.visualstudio.com/community/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | [Visual Studio on the DSVM](dsvm-tools-development.md#visual-studio-community-edition) |
+| [Visual Studio Code](https://code.visualstudio.com/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [Visual Studio Code on the DSVM](./dsvm-tools-development.md#visual-studio-code) |
+| [PyCharm Community Edition](https://www.jetbrains.com/pycharm/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [PyCharm on the DSVM](./dsvm-tools-development.md#pycharm) |
+| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| [Vim](https://www.vim.org) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | |
+| [Emacs](https://www.gnu.org/software/emacs) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | |
+| [Git](https://git-scm.com/) and Git Bash | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| [OpenJDK](https://openjdk.java.net) 11 | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| .NET Framework | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | |
+| Azure SDK | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
## Organize & present results
-| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| [Microsoft 365](https://www.microsoft.com/microsoft-365) (Word, Excel, PowerPoint) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | |
-| [Microsoft Teams](https://www.microsoft.com/microsoft-teams) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | |
-| [Power BI Desktop](https://powerbi.microsoft.com/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | |
-| Microsoft Edge Browser | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Tool | Windows Server 2019 DSVM | Windows Server 2022 DSVM (Preview) | Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| [Microsoft 365](https://www.microsoft.com/microsoft-365) (Word, Excel, PowerPoint) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | |
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | |
+| [Power BI Desktop](https://powerbi.microsoft.com/) | <span class='green-check'>&#9989;</span> |<span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | |
+| Microsoft Edge Browser | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
machine-learning How To Create Vector Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-vector-index.md
Title: How to create vector index in Azure Machine Learning prompt flow (preview)
+ Title: Create a vector index in an Azure Machine Learning prompt flow (preview)
-description: How to create a vector index in Azure Machine Learning and use it in a prompt flow.
+description: Learn how to create a vector index in Azure Machine Learning and use it in a prompt flow.
-# How to create vector index in Azure Machine Learning prompt flow (preview)
+# Create a vector index in an Azure Machine Learning prompt flow (preview)
-Azure Machine Learning enables you to create a vector index from files/folders on your machine, a location in a cloud storage, an Azure Machine Learning data asset, a Git repository, or an SQL database. Azure Machine Learning can currently crack and process text files, md files, pdf, excel files, word documents. You can also reuse an existing Azure Cognitive Search Index instead of creating a new Index.
+You can use Azure Machine Learning to create a vector index from files or folders on your machine, a location in cloud storage, an Azure Machine Learning data asset, a Git repository, or a SQL database. Azure Machine Learning can currently process .txt, .md, .pdf, .xls, and .docx files. You can also reuse an existing Azure Cognitive Search index instead of creating a new index.
-When a Vector Index is created, Azure Machine Learning will chunk the data, create embeddings, and store the embeddings in a FAISS Index or Azure Cognitive Search Index. In addition, Azure Machine Learning creates:
+When you create a vector index, Azure Machine Learning chunks the data, creates embeddings, and stores the embeddings in a Faiss index or Azure Cognitive Search index (a generic sketch of this pattern appears after the following list). In addition, Azure Machine Learning creates:
* Test data for your data source.
-* A sample prompt flow, which uses the Vector Index you created. The sample prompt flow, which gets created has several key features like: Automatically generated prompt variants. Evaluation of each of these variations using the [test data generated](https://aka.ms/prompt_flow_blog). Metrics against each of the variants to help you choose the best variant to run. You can use this sample to continue developing your prompt.
+* A sample prompt flow, which uses the vector index that you created. Features of the sample prompt flow include:
+
+ * Automatically generated prompt variants.
+ * Evaluation of each prompt variant by using the [generated test data](https://aka.ms/prompt_flow_blog).
+ * Metrics against each prompt variant to help you choose the best variant to run.
+
+ You can use this sample to continue developing your prompt.
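At a high level, the chunk, embed, and index steps resemble the following generic sketch. This is not the Azure Machine Learning implementation; it's an illustration of the pattern that assumes the open-source `faiss-cpu` and `numpy` packages, and `embed` is a placeholder standing in for whichever embedding model you use.

```python
# Generic illustration of the chunk -> embed -> index pattern (not the Azure
# Machine Learning implementation). Assumes the open-source faiss-cpu package;
# embed() is a placeholder for your embedding model.
import numpy as np
import faiss

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunks: list[str]) -> np.ndarray:
    """Placeholder: return one embedding vector per chunk. Here we fake
    768-dimensional vectors purely for illustration."""
    rng = np.random.default_rng(0)
    return rng.random((len(chunks), 768), dtype=np.float32)

documents = ["...contents of a .txt, .md, .pdf, .xls, or .docx file..."]
chunks = [c for doc in documents for c in chunk(doc)]
vectors = embed(chunks)

index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 search over the embeddings
index.add(vectors)

query_vector = embed(["example user question"])
distances, ids = index.search(query_vector, 3)
print([chunks[i] for i in ids[0] if i != -1])  # top-matching chunks for the query
```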
[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
When a Vector Index is created, Azure Machine Learning will chunk the data, crea
* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* Access to Azure Open AI.
-
-* Enable prompt flow in your Azure Machine Learning workspace
-
-In your Azure Machine Learning workspace, you can enable prompt flow by turn-on **Build AI solutions with Prompt flow** in the **Manage preview features** panel.
--
-## Create a new Vector Index using studio
-
-1. Select **Prompt flow** on the left menu
-
- :::image type="content" source="media/how-to-create-vector-index/prompt.png" alt-text="Screenshot showing the Prompt flow location on the left menu.":::
-
-1. Select **Vector Index** on the top menu
+* Access to Azure OpenAI Service.
- :::image type="content" source="./media/how-to-create-vector-index/vector-index.png" alt-text="Screenshot showing the Vector Index location on the top menu.":::
+* Prompt flows enabled in your Azure Machine Learning workspace. You can enable prompt flows by turning on **Build AI solutions with Prompt flow** on the **Manage preview features** panel.
+## Create a vector index by using Machine Learning studio
-1. Select **Create**
+1. Select **Prompt flow** on the left menu.
-1. After the create new vector index form opens, provide a name for your vector index.
+ :::image type="content" source="media/how-to-create-vector-index/prompt.png" alt-text="Screenshot that shows the location of prompt flow on the left menu.":::
-1. Next choose your data source type
+1. Select the **Vector Index** tab.
- :::image type="content" source="media/how-to-create-vector-index/new-vector-creation.png" alt-text="Screenshot showing the create new Vector Index form.":::
+ :::image type="content" source="./media/how-to-create-vector-index/vector-index.png" alt-text="Screenshot that shows the tab for vector index.":::
-1. Based on the chosen type, provide the location details of your
- source. Then, select **Next**.
+1. Select **Create**.
-1. Review the details of your vector index, then select the **Create** button to create the vector index. For more information about how to [use Vector Stores (preview).](concept-vector-stores.md)
+1. When the form for creating a vector index opens, provide a name for your vector index.
-1. This takes you to an overview page from where you can track and view the status of your Vector Index creation. Note: Vector Index creation may take a while depending on the size of data.
+ :::image type="content" source="media/how-to-create-vector-index/new-vector-creation.png" alt-text="Screenshot that shows basic settings for creating a vector index.":::
+1. Select your data source type.
+1. Based on the chosen type, provide the location details of your source. Then, select **Next**.
-## Add a Vector Index to a prompt flow
+1. Review the details of your vector index, and then select the **Create** button.
-Once you have created a Vector Index, you can add it to a prompt flow from the prompt flow canvas. The prompt flow designer has a Vector Index lookup tool. Add this tool to the canvas and enter the path to your Vector Index and the query you want to perform against the index. You can find the steps to do this here.
+1. On the overview page that appears, you can track and view the status of creating your vector index. The process might take a while, depending on the size of your data.
+## Add a vector index to a prompt flow
-1. Open an existing prompt flow
+After you create a vector index, you can add it to a prompt flow from the prompt flow canvas.
+1. Open an existing prompt flow.
-1. On the top menu, select **More Tools** and select Vector Index Lookup
+1. On the top menu of the prompt flow designer, select **More tools**, and then select **Vector Index Lookup**.
- :::image type="content" source="media/how-to-create-vector-index/vector-lookup.png" alt-text="Screenshot showing the location of the More Tools button.":::
+ :::image type="content" source="media/how-to-create-vector-index/vector-lookup.png" alt-text="Screenshot that shows the list of available tools.":::
-1. The Vector Index lookup tool gets added to the canvas. If you don't see the tool immediately, scroll to the bottom of the canvas.
+ The Vector Index Lookup tool is added to the canvas. If you don't see the tool immediately, scroll to the bottom of the canvas.
- :::image type="content" source="media/how-to-create-vector-index/vector-index-lookup-tool.png" alt-text="Screenshot showing the vector index lookup tool.":::
+ :::image type="content" source="media/how-to-create-vector-index/vector-index-lookup-tool.png" alt-text="Screenshot that shows the Vector Index Lookup tool.":::
-1. Enter the path to your Vector Index and enter your desired query. Be sure to type in your path directly, or to paste the path.
+1. Enter the path to your vector index, along with the query that you want to perform against the index.
## Next steps
-[Get started with RAG using a prompt flow sample (preview)](how-to-use-pipelines-prompt-flow.md)
+[Get started with RAG by using a prompt flow sample (preview)](how-to-use-pipelines-prompt-flow.md)
-[Use Vector Stores](concept-vector-stores.md) with Azure Machine Learning (preview)
+[Use vector stores with Azure Machine Learning (preview)](concept-vector-stores.md)
machine-learning How To Setup Mlops Azureml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-azureml.md
This scenario includes prebuilt workflows for two approaches to deploying a trai
* [Install and set up Python SDK v2](https://aka.ms/sdk-v2-install) * [Install and set up Python CLI v2](how-to-configure-cli.md) * [Azure MLOps (v2) solution accelerator](https://github.com/Azure/mlops-v2) on GitHub
+* Training course on [MLOps with Machine Learning](https://learn.microsoft.com/training/paths/introduction-machine-learn-operations/)
* Learn more about [Azure Pipelines with Azure Machine Learning](how-to-devops-machine-learning.md) * Learn more about [GitHub Actions with Azure Machine Learning](how-to-github-actions-machine-learning.md) * Deploy MLOps on Azure in Less Than an Hour - [Community MLOps V2 Accelerator video](https://www.youtube.com/watch?v=5yPDkWCMmtk)
machine-learning How To Setup Mlops Github Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-github-azure-ml.md
The sample training and deployment Machine Learning pipelines and GitHub workflo
* [Install and set up Python SDK v2](https://aka.ms/sdk-v2-install) * [Install and set up Python CLI v2](how-to-configure-cli.md) * [Azure MLOps (v2) solution accelerator](https://github.com/Azure/mlops-v2) on GitHub
+* Training course on [MLOps with Machine Learning](https://learn.microsoft.com/training/paths/introduction-machine-learn-operations/)
* Learn more about [Azure Pipelines with Machine Learning](how-to-devops-machine-learning.md) * Learn more about [GitHub Actions with Machine Learning](how-to-github-actions-machine-learning.md) * Deploy MLOps on Azure in Less Than an Hour - [Community MLOps V2 Accelerator video](https://www.youtube.com/watch?v=5yPDkWCMmtk)
machine-learning How To Deploy For Real Time Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference.md
In this article, you'll learn how to deploy a flow as a managed online endpoint
1. Have basic understanding on managed online endpoints. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way that frees you from the overhead of setting up and managing the underlying deployment infrastructure. For more information on managed online endpoints, see [What are Azure Machine Learning endpoints?](../concept-endpoints-online.md#managed-online-endpoints). 1. Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To be able to deploy an endpoint in Prompt flow, your user account must be assigned the **AzureML Data scientist** or role with more privileges for the **Azure Machine Learning workspace**.
+1. Have a basic understanding of managed identities. [Learn more about managed identities.](../../active-directory/managed-identities-azure-resources/overview.md)
## Build the flow and get it ready for deployment If you already completed the [get started tutorial](get-started-prompt-flow.md), you've already tested the flow properly by submitting bulk tests and evaluating the results.
-If you didn't complete the tutorial, you'll need to build a flow. Testing the flow properly by bulk tests and evaluation before deployment is a recommended best practice.
+If you didn't complete the tutorial, you need to build a flow. Testing the flow thoroughly with bulk tests and evaluation before deployment is a recommended best practice.
We'll use the sample flow **Web Classification** as example to show how to deploy the flow. This sample flow is a standard flow. Deploying chat flows is similar. Evaluation flow doesn't support deployment.
Now that you have built a flow and tested it properly, it's time to create your
Prompt flow supports deploying endpoints from a flow or a bulk test run. Testing your flow before deployment is a recommended best practice.
-1. In the flow authoring page or run detail page, select **Deploy**.
+In the flow authoring page or run detail page, select **Deploy**.
- **Flow authoring page**:
+**Flow authoring page**:
- :::image type="content" source="./media/how-to-deploy-for-real-time-inference/deploy-flow-authoring-page.png" alt-text="Screenshot of Web Classification on the flow authoring page. " lightbox = "./media/how-to-deploy-for-real-time-inference/deploy-flow-authoring-page.png":::
- **Run detail page**:
+**Run detail page**:
- :::image type="content" source="./media/how-to-deploy-for-real-time-inference/deploy-run-detail-page.png" alt-text="Screenshot of Web Classification on the run detail page. " lightbox = "./media/how-to-deploy-for-real-time-inference/deploy-run-detail-page.png":::
-1. A wizard for you to configure the endpoint occurs and include following steps.
+A wizard opens for you to configure the endpoint. It includes the following steps.
### Endpoint
The authentication method for the endpoint. Key-based authentication provides a
The endpoint needs to access Azure resources such as the Azure Container Registry or your workspace connections for inferencing. You can allow the endpoint permission to access Azure resources via giving permission to its managed identity.
-System-assigned identity will be autocreated after your endpoint is created, while user-assigned identity is created by user. [Learn more about managed identities.](../../active-directory/managed-identities-azure-resources/overview.md)
+A system-assigned identity is created automatically after your endpoint is created, while a user-assigned identity is created by the user. The advantage of a user-assigned identity is that you can assign the same identity to multiple endpoints, so you only need to grant the required permissions to that identity once. [Learn more about managed identities.](../../active-directory/managed-identities-azure-resources/overview.md)
-Select the identity you want to use, and you'll notice a warning message to remind you to grant correct permissions to the identity after the endpoint is created.
+Select the identity you want to use. A warning message reminds you to grant the correct permissions to the identity.
-You can continue to configure the endpoint in wizard, as the endpoint creation will take some time. Make sure you grant permissions to the identity after the endpoint is created. See detailed guidance in [Grant permissions to the endpoint](#grant-permissions-to-the-endpoint).
+> [!IMPORTANT]
+> When creating the deployment, Azure tries to pull the user container image from the workspace Azure Container Registry (ACR) and mount the user model and code artifacts into the user container from the workspace storage account.
+>
+> To do this, Azure uses managed identities to access the storage account and the container registry.
+>
+> - If you created the associated endpoint with **System Assigned Identity**, Azure role-based access control (RBAC) permission is automatically granted, and no further permissions are needed.
+>
+> - If you created the associated endpoint with **User Assigned Identity**, the user-assigned managed identity must have the Storage Blob Data Reader permission on the storage account for the workspace, and the AcrPull permission on the Azure Container Registry (ACR) for the workspace. Make sure your user-assigned identity has the right permissions **before the deployment creation**; otherwise, the deployment creation will fail. If you need to create multiple endpoints, we recommend that you use the same user-assigned identity for all endpoints in the same workspace, so that you only need to grant the permissions to the identity once.
+
+|Property| System Assigned Identity | User Assigned Identity|
+||||
+|| If you select system-assigned identity, it's created automatically by the system for this endpoint. <br> | Created by the user. [Learn more about how to create user assigned identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). <br> One user-assigned identity can be assigned to multiple endpoints.|
+|Pros| Permissions needed to pull image and mount model and code artifacts from workspace storage are auto-granted.| Can be shared by multiple endpoints.|
+|Required permissions|**Workspace**: **AzureML Data Scientist** role **OR** a customized role with "Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action" <br> |**Workspace**: **AzureML Data Scientist** role **OR** a customized role with "Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action" <br> **Workspace container registry**: **AcrPull** <br> **Workspace default storage**: **Storage Blob Data Reader**|
+
+See detailed guidance about how to grant permissions to the endpoint identity in [Grant permissions to the endpoint](#grant-permissions-to-the-endpoint).
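As a scripted alternative to the portal steps for a user-assigned identity, the two role assignments can be granted by calling the Azure Resource Manager REST API. The following Python sketch is an assumption-laden illustration, not the documented wizard flow: it requires the `azure-identity` and `requests` packages, Owner (or equivalent) rights on the target resources, and placeholder resource IDs and principal ID. The GUIDs shown are the built-in **Storage Blob Data Reader** and **AcrPull** role definitions.

```python
# Minimal sketch: grant the endpoint's user-assigned identity Storage Blob Data
# Reader on the workspace default storage account and AcrPull on the workspace
# container registry. All IDs below are placeholders.
import uuid
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"
PRINCIPAL_ID = "<principal-id-of-the-user-assigned-identity>"
STORAGE_SCOPE = "<resource-id-of-the-workspace-default-storage-account>"
ACR_SCOPE = "<resource-id-of-the-workspace-container-registry>"

# Built-in role definition GUIDs.
ASSIGNMENTS = {
    STORAGE_SCOPE: "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",  # Storage Blob Data Reader
    ACR_SCOPE: "7f951dda-4ed3-4680-a7ca-43fe172d538d",      # AcrPull
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

for scope, role_definition_guid in ASSIGNMENTS.items():
    assignment_id = uuid.uuid4()  # each role assignment needs a unique GUID name
    url = (
        f"https://management.azure.com{scope}/providers/Microsoft.Authorization"
        f"/roleAssignments/{assignment_id}?api-version=2022-04-01"
    )
    body = {
        "properties": {
            "roleDefinitionId": (
                f"/subscriptions/{SUBSCRIPTION_ID}/providers/"
                f"Microsoft.Authorization/roleDefinitions/{role_definition_guid}"
            ),
            "principalId": PRINCIPAL_ID,
            "principalType": "ServicePrincipal",  # managed identities are service principals
        }
    }
    requests.put(url, json=body, headers=headers).raise_for_status()
```

The Azure CLI command `az role assignment create` achieves the same result if you prefer not to script against the REST API.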
#### Allow sharing sample input data for testing purpose only
You can also directly go to the **Endpoints** page in the studio, and check the
## Grant permissions to the endpoint > [!IMPORTANT]
- > After you finish creating the endpoint and **before you test or consume the endpoint**, make sure you have granted correct permissions by adding role assignment to the managed identity of the endpoint. Otherwise, the endpoint will fail to perform inference due to lacking of permissions.
+ > If you select **System Assigned Identity**, make sure you have granted the correct permissions by adding a role assignment to the managed identity of the endpoint **before you test or consume the endpoint**. Otherwise, the endpoint will fail to perform inference because it lacks permissions.
+ >
+ > If you select **User Assigned Identity**, the user-assigned managed identity must have the Storage Blob Data Reader permission on the storage account for the workspace, and the AcrPull permission on the Azure Container Registry (ACR) for the workspace. Make sure your user-assigned identity has the right permissions **before the deployment creation**, ideally before you finish the deployment wizard; otherwise, the deployment creation will fail. If you need to create multiple endpoints, we recommend that you use the same user-assigned identity for all endpoints in the same workspace, so that you only need to grant the permissions to the identity once.
> > Granting permissions (adding role assignment) is only enabled to the **Owner** of the specific Azure resources. You may need to ask your IT admin for help. >
For **User-assigned** identity:
To grant permissions to the endpoint identity, there are two ways: -- You can leverage Azure Resource Manager template to grant all permissions. You can find related Azure Resource Manager templates in [Prompt flow GitHub repo](https://github.com/cloga/azure-quickstart-templates/tree/lochen/promptflow/quickstarts/microsoft.machinelearningservices/machine-learning-prompt-flow).
+- You can use an Azure Resource Manager template to grant all permissions. You can find related Azure Resource Manager templates in the [Prompt flow GitHub repo](https://github.com/cloga/azure-quickstart-templates/tree/lochen/promptflow/quickstarts/microsoft.machinelearningservices/machine-learning-prompt-flow).
- You can also grant all permissions in the Azure portal UI by following these steps.
After you deploy the endpoint and want to test it in the **Test tab** in the end
:::image type="content" source="./media/how-to-deploy-for-real-time-inference/unable-to-fetch-deployment-schema.png" alt-text="Screenshot of the error unable to fetch deployment schema in Test tab in endpoint detail page. " lightbox = "./media/how-to-deploy-for-real-time-inference/unable-to-fetch-deployment-schema.png"::: - Make sure you have granted the correct permission to the endpoint identity. Learn more about [how to grant permission to the endpoint identity](#grant-permissions-to-the-endpoint).-- It might be because you ran your flow in an old version runtime and then deployed the flow, the deployment used the environment of the runtime which was in old version as well. Update the runtime following [this guidance](./how-to-create-manage-runtime.md#update-runtime-from-ui) and re-run the flow in the latest runtime and then deploy the flow again.
+- This might happen because you ran your flow in an older runtime version and then deployed it, so the deployment used the environment of that older runtime version. Update the runtime by following [this guidance](./how-to-create-manage-runtime.md#update-runtime-from-ui), rerun the flow in the latest runtime, and then deploy the flow again.
### Access denied to list workspace secret
machine-learning Tutorial Enable Materialization Backfill Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-materialization-backfill-data.md
description: Managed Feature Store tutorial part 2. + Previously updated : 05/05/2023 Last updated : 07/24/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
# Tutorial #2: Enable materialization and backfill feature data (preview)
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
-In this tutorial series you'll learn how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. This tutorial describes materialization, which computes the feature values for a given feature window, and then stores those values in a materialization store. All feature queries can then use the values from the materialization store. A feature set query applies the transformations to the source on the fly, to compute the features before it returns the values. This works well for the prototyping phase. However, for training and inference operations in a production environment, it's recommended that you materialize the features, for greater reliability and availability.
-Part 1 of this tutorial showed how to create a feature set, and use it to generate training data. A feature set query applies the transformations to the source on the fly, to compute the features before it returns the values. This works well for the prototyping phase. However, when you run training and inference in production environment, it's recommended that you materialize the features, for greater reliability and availability. Materialization is the process of computing the feature values for a given feature window, and then storing these values in a materialization store. All feature queries now use the values from the materialization store.
+This tutorial is part two of a four-part series. In this tutorial, you'll learn how to:
-Here in Tutorial part 2, you'll learn how to:
+> [!div class="checklist"]
+> * Enable offline store on the feature store by creating and attaching an Azure Data Lake Storage Gen2 container and a user assigned managed identity
+> * Enable offline materialization on the feature sets, and backfill the feature data
-* Enable offline store on the feature store by creating and attaching an ADLS gen2 container and a user assigned managed identity
-* Enable offline materialization on the feature sets, and backfill the feature data
+> [!IMPORTANT]
+> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites Before you proceed with this article, make sure you cover these prerequisites:
-1. Complete the part 1 tutorial, to create the required feature store, account entity and transaction feature set
-1. An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` role and `Contributor` role.
+* Complete the part 1 tutorial, to create the required feature store, account entity and transaction feature set
+* An Azure resource group, where you (or the service principal you use) have the `User Access Administrator` and `Contributor` roles.
+
+To proceed with this article, your user account needs the Owner or Contributor role for the resource group that holds the created feature store.
+
+## Set up
+
+This list summarizes the required setup steps:
+
+1. In your project workspace, create an Azure Machine Learning compute resource, to run the training pipeline
+1. In your feature store workspace, create an offline materialization store: create an Azure Data Lake Storage Gen2 storage account and a container inside it, and attach it to the feature store. Optionally, you can use an existing storage container. (A scripted sketch of this step follows the list.)
+1. Create and assign a user-assigned managed identity to the feature store. Optionally, you can use an existing managed identity. The system-managed materialization jobs (the recurring jobs) use the managed identity. Part 3 of the tutorial relies on this identity.
+1. Grant the required role-based access control (RBAC) permissions to the user-assigned managed identity
+1. Grant the required role-based access control (RBAC) permissions to your Azure AD identity. Users, including yourself, need read access to the sources and the materialization store
+
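For step 2 of the list above, if you prefer to create the offline store resources outside the tutorial notebook, a Data Lake Storage Gen2 account (a StorageV2 account with the hierarchical namespace enabled) and a container can be created with the Azure SDK for Python. This is a minimal alternative sketch, not the notebook's own code; it assumes the `azure-mgmt-storage` and `azure-identity` packages, and the names and location are placeholders.

```python
# Minimal sketch (not the tutorial notebook's own code): create a Data Lake
# Storage Gen2 account and a container to serve as the offline materialization
# store. All names and the location below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
STORAGE_ACCOUNT = "<globally-unique-storage-account-name>"
CONTAINER_NAME = "offlinestore"
LOCATION = "eastus"  # placeholder: use the same region as your feature store

storage_client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Gen2 storage = a StorageV2 account with the hierarchical namespace enabled.
storage_client.storage_accounts.begin_create(
    RESOURCE_GROUP,
    STORAGE_ACCOUNT,
    {
        "location": LOCATION,
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"},
        "is_hns_enabled": True,
    },
).result()

# Create the container that will hold the materialized feature data.
storage_client.blob_containers.create(RESOURCE_GROUP, STORAGE_ACCOUNT, CONTAINER_NAME, {})
```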
+### Configure the Azure Machine Learning spark notebook
+
+1. Running the tutorial:
+
+    You can create a new notebook and execute the instructions in this document, step by step. You can also open the existing notebook named `2. Enable materialization and backfill feature data.ipynb`, and then run it. You can find the notebooks in the `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
+
+1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav.
+
+1. Configure the session:
+
+ * Select "configure session" in the bottom nav
+ * Select **upload conda file**
+ * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
+ * Increase the session time-out (idle time) to avoid frequent prerequisite reruns
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=start-spark-session)]
+
+### Set up the root directory for the samples
+
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=root-dir)]
+
+ 1. Set up the CLI
+
+ # [Python SDK](#tab/python)
+
+ Not applicable
+
+ # [Azure CLI](#tab/cli)
+
+ 1. Install the Azure Machine Learning extension
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)]
+
+ 1. Authentication
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)]
+
+ 1. Set the default subscription
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)]
+
+
+
+1. Initialize the project workspace properties
+
+ This is the current workspace. You'll run the tutorial notebook from this workspace.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-ws-crud-client)]
+
+1. Initialize the feature store properties
+
+ Make sure that you update the `featurestore_name` and `featurestore_location` values shown, to reflect what you created in part 1 of this tutorial.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-crud-client)]
+
+1. Initialize the feature store core SDK client
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-core-sdk)]
+
+1. Set up the offline materialization store
+
+ You can create a new gen2 storage account and a container. You can also reuse an existing gen2 storage account and container as the offline materialization store for the feature store.
+
+ # [Python SDK](#tab/python)
+
+ You can optionally override the default settings.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=setup-utility-fns)]
+
+ # [Azure CLI](#tab/cli)
+
+ Not applicable
+
+
+
+## Set values for the Azure Data Lake Storage Gen2 storage
+
+ The materialization store uses these values. You can optionally override the default settings.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)]
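For reference, the separate values above typically combine into a single ARM resource ID for the offline store container. The following minimal sketch shows that composition; the placeholder values are assumptions, not values taken from the notebook.

```python
# Placeholder values; substitute your own subscription, resource group,
# storage account, and container names.
storage_subscription_id = "<subscription-id>"
storage_resource_group_name = "<resource-group>"
storage_account_name = "<storage-account>"
storage_file_system_name = "offline-store"  # ADLS Gen2 container name (assumed)

# ARM resource ID format for an ADLS Gen2 (blob) container.
gen2_container_arm_id = (
    f"/subscriptions/{storage_subscription_id}"
    f"/resourceGroups/{storage_resource_group_name}"
    f"/providers/Microsoft.Storage/storageAccounts/{storage_account_name}"
    f"/blobServices/default/containers/{storage_file_system_name}"
)
print(gen2_container_arm_id)
```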
+
+1. Storage containers
+
+ Option 1: create new storage and container resources
+
+ # [Python SDK](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-storage)]
+
+ # [Azure CLI](#tab/cli)
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=create-new-storage)]
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=create-new-storage-container)]
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-container-arm-id-cli)]
-* To perform the steps in this article, your user account must be assigned the owner or contributor role to the resource group, which holds the created feature store
+
-## The summary of the setup steps to execute:
+ Option 2: reuse an existing storage container
-* In your project workspace, create Azure Machine Learning compute to run training pipeline
-* In your feature store workspace, create an offline materialization store: create an Azure gen2 storage account and a container in it and attach to feature store. Optionally you can use existing storage container.
-* Create and assign a user-assigned managed identity to the feature store. Optionally, you can use an existing managed identity. The system managed materialization jobs, in other words, recurrent jobs, uses the managed identity. Part 3 of the tutorial relies on it
-* Grant required role-based authentication control (RBAC) permissions to the user-assigned managed identity
-* Grant required role-based authentication control (RBAC) to your Azure AD identity. Users (like you) need read access to (a) sources (b) materialization store
+ # [Python SDK](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
+
+ # [Azure CLI](#tab/cli)
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
+
+
-#### Configure the Azure Machine Learning spark notebook
+1. Set up user assigned managed identity (UAI)
-1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav. Wait for a status bar in the top to display "configure session".
+ The system-managed materialization jobs will use the UAI. For example, the recurrent job in part 3 of this tutorial uses this UAI.
-1. Configure session:
+### Set the UAI values
- * Select "configure session" in the top nav
- * Select **upload conda file**
- * Select file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` from your local device
- * (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=start-spark-session)]
+### User assigned managed identity (option 1)
-#### Set up the root directory for the samples
+ Create a new one
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=root-dir)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)]
-#### Initialize the project workspace CRUD client
+### User assigned managed identity (option 2)
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-ws-crud-client)]
+ Reuse an existing managed identity
-#### Initialize the feature store CRUD client
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)]
-Ensure you update the `featurestore_name` value to reflect what you created in part 1 of this tutorial
+### Retrieve UAI properties
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-crud-client)]
+ Run this code sample in the SDK to retrieve the UAI properties:
-#### Initialize the feature store core SDK client
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)]
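If you prefer the Python management SDK over the notebook utility, a hedged sketch that reads back the same UAI properties might look like the following. It assumes the `azure-mgmt-msi` package is installed and uses placeholder resource names; the notebook's own helper may differ.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.msi import ManagedServiceIdentityClient

# Placeholder values for the identity created (or reused) above.
uai_subscription_id = "<subscription-id>"
uai_resource_group_name = "<resource-group>"
uai_name = "<user-assigned-identity-name>"

msi_client = ManagedServiceIdentityClient(DefaultAzureCredential(), uai_subscription_id)
identity = msi_client.user_assigned_identities.get(
    resource_group_name=uai_resource_group_name, resource_name=uai_name
)

uai_arm_id = identity.id                  # used when attaching the UAI to the feature store
uai_principal_id = identity.principal_id  # used for the role assignments that follow
uai_client_id = identity.client_id
print(uai_arm_id, uai_principal_id, uai_client_id)
```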
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-core-sdk)]
+
-#### Set up offline materialization store
+## Grant RBAC permission to the user assigned managed identity (UAI)
-You can create a new gen2 storage account and container, or reuse an existing one to serve as the offline materialization store for the feature store
+ This UAI is assigned to the feature store shortly. It requires these permissions:
-##### Set up utility functions
+ | **Scope** | **Action/Role** |
+   |--|--|
+ | Feature Store | Azure Machine Learning Data Scientist role |
+ | Storage account of feature store offline store | Blob storage data contributor role |
+ | Storage accounts of source data | Blob storage data reader role |
-> [!Note]
-> This code sets up utility functions that create storage and user assigned identity. These utility functions use standard azure SDKs. They are provided here to keep the tutorial concise. However, do not use this approach for production purposes, because it might not implement best practices.
+   The next CLI commands assign the first two roles to the UAI. In this example, "Storage accounts of source data" doesn't apply, because we read the sample data from publicly accessible blob storage. To use your own data sources, assign the required roles to the UAI. To learn more about access control, see the [access control document](./how-to-setup-access-control-feature-store.md).
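As a hedged Python alternative to the tabbed commands below, role assignments can also be created with the `azure-mgmt-authorization` package. The scopes, IDs, built-in role names, and the exact parameter shape are assumptions (the shape varies by package version), so treat this as a sketch rather than the tutorial's own method.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

# Placeholder values; replace with your own resource IDs.
subscription_id = "<subscription-id>"
uai_principal_id = "<uai-principal-id>"                  # from the UAI properties step
feature_store_arm_id = "<feature-store-arm-id>"
offline_store_account_arm_id = "<offline-store-storage-account-arm-id>"

auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)


def assign_role(scope: str, role_name: str, principal_id: str) -> None:
    """Look up a built-in role by name and assign it to the principal at the given scope."""
    role_definition = next(
        iter(auth_client.role_definitions.list(scope, filter=f"roleName eq '{role_name}'"))
    )
    auth_client.role_assignments.create(
        scope,
        str(uuid.uuid4()),
        {
            # Parameter shape differs across azure-mgmt-authorization versions;
            # adjust to the model class your installed version expects.
            "role_definition_id": role_definition.id,
            "principal_id": principal_id,
            "principal_type": "ServicePrincipal",
        },
    )


# Built-in role names are assumptions; verify them in your subscription.
assign_role(feature_store_arm_id, "AzureML Data Scientist", uai_principal_id)
assign_role(offline_store_account_arm_id, "Storage Blob Data Contributor", uai_principal_id)
```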
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=setup-utility-fns)]
+ # [Python SDK](#tab/python)
-##### Set the values for the Azure data lake storage (ADLS) gen 2 storage that becomes a materialization store
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)]
-You can optionally override the default settings
+ # [Azure CLI](#tab/cli)
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)]
-##### Storage container (option 1): create a new storage container
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-storage)]
+
-##### Storage container (option 2): reuse an existing storage container
+### Grant the blob data reader role access to your user account in the offline store
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
+ If the feature data is materialized, you need this role to read feature data from the offline materialization store.
-#### Setup user assigned managed identity (UAI)
+ Obtain your Azure AD object ID value from the Azure portal as described [here](/partner-center/find-ids-and-domain-names#find-the-user-object-id).
-In part 3 of the tutorial, system managed materialization jobs - for example, recurrent jobs - use UAI
+ To learn more about access control, see the [access control document](./how-to-setup-access-control-feature-store.md).
-##### Set values for UAI
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)]
+   The preceding code sample grants the blob data reader role on the offline store to your user account.
-##### User-assigned managed identity (option 1): create a new one
+ 1. Attach the offline materialization store and UAI, to enable the offline store on the feature store
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)]
+ # [Python SDK](#tab/python)
-##### User-assigned managed identity (option 2): reuse an existing managed identity
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-store)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)]
+ # [Azure CLI](#tab/cli)
-##### Grant role-based authentication control (RBAC) permission to the user assigned managed identity (UAI)
+   Action: inspect the generated file `xxxx`. The following commands attach the offline store and the UAI, to update the feature store.
-This UAI is assigned to the feature store shortly. It requires the following permissions:
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=dump_featurestore_yaml)]
-| Scope | Action / Role |
-| -- | -- |
-| Feature store | Azure Machine Learning Data Scientist role |
-| Storage account of feature store offline store | Blob storage data contributor role |
-| Storage accounts of source data | Blob storage data reader role |
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-store)]
-This utility function code assigns the first two roles to the UAI. In this example, "Storage accounts of source data" doesn't apply, because we read the sample data from a public access blob storage resource. If you have your own data sources, then you should assign the required roles to the UAI. To learn more about access control, see the [access control document](./how-to-setup-access-control-feature-store.md)
+
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)]
+ 2. Enable offline materialization on the transactions feature set
-##### Grant your user account the "Blob data reader" role on the offline store
+ Once materialization is enabled on a feature set, you can perform a backfill, as explained in this tutorial. You can also schedule recurrent materialization jobs. See [part 3](./tutorial-experiment-train-models-using-features.md) of this tutorial series for more information.
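The tabbed steps below perform this with the SDK or the CLI. As a preview, a minimal Python sketch of enabling offline materialization might look like the following; it assumes `fs_client` is the feature store `MLClient` created earlier, and the instance type and Spark settings shown are illustrative assumptions.

```python
from azure.ai.ml.entities import MaterializationComputeResource, MaterializationSettings

# fs_client: the MLClient scoped to the feature store (created earlier in this tutorial).
transactions_fset = fs_client.feature_sets.get(name="transactions", version="1")

transactions_fset.materialization_settings = MaterializationSettings(
    offline_enabled=True,
    # Illustrative compute and Spark settings; tune them for your data volume.
    resource=MaterializationComputeResource(instance_type="standard_e8s_v3"),
    spark_configuration={
        "spark.driver.cores": "4",
        "spark.driver.memory": "36g",
        "spark.executor.cores": "4",
        "spark.executor.memory": "36g",
        "spark.executor.instances": "2",
    },
    schedule=None,  # no recurrence yet; a later part of the series adds a schedule
)

fs_client.feature_sets.begin_create_or_update(transactions_fset).result()
```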
-If the feature data is materialized, then you need this role to read feature data from offline materialization store.
+ # [Python SDK](#tab/python)
-Learn how to get your Azure AD object ID from the Azure portal at [this](/partner-center/find-ids-and-domain-names#find-the-user-object-id) page.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
-To learn more about access control, see access control document.
+ # [Azure CLI](#tab/cli)
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
-## Step 1: Enable offline store on the feature store by attaching offline materialization store and UAI
+
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-store)]
+ Optional: you can save the feature set asset as a YAML resource
-## Step 2: Enable offline materialization on transactions feature set
+ # [Python SDK](#tab/python)
-Once materialization is enabled on a feature set, you can perform backfill (described in this part of the tutorial), or you can schedule recurrent materialization jobs (described in the next part of the tutorial)
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
+ # [Azure CLI](#tab/cli)
-As another option, you can save the above feature set asset as a yaml resource
+ Not applicable
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)]
+
-## Step 3: Backfill data for the transactions feature set
+ 3. Backfill data for the transactions feature set
-As explained earlier in this tutorial, materialization involves computation of the feature values for a given feature window, and storage of those values in a materialization store. Materializing the features increases its reliability and availability. All feature queries now use the values from the materialization store. In this step, you perform a one-time backfill for a feature window of **three months**.
+ As explained earlier in this tutorial, materialization computes the feature values for a given feature window, and stores these computed values in a materialization store. Feature materialization increases the reliability and availability of the computed values. All feature queries now use the values from the materialization store. This step performs a one-time backfill, for a feature window of three months.
-> [!Note]
-> Determination of the backfill data window is important. It must match the training data window. For example, to train with two years of data, you must retrieve features for that same window. Therefore, backfill for a two year window.
+ > [!NOTE]
+ > You might need to determine a backfill data window. The window must match the window of your training data. For example, to use two years of data for training, you need to retrieve features for the same window. This means you should backfill for a two year window.
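The next cell performs the backfill. A hedged sketch of the underlying call follows; it assumes `fs_client` is the feature store `MLClient` from the earlier steps, and the three-month window dates are placeholders.

```python
from datetime import datetime

# fs_client: the feature store MLClient from the earlier steps.
backfill_poller = fs_client.feature_sets.begin_backfill(
    name="transactions",
    version="1",
    # Placeholder three-month window; align it with your training data window.
    feature_window_start_time=datetime(2023, 1, 1, 0, 0, 0),
    feature_window_end_time=datetime(2023, 4, 1, 0, 0, 0),
)
print(backfill_poller.result())  # metadata for the submitted materialization job(s)
```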
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=backfill-txns-fset)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=backfill-txns-fset)]
-Let's print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. We retrieved the training and inference data with the `get_offline_features()` method. This method uses the materialization store by default.
+   We'll print sample data from the feature set. The output shows that the data was retrieved from the materialization store. The `get_offline_features()` method, which retrieves the training and inference data, uses the materialization store by default.
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=sample-txns-fset-data)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=sample-txns-fset-data)]
## Cleanup
-[Part 4](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) of this tutorial describes how to delete the resources
+The [cleanup step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) in Tutorial #4 describes how to delete the resources.
## Next steps * [Part 3: tutorial features and the machine learning lifecycle](./tutorial-experiment-train-models-using-features.md) * [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md) * [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Reference: [YAML reference](./reference-yaml-overview.md)
machine-learning Tutorial Enable Recurrent Materialization Run Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md
description: Managed Feature Store tutorial part 4 + Previously updated : 05/05/2023 Last updated : 07/24/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
# Tutorial #4: Enable recurrent materialization and run batch inference (preview)
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-In this tutorial series, you'll learn how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
-Earlier in this tutorial, you experimented with features, trained a model, and registered the model along with the feature-retrieval spec. Here in Tutorial #4, you'll learn how to run batch inference for the registered model.
+Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. Part 2 showed how to enable materialization and perform a backfill. Part 3 showed how to experiment with features to improve model performance, and how a feature store increases agility in the experimentation and training flows. Tutorial 4 explains how to:
-You'll learn how to:
+> [!div class="checklist"]
+> * Enable recurrent materialization for the `transactions` feature set
+> * Run a batch inference pipeline on the registered model
-* Enable recurrent materialization for the `transactions` feature set
-* Run batch inference pipeline on the registered model
+> [!IMPORTANT]
+> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites -- Ensure you have executed tutorial parts 1, 2, and 3
+Before you proceed with this article, make sure you complete parts 1, 2, and 3 of this tutorial series.
-## Setup
+## Set up
### Configure the Azure Machine Learning spark notebook
-1. In the "Compute" dropdown in the top nav, select "Configure session".
+ 1. In the "Compute" dropdown in the top nav, select "Configure session"
-1. Configure session:
+   To run this tutorial, you can create a new notebook and execute the instructions in this document, step by step. You can also open and run the existing notebook named `4. Enable recurrent materialization and run batch inference`. You can find that notebook, and all the notebooks in this series, in the `featurestore_sample/notebooks` directory, under either `sdk_only` or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
+
+ 1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav.
+
+ 1. Configure session:
* Select "configure session" in the bottom nav * Select **upload conda file**
- * Select file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` from your local device
+ * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
* (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns
-#### Start the spark session
+### Start the spark session
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=start-spark-session)]
+
+### Set up the root directory for the samples
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=root-dir)]
+
+ ### [Python SDK](#tab/python)
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=start-spark-session)]
+ Not applicable
-#### Set up the samples root directory
+ ### [Azure CLI](#tab/cli)
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=root-dir)]
+ **Set up the CLI**
-#### Initialize the project workspace CRUD client
+ 1. Install the Azure Machine Learning extension
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-ws-crud-client)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)]
-#### Initialize the feature store CRUD client
+ 1. Authentication
-Ensure you update the `featurestore_name` to reflect what you created in part 1 of this tutorial
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-crud-client)]
+ 1. Set the default subscription
-#### Initialize the feature store SDK client
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-core-sdk)]
+
-## Step 1: Enable recurrent materialization on the `transactions` feature set
+1. Initialize the project workspace CRUD client
-In tutorial part 2, we enabled materialization, and we performed backfill on the transactions feature set. Backfill is an on-demand, one-time operation that computes and places feature values in the materialization store. However, to perform inference of the model in production, you might want to set up recurrent materialization jobs to keep the materialization store up-to-date. These jobs run on user-defined schedules. The recurrent job schedule works this way:
+   The tutorial notebook runs from this current workspace.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-ws-crud-client)]
+
+1. Initialize the feature store variables
+
+ Make sure that you update the `featurestore_name` value, to reflect what you created in part 1 of this tutorial.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-crud-client)]
+
+1. Initialize the feature store SDK client
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-core-sdk)]
+
+## Enable recurrent materialization on the `transactions` feature set
+
+We enabled materialization in tutorial part 2, and we also performed backfill on the transactions feature set. Backfill is an on-demand, one-time operation that computes and places feature values in the materialization store. However, to handle inference of the model in production, you might want to set up recurrent materialization jobs to keep the materialization store up-to-date. These jobs run on user-defined schedules. The recurrent job schedule works this way:
* Interval and frequency values define a window. For example, values of
- * interval = 3
- * frequency = Hour
+ * interval = 3
+ * frequency = Hour
- define a three-hour window.
+ define a three-hour window.
* The first window starts at the start_time defined in the RecurrenceTrigger, and so on.
-* The first recurrent job will be submitted at the start of the next window after the update time.
+* The first recurrent job is submitted at the start of the next window after the update time.
* Later recurrent jobs will be submitted at every window after the first job. As explained in earlier parts of this tutorial, once data is materialized (backfill / recurrent materialization), feature retrieval uses the materialized data by default.
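A minimal sketch of attaching such a schedule might look like the following. It assumes `fs_client` is the feature store `MLClient` from the setup steps and that materialization was already enabled in tutorial part 2; the start time shown is a placeholder.

```python
from datetime import datetime

from azure.ai.ml.entities import RecurrenceTrigger

# fs_client: the feature store MLClient from the setup steps above.
transactions_fset = fs_client.feature_sets.get(name="transactions", version="1")

# A three-hour recurrence window; the start time below is a placeholder.
transactions_fset.materialization_settings.schedule = RecurrenceTrigger(
    interval=3,
    frequency="hour",
    start_time=datetime(2023, 4, 15, 0, 4, 10),
)

fs_client.feature_sets.begin_create_or_update(transactions_fset).result()
```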
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=enable-recurrent-mat-txns-fset)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=enable-recurrent-mat-txns-fset)]
-### (Optional) Save the feature set asset yaml with the updated settings
+## (Optional) Save the feature set asset yaml file
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=dump-txn-fset-with-mat-yaml)]
+   We save the yaml file with the updated materialization settings.
-### Track status of the recurrent materialization jobs in the feature store studio UI
+ ### [Python SDK](#tab/python)
-This job runs every three hours.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=dump-txn-fset-with-mat-yaml)]
-Action:
+ ### [Azure CLI](#tab/cli)
-* Feel free to execute the next step for now (batch inference).
-* In three hours, check the recurrent job status with the UI
+ Not applicable
-## Run the batch-inference pipeline
+
-In this step, you'll manually trigger the batch inference pipeline. In a production scenario, a ci/cd pipeline could trigger the pipeline, based on model registration and approval.
+## Run the batch-inference pipeline
-The batch-inference has these steps:
+ The batch-inference has these steps:
-1. Feature retrieval: this uses the same built-in feature retrieval component used in the training pipeline, in the part 3 of the tutorial. For pipeline training, we provided a feature retrieval spec as a component input. However, for batch inference, we pass the registered model as the input, and the component looks for the feature retrieval spec in the model artifact. Additionally, for training, the observation data had the target variable. However, batch inference observation data will not have the target variable. The feature retrieval step joins the observation data with the features, and output the data for batch inference.
-1. Batch inference: This step uses the batch inference input data from previous step, runs inference on the model, and appends the predicted value as output.
+ 1. Feature retrieval: this uses the same built-in feature retrieval component used in the training pipeline, covered in tutorial part 3. For pipeline training, we provided a feature retrieval spec as a component input. However, for batch inference, we pass the registered model as the input, and the component looks for the feature retrieval spec in the model artifact.
+
+ Additionally, for training, the observation data had the target variable. However, the batch inference observation data doesn't have the target variable. The feature retrieval step joins the observation data with the features, and outputs the data for batch inference.
-> [!Note]
-> We use a job for batch inference in this example. You can also use Azure ML's batch endpoints.
+ 1. Batch inference: This step uses the batch inference input data from previous step, runs inference on the model, and appends the predicted value as output.
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=run-batch-inf-pipeline)]
+ > [!NOTE]
+ > We use a job for batch inference in this example. You can also use Azure ML's batch endpoints.
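A hedged sketch of submitting that pipeline as a job follows. It assumes `ws_client` is the project workspace `MLClient` initialized earlier, and that the pipeline YAML path matches the sample repository layout referenced later in this tutorial.

```python
from azure.ai.ml import load_job

# ws_client: the project workspace MLClient initialized earlier.
# The YAML path below follows the sample repository layout referenced in this tutorial.
batch_inference_pipeline = load_job(
    source="project/fraud_mode/pipelines/batch_inference_pipeline.yaml"
)

submitted_job = ws_client.jobs.create_or_update(batch_inference_pipeline)
ws_client.jobs.stream(submitted_job.name)  # stream logs until the pipeline completes
```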
-### Inspect the batch inference output data
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=run-batch-inf-pipeline)]
-1. In the cell output, click on the webview for the pipeline run
- * select inference_step
- * in the outputs card, copy the Data field. It looks something like `azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1`
- * Paste it in the cell following cell, with separate name and version values (notice that the last character is the version, separated by a `:`).
- * Notice that the batch inference pipeline generated the `batch inference pipeline`
+ ### Inspect the batch inference output data
-Explanation: Since we didn't provide `name` or `version` values of `inference_step` in the batch inference pipeline (/project/fraud_mode/pipelines/batch_inference_pipeline.yaml) outputs, the system created an untracked data asset with a guid as name and version as 1. In the next cell, we'll derive and then display the data path from the asset.
+ In the pipeline view
+ 1. Select `inference_step` in the `outputs` card
+ 1. Copy the Data field value. It looks something like `azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1`
+ 1. Paste the Data field value in the following cell, with separate name and version values (note that the last character is the version, preceded by a `:`).
+ 1. Note the `predict_is_fraud` column that the batch inference pipeline generated
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=inspect-batch-inf-output-data)]
+   Explanation: Because we didn't provide `name` or `version` values in the `outputs` of the `inference_step` in the batch inference pipeline (`/project/fraud_mode/pipelines/batch_inference_pipeline.yaml`), the system created an untracked data asset with a GUID as the name value and 1 as the version value. In this cell, we derive and then display the data path from the asset:
-Notice that the prediction from batch inference is appended as the last column, named `predict_is_fraud`
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=inspect-batch-inf-output-data)]
## Cleanup
-If you created a resource group for the tutorial, you can delete the resource group to delete all the resources associated with this tutorial.
-
-Otherwise, you can delete the resources individually:
+If you created a resource group for the tutorial, you can delete the resource group, to delete all the resources associated with this tutorial. Otherwise, you can delete the resources individually:
-1. Delete the feature store: Go to the resource group in the Azure portal, select the feature store and delete it
-1. Follow the instructions [here](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to delete the user assigned managed identity
-1. Delete the offline store (storage account): Go to the resource group in the Azure portal, select the storage you created and delete it
+1. To delete the feature store, go to the resource group in the Azure portal, select the feature store, and delete it
+1. Follow [these instructions](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to delete the user-assigned managed identity
+1. To delete the offline store (storage account), go to the resource group in the Azure portal, select the storage you created, and delete it
## Next steps * Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) * [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md) * [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Reference: [YAML reference](./reference-yaml-overview.md)
machine-learning Tutorial Experiment Train Models Using Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-experiment-train-models-using-features.md
Title: "Tutorial #3: experiment and train models using features (preview)"-+ description: Managed Feature Store tutorial part 3.
Previously updated : 05/05/2023 Last updated : 07/24/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
# Tutorial #3: Experiment and train models using features (preview)
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
-Part 1 of this tutorial showed how to create a feature set spec with custom transformations. Part 2 of the tutorial showed how to enable materialization and perform a backfill. This tutorial shows how to experiment with features, to improve model performance. At the end of the tutorial, you'll see how a feature store increases agility in the experimentation and training flows.
+Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial showed how to enable materialization and perform a backfill. Tutorial 3 shows how to experiment with features, as a way to improve model performance. This tutorial also shows how a feature store increases agility in the experimentation and training flows. It shows how to:
-Tutorial part 3 here shows how to:
+> [!div class="checklist"]
+> * Prototype a new `accounts` feature set spec, using existing precomputed values as features. Then, register the local feature set spec as a feature set in the feature store. This differs from tutorial part 1, where we created a feature set that had custom transformations
+> * Select features for the model from the `transactions` and `accounts` feature sets, and save them as a feature-retrieval spec
+> * Run a training pipeline that uses the feature retrieval spec to train a new model. This pipeline uses the built-in feature-retrieval component, to generate the training data
-* Prototype a new `accounts` feature set spec, using existing precomputed values as features. You'll then register the local feature set spec as a feature set in the feature store. This differs from part 1 of the tutorial, where we created a feature set that had custom transformations.
-* Select features for the model from the `transactions` and `accounts` feature sets, and save them as a feature-retrieval spec.
-* Run a training pipeline that uses the feature retrieval spec to train a new model. This pipeline uses the built-in feature-retrieval component, to generate the training data.
+> [!IMPORTANT]
+> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites -- Ensure you have executed part 1 and 2 of the tutorial.
+Before you proceed with this article, make sure you complete parts 1 and 2 of this tutorial series.
-## Setup
+## Set up
-### Configure the Azure Machine Learning spark notebook
+1. Configure the Azure Machine Learning spark notebook
-1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav. Wait for a status bar in the top to display "configure session".
+   1. Running the tutorial: You can create a new notebook, and execute the instructions in this document step by step. You can also open and run the existing notebook `3. Experiment and train models using features.ipynb`. You can find the notebooks in the `featurestore_sample/notebooks` directory, under either `sdk_only` or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
-1. Configure session:
+ 1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav. Wait for a status bar in the top to display "configure session".
+
+ 1. Configure the session:
* Select "configure session" in the bottom nav * Select **upload conda file**
- * Select file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` from your local device
+ * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
* (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns
-#### Start the spark session
+ 1. Start the spark session
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=start-spark-session)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=start-spark-session)]
+ 1. Set up the root directory for the samples
-#### Set up the samples root directory
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=root-dir)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=root-dir)]
+ ### [Python SDK](#tab/python)
+
+ Not applicable
+
+ ### [Azure CLI](#tab/cli)
+
+ Set up the CLI
+
+ 1. Install the Azure Machine Learning extension
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=install-ml-ext-cli)]
+
+ 1. Authentication
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=auth-cli)]
+
+ 1. Set the default subscription
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=set-default-subs-cli)]
+
+
-#### Initialize the project workspace CRUD client
+1. Initialize the project workspace variables
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-ws-crud-client)]
+ This is the current workspace, and the tutorial notebook runs in this resource.
-#### Initialize the feature store CRUD client
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-ws-crud-client)]
-Ensure you update the `featurestore_name` to reflect what you created in part 1 of this tutorial
+1. Initialize the feature store variables
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-crud-client)]
+ Make sure that you update the `featurestore_name` and `featurestore_location` values shown, to reflect what you created in part 1 of this tutorial.
-#### Initialize the feature store SDK client
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-crud-client)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-core-sdk)]
+1. Initialize the feature store consumption client
-#### In the project workspace, create a compute cluster named cpu-cluster
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-core-sdk)]
-Here, we run training/batch inference jobs that rely on this compute cluster
+1. Create a compute cluster
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-compute-cluster)]
+ We'll create a compute cluster named `cpu-cluster` in the project workspace. We need this compute cluster when we run the training / batch inference jobs.
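A minimal sketch of creating that cluster with the `azure-ai-ml` SDK follows; `ws_client` is assumed to be the project workspace `MLClient`, and the VM size and scale limits are assumptions rather than tutorial-mandated values.

```python
from azure.ai.ml.entities import AmlCompute

# ws_client: the project workspace MLClient. VM size and scale limits are assumptions.
cpu_cluster = AmlCompute(
    name="cpu-cluster",
    size="Standard_F4s_v2",
    min_instances=0,
    max_instances=1,
    idle_time_before_scale_down=360,  # seconds of idle time before scaling back down
)

ws_client.compute.begin_create_or_update(cpu_cluster).result()
```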
-## Step 1: Locally create an accounts feature set from precomputed data
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-compute-cluster)]
+
+## Create the accounts feature set locally
In tutorial part 1, we created a transactions feature set that had custom transformations. Here, we create an accounts feature set that uses precomputed values.
-To onboard precomputed features, you can create a feature set spec without writing any transformation code. A feature set spec, or specification, is a specification to develop and test a feature set, in a fully local development environment, without a connection to a feature store. This step creates the feature set spec locally, and sample the values from it. To get managed feature store capabilities, you must use a feature asset definition to register the feature set spec with a feature store. A later part of this tutorial provides more information.
+To onboard precomputed features, you can create a feature set spec without writing any transformation code. A feature set spec is a specification that we use to develop and test a feature set, in a fully local development environment. We don't need to connect to a feature store. In this step, you create the feature set spec locally, and then sample the values from it. For managed feature store capabilities, you must use a feature asset definition to register the feature set spec with a feature store. Later steps in this tutorial provide more details.
+
+1. Explore the source data for the accounts
-### Step 1a: Explore the source data for accounts
+ > [!NOTE]
+   > This notebook uses sample data hosted in a publicly accessible blob container. Only a `wasbs` driver can read it in Spark. When you create feature sets by using your own source data, host that data in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path.
-> [!Note]
-> The sample data used in this notebook is hosted in a public accessible blob container. It can only be read in Spark via wasbs driver. When you create feature sets using your own source data, please host them in adls gen2 account and use abfss driver in the data path.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=explore-accts-fset-src-data)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=explore-accts-fset-src-data)]
+1. Create the `accounts` feature set spec in local, from these precomputed features
-### Step 1b: Create an `accounts` feature set spec in local from these precomputed features
+ We don't need any transformation code here, because we reference precomputed features.
-Creation of a feature set spec does not require transformation, code because we reference precomputed features.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-accts-fset-spec)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-accts-fset-spec)]
+1. Export as a feature set spec
-### Step 1c: Export as a feature set spec
+ To register the feature set spec with the feature store, you must save the feature set spec in a specific format.
-To register the feature set spec with the feature store, the feature set spec needs to be saved in a specific format. Action: After running the next cell, inspect the generated `accounts` FeaturesetSpec: Open this file from the file tree, to see the spec: `featurestore/featuresets/accounts/spec/FeatureSetSpec.yaml`
+   Action: After you run the next cell, inspect the generated `accounts` feature set spec. To see the spec, open the `featurestore/featuresets/accounts/spec/FeatureSetSpec.yaml` file from the file tree.
-The spec has these elements:
+ The spec has these important elements:
-1. `source`: a reference to a storage resource. In this case, the storage is a parquet file in a blob storage.
-1. `features`: list of features and their datatypes. If you provide transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes. If you don't provide the transformation code (for accounts, because accounts are precomputed), the system builds the query to map the features to the source
-1. `index_columns`: the join keys required to access values from the feature set
+   1. `source`: a reference to a storage resource, in this case, a parquet file in a blob storage resource
+
+   1. `features`: a list of features and their datatypes. If you provide transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes. If you don't provide transformation code (as in this precomputed `accounts` case), the system builds the query to map the features and datatypes to the source
+
+ 1. `index_columns`: the join keys required to access values from the feature set
-To learn more, see the [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and the [CLI (v2) feature set specification YAML schema](./reference-yaml-featureset-spec.md).
+ See the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-featureset-spec.md) to learn more.
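For illustration only, the generated spec has roughly this shape, shown here as a Python dict. The field names and sample features are assumptions; inspect the generated `FeatureSetSpec.yaml` for the authoritative content.

```python
# Approximate shape of the generated FeatureSetSpec.yaml, expressed as a Python
# dict for illustration. Field names and sample features are assumptions.
accounts_spec_shape = {
    "source": {
        "type": "parquet",
        "path": "wasbs://<container>@<account>.blob.core.windows.net/<path-to-accounts-data>",
        "timestamp_column": {"name": "timestamp"},
    },
    "features": [
        {"name": "accountAge", "type": "integer"},
        {"name": "numPaymentRejects1dPerUser", "type": "integer"},
    ],
    "index_columns": [
        {"name": "accountID", "type": "string"},
    ],
}
```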
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=dump-accts-fset-spec)]
+ As an extra benefit, persisting supports source control.
-Persisting the spec in this way means that it can be source controlled.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=dump-accts-fset-spec)]
-## Step 2: Experiment with unregistered features locally and register with feature store when ready
+## Locally experiment with unregistered features
-In feature development, you might want to locally test and validate before proceeding with feature store registration, or execution of cloud training pipelines. In this step, you generate training data for the ML model, from a combination of features. These features include a local unregistered feature set (accounts) and a feature set registered in the feature store (transactions).
+As you develop features, you might want to locally test and validate them, before you register them with the feature store or run training pipelines in the cloud. A combination of a local unregistered feature set (`accounts`), and a feature set registered in the feature store (`transactions`), generates training data for the ML model.
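The numbered steps below build this up with the SDK. As a preview, a heavily hedged sketch of the local training-data call might look like the following; it assumes the `azureml-featurestore` helper `get_offline_features`, a `features` list like the one selected in the next step, and a Spark dataframe `observation_df` of labeled observation events.

```python
from azureml.featurestore import get_offline_features

# features: the selected feature objects (built in the next step).
# observation_df: a Spark dataframe of labeled observation events with a timestamp column.
training_df = get_offline_features(
    features=features,
    observation_data=observation_df,
    timestamp_column="timestamp",  # assumed name of the observation event-time column
)

# Point-in-time joined features plus the original observation columns.
training_df.show()
```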
-### Step 2a: Select model features
+1. Select features for the model
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-unreg-features-for-model)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-unreg-features-for-model)]
-### Step 2b: Locally generate training data
+1. Locally generate training data
-This step generates training data for illustrative purposes. You can optionally train models locally with this data. A later part of this tutorial shows how to train a model in the cloud.
+ This step generates training data for illustrative purposes. As an option, you can locally train models here. Later steps in this tutorial explain how to train a model in the cloud.
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=gen-training-data-locally)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=gen-training-data-locally)]
-### Step 2c: Register the `accounts` feature set with the feature store
+1. Register the `accounts` feature set with the feature store
-After you locally experiment with different feature definitions, and sanity test them, you can register them with the feature store. You register a feature set asset definition with the feature store for this step.
+ After you locally experiment with different feature definitions, and they seem reasonable, you can register a feature set asset definition with the feature store.
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=reg-accts-fset)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=reg-accts-fset)]
-### Step 2d: Get the registered feature set, and sanity test it
+1. Get the registered feature set, and sanity test it
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=sample-accts-fset-data)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=sample-accts-fset-data)]
-## Step 3: Run a training experiment
+## Run a training experiment
-Here, you select a list of features, run a training pipeline, and register the model. You can repeat this step until you're happy with the model performance.
+In this step, you select a list of features, run a training pipeline, and register the model. You can repeat this step until the model performs as you'd like.
-### (Optional) Step 3a: Discover features from the feature store UI
+1. (Optional) Discover features from the feature store UI
-Part 1 of the tutorial covered the transactions feature set, after you registered the transactions feature set. Since you also have the accounts feature set, you can browse the available features:
+ Part 1 of this tutorial covered this, when you registered the transactions feature set. Since you also have an accounts feature set, you can browse the available features:
-* Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home?flight=FeatureStoresPrPr,FeatureStoresPuPr)
-* In the left nav, select `Feature stores`
-* It shows a list of feature stores that you can access. Select the feature store that you created in the steps earlier in this tutorial.
+ * Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home?flight=FeatureStores).
+ * In the left nav, select `Feature stores`
+ * The list of feature stores that you can access appears. Select the feature store that you created earlier.
-You can see the feature sets and entity that you created. Select feature sets to browse the feature definitions. You can also use the global search box to search for feature sets across feature stores.
+ You can see the feature sets and entity that you created. Select the feature sets to browse the feature definitions. You can use the global search box to search for feature sets across feature stores.
-### (Optional) Step 3b: Discover features from the SDK
+1. (Optional) Discover features from the SDK
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=discover-features-from-sdk)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=discover-features-from-sdk)]
-### Step 3c: Select features for the model, and export it as a feature-retrieval spec
+1. Select features for the model, and export them as a feature-retrieval spec
-In the previous steps, you selected features from a combination of registered and unregistered feature sets, for local experimentation and testing. Now you can experiment in the cloud. Save the selected features as a feature-retrieval spec and using that spec in the mlops / cicd flow, for training and inference, increases your agility as you ship models.
+ In the previous steps, you selected features from a combination of registered and unregistered feature sets, for local experimentation and testing. You can now experiment in the cloud. Saving the selected features as a feature-retrieval spec, and using that spec in the MLOps/CI/CD flow for training and inference, increases your agility as you ship models.
-Select features for the model
+1. Select features for the model
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-reg-features)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-reg-features)]
-Export selected features as a feature-retrieval spec
+1. Export selected features as a feature-retrieval spec
-> [!Note]
-> A feature retrieval spec is a portable definition of a feature list associated with a model. This can help streamline ML model development and operationalization. This will become an input to the training pipeline, which generates the training data. It will be packaged along with the model, and during inference, it looks up the features. It becomes a glue that integrates all phases of the ML lifecycle. Changes to the training and inference pipeline can be kept minimal as you experiment and deploy.
+ > [!NOTE]
+ > A **feature retrieval spec** is a portable definition of the feature list associated with a model. It can help streamline ML model development and operationalization. It becomes an input to the training pipeline, which generates the training data. It's then packaged with the model, and the inference phase uses it to look up the features. It acts as the glue that integrates all phases of the machine learning lifecycle, so changes to the training and inference pipelines can stay at a minimum as you experiment and deploy.
-Use of the feature retrieval spec and the built-in feature retrieval component is optional. You can directly use the `get_offline_features()` api as shown earlier in this tutorial.
+ Use of the feature retrieval spec and the built-in feature retrieval component is optional. You can directly use the `get_offline_features()` API, as shown earlier. The name of the spec should be **feature_retrieval_spec.yaml** when it's packaged with the model. This way, the system can recognize it.
-The spec should have the name `feature_retrieval_spec.yaml`, so that the system can recognize the name of the spec when it's packaged with the model.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=export-as-frspec)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=export-as-frspec)]
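As a rough illustration of the cell above, a hedged sketch of exporting a feature retrieval spec with the feature store core SDK (`azureml-featurestore`) follows; the variable names and output folder are assumptions that mirror the tutorial's conventions, not the notebook's exact cell.

```python
# Sketch only: resolve the feature store client and write a feature retrieval spec
# for the selected features.
import os

from azure.ai.ml.identity import AzureMLOnBehalfOfCredential
from azureml.featurestore import FeatureStoreClient

featurestore = FeatureStoreClient(
    credential=AzureMLOnBehalfOfCredential(),
    subscription_id=featurestore_subscription_id,          # set earlier in the notebook
    resource_group_name=featurestore_resource_group_name,  # set earlier in the notebook
    name=featurestore_name,
)

# 'features' is the feature list selected in the previous step.
feature_retrieval_spec_folder = "./project/fraud_model/feature_retrieval_spec"
os.makedirs(feature_retrieval_spec_folder, exist_ok=True)

# Writes feature_retrieval_spec.yaml into the folder so it can be packaged with the model.
featurestore.generate_feature_retrieval_spec(feature_retrieval_spec_folder, features)
```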
+## Train in the cloud with pipelines, and register the model
-## Step 4: Train in the cloud using pipelines, and register the model if satisfactory
+In this step, you manually trigger the training pipeline. In a production scenario, a CI/CD pipeline could trigger it, based on changes to the feature-retrieval spec in the source repository. You can register the model if it's satisfactory.
-In this step, you manually trigger the training pipeline. A ci/cd pipeline could trigger the training pipeline in a production scenario based on changes to the feature-retrieval spec in the source repository.
+1. Run the training pipeline
-### Step 4a: Run the training pipeline
+ The training pipeline has these steps:
-The training pipeline has these steps:
+ 1. Feature retrieval: For its input, this built-in component takes the feature retrieval spec, the observation data, and the timestamp column name. It then generates the training data as output, and it runs as a managed Spark job.
+
+ 1. Training: Based on the training data, this step trains the model, and then generates a model (not yet registered)
+
+ 1. Evaluation: This step validates whether or not the model performance and quality fall within a threshold (in our case, it's a placeholder/dummy step for illustration purposes)
+
+ 1. Register the model: This step registers the model
-1. Feature retrieval step: here, a built-in component takes the feature retrieval spec, the observation data, and the timestamp column name, all as input. Then, it generates the training data as output. It runs the feature retrieval step as a managed spark job.
-1. Training step: This step trains the model based on the training data, and generates a model (not yet registered)
-1. Evaluation step: This step validates whether or not the model performance / quality falls within the threshold (here, it works as a placeholder / dummy step for illustration purposes)
-1. Register model step: This step registers the model
+ > [!NOTE]
+ > In part 2 of this tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior will be the same, even if you use the `get_offline_features()` API.
-In part 2 of this tutorial, you ran a backfill job to materialize data for a transaction feature set. The feature retrieval step reads feature values from an offline store for this feature set. The behavior is the same even if you use the `get_offline_features()` api.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=run-training-pipeline)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=run-training-pipeline)]
+ 1. Inspect the training pipeline and the model
-Open the pipeline run "web view" in a new window to inspect the steps in the training pipeline.
+ 1. To see the steps in the training pipeline, open the pipeline run "web view" in a new window.
-#### Step 4b: Examine the feature retrieval spec in the model artifacts
+1. Examine the feature retrieval spec in the model artifacts
-1. In the left nav of the current workspace, select Models, to open in a new tab or window
-1. Select `fraud_model`
-1. In the top nav, select `Artifacts`
+ 1. In the left nav of the current workspace, select `Models`
+ 1. Open the Models page in a new tab or window
+ 1. Select **fraud_model**
+ 1. In the top nav, select Artifacts
-Notice that the earlier model registration step of the training pipeline packaged the feature retrieval spec with the model. You created a feature retrieval spec during experimentation, which has become part of the model definition. The next tutorial will show how inferencing uses the feature retrieval spec.
+ The feature retrieval spec is packaged along with the model; the model registration step of the training pipeline handled this. You created the feature retrieval spec during experimentation, and now it's part of the model definition. The next tutorial shows how inferencing uses it.
-## Step 5: View the feature set and model dependencies
+## View the feature set and model dependencies
-### Step 5a: View the list of feature sets associated with the model
+1. View the list of feature sets associated with the model
-In the same models page, select the `feature sets` tab. This tab shows both the `transactions` and `accounts` feature sets on which this model depends.
+ In the same models page, select the `feature sets` tab. This tab shows both the `transactions` and the `accounts` feature sets on which this model depends.
-### Step 5b: View the list of models using the feature sets
+1. View the list of models that use the feature sets
-1. Open the feature store UI (described earlier in this tutorial)
-1. In the left nav, select `Feature sets`
-1. Select any feature set
-1. Select the Models tab
+ 1. Open the feature store UI (explained earlier in this tutorial)
+ 1. Select `Feature sets` on the left nav
+ 1. Select a feature set
+ 1. Select the `Models` tab
-You can see the list of models that are using the feature sets (determined from the feature retrieval spec when the model was registered).
+ You can see the list of models that use the feature sets. The feature retrieval spec determined this list when the model was registered.
## Cleanup
-[Part 4](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) of this tutorial describes how to delete the resources
+The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources.
## Next steps * Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) * [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md) * [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Reference: [YAML reference](./reference-yaml-overview.md)
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Title: "Tutorial #1: develop and register a feature set with managed feature store (preview)"-+ description: Managed Feature Store tutorial part 1.
Previously updated : 05/09/2023 Last updated : 07/24/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
# Tutorial #1: develop and register a feature set with managed feature store (preview)
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+
+Azure Machine Learning managed feature store lets you discover, create and operationalize features. The machine learning lifecycle includes a prototyping phase, where you experiment with various features. It also involves an operationalization phase, where models are deployed and inference steps look up feature data. Features serve as the connective tissue in the machine learning lifecycle. To learn more about basic feature store concepts, see [what is managed feature store](./concept-what-is-managed-feature-store.md) and [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
-Azure Machine Learning managed feature store lets you discover, create and operationalize features. The machine learning lifecycle involves the prototyping phase, where you experiment with various features. It also involves the operationalization phase, where models are deployed and inference looks up feature data. Features serve as the connective tissue in the machine learning lifecycle. For information about the basic feature store concepts, see [what is managed feature store](./concept-what-is-managed-feature-store.md) and [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+This tutorial is the first part of a four-part series. Here, you'll learn how to:
-This tutorial is the first part of a four part series. In this tutorial, you'll learn how to:
+> [!div class="checklist"]
+> * Create a new minimal feature store resource
+> * Develop and locally test a feature set with feature transformation capability
+> * Register a feature store entity with the feature store
+> * Register the feature set that you developed with the feature store
+> * Generate a sample training dataframe using the features you created
-* Create a new minimal feature store resource
-* Develop and locally test a feature set with feature transformation capability
-* Register a feature store entity with the feature store
-* Register the feature set that you developed with the feature store
-* Generate a sample training dataframe using the features you created
+> [!IMPORTANT]
+> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported, or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
+> [!NOTE]
+> This tutorial series has two tracks:
+> * SDK only track: Uses only Python SDKs. Choose this track for pure Python-based development and deployment.
+> * SDK & CLI track: This track uses the CLI for CRUD operations (create, update, and delete), and the Python SDK for feature set development and testing only. This is useful in CI/CD or GitOps scenarios, where CLI/YAML is preferred.
+ Before you proceed with this article, make sure you cover these prerequisites:
-* An Azure Machine Learning workspace. If you don't have one, see the [Quickstart: Create workspace resources](./quickstart-create-resources.md) article to create one
+* An Azure Machine Learning workspace. See the [Quickstart: Create workspace resources](./quickstart-create-resources.md) article for more information about workspace creation.
-* To perform the steps in this article, your user account must be assigned the owner or contributor role to a resource group where the feature store is created
+* To proceed with this article, your user account must be assigned the owner or contributor role to the resource group where the feature store is created
-(Optional): If you use a new resource group for this tutorial, you can easily delete all the resource by deleting the resource group
+ (Optional): If you use a new resource group for this tutorial, you can easily delete all the resources by deleting the resource group
-## Setup
+## Set up
### Prepare the notebook environment for development
-Note: This tutorial uses Azure Machine Learning spark notebook for development.
-1. Clone the examples repository to your local machine: To run the tutorial, first clone the [examples repository - (azureml-examples)](https://github.com/azure/azureml-examples) with this command:
+> [!NOTE]
+> This tutorial uses an Azure Machine Learning Spark notebook for development.
+
+1. In the Azure Machine Learning studio environment, first select **Notebooks** in the left nav, and then select the **Samples** tab. Navigate to the **featurestore_sample** directory
+
+ **Samples -> SDK v2 -> sdk -> python -> featurestore_sample**
+
+ and then select **Clone**, as shown in this screenshot:
+
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" lightbox="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" alt-text="Screenshot showing selection of the featurestore_sample directory in Azure Machine Learning studio UI.":::
+
+1. The **Select target directory** panel opens next. Select the User directory, in this case **testUser**, and then select **Clone**, as shown in this screenshot:
+
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/select-target-directory.png" lightbox="media/tutorial-get-started-with-feature-store/select-target-directory.png" alt-text="Screenshot showing selection of the target directory location in Azure Machine Learning studio UI for the featurestore_sample resource.":::
+
+1. To configure the notebook environment, you must upload the **conda.yml** file. Select **Notebooks** in the left nav, and then select the **Files** tab. Navigate to the **env** directory
+
+ **Users -> testUser -> featurestore_sample -> project -> env**
+
+ and select the **conda.yml** file. In this navigation, **testUser** is the user directory. Select **Download**, as shown in this screenshot:
+
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/download-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/download-conda-file.png" alt-text="Screenshot showing selection of the conda.yml file in Azure Machine Learning studio UI.":::
+
+1. In the Azure Machine Learning environment, open the notebook, and select **Configure Session**, as shown in this screenshot:
+
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/open-configure-session.png" lightbox="media/tutorial-get-started-with-feature-store/open-configure-session.png" alt-text="Screenshot showing Open Configure Session for this notebook.":::
+
+1. In the **Configure Session** panel, select **Python packages**. To upload the Conda file, select **Upload Conda file**, and **Browse** to the directory that hosts the Conda file. Select **conda.yml**, and then select **Open**, as shown in this screenshot:
+
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/open-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/open-conda-file.png" alt-text="Screenshot showing the directory hosting the Conda file.":::
+
+1. Select **Apply**, as shown in this screenshot:
+
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/upload-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/upload-conda-file.png" alt-text="Screenshot showing the Conda file upload.":::
+
+## Start the Spark session
+
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=start-spark-session)]
+
+## Set up the root directory for the samples
+
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=root-dir)]
- `git clone --depth 1 https://github.com/Azure/azureml-examples`
+### [SDK Track](#tab/SDK-track)
- You can also download a zip file from the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). At this page, first select the `code` dropdown, and then select `Download ZIP`. Then, unzip the contents into a folder on your local device.
+Not applicable
-1. Upload the feature store samples directory to project workspace.
- * Open the [Azure Machine Learning studio UI](https://ml.azure.com/) resource of your Azure Machine Learning workspace
- * Select **Notebooks** in left nav
- * Select your user name in the directory listing
- * Select **upload folder**
- * Select the feature store samples folder from the cloned directory path: `azureml-examples/sdk/python/featurestore-sample`
+### [SDK and CLI Track](#tab/SDK-and-CLI-track)
-1. You can create a new notebook, and proceed and execute the instructions in this document step by step. You can also open the existing notebook named `1. Develop a feature set and register with managed feature store.ipynb`, and execute its individual cells step by step, one at a time. The notebooks are available in the folder `featurestore_sample/notebooks/sdk_only`. Keep this document open and refer to it for detailed explanation of the steps.
+### Set up the CLI
-1. Select **AzureML Spark compute** in the top nav "Compute" dropdown. This operation might take one to two minutes. Wait for a status bar in the top to display **configure session**.
+1. Install the Azure Machine Learning extension
-1. Select "configure session" from the top nav (this could take one to two minutes to display):
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)]
- 1. Select **configure session** in the bottom nav
- 1. Select **Upload conda file**
- 1. Select file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` located on your local device
- 1. (Optional) Increase the session time-out (idle time) to reduce the serverless spark cluster startup time.
+1. Authentication
-#### Start Spark Session
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=start-spark-session)]
+1. Set the default subscription
-#### Set up the root directory for the samples
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=root-dir)]
++
+> [!NOTE]
+> Feature store vs. project workspace: You'll use a feature store to reuse features across projects. You'll use a project workspace (an Azure Machine Learning workspace) to train models and run inference, using features from feature stores. Many project workspaces can share and reuse the same feature store.
+
+### [SDK Track](#tab/SDK-track)
+
+This tutorial uses two SDKs:
+* The feature store CRUD SDK
+
+ You use the same MLClient (package name `azure-ai-ml`) SDK that you use with the Azure Machine Learning workspace. A feature store is implemented as a type of workspace. As a result, this SDK handles CRUD operations (create, update, and delete) for the feature store, feature sets, and feature store entities.
+
+* The feature store core SDK
+
+ This SDK (azureml-featurestore) is intended for feature set development and consumption. Later steps in this tutorial describe these operations:
+
+ * Feature set specification development
+ * Feature data retrieval
+ * List and Get registered feature sets
+ * Generate and resolve feature retrieval specs
+ * Generate training and inference data using point-in-time joins
+
+This tutorial doesn't require explicit installation of those SDKs, because the earlier **conda YAML** instructions cover this step.
+
+### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+
+This track uses the CLI for CRUD operations (create, update, and delete), and the feature store core Python SDK for feature set development and testing only. This approach is useful for GitOps or CI/CD scenarios, where CLI/YAML is preferred.
+
+* Use the CLI for CRUD operations on feature store, feature set, and feature store entities
+* Feature store core SDK: This SDK (`azureml-featurestore`) is meant for feature set development and consumption. This tutorial covers these operations:
+
+ * List / Get a registered feature set
+ * Generate / resolve a feature retrieval spec
+ * Execute a feature set definition, to generate a Spark dataframe
+ * Generate training data with a point-in-time join
+
+This tutorial doesn't require explicit installation of these SDKs, because the **conda.yml** file you uploaded in an earlier step already includes them.
+++
+## Create a minimal feature store
+
+1. Set feature store parameters
+
+ Set the name, location, and other values for the feature store
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=fs-params)]
+
+1. Create the feature store
-> [!Note]
-> Feature store Vs Project workspace: You'll use a feature store to reuse features across projects. You'll use a project workspace (i.e. Azure ML workspace) to train and inference models, by leveraging features from feature stores. Many project workspaces can share and reuse the same feature store.
+ ### [SDK Track](#tab/SDK-track)
-> [!Note]
-> This tutorial uses two SDK's:
->
-> 1. The Feature Store CRUD SDK
->
-> * You'll use the same MLClient (package name `azure-ai-ml`) SDK that you use with the Azure ML workpace. Feature store is implemented as a type of workspace. As a result, this SDK is used for feature store CRUD operations (Create, Update and Delete), for feature store, feature set and feature store entity.
->
-> 2. The feature store core sdk
->
-> * This SDK (`azureml-featurestore`) is intended for feature set development and consumption (you'll learn more about these operations later):
->
- > - Develop feature set specification and retrieve feature data using it
- > - List/Get registered feature sets
- > - Generate/resolve feature retrieval spec
- > - Generate training/inference data using a point-in-time join
->
-> This tutorial does not require explicit installation of these SDK's, since the instructions already explain the process. The **conda YAML** instructions in the earlier step cover this.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs)]
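For context, here's a hedged sketch of what the SDK-track creation cell might look like with the `azure-ai-ml` CRUD client; the `feature_stores.begin_create` operation and the parameter variables are assumptions that depend on the preview SDK version, not a definitive implementation.

```python
# Sketch only: create the feature store with the azure-ai-ml CRUD client.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import FeatureStore
from azure.identity import DefaultAzureCredential

# Client scoped to the subscription and resource group that will host the feature store.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id=featurestore_subscription_id,          # parameters set in the previous cell
    resource_group_name=featurestore_resource_group_name,
)

fs = FeatureStore(name=featurestore_name, location=featurestore_location)

# A feature store is a kind of workspace, so creation is a long-running operation.
poller = ml_client.feature_stores.begin_create(fs)
print(poller.result())
```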
-## Step 1: Create a minimal feature store
+ ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
-### Step 1a: Set feature store parameters
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)]
-Set name, location, and other values for the feature store
+1. Initialize an Azure Machine Learning feature store core SDK client
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=fs-params)]
+ As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features
-### Step 1b: Create the feature store
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fs-core-sdk)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs)]
+## Prototype and develop a feature set
-### Step 1c: Initialize Azure Machine Learning feature store core SDK client
+We'll build a feature set named `transactions` that has rolling window aggregate-based features.
-As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features
+1. Explore the transactions source data
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fs-core-sdk)]
+ > [!NOTE]
+ > This notebook uses sample data hosted in a publicly accessible blob container. It can be read into Spark only with a `wasbs` driver. When you create feature sets using your own source data, host that data in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path.
-## Step 2: Prototype and develop a feature set called `transactions` that has rolling window aggregate based features
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=explore-txn-src-data)]
-### Step 2a: Explore the transactions source data
+1. Locally develop the feature set
-> [!Note]
-> This notebook uses sample data hosted in a publicly-accessible blob container. It can only be read into Spark with a `wasbs` driver. When you create feature sets using your own source data, please host them in an adls gen2 account, and use an `abfss` driver in the data path.
+ A feature set specification is a self-contained feature set definition that you can locally develop and test. Here, we create these rolling window aggregate features:
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=explore-txn-src-data)]
+ * transactions three-day count
+ * transactions amount three-day sum
+ * transactions amount three-day avg
+ * transactions seven-day count
+ * transactions amount seven-day sum
+ * transactions amount seven-day avg
-### Step 2b: Locally develop the feature set
+ **Action:**
-A feature set specification is a self-contained feature set definition that you can locally develop and test.
+ - Review the feature transformation code file: `featurestore/featuresets/transactions/transformation_code/transaction_transform.py`. Note the rolling aggregation defined for the features; this is a Spark transformer. A minimal sketch of this kind of rolling window aggregation appears after the next code cell.
-In this step, we create these rolling window aggregate features:
+ See [feature store concepts](./concept-what-is-managed-feature-store.md) and **transformation concepts** to learn more about the feature set and transformations.
-- transactions three-day count-- transactions amount three-day sum-- transactions amount three-day avg-- transactions seven-day count-- transactions amount seven-day sum-- transactions amount seven-day avg
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=develop-txn-fset-locally)]
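The following sketch, referenced in the action item above, shows the general shape of a Spark rolling window aggregation like the one in `transaction_transform.py`; the column names and the three-day window are illustrative assumptions, not the sample's exact code.

```python
# Illustrative PySpark sketch of a rolling window aggregation; not the sample's code.
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def add_three_day_rolling_features(df):
    seconds_in_day = 86400
    # rangeBetween operates on the ordering column, here the unix timestamp in seconds.
    w3 = (
        Window.partitionBy("accountID")
        .orderBy(F.col("timestamp").cast("long"))
        .rangeBetween(-3 * seconds_in_day, 0)
    )
    return (
        df.withColumn("transaction_3d_count", F.count("transactionID").over(w3))
          .withColumn("transaction_amount_3d_sum", F.sum("transactionAmount").over(w3))
          .withColumn("transaction_amount_3d_avg", F.avg("transactionAmount").over(w3))
    )
```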
-**Action:**
+1. Export as a feature set spec
-- Inspect the feature transformation code file: `featurestore/featuresets/transactions/transformation_code/transaction_transform.py`. Note the rolling aggregation defined for the features. This is a spark transformer.
+ To register the feature set spec with the feature store, you must save that spec in a specific format.
-See [feature store concepts](./concept-what-is-managed-feature-store.md) and **transformation concepts** to learn more about the feature set and transformations.
+ **Action:** Review the generated `transactions` feature set spec. To see the spec, open this file from the file tree: `featurestore/featuresets/accounts/spec/FeaturesetSpec.yaml`
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=develop-txn-fset-locally)]
+ The spec contains these elements:
+
+ 1. `source`: a reference to a storage resource. In this case, it's a parquet file in a blob storage resource.
+ 1. `features`: a list of features and their datatypes. If you provide transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes.
+ 1. `index_columns`: the join keys required to access values from the feature set
-### Step 2c: Export as a feature set spec
+ To learn more about the spec, see [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-feature-set.md).
-To register the feature set spec with the feature store, that spec must be saved in a specific format.
+ Persisting the feature set spec offers another benefit: the feature set spec can be source controlled.
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=dump-transactions-fs-spec)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=dump-transactions-fs-spec)]
-**Action:** Inspect the generated `transactions` Featureset spec: Open this file from the file tree to see the spec: `featurestore/featuresets/accounts/spec/FeaturesetSpec.yaml`
+## Register a feature-store entity
-The spec contains these important elements:
+As a best practice, entities help enforce use of the same join key definition across feature sets that use the same logical entities. Examples of entities can include accounts, customers, etc. Entities are typically created once, and then reused across feature sets. To learn more, see [feature store concepts](./concept-top-level-entities-in-managed-feature-store.md).
-1. `source`: a reference to a storage resource. In this case, it's a parquet file in a blob storage resource.
-1. `features`: a list of features and their datatypes. If you provide transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes.
-1. `index_columns`: the join keys required to access values from the feature set
+ ### [SDK Track](#tab/SDK-track)
-Learn more about the spec in the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-feature-set.md).
+ 1. Initialize the Feature Store CRUD client
-Persisting the feature set spec offers another benefit: the feature set spec can be source controlled.
+ As explained earlier in this tutorial, the MLClient is used for feature store asset CRUD (create, update, and delete). The notebook code cell sample shown here searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. In this code sample, the client is scoped at feature store level.
-## Step 3: Register a feature-store entity
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)]
-As a best practice, entities help enforce use of the same join key definition across feature sets that use the same logical entities. Examples of entities can include accounts, customers, etc. Entities are typically created once, and then reused across feature sets. For information, see [feature store concepts](./concept-top-level-entities-in-managed-feature-store.md).
+ 1. Register the `account` entity with the feature store
-### Step 3a: Initialize the Feature Store CRUD client
+ Create an account entity that has the join key `accountID`, of type string.
-As explained earlier in this tutorial, MLClient is used for CRUD of feature store assets. The following code searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. This is a prerequisite for feature store creation. In the next code sample, the client is scoped at feature store level.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)]
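To make the entity registration above concrete, here's a hedged sketch using the `azure-ai-ml` preview entities; the class and operation names are assumptions and may differ across preview versions.

```python
# Sketch only: register an 'account' entity whose join key is accountID (string).
from azure.ai.ml import MLClient
from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity
from azure.identity import DefaultAzureCredential

# CRUD client scoped to the feature store created earlier.
fs_client = MLClient(
    DefaultAzureCredential(),
    subscription_id=featurestore_subscription_id,
    resource_group_name=featurestore_resource_group_name,
    name=featurestore_name,
)

account_entity = FeatureStoreEntity(
    name="account",
    version="1",
    index_columns=[DataColumn(name="accountID", type=DataColumnType.STRING)],
)

poller = fs_client.feature_store_entities.begin_create_or_update(account_entity)
print(poller.result())
```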
+ ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
-### Step 3b: Register the `account` entity with the feature store
+ 1. Initialize the Feature Store CRUD client
-Create an account entity that has the join key `accountID`, of type string.
+ As explained earlier in this tutorial, MLClient is used for feature store asset CRUD (create, update, and delete). The notebook code cell sample shown here searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID`, of type string.
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
-## Step 4: Register the transaction feature set with the feature store
+
-You can register a feature set asset with the feature store. In this way, you can share and reuse that asset with others. Feature set asset registration offers managed capabilities, such as versioning and materialization (we'll learn more about managed capabilities in this tutorial series).
+## Register the transaction feature set with the feature store
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)]
+First, register a feature set asset with the feature store. You can then reuse that asset, and easily share it. Feature set asset registration offers managed capabilities, including versioning and materialization. Later steps in this tutorial series cover managed capabilities.
-### Explore the feature store UI
+ ### [SDK Track](#tab/SDK-track)
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)]
+
+ ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
+
+
+
+## Explore the feature store UI
* Open the [Azure Machine Learning global landing page](https://ml.azure.com/home). * Select `Feature stores` in the left nav
-* Note the list of accessible feature stores. Select on the feature store that you created earlier in this tutorial.
-
-The list shows the feature set and entity that you created.
+* From this list of accessible feature stores, select the feature store you created earlier in this tutorial.
-> [!Note]
-> Creating and updating feature store assets are possible only through SDK and CLI. You can use the UI to search/browse the feature store.
+> [!NOTE]
+> Feature store asset creation and updates can happen only through the SDK and CLI. You can use the UI to search or browse the feature store.
-## Step 5: Generate a training data dataframe using the registered feature set
+## Generate a training data dataframe using the registered feature set
-### Step 5a: Load observation data
+1. Load observation data
-First, we explore the observation data. Observation data typically involves the core data used in training and inferencing. Then, this data joins with the feature data to create the full training data. Observation data is the data captured during the time of the event. Here, it has core transaction data including transaction ID, account ID, and transaction amount. Since we use it for training, it also has the target variable appended (**is_fraud**).
+ Observation data typically involves the core data used for training and inferencing. This data joins with the feature data to create the full training data resource. Observation data is data captured during the event itself. Here, it has core transaction data, including transaction ID, account ID, and transaction amount values. Since we use it for training, it also has an appended target variable (**is_fraud**).
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=load-obs-data)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=load-obs-data)]
-### Step 5b: Get the registered feature set and list its features
+1. Get the registered feature set, and list its features
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=get-txn-fset)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=get-txn-fset)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=print-txn-fset-sample-values)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=print-txn-fset-sample-values)]
-### Step 5c: Select features and generate training data
+1. Select features, and generate training data
-Here, we select features that become part of the training data, and we use the feature store sdk to generate the training data.
+ Here, we select the features that become part of the training data. Then, we use the feature store SDK to generate the training data itself.
-A point-in-time join appends the features to the training data.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=select-features-and-gen-training-data)]
-[!notebook-python[] (~/azureml-examples-featurestore/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=select-features-and-gen-training-data)]
+ A point-in-time join appends the features to the training data.
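A minimal sketch of that point-in-time join with the feature store core SDK is shown below; `features` and `observation_data_df` stand in for the objects created in the earlier cells, and the keyword names are assumptions based on the `get_offline_features()` API this tutorial mentions.

```python
# Sketch only: join selected features onto the observation data with a point-in-time join.
from azureml.featurestore import get_offline_features

training_df = get_offline_features(
    features=features,                  # feature objects selected above
    observation_data=observation_data_df,
    timestamp_column="timestamp",       # event-time column in the observation data
)

# Each observation row now carries the feature values as of its timestamp.
display(training_df)
```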
-This tutorial built the training data with features from feature store. Optionally: you can save it to storage for later use, or you can run model training on it directly.
+This tutorial built the training data with features from the feature store. Optional: you can save the training data to storage for later use, or you can run model training on it directly.
## Cleanup
-[Part 4](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) of this tutorial describes how to delete the resources
+The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources.
## Next steps
This tutorial built the training data with features from feature store. Optional
* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) * [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md) * [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Reference: [YAML reference](./reference-yaml-overview.md)
machine-learning How To Designer Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-python.md
description: Learn how to use the Execute Python Script model in Azure Machine Learning designer to run custom operations written in Python. -+ -+ Last updated 02/08/2023
Now you have a dataset, which has a new **Dollars/HP** feature. This new feature
## Next steps
-Learn how to [import your own data](how-to-designer-import-data.md) in Azure Machine Learning designer.
+Learn how to [import your own data](how-to-designer-import-data.md) in Azure Machine Learning designer.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 07/10/2023 Last updated : 07/24/2023
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate. ## Update (July 2023)
+- Discover Azure Migrate from the Operations Manager console: Operations Manager 2022 allows you to discover Azure Migrate from the console. You can now generate a complete inventory of your on-premises environment without an appliance, and use it in Azure Migrate to assess machines at scale. [Learn more](https://support.microsoft.com/topic/discover-azure-migrate-for-operations-manager-04b33766-f824-4e99-9065-3109411ede63).
- Public Preview: Upgrade your Windows OS during Migration using the Migration and modernization tool in your VMware environment. [Learn more](how-to-upgrade-windows.md). ## Update (June 2023)
Learn more on how to perform [software inventory](how-to-discover-applications.m
## Update (October 2022) -- Support for export of errors and notifications from the portal for software inventory and agentless dependency.
+- Support for export of errors and notifications from the portal for software inventory and agentless dependency. [Learn more](troubleshoot-dependencies.md)
## Update (September 2022)
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
Azure Database for MySQL – Flexible Server supports the provisioning of additi
The minimum IOPS are 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size refer to the [table](#service-tiers-size-and-server-types). > [!Important]
-> **Complimentary IOPS** are equal to MINIMUM("Max uncached disk throughput: IOPS/MBps" of compute size, 300 + storage provisioned in GiB * 3)<br>
> **Minimum IOPS** are 360 across all compute sizes<br> > **Maximum IOPS** are determined by the selected compute size.
mysql Concepts Storage Iops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-storage-iops.md
+
+ Title: Azure Database for MySQL - Flexible Server storage IOPS
+description: This article describes the storage IOPS in Azure Database for MySQL - Flexible Server.
+++++ Last updated : 07/20/2023++
+# Storage IOPS in Azure Database for MySQL - Flexible Server
+++
+Storage IOPS (I/O Operations Per Second) refer to the number of read and write operations that can be performed by the storage system per second. Higher IOPS values indicate better storage performance, allowing your database to handle more simultaneous read and write operations, resulting in faster data retrieval and improved overall efficiency. When the IOPS setting is set too low, the database server may experience delays in processing requests, resulting in slow performance and reduced throughput. On the other hand, if the IOPS setting is set too high, it may lead to unnecessary resource allocation and potentially increased costs without significant performance improvements.
+
+Azure Database for MySQL Flexible Server currently offers two settings for IOPS management: Pre-provisioned IOPS and Autoscale IOPS.
+
+## Pre-provisioned IOPS
+Azure Database for MySQL Flexible Server offers pre-provisioned IOPS, allowing you to allocate a specific number of IOPS to your MySQL database server. This setting ensures consistent and predictable performance for your workloads. With pre-provisioned IOPS, you can define a specific IOPS limit for your storage volume, guaranteeing the ability to handle a certain number of requests per second. This results in a reliable and assured level of performance.
+
+Additionally, pre-provisioned IOPS gives you the flexibility to increase the provisioned IOPS of the storage volume associated with the server. You can add extra IOPS beyond the default provisioned level at any time, which lets you tune performance to your workload requirements.
+
+## Autoscale IOPS
+
+Autoscale IOPS offers the flexibility to scale IOPS on demand, eliminating the need to pre-provision a specific amount of IO per second. When you enable Autoscale IOPS, your server automatically adjusts IOPS based on workload requirements. With the Autoscale IOPS feature enabled, you can enjoy worry-free IO management in Azure Database for MySQL - Flexible Server, because the server scales IOPS up or down automatically depending on workload needs.
+With this feature, you're only charged for the IO your server actually utilizes, avoiding unnecessary provisioning and expenses for underutilized resources. This ensures both cost savings and optimal performance, making it a smart choice for managing your database workload efficiently.
++
+## Monitor Storage performance
+Monitoring storage IOPS utilization is easy with the [metrics available under Monitoring](./concepts-monitoring.md#list-of-metrics).
+
+#### Overview
+To obtain a comprehensive view of the IO utilization for the selected time period.
+Navigate to the Monitoring in the Azure portal for Azure Database for MySQL Flexible Server under the Overview blade.
+
+[:::image type="content" source="./media/concepts-storage-iops/1-overview.png" alt-text="Screenshot of overview metrics.":::](./media/concepts-storage-iops/1-overview.png#lightbox)
+
+#### Enhanced Metrics Workbook
+- Navigate to **Workbooks** under the Monitoring section in the Azure portal.
+- Select the **Enhanced Metrics** workbook.
+- Check the Storage IO percentage metrics under the Overview section of the workbook.
+
+[:::image type="content" source="./media/concepts-storage-iops/2-workbook.png" alt-text="Screenshot of enhanced metrics.":::](./media/concepts-storage-iops/2-workbook.png#lightbox)
+
+#### Metrics under Monitoring
+- Navigate to **Metrics** under the Monitoring section in the Azure portal.
+- Select the **Add metric** option.
+- Choose "Storage IO Percent" from the drop-down list of available metrics.
+- Choose "Storage IO count" from the drop-down list of available metrics.
+
+[:::image type="content" source="./media/concepts-storage-iops/3-metrics.png" alt-text="Screenshot of monitoring metrics.":::](./media/concepts-storage-iops/3-metrics.png#lightbox)
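If you prefer to pull these metrics programmatically rather than through the portal, a hedged sketch with the `azure-monitor-query` library follows; the metric names mirror the portal labels but are assumptions you should verify against your server's metric definitions.

```python
# Sketch only: query storage IO metrics for a flexible server with azure-monitor-query.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DBforMySQL/flexibleServers/<server-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=["io_consumption_percent", "storage_io_count"],  # assumed metric IDs
    timespan=timedelta(days=1),
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average)
```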
++
+## Selecting the Optimal IOPS Setting
+
+Now that you know how to monitor your IOPS usage effectively, you're equipped to explore the best setting for your server. When you choose the IOPS setting for your Azure Database for MySQL flexible server, consider several important factors. Understanding these factors helps you make an informed decision that ensures the best performance and cost efficiency for your workload.
+
+### Performance Optimization
+
+With Autoscale IOPS, a predictable workload's consistent requirements can be met without the drawback of storage throttling or the manual intervention needed to add more IOPS.
+If your workload has consistent throughput or requires consistent IOPS, Pre-provisioned IOPS may be preferable. It provides a predictable performance level, and the fixed IOPS allocation matches the workload within the specified limits.
+If you occasionally need higher throughput than usual, you can allot additional IOPS on top of Pre-provisioned IOPS, but that requires manual intervention and an understanding of when the extra throughput is needed.
+
+### Throttling impact
+
+Consider the impact of throttling on your workload. If the potential performance degradation due to throttling is a concern, Autoscale IOPS can dynamically handle workload spikes, minimizing the risk of throttling and keeping performance at an optimal level.
+
+Ultimately, the decision between Autoscale and Pre-provisioned IOPS depends on your specific workload requirements and performance expectations. Analyze your workload patterns, such as traffic fluctuations and query patterns, evaluate the cost implications, and consider the potential impact of throttling to make an informed choice that aligns with your priorities.
++
+| **Workload Considerations** | **Pre-Provisioned IOPS** | **Autoscale IOPS** |
+||||
+| Workloads with consistent and predictable I/O patterns | Recommended, as it utilizes only the provisioned IOPS | Compatible; no manual provisioning of IOPS required |
+| Workloads with varying usage patterns | Not recommended, as it may not provide efficient performance during high-usage periods | Recommended, as it automatically adjusts to handle varying workloads |
+| Workloads with dynamic growth or changing performance needs | Not recommended, as it requires constant adjustments as IOPS requirements change | Recommended, as no extra configuration is required for specific throughput requirements |
+
+### Cost considerations
+If you have a fluctuating workload with unpredictable peaks, opting for Autoscale IOPS may be more cost-effective. It ensures that you only pay for the higher IOPS used during peak periods, offering flexibility and cost savings. Pre-provisioned IOPS, while providing consistent, maximum IOPS, may come at a higher cost depending on the workload. Consider the trade-off between cost and the performance required from your server.
+
+### Test and Evaluate
+If unsure about the optimal IOPS setting, consider running performance tests using both Autoscale IOPS and Pre-provisioned IOPS. Assess the results and determine which setting meets your workload requirements and performance expectations.
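As a rough starting point before running full performance tests, you can sanity-check exported metric samples against a candidate pre-provisioned cap. The sketch below is illustrative only; the sample values and the 1000-IOPS cap are made up.

```python
# Hypothetical IOPS samples (for example, one per minute) exported from the Metrics blade.
iops_samples = [320, 410, 950, 1210, 870, 640, 1580, 1490, 700, 380]
provisioned_iops = 1000  # the cap you would buy with pre-provisioned IOPS

throttled = [s for s in iops_samples if s > provisioned_iops]
throttled_pct = 100 * len(throttled) / len(iops_samples)

print(f"Peak demand: {max(iops_samples)} IOPS")
print(f"Samples above the provisioned cap: {throttled_pct:.0f}%")
# A high percentage suggests Autoscale IOPS (or a larger provisioned value);
# a low percentage suggests pre-provisioned IOPS is likely sufficient.
```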
+
+**Example workloads: E-commerce websites**
+
+Suppose you own an e-commerce website that experiences fluctuations in traffic throughout the year. During normal periods, the workload is moderate, but during holiday seasons or special promotions, the traffic surges exponentially.
+
+Autoscale IOPS: With Autoscale IOPS, your database can dynamically adjust its IOPS to handle the increased workload during peak periods. When traffic spikes, such as during Black Friday sales, the auto scale feature allows your database to seamlessly scale up the IOPS to meet the demand. This ensures smooth and uninterrupted performance, preventing slowdowns or service disruptions. After the peak period, when the traffic subsides, the IOPS scale back down, allowing for cost savings as you only pay for the resources utilized during the surge.
+
+Pre-provisioned IOPS: If you opt for pre-provisioned IOPS, you need to estimate the maximum workload capacity and allocate a fixed number of IOPS accordingly. However, during peak periods, the workload might exceed the predetermined IOPS limit. As a result, the storage I/O could throttle, impacting performance and potentially causing delays or timeouts for your users.
+
+**Example workloads: Reporting /Data Analytics Platforms**
+
+Suppose you have Azure Database for MySQL Flexible Server used for data analytics where users submit complex queries and large-scale data processing tasks.
+The workload pattern is relatively consistent, with a steady flow of queries throughout the day.
+
+Pre-provisioned IOPS: With pre-provisioned IOPS, you can select a suitable number of IOPS based on the expected workload. As long as the chosen IOPS adequately handle the daily query volume, there's no risk of throttling or performance degradation. This approach provides cost predictability and allows you to optimize resources efficiently without the need for dynamic scaling.
+
+Autoscale IOPS: The Autoscale feature might not provide significant advantages in this case. Since the workload is consistent, the database can be provisioned with a fixed number of IOPS that comfortably meets the demand. Autoscaling might not be necessary as there are no sudden bursts of activity that require additional IOPS. By using Pre-provisioned IOPS, you have predictable performance without the need for scaling, and the cost is directly tied to the allocated storage.
++
+## Frequently Asked Questions
+
+#### How do I move from pre-provisioned IOPS to Autoscale IOPS?
+- Access the Azure portal and locate the relevant Azure Database for MySQL Flexible Server.
+- Go to the Settings blade and choose the Compute + Storage section.
+- Within the IOPS section, opt for Autoscale IOPS and save the settings to apply the modifications.
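If you manage servers programmatically, the same change can likely be made with the Python management SDK. The sketch below is an unverified example: it assumes a recent `azure-mgmt-rdbms` version whose `Storage` model exposes an `auto_io_scaling` property (mirroring the portal's Autoscale IOPS toggle). Confirm the property name against your installed SDK before relying on it.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.mysql_flexibleservers import MySQLManagementClient
from azure.mgmt.rdbms.mysql_flexibleservers.models import ServerForUpdate, Storage

# All names are placeholders; auto_io_scaling is assumed to exist in recent
# SDK/API versions. Verify against your installed package before use.
client = MySQLManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.servers.begin_update(
    "<resource-group>",
    "<server-name>",
    ServerForUpdate(storage=Storage(auto_io_scaling="Enabled")),
)
poller.result()  # wait for the update to complete
```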
+
+#### How soon does Autoscale IOPS take effect after making the change?
+Once you enable Autoscale IOPS for your Azure Database for MySQL Flexible Server and save the settings, the change takes effect as soon as the deployment to the resource completes successfully. This means that the Autoscale IOPS feature is applied to your database without any delay.
+
+#### How do I know when IOPS have scaled up or down while the server is using the Autoscale IOPS feature? Can I monitor IOPS usage for my server?
+Refer to the ["Monitor Storage performance"](#monitor-storage-performance) section, which helps you identify whether your server scaled up or down during a specific time window.
+
+#### Can I switch between Autoscale IOPS and pre-provisioned IOPS later?
+Yes, you can move back to pre-provisioned IOPS by opting for pre-provisioned IOPS in the Compute + Storage section of the Settings blade.
+
+#### How do I know how many IOPS have been utilized by my Azure Database for MySQL Flexible Server?
+By navigating to Monitoring under the Overview section, or by navigating to the [IO count metric](./concepts-monitoring.md#list-of-metrics) under the Monitoring blade. The IO count metric gives the sum of IOPS used by the server in the selected timeframe.
+++
+## Next steps
+- Learn more about [service limitations](./concepts-limitations.md).
+- Learn more about [pricing](./concepts-service-tiers-storage.md#pricing).
+++
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE] > This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## July 2023
+
+- **Autoscale IOPS in Azure Database for MySQL - Flexible Server (General Availability)**
+
+You can now scale IOPS on demand without having to pre-provision a certain amount of IOPS. With this feature, you can enjoy worry-free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPS up or down automatically depending on workload needs. You pay only for the IO you use and no longer need to provision and pay for resources you aren't fully using, saving both time and money. Autoscale IOPS eliminates the administration required to provide the best performance for Azure Database for MySQL customers at the least cost. [Learn more](./concepts-service-tiers-storage.md#autoscale-iops)
## June 2023
nat-gateway Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/resource-health.md
The health of your NAT gateway resource is displayed as one of the following sta
| Resource health status | Description | ||| | Available | Your NAT gateway resource is healthy and available. |
-| Degraded | Your NAT gateway resource has platform or user initiated events impacting the health of your NAT gateway. The metric for the data-path availability has reported less than 80% but greater than 25% health for the last fifteen minutes. You'll experience moderate to severe performance impact. |
+| Degraded | Your NAT gateway resource has platform or user initiated events impacting the health of your NAT gateway. The metric for the data-path availability has reported less than 80% but greater than 25% health for the last 15 minutes. You'll experience moderate to severe performance impact. |
| Unavailable | Your NAT gateway resource isn't healthy. The metric for the data-path availability has reported less than 25% for the past 15 minutes. You'll experience significant performance impact or unavailability of your NAT gateway resource for outbound connectivity. There may be user or platform events causing unavailability. | | Unknown | Health status for your NAT gateway resource hasn't been updated or hasn't received information for data-path availability for more than 5 minutes. This state should be transient and will reflect the correct status as soon as data is received. |
To view the health of your NAT gateway resource:
3. Select the **+ Add resource health alert** at the top of the page to set up an alert for a specific health status of your NAT gateway resource.
+## Resource health alerts
+
+Azure Resource Health alerts can notify you in near real-time when the health state of your NAT gateway resource changes. It's recommended that you set resource health alerts to notify you when your NAT gateway resource is in a **Degraded** or **Unavailable** state.
+
+When you create Azure resource health alerts for NAT gateway, Azure sends resource health notifications to your Azure subscription. You can create and customize alerts based on:
+* The subscription affected
+* The resource group affected
+* The resource type affected (Microsoft.Network/NATGateways)
+* The specific resource (any NAT gateway resource you choose to set up an alert for)
+* The event status of the NAT gateway resource affected
+* The current status of the NAT gateway resource affected
+* The previous status of the NAT gateway resource affected
+* The reason type of the NAT gateway resource affected
+
+You can also configure who the alert should be sent to:
+* A new action group (that can be used for future alerts)
+* An existing action group
+
+For more information on how to set up these resource health alerts, see:
+* [Resource health alerts using Azure portal](/azure/service-health/resource-health-alert-monitor-guide#create-a-resource-health-alert-rule-in-the-azure-portal)
+* [Resource health alerts using Resource Manager templates](/azure/service-health/resource-health-alert-arm-template-guide)
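As a rough programmatic alternative to the portal and template steps linked above, the sketch below uses the `azure-mgmt-monitor` package to create an activity log alert scoped to the ResourceHealth category for NAT gateways. The payload is passed as a plain dictionary whose field names mirror the SDK models; treat the exact shape as an assumption to verify against your SDK version, and replace all placeholder IDs.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

action_group_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/microsoft.insights/actionGroups/<action-group-name>"
)

# Resource health alerts are activity log alerts whose condition matches the
# "ResourceHealth" category; here the alert is also limited to NAT gateways.
alert = {
    "location": "Global",
    "scopes": [f"/subscriptions/{subscription_id}"],
    "enabled": True,
    "condition": {
        "all_of": [
            {"field": "category", "equals": "ResourceHealth"},
            {"field": "resourceType", "equals": "Microsoft.Network/NATGateways"},
        ]
    },
    "actions": {"action_groups": [{"action_group_id": action_group_id}]},
    "description": "Notify when a NAT gateway reports a health state change.",
}

client.activity_log_alerts.create_or_update(
    "<resource-group>", "natgw-resource-health-alert", alert
)
```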
+ ## Next steps - Learn about [Azure NAT Gateway](./nat-overview.md)
network-watcher Network Watcher Nsg Flow Logging Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-portal.md
description: Learn how to log network traffic flow to and from a virtual machine
Previously updated : 05/31/2023 Last updated : 07/24/2023 # Customer intent: I need to log the network traffic to and from a virtual machine (VM) so I can analyze it for anomalies.
This tutorial helps you use NSG flow logs to log a virtual machine's network tra
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a virtual network and a Bastion host
+> * Create a virtual network
> * Create a virtual machine with a network security group associated to its network interface > * Register Microsoft.insights provider
-> * Enable flow logging for a network security group using Network Watcher NSG flow logs
+> * Enable flow logging for a network security group using Network Watcher flow logs
> * Download logged data > * View logged data
In this tutorial, you learn how to:
Sign in to the [Azure portal](https://portal.azure.com).
-## Create a virtual network and a Bastion host
+## Create a virtual network
-In this section, you create **myVNet** virtual network with two subnets and an Azure Bastion host. The first subnet is used for the virtual machine, and the second subnet is used for the Bastion host.
+In this section, you create **myVNet** virtual network with one subnet for the virtual machine.
-1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** from the search results.
:::image type="content" source="./media/network-watcher-nsg-flow-logging-portal/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal.":::
In this section, you create **myVNet** virtual network with two subnets and an A
| Resource Group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. | | **Instance details** | | | Name | Enter *myVNet*. |
- | Region | Select **East US**. |
-
-1. Select the **Security** tab, or select the **Next** button at the bottom of the page.
-
-1. Under **Azure Bastion**, select **Enable Azure Bastion** and accept the default values:
-
- | Setting | Value |
- | | |
- | Azure Bastion host name | **myVNet-Bastion**. |
- | Azure Bastion public IP Address | **(New) myVNet-bastion-publicIpAddress**. |
-
-1. Select the **IP Addresses** tab, or select **Next** button at the bottom of the page.
-
-1. Accept the default IP address space **10.0.0.0/16** and rename the **default** subnet by selecting the pencil icon next to it. In the **Edit subnet** page, enter the subnet name:
-
- | Setting | Value |
- | | |
- | **Subnet details** | |
- | Name | Enter *mySubnet*. |
+ | Region | Select **(US) East US**. |
1. Select **Review + create**.
In this section, you create **myVNet** virtual network with two subnets and an A
In this section, you create **myVM** virtual machine.
-1. In the search box at the top of the portal, enter *virtual machines*. Select **Virtual machines** in the search results.
+1. In the search box at the top of the portal, enter *virtual machines*. Select **Virtual machines** from the search results.
-2. Select **+ Create** and then select **Azure virtual machine**.
+1. Select **+ Create** and then select **Azure virtual machine**.
-3. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
+1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
| Setting | Value | | | |
In this section, you create **myVM** virtual machine.
| Password | Enter a password. | | Confirm password | Reenter password. |
-4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-5. In the Networking tab, select the following values:
+1. In the Networking tab, select the following values:
| Setting | Value | | | | | **Network interface** | | | Virtual network | Select **myVNet**. | | Subnet | Select **mySubnet**. |
- | Public IP | Select **None**. |
+ | Public IP | Select **(new) myVM-ip**. |
| NIC network security group | Select **Basic**. This setting creates a network security group named **myVM-nsg** and associates it with the network interface of **myVM** virtual machine. |
- | Public inbound ports | Select **None**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **RDP (3389)**. |
-6. Select **Review + create**.
+ > [!CAUTION]
+ > Leaving the RDP port open to the internet is only recommended for testing. For production environments, it's recommended to restrict access to the RDP port to a specific IP address or range of IP addresses. You can also block internet access to the RDP port and use [Azure Bastion](../bastion/bastion-overview.md) to securely connect to your virtual machine from the Azure portal.
-7. Review the settings, and then select **Create**.
+1. Select **Review + create**.
+
+1. Review the settings, and then select **Create**.
-8. Once the deployment is complete, select **Go to resource** to go to the **Overview** page of **myVM**.
+1. Once the deployment is complete, select **Go to resource** to go to the **Overview** page of **myVM**.
-9. Select **Connect** then select **Bastion**.
+1. Select **Connect** then select **RDP**.
-10. Enter the username and password that you created in the previous steps. Leave **Open in new browser tab** checked.
+1. Select **Download RDP File** and open the downloaded file.
-11. Select **Connect** button.
+1. Select **Connect** and then enter the username and password that you created in the previous steps. Accept the certificate if prompted.
## Register Insights provider
In this section, you create a storage account to use it to store the flow logs.
| Storage account name | Enter a unique name. This tutorial uses **mynwstorageaccount**. | | Region | Select **(US) East US**. The storage account must be in the same region as the virtual machine and its network security group. | | Performance | Select **Standard**. NSG flow logs only support Standard-tier storage accounts. |
- | Redundancy | Select **Locally-redundant storage (LRS)**. |
+ | Redundancy | Select **Locally-redundant storage (LRS)** or a different replication strategy that matches your durability requirements. |
1. Select the **Review** tab or select the **Review** button at the bottom.
In this section, you create an NSG flow log that's saved into the storage accoun
:::image type="content" source="./media/network-watcher-nsg-flow-logging-portal/flow-logs-list.png" alt-text="Screenshot of Flow logs page in the Azure portal showing the newly created flow log." lightbox="./media/network-watcher-nsg-flow-logging-portal/flow-logs-list.png":::
-1. Go back to your browser tab of **myVM** virtual machine.
+1. Go back to your RDP session with **myVM** virtual machine.
-1. In **myVM**, open Microsoft Edge and go to `www.bing.com`.
+1. Open Microsoft Edge and go to `www.bing.com`.
## Download the flow log
In this section, you go to the storage account you previously selected and downl
5. In the container, navigate the folder hierarchy until you get to the `PT1H.json` file. NSG log files are written to a folder hierarchy that follows the following naming convention:
- **https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{networSecurityGroupName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json**
+ ```
+ https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{networkSecurityGroupName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+ ```
6. Select the ellipsis **...** to the right of the PT1H.json file, then select **Download**.
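Once the `PT1H.json` file is downloaded, you can inspect it with a short script. The following minimal Python sketch assumes the file sits in the current directory and walks the standard NSG flow log structure (`records` → `properties` → `flows` → `flowTuples`).

```python
import json

# Parse a downloaded PT1H.json NSG flow log file and print its flow tuples.
with open("PT1H.json") as f:
    log = json.load(f)

for record in log["records"]:
    for rule in record["properties"]["flows"]:
        for flow_group in rule["flows"]:
            for tuple_str in flow_group["flowTuples"]:
                fields = tuple_str.split(",")
                # First eight fields: unix timestamp, source IP, destination IP,
                # source port, destination port, protocol, direction, decision.
                print(rule["rule"], fields[:8])
```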
The comma-separated information for **flowTuples** is as follows:
When no longer needed, delete **myResourceGroup** resource group and all of the resources it contains and **myVM-nsg-myResourceGroup-flowlog** flow log:
-**Delete the flow log**:
+**Delete the resource group**:
-1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results.
-1. Under **Logs**, select **Flow logs**.
+1. Select **Delete resource group**.
-1. In **Network Watcher | Flow logs**, select the checkbox of the flow log.
+1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**.
-1. Select **Delete**.
+1. Select **Delete** to confirm the deletion of the resource group and all its resources.
-**Delete the resource group**:
+**Delete the flow log**:
-1. In the search box at the top of the portal, enter *myResourceGroup*. When you see **myResourceGroup** in the search results, select it.
+1. In the search box at the top of the portal, enter ***network watcher***. Select **Network Watcher** from the search results.
-1. Select **Delete resource group**.
+1. Under **Logs**, select **Flow logs**.
-1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+1. In **Network Watcher | Flow logs**, select the checkbox of the flow log.
+
+1. Select **Delete**.
## Next steps
private-5g-core Delete Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/delete-resources.md
+
+ Title: Delete Azure Private 5G Core resources
+
+description: In this how-to guide, you'll learn how to delete all Azure Private 5G Core resources.
++++ Last updated : 07/07/2023+++
+# Delete Azure Private 5G Core resources
+
+In this how-to guide, you'll learn how to delete all resources associated with Azure Private 5G Core (AP5GC). This includes Azure Stack Edge (ASE) resources that are required to deploy AP5GC. You should do this only when advised by your Microsoft support representative; for example, if your deployment has encountered an unrecoverable error.
+
+If you want to delete your entire AP5GC deployment, you must complete all sections of this guide in order or you may be left with resources that cannot be deleted without intervention from Microsoft. You can also follow one or more sections to delete a subset of the resources in your deployment.
+
+If you want to move resources instead, see [Move your private mobile network resources to a different region](region-move-private-mobile-network-resources.md).
+
+> [!CAUTION]
+> This procedure will destroy your AP5GC deployment. You will lose all data that isn't backed up. Do not delete resources that are in use.
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+- Make a note of the resource group that contains your private mobile network, which was collected in [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md).
+- Make a note of the resource group that contains your Azure Stack Edge and custom location resources.
+
+## Back up deployment information
+
+All data will be lost when deleting your deployment. Back up any information you'd like to preserve. You can use this information to help set up a new deployment.
+
+1. Refer to [Collect the required information for your SIMs](provision-sims-azure-portal.md#collect-the-required-information-for-your-sims) to take a backup of all the information you'll need to recreate your SIMs.
+1. Depending on your authentication method when signing in to the [distributed tracing](distributed-tracing.md) and [packet core dashboards](packet-core-dashboards.md):
+
+ - If you use Azure AD, save a copy of the Kubernetes Secret Object YAML file you created in [Create Kubernetes Secret Objects](enable-azure-active-directory.md#create-kubernetes-secret-objects).
+ - If you use local usernames and passwords and want to keep using the same credentials, save a copy of the current passwords to a secure location.
+
+1. If you want to retain any traces, [export and save](distributed-tracing-share-traces.md#export-trace-from-the-distributed-tracing-web-gui) them securely before continuing.
+1. Refer to [Exporting a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#exporting-a-dashboard) in the Grafana documentation to save a backed-up copy of your dashboards.
+
+## Delete private mobile network resources
+
+The private mobile network resources represent your private 5G core network. If you followed the recommendations in this documentation when creating your resources, you should have a single resource group containing all private mobile network resources. You must ensure that you do not delete any unrelated resources.
+
+> [!IMPORTANT]
+> Deleting this resource group will delete the resources for all sites in your deployment. If you only want to delete a single site, see [Delete sites using the Azure portal](delete-a-site.md). You can then return to this procedure to delete the custom location, delete the AKS cluster, and reset ASE if required.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select the resource group containing the private mobile network resources.
+1. Select **Delete resource group**. You will be prompted to enter the resource group name to confirm deletion.
+1. Select **Yes** when prompted to delete the resource group.
+
+## Delete the custom location
+
+The custom location resource represents the physical location of the hardware that runs the packet core software.
+
+1. Navigate to the resource group containing the **Custom location** resource.
+1. Select the tick box for the **Custom location** resource and select **Delete**.
+1. Confirm the deletion.
+
+If you are deleting multiple sites, repeat this step for each site.
+
+## Delete the AKS cluster
+
+The Azure Kubernetes Service (AKS) cluster is an orchestration layer used to manage the packet core software components. To delete the Azure Kubernetes Service (AKS) connected cluster, follow [Remove the Azure Kubernetes service](https://learn.microsoft.com/azure/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge#remove-the-azure-kubernetes-service).
+
+If you are deleting multiple sites, repeat this step for each site.
+
+## Reset ASE
+
+ Azure Stack Edge (ASE) hardware runs the packet core software at the network edge. To reset your ASE device, follow [Reset and reactivate your Azure Stack Edge device](https://learn.microsoft.com/azure/databox-online/azure-stack-edge-reset-reactivate-device).
+
+If you are deleting multiple sites, repeat this step for each site.
+
+## Next steps
+
+To create a new AP5GC deployment, refer to [Commission the AKS cluster](commission-cluster.md) and [Deploy a private mobile network through Azure Private 5G Core - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md).
+
+Once you have created a new deployment, complete the following steps to restore the data you backed up in [Back up deployment information](#back-up-deployment-information).
+
+1. Retrieve your backed-up SIM information and recreate your SIMs by following one of:
+
+ - [Provision new SIMs for Azure Private 5G Core - Azure portal](provision-sims-azure-portal.md)
+ - [Provision new SIMs for Azure Private 5G Core - ARM template](provision-sims-arm-template.md)
+
+1. Depending on your authentication method when signing in to the [distributed tracing](distributed-tracing.md) and [packet core dashboards](packet-core-dashboards.md):
+
+ - If you use Azure AD, [reapply the Secret Object for distributed tracing and the packet core dashboards](enable-azure-active-directory.md#apply-kubernetes-secret-objects).
+ - If you use local usernames and passwords, follow [Access the distributed tracing web GUI](distributed-tracing.md#access-the-distributed-tracing-web-gui) and [Access the packet core dashboards](packet-core-dashboards.md#access-the-packet-core-dashboards) to restore access to your local monitoring tools.
+
+1. If you backed up any packet core dashboards, follow [Importing a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#importing-a-dashboard) in the Grafana documentation to restore them.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Relay (Microsoft.Relay/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net | | Azure Event Grid (Microsoft.EventGrid/topics) / topic | privatelink.eventgrid.azure.net | eventgrid.azure.net | | Azure Event Grid (Microsoft.EventGrid/domains) / domain | privatelink.eventgrid.azure.net | eventgrid.azure.net |
-| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net </br> scm.privatelink.azurewebsites.net | azurewebsites.net </br> scm.azurewebsites.net |
+| Azure Web Apps - Azure Function Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net </br> scm.privatelink.azurewebsites.net | azurewebsites.net </br> scm.azurewebsites.net |
| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net<br/>inference.ml.azure.com | | SignalR (Microsoft.SignalRService/SignalR) / signalR | privatelink.service.signalr.net | service.signalr.net | | Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net |
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Storage: Blob Storage](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Storage Mover](reliability-azure-storage-mover.md)| [Azure Virtual Machine Scale Sets](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Virtual Machines](../virtual-machines/reliability-virtual-machines.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Virtual Machines](reliability-virtual-machines.md)|
[Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
+
+ Title: Reliability in Azure Virtual Machines
+description: Find out about reliability in Azure Virtual Machines
+++++ Last updated : 07/18/2023++
+# Reliability in Virtual Machines
+
+This article contains [specific reliability recommendations for Virtual Machines](#reliability-recommendations), as well as detailed information on VM regional resiliency with [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover).
+
+For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
++
+## Reliability recommendations
+
+
+### Reliability recommendations summary
+
+| Category | Priority |Recommendation |
+||--||
+| [**High Availability**](#high-availability) |:::image type="icon" source="media/icon-recommendation-high.svg":::| [VM-1: Run production workloads on two or more VMs using Azure Virtual Machine Scale Sets (VMSS) Flex](#-vm-1-run-production-workloads-on-two-or-more-vms-using-vmss-flex) |
+||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[VM-2: Deploy VMs across availability zones or use VMSS Flex with zones](#-vm-2-deploy-vms-across-availability-zones-or-use-vmss-flex-with-zones) |
+||:::image type="icon" source="media/icon-recommendation-high.svg":::|[VM-3: Migrate VMs using availability sets to VMSS Flex](#-vm-3-migrate-vms-using-availability-sets-to-vmss-flex) |
+||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[VM-5: Use managed disks for VM disks](#-vm-5-use-managed-disks-for-vm-disks)|
+|[**Disaster Recovery**](#disaster-recovery)| :::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-4: Replicate VMs using Azure Site Recovery](#-vm-4-replicate-vms-using-azure-site-recovery) |
+||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-7: Backup data on your VMs with Azure Backup service](#-vm-7-backup-data-on-your-vms-with-azure-backup-service) |
+|[**Performance**](#performance) |:::image type="icon" source="media/icon-recommendation-low.svg"::: | [VM-6: Host application and database data on a data disk](#-vm-6-host-application-and-database-data-on-a-data-disk)|
+||:::image type="icon" source="media/icon-recommendation-high.svg"::: | [VM-8: Production VMs should be using SSD disks](#-vm-8-production-vms-should-be-using-ssd-disks)|
+||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-10: Enable Accelerated Networking (AccelNet)](#-vm-10-enable-accelerated-networking-accelnet) |
+||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-11: When Accelerated Networking is enabled, update the GuestOS NIC driver every 6 months](#-vm-11-when-accelnet-is-enabled-you-must-manually-update-the-guestos-nic-driver) |
+|[**Management**](#management)|:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-9: Watch for VMs in Stopped state](#-vm-9-review-vms-in-stopped-state) |
+||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[VM-22: Use maintenance configurations for the VM](#-vm-22-use-maintenance-configurations-for-the-vm) |
+|[**Security**](#security)|:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-12: VMs should not have a Public IP directly associated](#-vm-12-vms-should-not-have-a-public-ip-directly-associated) |
+||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-13: Virtual Network Interfaces have an NSG associated](#-vm-13-vm-network-interfaces-have-a-network-security-group-nsg-associated) |
+||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-14: IP Forwarding should only be enabled for Network Virtual Appliances](#-vm-14-ip-forwarding-should-only-be-enabled-for-network-virtual-appliances) |
+||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-17: Network access to the VM disk should be set to "Disable public access and enable private access"](#-vm-17-network-access-to-the-vm-disk-should-be-set-to-disable-public-access-and-enable-private-access) |
+||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-19: Enable disk encryption and data at rest encryption by default](#-vm-19-enable-disk-encryption-and-data-at-rest-encryption-by-default) |
+|[**Networking**](#networking) | :::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-15: Customer DNS Servers should be configured in the Virtual Network level](#-vm-15-dns-servers-should-be-configured-in-the-virtual-network-level) |
+|[**Storage**](#storage) |:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[VM-16: Shared disks should only be enabled in clustered servers](#-vm-16-shared-disks-should-only-be-enabled-in-clustered-servers) |
+|[**Compliance**](#compliance)| :::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-18: Ensure that your VMs are compliant with Azure Policies](#-vm-18-ensure-that-your-vms-are-compliant-with-azure-policies) |
+|[**Monitoring**](#monitoring)| :::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-20: Enable VM Insights](#-vm-20-enable-vm-insights) |
+||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[VM-21: Configure diagnostic settings for all Azure resources](#-vm-21-configure-diagnostic-settings-for-all-azure-resources) |
++
+### High availability
+
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-1: Run production workloads on two or more VMs using VMSS Flex**
+
+To safeguard application workloads from downtime due to the temporary unavailability of a disk or VM, it's recommended that you run production workloads on two or more VMs using VMSS Flex.
+
+To achieve this you can use:
+
+- [Azure Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/overview) to create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
+- **Availability zones**. For more information on availability zones and VMs, see [Availability zone support](#availability-zone-support).
++
+# [Azure Resource Graph](#tab/graph)
++
+-
+
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-2: Deploy VMs across availability zones or use VMSS Flex with zones**
+
+When you create your VMs, use availability zones to protect your applications and data against unlikely datacenter failure. For more information about availability zones for VMs, see [Availability zone support](#availability-zone-support) in this document.
+
+For information on how to enable availability zones support when you create your VM, see [create availability zone support](#create-a-resource-with-availability-zone-enabled).
+
+For information on how to migrate your existing VMs to availability zone support, see [Availability zone support redeployment and migration](#availability-zone-redeployment-and-migration).
++
+# [Azure Resource Graph](#tab/graph)
++
+-
++
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-3: Migrate VMs using availability sets to VMSS Flex**
+
+Availability sets will be retired in the near future. Modernize your workloads by migrating them from VMs to VMSS Flex.
+
+With VMSS Flex, you can deploy your VMs in one of two ways:
+
+- Across zones
+- In the same zone, but across fault domains (FDs) and update domains (UD) automatically.
+
+In an N-tier application, it's recommended that you place each application tier into its own VMSS Flex.
+
+# [Azure Resource Graph](#tab/graph)
++
+-
++
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-5: Use managed disks for VM disks**
+
+To provide better reliability for VMs in an availability set, use managed disks. Managed disks are sufficiently isolated from each other to avoid single points of failure. Also, managed disks aren't subject to the IOPS limits of VHDs created in a storage account.
++
+# [Azure Resource Graph](#tab/graph)
++++
+### Disaster recovery
+
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-4: Replicate VMs using Azure Site Recovery**
+When you replicate Azure VMs using Site Recovery, all the VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes. This gives you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication.
+
+To learn how to run a disaster recovery drill, see [Run a test failover](/azure/site-recovery/site-recovery-test-failover-to-azure).
++
+# [Azure Resource Graph](#tab/graph)
++++
+#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-7: Backup data on your VMs with Azure Backup service**
+
+The Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud. For more information, see [What is the Azure Backup Service](/azure/backup/backup-overview).
+
+# [Azure Resource Graph](#tab/graph)
++++
+### Performance
+
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-6: Host application and database data on a data disk**
+
+A data disk is a managed disk that's attached to a VM. Use the data disk to store application data, or other data you need to keep. Data disks are registered as SCSI drives and are labeled with a letter that you choose. Hosting your data on a data disk makes it easy to back up or restore your data. You can also migrate the disk without having to move the entire VM and operating system. Also, you'll be able to select a different disk SKU, with a different type, size, and performance that meets your requirements. For more information on data disks, see [Data Disks](/azure/virtual-machines/managed-disks-overview#data-disk).
+
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-8: Production VMs should be using SSD disks**
++
+Premium SSD disks offer high-performance, low-latency disk support for I/O-intensive applications and production workloads. Standard SSD Disks are a cost-effective storage option optimized for workloads that need consistent performance at lower IOPS levels.
+
+It is recommended that you:
+
+- Use Standard HDD disks for Dev/Test scenarios and less critical workloads at lowest cost.
+- Use Premium SSD disks instead of Standard HDD disks with your premium-capable VMs. For any Single Instance VM using premium storage for all Operating System Disks and Data Disks, Azure guarantees VM connectivity of at least 99.9%.
+
+If you want to upgrade from Standard HDD to Premium SSD disks, consider the following issues:
+
+- Upgrading requires a VM reboot and this process takes 3-5 minutes to complete.
+- If VMs are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
+
+
+For more information on Azure managed disks and disks types, see [Azure managed disk types](/azure/virtual-machines/disks-types#premium-ssd).
+++
+# [Azure Resource Graph](#tab/graph)
++++
+#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-10: Enable Accelerated Networking (AccelNet)**
+
+AccelNet enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types.
+
+For more information on Accelerated Networking, see [Accelerated Networking](/azure/virtual-network/accelerated-networking-overview?tabs=redhat).
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-11: When AccelNet is enabled, you must manually update the GuestOS NIC driver**
+
+When AccelNet is enabled, the default Azure Virtual Network interface in the GuestOS is replaced with a Mellanox interface. As a result, the GuestOS NIC driver is provided by Mellanox, a third-party vendor. Although Marketplace images maintained by Microsoft are offered with the latest version of the Mellanox drivers, once the VM is deployed, you'll need to manually update the GuestOS NIC driver every six months.
+
+# [Azure Resource Graph](#tab/graph)
++++
+### Management
+
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-9: Review VMs in stopped state**
+VM instances go through different states, including provisioning and power states. If a VM is in a stopped state, the VM may be facing an issue or is no longer necessary and could be removed to help reduce costs.
+
+# [Azure Resource Graph](#tab/graph)
++++
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **VM-22: Use maintenance configurations for the VM**
+
+To ensure that VM updates/interruptions are done in a planned time frame, use maintenance configuration settings to schedule and manage updates. For more information on managing VM updates with maintenance configurations, see [Managing VM updates with Maintenance Configurations](../virtual-machines/maintenance-configurations.md).
++
+# [Azure Resource Graph](#tab/graph)
++++
+### Security
+
+#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-12: VMs should not have a Public IP directly associated**
+
+If a VM requires outbound internet connectivity, it's recommended that you use NAT Gateway or Azure Firewall. NAT Gateway and Azure Firewall help to increase the security and resiliency of the service, since both services offer much higher availability and more [Source Network Address Translation (SNAT)](/azure/load-balancer/load-balancer-outbound-connections) ports. For inbound internet connectivity, it's recommended that you use a load balancing solution such as Azure Load Balancer or Application Gateway.
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-13: VM network interfaces have a Network Security Group (NSG) associated**
+
+It's recommended that you associate an NSG with a subnet, or a network interface, but not both. Since rules in an NSG associated with a subnet can conflict with rules in an NSG associated with a network interface, you can have unexpected communication problems that require troubleshooting. For more information, see [Intra-Subnet traffic](/azure/virtual-network/network-security-group-how-it-works#intra-subnet-traffic).
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-14: IP forwarding should only be enabled for network virtual appliances**
+
+IP forwarding enables the virtual machine network interface to:
+
+- Receive network traffic not destined for one of the IP addresses assigned to any of the IP configurations assigned to the network interface.
+
+- Send network traffic with a different source IP address than the one assigned to one of a network interface's IP configurations.
+
+The IP forwarding setting must be enabled for every network interface that's attached to the VM receiving traffic to be forwarded. A VM can forward traffic whether it has multiple network interfaces, or a single network interface attached to it. While IP forwarding is an Azure setting, the VM must also run an application that's able to forward the traffic, such as firewall, WAN optimization, and load balancing applications.
+
+To learn how to enable or disable IP forwarding, see [Enable or disable IP forwarding](/azure/virtual-network/virtual-network-network-interface?tabs=azure-portal#enable-or-disable-ip-forwarding).
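For reference, a minimal Python sketch using `azure-mgmt-network` to flip this setting on an existing NIC attached to a network virtual appliance might look like the following; the resource names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Enable IP forwarding on an existing NIC attached to a network virtual appliance.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

nic = client.network_interfaces.get("<resource-group>", "<nva-nic-name>")
nic.enable_ip_forwarding = True

poller = client.network_interfaces.begin_create_or_update(
    "<resource-group>", "<nva-nic-name>", nic
)
poller.result()  # wait for the NIC update to complete
```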
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-17: Network access to the VM disk should be set to "Disable public access and enable private access"**
+
+It's recommended that you set VM disk network access to "Disable public access and enable private access" and create a private endpoint. To learn how to create a private endpoint, see [Create a private endpoint](/azure/virtual-machines/disks-enable-private-links-for-import-export-portal#create-a-private-endpoint).
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-19: Enable disk encryption and data at rest encryption by default**
+
+There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE) and encryption at host.
+
+- Azure Disk Encryption helps protect and safeguard your data to meet your organizational security and compliance commitments.
+- Azure Disk Storage Server-Side Encryption (also referred to as encryption-at-rest or Azure Storage encryption) automatically encrypts data stored on Azure managed disks (OS and data disks) when persisting on the Storage Clusters.
+- Encryption at host ensures that data stored on the VM host hosting your VM is encrypted at rest and flows encrypted to the Storage clusters.
+- Confidential disk encryption binds disk encryption keys to the VM's TPM and makes the protected disk content accessible only to the VM.
+
+For more information about managed disk encryption options, see [Overview of managed disk encryption options](../virtual-machines/disk-encryption-overview.md).
+
+# [Azure Resource Graph](#tab/graph)
++++
+### Networking
+
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-15: DNS Servers should be configured in the Virtual Network level**
+
+Configure the DNS Server in the Virtual Network to avoid name resolution inconsistency across the environment. For more information on name resolution for resources in Azure virtual networks, see [Name resolution for VMs and cloud services](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances?tabs=redhat).
++
+# [Azure Resource Graph](#tab/graph)
+++++
+### Storage
+
+#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **VM-16: Shared disks should only be enabled in clustered servers**
+
+Azure shared disks is a feature for Azure managed disks that enables you to attach a managed disk to multiple VMs simultaneously. Attaching a managed disk to multiple VMs allows you to either deploy new or migrate existing clustered applications to Azure and should only be used in those situations where the disk will be assigned to more than one VM member of a cluster.
+
+To learn more about how to enable shared disks for managed disks, see [Enable shared disk](/azure/virtual-machines/disks-shared-enable?tabs=azure-portal).
++
+# [Azure Resource Graph](#tab/graph)
+++++
+### Compliance
+
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-18: Ensure that your VMs are compliant with Azure Policies**
+
+It's important to keep your virtual machine (VM) secure for the applications that you run. Securing your VMs can include one or more Azure services and features that cover secure access to your VMs and secure storage of your data. For more information on how to keep your VM and applications secure, see [Azure Policy Regulatory Compliance controls for Azure Virtual Machines](/azure/virtual-machines/security-controls-policy).
++
+# [Azure Resource Graph](#tab/graph)
++++
+### Monitoring
+
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-20: Enable VM Insights**
+
+Enable [VM Insights](/azure/azure-monitor/vm/vminsights-overview) to get more visibility into the health and performance of your virtual machine. VM Insights gives you information on the performance and health of your VMs and virtual machine scale sets, by monitoring their running processes and dependencies on other resources. VM Insights can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues. Insights can also help you understand whether an issue is related to other dependencies.
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **VM-21: Configure diagnostic settings for all Azure resources**
+
+Platform metrics are sent automatically to Azure Monitor Metrics by default and without configuration. Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on and are one of the following types:
+
+- **Resource logs** that aren't collected until they're routed to a destination.
+- **Activity logs** that exist on their own but can be routed to other locations.
+
+Each Azure resource requires its own diagnostic setting, which defines the following criteria:
+
+- **Sources**: The type of metric and log data to send to the destinations defined in the setting. The available types vary by resource type.
+- **Destinations**: One or more destinations to send to.
+
+A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), create multiple settings. Each resource can have up to five diagnostic settings.
+
+For more information, see [Diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal).
++
+# [Azure Resource Graph](#tab/graph)
++++++
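As an illustrative example, the following Python sketch uses `azure-mgmt-monitor` to create a diagnostic setting that routes a VM's platform metrics to a Log Analytics workspace. The resource IDs are placeholders, the setting is passed as a plain dictionary, and it assumes the resource type supports exporting its platform metrics.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Placeholder resource IDs for the VM (the source) and the workspace (the destination).
vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

# Route all platform metrics for the VM to the Log Analytics workspace.
client.diagnostic_settings.create_or_update(
    resource_uri=vm_id,
    name="send-to-log-analytics",
    parameters={
        "workspace_id": workspace_id,
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)
```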
+## Availability zone support
++
+Virtual machines support availability zones, with three availability zones per supported Azure region, and are also zone-redundant and zonal. For more information, see [availability zones support](availability-zones-service-support.md). The customer is responsible for configuring and migrating their virtual machines for availability. Refer to the following readiness options for availability zone enablement:
+
+- See [availability options for VMs](../virtual-machines/availability.md)
+- Review [availability zone service and region support](availability-zones-service-support.md)
+- [Migrate existing VMs](migrate-vm.md) to availability zones
+
+
+### Prerequisites
+
+- Your virtual machine SKUs must be available across the zones in your region. To review which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+
+- Your VM SKUs must be available across the zones in your region. To check for VM SKU availability, use one of the following methods:
+
+ - Use PowerShell to [Check VM SKU availability](../virtual-machines/windows/create-PowerShell-availability-zone.md#check-vm-sku-availability).
+ - Use the Azure CLI to [Check VM SKU availability](../virtual-machines/linux/create-cli-availability-zone.md#check-vm-sku-availability).
+ - Go to [Foundational Services](availability-zones-service-support.md#an-icon-that-signifies-this-service-is-foundational-foundational-services).
+
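As an alternative to the PowerShell and CLI checks listed above, a minimal Python sketch with `azure-mgmt-compute` can list which zones offer a given VM size; the region and size below are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# List which availability zones offer a given VM size in a region.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

region, vm_size = "eastus", "Standard_D2s_v5"
for sku in client.resource_skus.list(filter=f"location eq '{region}'"):
    if sku.resource_type == "virtualMachines" and sku.name == vm_size:
        for info in sku.location_info or []:
            print(f"{vm_size} in {info.location}: zones {info.zones}")
```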
+
+### SLA improvements
+
+Because availability zones are physically separate and provide distinct power sources, networking, and cooling, SLAs (service-level agreements) increase. For more information, see the [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
+
+### Create a resource with availability zone enabled
+
+Get started by creating a virtual machine (VM) with availability zone enabled using one of the following deployment options:
+- [Azure CLI](../virtual-machines/linux/create-cli-availability-zone.md)
+- [PowerShell](../virtual-machines/windows/create-powershell-availability-zone.md)
+- [Azure portal](../virtual-machines/create-portal-availability-zone.md)
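For a programmatic example, the following Python sketch creates a zonal VM with `azure-mgmt-compute`. It's a sketch under assumptions: the network interface already exists, the image reference and VM size are placeholders, and the request body is passed as a plain dictionary.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Placeholder ID of a pre-existing network interface.
nic_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/networkInterfaces/<nic-name>"
)

poller = client.virtual_machines.begin_create_or_update(
    "<resource-group>",
    "myZonalVM",
    {
        "location": "eastus",
        "zones": ["1"],  # pin the VM to availability zone 1
        "hardware_profile": {"vm_size": "Standard_D2s_v5"},
        "storage_profile": {
            "image_reference": {
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-server-jammy",
                "sku": "22_04-lts-gen2",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "myZonalVM",
            "admin_username": "azureuser",
            "admin_password": "<a-strong-password>",
        },
        "network_profile": {"network_interfaces": [{"id": nic_id}]},
    },
)
vm = poller.result()
print(vm.name, vm.zones)
```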
+
+### Zonal failover support
+
+Customers can set up virtual machines to failover to another zone using the Site Recovery service. For more information, see [Site Recovery](../site-recovery/site-recovery-overview.md).
+
+### Fault tolerance
+
+Virtual machines can fail over to another server in a cluster, with the VM's operating system restarting on the new server. Customers should refer to the failover process for disaster recovery, gather virtual machines into recovery plans, and run disaster recovery drills to ensure their fault tolerance solution is successful.
+
+For more information, see the [site recovery processes](../site-recovery/site-recovery-failover.md#before-you-start).
++
+### Zone down experience
+
+During a zone-wide outage, you should expect a brief degradation of performance until the virtual machine service self-healing re-balances underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; it's expected that the Microsoft-managed service self-healing state will compensate for a lost zone, leveraging capacity from other zones.
+
+Customers should also prepare for the possibility that there's an outage of an entire region. If there's a service disruption for an entire region, the locally redundant copies of your data would temporarily be unavailable. If geo-replication is enabled, three additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete regional outage or a disaster in which the primary region isn't recoverable, Azure remaps all of the DNS entries to the geo-replicated region.
+
+#### Zone outage preparation and recovery
+
+The following guidance is provided for Azure virtual machines in the case of a service disruption of the entire region where your Azure virtual machine application is deployed:
+
+- Configure [Azure Site Recovery](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance#option-1-initiate-a-failover-by-using-azure-site-recovery) for your VMs
+- Check the [Azure Service Health Dashboard](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance#option-2-wait-for-recovery) status if Azure Site Recovery hasn't been configured
+- Review how the [Azure Backup service](../backup/backup-azure-vms-introduction.md) works for VMs
+ - See the [support matrix](../backup/backup-support-matrix-iaas.md) for Azure VM backups
+- Determine which [VM restore option and scenario](../backup/about-azure-vm-restore.md) will work best for your environment
+
+### Low-latency design
+
+Cross Region (secondary region), Cross Subscription (preview), and Cross Zonal (preview) are available options to consider when designing a low-latency virtual machine solution. For more information on these options, see the [supported restore methods](../backup/backup-support-matrix-iaas.md#supported-restore-methods).
+
+>[!IMPORTANT]
+>By opting out of zone-aware deployment, you forego protection from isolation of underlying faults. Use of SKUs that don't support availability zones or opting out from availability zone configuration forces reliance on resources that don't obey zone placement and separation (including underlying dependencies of these resources). These resources shouldn't be expected to survive zone-down scenarios. Solutions that leverage such resources should define a disaster recovery strategy and configure a recovery of the solution in another region.
+
+### Safe deployment techniques
+
+When you opt for availability zones isolation, you should utilize safe deployment techniques for application code, as well as application upgrades. In addition to configuring Azure Site Recovery, below are recommended safe deployment techniques for VMs:
+
+- [Virtual Machine Scale Sets](/azure/virtual-machines/flexible-virtual-machine-scale-sets)
+- [Azure Load Balancer](../load-balancer/load-balancer-overview.md)
+- [Azure Storage Redundancy](../storage/common/storage-redundancy.md)
+++
+ As Microsoft periodically performs planned maintenance updates, there may be rare instances when these updates require a reboot of your virtual machine to apply the required updates to the underlying infrastructure. To learn more, see [availability considerations](../virtual-machines/maintenance-and-updates.md#availability-considerations-during-scheduled-maintenance) during scheduled maintenance.
+
+Follow the health signals below for monitoring before upgrading your next set of nodes in another zone:
+
+- Check the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) for the virtual machines service status for your expected regions
+- Ensure that [replication](../site-recovery/azure-to-azure-quickstart.md) is enabled on your VMs
++
+### Availability zone redeployment and migration
+
+To migrate existing virtual machine resources to a zone-redundant configuration, refer to the following resources:
+
+- Move a VM to another subscription or resource group
+ - [CLI](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-cli)
+ - [PowerShell](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-powershell)
+- [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines)
+- [Move Azure VMs to availability zones](../site-recovery/move-azure-vms-avset-azone.md)
+- [Move region maintenance configuration resources](../virtual-machines/move-region-maintenance-configuration-resources.md)
+
+## Disaster recovery: cross-region failover
+
+In the case of a region-wide disaster, Azure can provide protection from regional or large-geography disasters by using another region for disaster recovery. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
+
+Customers can use Cross Region to restore Azure VMs via paired regions. You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region. For more details on Cross Region restore, refer to the Cross Region table row entry in our [restore options](../backup/backup-azure-arm-restore-vms.md#restore-options).
++
+### Cross-region disaster recovery in multi-region geography
+
+While Microsoft is working diligently to restore the virtual machine service for region-wide service disruptions, customers will have to rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see the section on [Data strategies for disaster recovery](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan).
+
+#### Outage detection, notification, and management
+
+The hardware or the physical infrastructure for the virtual machine can fail unexpectedly. Failures can include local network failures, local disk failures, or other rack-level failures. When a failure is detected, the Azure platform automatically migrates (heals) your virtual machine to a healthy physical machine in the same data center. During the healing procedure, virtual machines experience downtime (reboot) and in some cases loss of the temporary drive. The attached OS and data disks are always preserved.
+
+For more detailed information on virtual machine service disruptions, see [disaster recovery guidance](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance).
+
+#### Set up disaster recovery and outage detection
+
+When setting up disaster recovery for virtual machines, understand what [Azure Site Recovery provides](../site-recovery/site-recovery-overview.md#what-does-site-recovery-provide). Enable disaster recovery for virtual machines with the below methods:
+
+- Set up disaster recovery to a [secondary Azure region for an Azure VM](../site-recovery/azure-to-azure-quickstart.md)
+- Create a Recovery Services vault
+ - [Bicep](../site-recovery/quickstart-create-vault-bicep.md)
+ - [ARM template](../site-recovery/quickstart-create-vault-template.md)
+- Enable disaster recovery for [Linux virtual machines](../virtual-machines/linux/tutorial-disaster-recovery.md)
+- Enable disaster recovery for [Windows virtual machines](../virtual-machines/windows/tutorial-disaster-recovery.md)
+- Failover virtual machines to [another region](../site-recovery/azure-to-azure-tutorial-failover-failback.md)
+- Failover virtual machines to the [primary region](../site-recovery/azure-to-azure-tutorial-failback.md#fail-back-to-the-primary-region)
+
+### Single-region geography disaster recovery
+
+With disaster recovery set up, Azure VMs will continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there.
+
+When you replicate Azure VMs using [Site Recovery](../site-recovery/site-recovery-overview.md), all the VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes. This gives you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication. For more information, see [Run a disaster recovery drill to Azure](../site-recovery/tutorial-dr-drill-azure.md).
+
+For more information, see [Azure VMs architectural components](../site-recovery/azure-to-azure-architecture.md#architectural-components) and [region pairing](../virtual-machines/regions.md#region-pairs).
+
+### Capacity and proactive disaster recovery resiliency
+
+Microsoft and its customers operate under the Shared Responsibility Model. This means that for customer-enabled DR (customer-responsible services), the customer must address DR for any service they deploy and control. To ensure that recovery is proactive, customers should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't pre-allocated.
+
+For deploying virtual machines, customers can use [flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) mode on Virtual Machine Scale Sets. All VM sizes can be used with flexible orchestration mode. Flexible orchestration mode also offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains in a region or within an Availability Zone.
+
+## Additional guidance
+
+- [Well-Architected Framework for virtual machines](/azure/architecture/framework/services/compute/virtual-machines/virtual-machines-review)
+- [Azure to Azure disaster recovery architecture](/azure/site-recovery/azure-to-azure-architecture)
+- [Accelerated networking with Azure VM disaster recovery](/azure/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking)
+- [Express Route with Azure VM disaster recovery](../site-recovery/azure-vm-disaster-recovery-with-expressroute.md)
+- [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml)
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](/azure/reliability/availability-zones-overview)
reliability Sovereign Cloud China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md
This section outlines variations and considerations when using Azure Container A
This section outlines variations and considerations when using Microsoft Cost Management + Billing features and APIs. +
+### Azure China Commercial Marketplace
+
+To learn which commercial marketplace features are available for Azure China Marketplace operated by 21Vianet, as compared to the Azure global commercial marketplace, see [Feature availability for Azure China Commercial Marketplace operated by 21Vianet](/partner-center/marketplace/azure-in-china-feature-availability).
+ #### Azure Retail Rates API for China
-The [Azure Retail Prices API for China](/rest/api/cost-management/retail-prices/azure-retail-prices-china) article is applicable only to Azure China. The preview API is available only in Azure China and isn't available in Azure Global.
+The [Azure Retail Prices API for China](/rest/api/cost-management/retail-prices/azure-retail-prices-china) article is applicable only to Azure in China and isn't available in Azure Global.
#### Markup - China
-The [Markup - China](../cost-management-billing/manage/markup-china.md) article is applicable only to Azure China. The Markup feature is available only in Azure China and isn't available in Azure Global.
+The [Markup - China](../cost-management-billing/manage/markup-china.md) article is applicable only to Azure China and isn't available in Azure Global.
## Azure in China Account Sign in
search Search Get Started Semantic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-semantic.md
In Azure Cognitive Search, [semantic search](semantic-search-overview.md) is que
This quickstart walks you through the query modifications that invoke semantic search. > [!NOTE]
-> Looking for a Cognitive Search solution with Chat-GPT interaction? See [this demo](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md) for details.
+> Looking for a Cognitive Search solution with ChatGPT interaction? See [this demo](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md) for details.
## Prerequisites
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Previously updated : 07/07/2023 Last updated : 07/24/2023 # Make outbound connections through a shared private link
-This article explains how to configure private, outbound calls from Azure Cognitive Search to Azure PaaS resources that run within a virtual network.
+This article explains how to configure private, outbound calls from Azure Cognitive Search to an Azure PaaS resource that runs within a virtual network.
-Setting up a private connection allows Azure Cognitive Search to connect to Azure PaaS through a virtual network IP address instead of a port that's open to the internet. The object created for the connection is called a *shared private link*. On the connection, Search uses the shared private link internally to reach an Azure PaaS resource inside the network boundary.
+Setting up a private connection allows a search service to connect to a virtual network IP address instead of a port that's open to the internet. The object created for the connection is called a *shared private link*. On the connection, Search uses the shared private link internally to reach an Azure PaaS resource inside the network boundary.
-Shared private link is a premium feature that's billed by usage. The costs of reading from a data source through the private endpoint are billed to your Azure subscription. As the indexer reads data from the data source, network egress charges are billed at the ["inbound data processed"](https://azure.microsoft.com/pricing/details/private-link/) rate.
+Shared private link is a premium feature that's billed by usage. When you set up a shared private link, charges for the private endpoint are added to your Azure invoice. As you use the shared private link, data transfer rates for inbound and outbound access are also invoiced. For details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
> [!NOTE] > If you're setting up a private indexer connection to a SQL Managed Instance, see [this article](search-indexer-how-to-access-private-sql.md) instead.
service-health Service Health Notifications Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/service-health-notifications-properties.md
Properties.communicationId | The communication with which this event is associat
- Warning - Emergency maintenance - Informational - Standard planned maintenance
-**Information** (properties.incidentType == Information)
+**Information** (properties.incidentType == Informational)
- Informational - Administrator may be required to prevent impact to existing services. **Security** (properties.incidentType == Security)
storage Data Lake Storage Directory File Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-dotnet.md
Previously updated : 02/07/2023 Last updated : 07/24/2023 ms.devlang: csharp
using System.IO;
```
-## Connect to the account
+## Authorize access and connect to data resources
-To use the snippets in this article, you need to create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that represents the storage account.
+To work with the code examples in this article, you need to create an authorized [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that represents the storage account. You can authorize a `DataLakeServiceClient` object using Azure Active Directory (Azure AD), an account access key, or a shared access signature (SAS).
-### Connect by using Azure Active Directory (Azure AD)
+### [Azure AD](#tab/azure-ad)
You can use the [Azure identity client library for .NET](/dotnet/api/overview/azure/identity-readme) to authenticate your application with Azure AD.
Create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datala
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Authorize_DataLake.cs" id="Snippet_AuthorizeWithAAD":::
-To learn more about using **DefaultAzureCredential** to authorize access to data, see [How to authenticate .NET applications with Azure services](/dotnet/azure/sdk/authentication#defaultazurecredential).
+To learn more about using `DefaultAzureCredential` to authorize access to data, see [How to authenticate .NET applications with Azure services](/dotnet/azure/sdk/authentication#defaultazurecredential).
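
For reference, here's a minimal sketch of that construction (the account name parameter is a placeholder you supply):

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Files.DataLake;

// Construct a DataLakeServiceClient with DefaultAzureCredential.
// The accountName value is a placeholder for your storage account name.
public static DataLakeServiceClient GetServiceClientWithAzureAd(string accountName)
{
    var serviceUri = new Uri($"https://{accountName}.dfs.core.windows.net");

    // DefaultAzureCredential tries environment, managed identity, and developer credentials in order.
    return new DataLakeServiceClient(serviceUri, new DefaultAzureCredential());
}
```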
-### Connect by using an account key
+### [SAS token](#tab/sas-token)
+
+To use a shared access signature (SAS) token, provide the token as a string and initialize a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) object. If your account URL includes the SAS token, omit the credential parameter.
++
+To learn more about generating and managing SAS tokens, see the following article:
+
+- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
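
As an illustrative sketch of this pattern, assuming the SAS token is appended to the account URL (the account name and SAS token values are placeholders):

```csharp
using System;
using Azure.Storage.Files.DataLake;

// Create a service client from an account URL that already includes a SAS token.
// Both accountName and sasToken are placeholder values supplied by the caller.
public static DataLakeServiceClient GetServiceClientWithSas(string accountName, string sasToken)
{
    // When the SAS token is part of the URI, no separate credential parameter is needed.
    var serviceUriWithSas = new Uri($"https://{accountName}.dfs.core.windows.net?{sasToken}");

    return new DataLakeServiceClient(serviceUriWithSas);
}
```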
+
+### [Account key](#tab/account-key)
You can authorize access to data using your account access keys (Shared Key). This example creates a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that is authorized with the account key.
You can authorize access to data using your account access keys (Shared Key). Th
[!INCLUDE [storage-shared-key-caution](../../../includes/storage-shared-key-caution.md)] ++ ## Create a container
-A container acts as a file system for your files. You can create one by calling the [DataLakeServiceClient.CreateFileSystem](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient.createfilesystemasync) method.
+A container acts as a file system for your files. You can create a container by using the following method:
-This example creates a container named `my-file-system`.
+- [DataLakeServiceClient.CreateFileSystem](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient.createfilesystemasync)
+
+This example creates a container and returns a [DataLakeFileSystemClient](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient) object for later use:
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD_DataLake.cs" id="Snippet_CreateContainer"::: ## Create a directory
-Create a directory reference by calling the [DataLakeFileSystemClient.CreateDirectoryAsync](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient.createdirectoryasync) method.
+You can create a directory reference in the container by using the following method:
+
+- [DataLakeFileSystemClient.CreateDirectoryAsync](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient.createdirectoryasync)
-This example adds a directory named `my-directory` to a container, and then adds a subdirectory named `my-subdirectory`.
+The following code example adds a directory to a container, then adds a subdirectory and returns a [DataLakeDirectoryClient](/dotnet/api/azure.storage.files.datalake.datalakedirectoryclient) object for later use:
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD_DataLake.cs" id="Snippet_CreateDirectory"::: ## Rename or move a directory
-Rename or move a directory by calling the [DataLakeDirectoryClient.RenameAsync](/dotnet/api/azure.storage.files.datalake.datalakedirectoryclient.renameasync) method. Pass the path of the desired directory a parameter.
+You can rename or move a directory by using the following method:
+
+- [DataLakeDirectoryClient.RenameAsync](/dotnet/api/azure.storage.files.datalake.datalakedirectoryclient.renameasync)
-This example renames a subdirectory to the name `my-subdirectory-renamed`.
+Pass the path of the desired directory as a parameter. The following code example shows how to rename a subdirectory:
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD_DataLake.cs" id="Snippet_RenameDirectory":::
-This example moves a directory named `my-subdirectory-renamed` to a subdirectory of a directory named `my-directory-2`.
+The following code example shows how to move a subdirectory from one directory to a different directory:
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD_DataLake.cs" id="Snippet_MoveDirectory":::
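
As an additional illustration, here's a minimal sketch of both operations; the container paths and directory names are hypothetical placeholders:

```csharp
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Files.DataLake;

// Rename a subdirectory in place, then move it under a different parent directory.
// The directory names below are placeholders for illustration only.
public static async Task RenameAndMoveDirectoryAsync(DataLakeFileSystemClient fileSystemClient)
{
    DataLakeDirectoryClient directoryClient =
        fileSystemClient.GetDirectoryClient("my-directory/my-subdirectory");

    // Rename: the destination path stays under the same parent directory.
    Response<DataLakeDirectoryClient> renameResponse =
        await directoryClient.RenameAsync("my-directory/my-subdirectory-renamed");
    DataLakeDirectoryClient renamedClient = renameResponse.Value;

    // Move: the destination path points to a different parent directory.
    await renamedClient.RenameAsync("my-directory-2/my-subdirectory-renamed");
}
```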
-## Delete a directory
+## Upload a file to a directory
-Delete a directory by calling the [DataLakeDirectoryClient.Delete](/dotnet/api/azure.storage.files.datalake.datalakedirectoryclient.delete) method.
+You can upload content to a new or existing file by using the following method:
-This example deletes a directory named `my-directory`.
+- [DataLakeFileClient.UploadAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.uploadasync)
+The following code example shows how to upload a local file to a directory using the `UploadAsync` method:
-## Restore a soft-deleted directory
-You can use the Azure Storage client libraries to restore a soft-deleted directory. Use the following method to list deleted paths for a [DataLakeFileSystemClient](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient) instance:
+You can use this method to create and upload content to a new file, or you can set the `overwrite` parameter to `true` to overwrite an existing file.
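
For illustration, here's a minimal sketch of an upload that streams a local file; the local file path is a placeholder:

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Files.DataLake;

// Upload a local file into a directory; localFilePath is a placeholder supplied by the caller.
public static async Task UploadFileAsync(DataLakeDirectoryClient directoryClient, string localFilePath)
{
    DataLakeFileClient fileClient = directoryClient.GetFileClient(Path.GetFileName(localFilePath));

    using (FileStream fileStream = File.OpenRead(localFilePath))
    {
        // overwrite: true replaces the file if it already exists.
        await fileClient.UploadAsync(fileStream, overwrite: true);
    }
}
```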
-- [GetDeletedPathsAsync](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient.getdeletedpathsasync)
+## Append data to a file
-Use the following method to restore a soft-deleted directory:
+You can upload data to be appended to a file by using the following method:
-- [UndeletePathAsync](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient.undeletepathasync)
+- [DataLakeFileClient.AppendAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.appendasync)
-The following code example shows how to list deleted paths and restore a soft-deleted directory:
+The following code example shows how to append data to the end of a file using these steps:
+- Create a [DataLakeFileClient](/dotnet/api/azure.storage.files.datalake.datalakefileclient) object to represent the file resource you're working with.
+- Upload data to the file using the [DataLakeFileClient.AppendAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.appendasync) method.
+- Complete the upload by calling the [DataLakeFileClient.FlushAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.flushasync) method to write the previously uploaded data to the file.
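
Here's a minimal sketch of these steps; the file name and sample text are placeholders:

```csharp
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage.Files.DataLake;

// Append text to the end of an existing file, then flush to commit the appended data.
// fileName and textToAppend are placeholder values supplied by the caller.
public static async Task AppendTextAsync(
    DataLakeDirectoryClient directoryClient, string fileName, string textToAppend)
{
    DataLakeFileClient fileClient = directoryClient.GetFileClient(fileName);

    // The current file length is the offset at which the new data is appended.
    long fileSize = (await fileClient.GetPropertiesAsync()).Value.ContentLength;

    using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(textToAppend)))
    {
        await fileClient.AppendAsync(stream, offset: fileSize);

        // Flush commits the appended data at the new end-of-file position.
        await fileClient.FlushAsync(position: fileSize + stream.Length);
    }
}
```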
-If you rename the directory that contains the soft-deleted items, those items become disconnected from the directory. If you want to restore those items, you have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you receive an error when you attempt to restore those soft-deleted items.
-## Upload a file to a directory
+## Download from a directory
-First, create a file reference in the target directory by creating an instance of the [DataLakeFileClient](/dotnet/api/azure.storage.files.datalake.datalakefileclient) class. Upload a file by calling the [DataLakeFileClient.AppendAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.appendasync) method. Make sure to complete the upload by calling the [DataLakeFileClient.FlushAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.flushasync) method.
+The following code example shows how to download a file from a directory to a local file using these steps:
-This example uploads a text file to a directory named `my-directory`.
+- Create a [DataLakeFileClient](/dotnet/api/azure.storage.files.datalake.datalakefileclient) instance to represent the file that you want to download.
+- Use the [DataLakeFileClient.ReadAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.readasync) method, then parse the return value to obtain a [Stream](/dotnet/api/system.io.stream) object. Use any .NET file processing API to save bytes from the stream to a file.
+This example uses a [BinaryReader](/dotnet/api/system.io.binaryreader) and a [FileStream](/dotnet/api/system.io.filestream) to save bytes to a file.
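
Here's a minimal sketch of these steps; the file name and local path are placeholders:

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

// Download a file from a directory and save it locally.
// fileName and localFilePath are placeholder values supplied by the caller.
public static async Task DownloadFileAsync(
    DataLakeDirectoryClient directoryClient, string fileName, string localFilePath)
{
    DataLakeFileClient fileClient = directoryClient.GetFileClient(fileName);

    // ReadAsync returns the file contents as a stream.
    FileDownloadInfo download = (await fileClient.ReadAsync()).Value;

    using (FileStream fileStream = File.OpenWrite(localFilePath))
    {
        await download.Content.CopyToAsync(fileStream);
    }
}
```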
-> [!TIP]
-> If your file size is large, your code will have to make multiple calls to the [DataLakeFileClient.AppendAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.appendasync). Consider using the [DataLakeFileClient.UploadAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.uploadasync#Azure_Storage_Files_DataLake_DataLakeFileClient_UploadAsync_System_IO_Stream_) method instead. That way, you can upload the entire file in a single call.
->
-> See the next section for an example.
-## Upload a large file to a directory
+## List directory contents
-Use the [DataLakeFileClient.UploadAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.uploadasync#Azure_Storage_Files_DataLake_DataLakeFileClient_UploadAsync_System_IO_Stream_) method to upload large files without having to make multiple calls to the [DataLakeFileClient.AppendAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.appendasync) method.
+You can list directory contents by using the following method and enumerating the result:
+- [FileSystemClient.GetPathsAsync](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient.getpathsasync)
-## Download from a directory
+Enumerating the paths in the result may make multiple requests to the service while fetching the values.
-First, create a [DataLakeFileClient](/dotnet/api/azure.storage.files.datalake.datalakefileclient) instance that represents the file that you want to download. Use the [DataLakeFileClient.ReadAsync](/dotnet/api/azure.storage.files.datalake.datalakefileclient.readasync) method, and parse the return value to obtain a [Stream](/dotnet/api/system.io.stream) object. Use any .NET file processing API to save bytes from the stream to a file.
+The following code example prints the names of each file that is located in a directory:
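
For illustration, here's a minimal sketch of the call pattern; the directory path is a placeholder:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

// Enumerate the paths under a directory; "my-directory" is a placeholder path.
public static async Task ListDirectoryContentsAsync(DataLakeFileSystemClient fileSystemClient)
{
    await foreach (PathItem pathItem in fileSystemClient.GetPathsAsync("my-directory"))
    {
        Console.WriteLine(pathItem.Name);
    }
}
```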
-This example uses a [BinaryReader](/dotnet/api/system.io.binaryreader) and a [FileStream](/dotnet/api/system.io.filestream) to save bytes to a file.
+## Delete a directory
-## List directory contents
+You can delete a directory by using the following method:
-List directory contents by calling the [FileSystemClient.GetPathsAsync](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient.getpathsasync) method, and then enumerating through the results.
+- [DataLakeDirectoryClient.Delete](/dotnet/api/azure.storage.files.datalake.datalakedirectoryclient.delete)
-This example, prints the names of each file that is located in a directory named `my-directory`.
+The following code example shows how to delete a directory:
+
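For illustration, here's a minimal sketch; the directory name is a placeholder:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Files.DataLake;

// Delete a directory and its contents; "my-directory" is a placeholder name.
public static async Task DeleteDirectoryAsync(DataLakeFileSystemClient fileSystemClient)
{
    DataLakeDirectoryClient directoryClient = fileSystemClient.GetDirectoryClient("my-directory");

    await directoryClient.DeleteAsync();
}
```
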
+## Restore a soft-deleted directory
+
+You can use the Azure Storage client libraries to restore a soft-deleted directory. Use the following method to list deleted paths for a [DataLakeFileSystemClient](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient) instance:
+
+- [GetDeletedPathsAsync](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient.getdeletedpathsasync)
+
+Use the following method to restore a soft-deleted directory:
+
+- [UndeletePathAsync](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient.undeletepathasync)
+
+The following code example shows how to list deleted paths and restore a soft-deleted directory:
++
+If you rename the directory that contains the soft-deleted items, those items become disconnected from the directory. If you want to restore those items, you have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you receive an error when you attempt to restore those soft-deleted items.
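
Here's a minimal sketch of the restore pattern, assuming each deleted item exposes its original path and deletion ID; the directory prefix is a placeholder:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

// List soft-deleted paths under a directory prefix and restore each one.
// "my-directory" is a placeholder prefix for illustration.
public static async Task RestoreDeletedDirectoryAsync(DataLakeFileSystemClient fileSystemClient)
{
    await foreach (PathDeletedItem deletedItem in fileSystemClient.GetDeletedPathsAsync("my-directory"))
    {
        // Both the original path and the deletion ID are required to restore the item.
        await fileSystemClient.UndeletePathAsync(deletedItem.Path, deletedItem.DeletionId);
    }
}
```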
## Create a user delegation SAS for a directory
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
description: Learn how to install Azure Container Storage Preview for use with A
Previously updated : 07/06/2023 Last updated : 07/24/2023
The initial install uses Azure Arc CLI commands to download a new extension. Rep
During installation, you might be asked to install the `k8s-extension`. Select **Y**. ```azurecli-interactive
-az k8s-extension create --cluster-type managedClusters --cluster-name <cluster name> --resource-group <resource group name> --name <name of extension> --extension-type microsoft.azurecontainerstorage --scope cluster --release-train prod --release-namespace acstor
+az k8s-extension create --cluster-type managedClusters --cluster-name <cluster name> --resource-group <resource group name> --name <name of extension> --extension-type microsoft.azurecontainerstorage --scope cluster --release-train stable --release-namespace acstor
``` Installation takes 10-15 minutes to complete. You can check if the installation completed correctly by running the following command and ensuring that `provisioningState` says **Succeeded**:
storage Queues V11 Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-v11-samples-dotnet.md
Previously updated : 04/26/2023 Last updated : 07/24/2023
This article shows code samples that use version 11.x of the Azure Queue Storage
[!INCLUDE [storage-v11-sdk-support-retirement](../../../includes/storage-v11-sdk-support-retirement.md)]
-## Create a Queue Storage client
+For code samples that use the latest 12.x client library version, see [Quickstart: Azure Queue Storage client library for .NET](storage-quickstart-queues-dotnet.md).
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#create-the-queue-storage-client)
+## Create a Queue Storage client
The [`CloudQueueClient`](/dotnet/api/microsoft.azure.storage.queue.cloudqueueclient?view=azure-dotnet-legacy&preserve-view=true) class enables you to retrieve queues stored in Queue Storage. Here's one way to create the service client:
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
## Create a queue
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#create-a-queue)
- This example shows how to create a queue: ```csharp
queue.CreateIfNotExists();
## Insert a message into a queue
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#insert-a-message-into-a-queue)
- To insert a message into an existing queue, first create a new [`CloudQueueMessage`](/dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage?view=azure-dotnet-legacy&preserve-view=true). Next, call the [`AddMessage`](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.addmessage?view=azure-dotnet-legacy&preserve-view=true) method. A `CloudQueueMessage` can be created from either a string (in UTF-8 format) or a byte array. The following code example creates a queue (if it doesn't already exist) and inserts the message `Hello, World`: ```csharp
queue.AddMessage(message);
## Peek at the next message
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#peek-at-the-next-message)
- You can peek at the message in the front of a queue without removing it from the queue by calling the [`PeekMessage`](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.peekmessage?view=azure-dotnet-legacy&preserve-view=true) method. ```csharp
Console.WriteLine(peekedMessage.AsString);
## Change the contents of a queued message
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#change-the-contents-of-a-queued-message)
- ```csharp // Retrieve storage account from connection string. CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
queue.UpdateMessage(message,
## Dequeue the next message
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#dequeue-the-next-message)
- Your code dequeues a message from a queue in two steps. When you call [`GetMessage`](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.getmessage?view=azure-dotnet-legacy&preserve-view=true), you get the next message in a queue. A message returned from `GetMessage` becomes invisible to any other code reading messages from this queue. By default, this message stays invisible for 30 seconds. To finish removing the message from the queue, you must also call [`DeleteMessage`](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.deletemessage?view=azure-dotnet-legacy&preserve-view=true). This two-step process of removing a message assures that if your code fails to process a message due to hardware or software failure, another instance of your code can get the same message and try again. Your code calls `DeleteMessage` right after the message has been processed. ```csharp
queue.DeleteMessage(retrievedMessage);
## Use the async-await pattern with common Queue Storage APIs
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#use-the-async-await-pattern-with-common-queue-storage-apis)
- ```csharp // Create the queue if it doesn't already exist if(await queue.CreateIfNotExistsAsync())
Console.WriteLine("Deleted message");
## Use additional options for dequeuing messages
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#use-additional-options-for-dequeuing-messages)
- The following code example uses the [`GetMessages`](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.getmessages?view=azure-dotnet-legacy&preserve-view=true) method to get 20 messages in one call. Then it processes each message using a `foreach` loop. It also sets the invisibility timeout to five minutes for each message. The timeout starts for all messages at the same time, so after five minutes have passed since the call to `GetMessages`, any messages that haven't been deleted will become visible again. ```csharp
foreach (CloudQueueMessage message in queue.GetMessages(20, TimeSpan.FromMinutes
## Get the queue length
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#get-the-queue-length)
- You can get an estimate of the number of messages in a queue. The [`FetchAttributes`](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.fetchattributes?view=azure-dotnet-legacy&preserve-view=true) method returns queue attributes including the message count. The [`ApproximateMessageCount`](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.approximatemessagecount?view=azure-dotnet-legacy&preserve-view=true) property returns the last value retrieved by the `FetchAttributes` method, without calling Queue Storage. ```csharp
Console.WriteLine("Number of messages in queue: " + cachedMessageCount);
## Delete a queue
-Related article: [Get started with Azure Queue Storage using .NET](storage-dotnet-how-to-use-queues.md#delete-a-queue)
- To delete a queue and all the messages contained in it, call the [`Delete`](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.delete?view=azure-dotnet-legacy&preserve-view=true) method on the queue object. ```csharp
storage Queues V2 Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-v2-samples-python.md
Previously updated : 04/26/2023 Last updated : 07/24/2023
This article shows code samples that use version 2 of the Azure Queue Storage cl
[!INCLUDE [storage-v11-sdk-support-retirement](../../../includes/storage-v11-sdk-support-retirement.md)]
-## Create a queue
+For code samples that use the latest 12.x client library version, see [Quickstart: Azure Queue Storage client library for Python](storage-quickstart-queues-python.md).
-Related article: [Get started with Azure Queue Storage using Python](storage-python-how-to-use-queue-storage.md#create-a-queue)
+## Create a queue
Add the following `import` directives:
queue_service.decode_function = QueueMessageFormat.binary_base64decode
## Insert a message into a queue
-Related article: [Get started with Azure Queue Storage using Python](storage-python-how-to-use-queue-storage.md#insert-a-message-into-a-queue)
- To insert a message into a queue, use the [`put_message`](/azure/developer/python/sdk/storage/azure-storage-queue/azure.storage.queue.queueservice.queueservice?view=storage-py-v2&preserve-view=true#put-message-queue-name--content--visibility-timeout-none--time-to-live-none--timeout-none-) method to create a new message and add it to the queue. ```python
queue_service.put_message(queue_name, message)
## Peek at messages
-Related article: [Get started with Azure Queue Storage using Python](storage-python-how-to-use-queue-storage.md#peek-at-messages)
- You can peek at messages without removing them from the queue by calling the [`peek_messages`](/azure/developer/python/sdk/storage/azure-storage-queue/azure.storage.queue.queueservice.queueservice?view=storage-py-v2&preserve-view=true#peek-messages-queue-name--num-messages-none--timeout-none-) method. By default, this method peeks at a single message. ```python
for peeked_message in messages:
## Change the contents of a queued message
-Related article: [Get started with Azure Queue Storage using Python](storage-python-how-to-use-queue-storage.md#change-the-contents-of-a-queued-message)
- The following code uses the [`update_message`](/azure/developer/python/sdk/storage/azure-storage-queue/azure.storage.queue.queueservice.queueservice?view=storage-py-v2&preserve-view=true#update-message-queue-name--message-id--pop-receipt--visibility-timeout--content-none--timeout-none-) method to update a message. The visibility timeout is set to 0, meaning the message appears immediately and the content is updated. ```python
for message in messages:
## Get the queue length
-Related article: [Get started with Azure Queue Storage using Python](storage-python-how-to-use-queue-storage.md#get-the-queue-length)
- The [`get_queue_metadata`](/azure/developer/python/sdk/storage/azure-storage-queue/azure.storage.queue.queueservice.queueservice?view=storage-py-v2&preserve-view=true#get-queue-metadata-queue-name--timeout-none-) method returns queue properties including `approximate_message_count`. ```python
The result is only approximate because messages can be added or removed after th
## Dequeue messages
-Related article: [Get started with Azure Queue Storage using Python](storage-python-how-to-use-queue-storage.md#dequeue-messages)
- When you call [get_messages](/azure/developer/python/sdk/storage/azure-storage-queue/azure.storage.queue.queueservice.queueservice?view=storage-py-v2&preserve-view=true#get-messages-queue-name--num-messages-none--visibility-timeout-none--timeout-none-), you get the next message in the queue by default. A message returned from `get_messages` becomes invisible to any other code reading messages from this queue. By default, this message stays invisible for 30 seconds. To finish removing the message from the queue, you must also call [delete_message](/azure/developer/python/sdk/storage/azure-storage-queue/azure.storage.queue.queueservice.queueservice?view=storage-py-v2&preserve-view=true#delete-message-queue-name--message-id--pop-receipt--timeout-none-). ```python
for message in messages:
## Delete a queue
-Related article: [Get started with Azure Queue Storage using Python](storage-python-how-to-use-queue-storage.md#delete-a-queue)
- To delete a queue and all the messages contained in it, call the [`delete_queue`](/azure/developer/python/sdk/storage/azure-storage-queue/azure.storage.queue.queueservice.queueservice?view=storage-py-v2&preserve-view=true#delete-queue-queue-name--fail-not-exist-false--timeout-none-) method. ```python
storage Queues V8 Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-v8-samples-java.md
Previously updated : 04/26/2023 Last updated : 07/24/2023
This article shows code samples that use version 8 of the Azure Queue Storage cl
[!INCLUDE [storage-v11-sdk-support-retirement](../../../includes/storage-v11-sdk-support-retirement.md)]
-## Create a queue
+For code samples that use the latest 12.x client library version, see [Quickstart: Azure Queue Storage client library for Java](storage-quickstart-queues-java.md).
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#how-to-create-a-queue)
+## Create a queue
Add the following `import` directives:
catch (Exception e)
## Add a message to a queue
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#how-to-add-a-message-to-a-queue)
- To insert a message into an existing queue, first create a new `CloudQueueMessage`. Next, call the `addMessage` method. A `CloudQueueMessage` can be created from either a string (in UTF-8 format) or a byte array. The following code example creates a queue (if it doesn't exist) and inserts the message `Hello, World`. ```java
catch (Exception e)
## Peek at the next message
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#how-to-peek-at-the-next-message)
- You can peek at the message in the front of a queue without removing it from the queue by calling `peekMessage`. ```java
catch (Exception e)
## Change the contents of a queued message
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#how-to-change-the-contents-of-a-queued-message)
- The following code sample searches through the queue of messages, locates the first message content that matches `Hello, world`, modifies the message content, and exits. ```java
catch (Exception e)
## Get the queue length
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#how-to-get-the-queue-length)
- The `downloadAttributes` method retrieves several values including the number of messages currently in a queue. The count is only approximate because messages can be added or removed after your request. The `getApproximateMessageCount` method returns the last value retrieved by the call to `downloadAttributes`, without calling Queue Storage. ```java
catch (Exception e)
## Dequeue the next message
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#how-to-dequeue-the-next-message)
- Your code dequeues a message from a queue in two steps. When you call `retrieveMessage`, you get the next message in a queue. A message returned from `retrieveMessage` becomes invisible to any other code reading messages from this queue. By default, this message stays invisible for 30 seconds. To finish removing the message from the queue, you must also call `deleteMessage`. If your code fails to process a message, this two-step process ensures that you can get the same message and try again. Your code calls `deleteMessage` right after the message has been processed. ```java
catch (Exception e)
## Additional options for dequeuing messages
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#additional-options-for-dequeuing-messages)
- The following code example uses the `retrieveMessages` method to get 20 messages in one call. Then it processes each message using a `for` loop. It also sets the invisibility timeout to five minutes (300 seconds) for each message. The timeout starts for all messages at the same time. When five minutes have passed since the call to `retrieveMessages`, any messages not deleted becomes visible again. ```java
catch (Exception e)
## List the queues
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#how-to-list-the-queues)
- To obtain a list of the current queues, call the `CloudQueueClient.listQueues()` method, which returns a collection of `CloudQueue` objects. ```java
catch (Exception e)
## Delete a queue
-Related article: [Get started with Azure Queue Storage using Java](storage-java-how-to-use-queue-storage.md#how-to-delete-a-queue)
- To delete a queue and all the messages contained in it, call the `deleteIfExists` method on the `CloudQueue` object. ```java
synapse-analytics Author Sql Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/author-sql-script.md
Select the **Run** button to execute your SQL script. The results are displayed
![new sql script results table](media/author-sql-script/new-sql-script-results-table.png)
+Synapse Studio creates a new session for each SQL script execution. Once a SQL script execution completes, the session is automatically closed.
+
+Temporary tables are only visible to the session in which they were created and are automatically dropped when the session closes.
+ ## Export your results You can export the results to your local storage in different formats (including CSV, Excel, JSON, XML) by selecting "Export results" and choosing the extension.
synapse-analytics Develop Tables Cetas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-cetas.md
For dedicated SQL pool, CETAS usage and syntax, check the [CREATE EXTERNAL TABLE
When using serverless SQL pool, CETAS is used to create an external table and export query results to Azure Storage Blob or Azure Data Lake Storage Gen2.
-For complete syntax, refer to [CREATE EXTERNAL TABLE AS SELECT (Transact-SQL)](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true).
+For complete syntax, refer to [CREATE EXTERNAL TABLE AS SELECT (Transact-SQL)](/sql/t-sql/statements/create-external-table-as-select-transact-sql).
## Examples
update-center Dynamic Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/dynamic-scope-overview.md
+
+ Title: An overview of dynamic scoping (preview)
+description: This article provides information about dynamic scoping (preview), its purpose and advantages.
+ Last updated : 07/05/2023+++++
+# About Dynamic Scoping (preview)
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers.
+
+Dynamic scoping (preview) is an advanced capability of schedule patching that allows users to:
+
+- Group machines based on criteria such as subscription, resource group, location, resource type, OS Type, and Tags. This becomes the definition of the scope.
+- Associate the scope to a schedule/maintenance configuration to apply updates at scale as per a pre-defined scope.
+
+The criteria are evaluated at the scheduled run time to produce the final list of machines that the schedule patches. The machines evaluated during the create or edit phase may differ from the group at schedule run time.
+
+## Key benefits
+
+**At-scale and simplified patching** - You don't have to manually change associations between machines and schedules. For example, if your scope is defined based on tag criteria and you want to remove a machine from a schedule, removing the tag from the machine automatically drops the association. These associations can be dropped and added for multiple machines at scale.
+ > [!NOTE]
+ > Subscription is mandatory for the creation of dynamic scope and you can't edit it after the dynamic scope is created.
+
+**Reusability of the same schedule** - You can associate a schedule to multiple machines dynamically, statically, or both.
+ > [!NOTE]
+ > You can associate one dynamic scope to one schedule.
+
+## Permissions
+
+For dynamic scoping (preview) and configuration assignment, ensure that you have the following permissions:
+
+- Write permissions to create or modify a schedule.
+- Read permissions to assign or read a schedule.
++
+## Prerequisites for Azure VMs
+
+- Patch Orchestration must be set to Customer Managed Schedules (Preview). This sets patch mode to AutomaticByPlatform and the **BypassPlatformSafetyChecksOnUserSchedule** = *True*.
+- Associate a Schedule with the VM.
+
+> [!NOTE]
+> For Arc VMs, there are no patch orchestration prerequisites. However, you must associate a schedule with the VM for schedule patching. For more information, see [Configure schedule patching on Azure VMs to ensure business continuity](prerequsite-for-schedule-patching.md).
++
+## Next steps
+
+ Learn about deploying updates to your machines to maintain security compliance by reading [deploy updates](deploy-updates.md)
update-center Manage Dynamic Scoping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-dynamic-scoping.md
+
+ Title: Manage various operations of dynamic scoping (preview).
+description: This article describes how to manage dynamic scoping (preview) operations
+++ Last updated : 07/05/2023+++
+# Manage a Dynamic scope
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+
+This article describes how to view, add, edit and delete a dynamic scope (preview).
+
+## Add a Dynamic scope (preview)
+To add a Dynamic scope to an existing configuration, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
+1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to add a Dynamic scope.
+1. In the given maintenance configuration page > select **Dynamic scopes** > **Add a dynamic scope**.
+1. In the **Add a dynamic scope** page, select **Subscriptions** (mandatory).
+1. In **Filter by**, choose **Select**. In **Select Filter by**, specify the resource group, resource type, location, tags, and OS type, and then select **Ok**. These filters are optional fields.
+1. In the **Preview of machines based on above scope**, you can view the list of machines for the selected criteria and then select **Add**.
+ > [!NOTE]
+ > The list of machines may be different at run time.
+1. In the **Configure Azure VMs for schedule updates** page, select any one of the following options to provide your consent:
+    1. **Change the required options to ensure schedule supportability** - this option confirms that you want to update the patch orchestration from the existing option to *Customer Managed Schedules (Preview)*. This option updates the following two properties on your behalf:
+
+ - *Patch mode = AutomaticByPlatform*
+ - *Set the BypassPlatformSafetyChecksOnUserSchedule = True*.
+ 1. **Continue with supported machines only** - this option confirms that you want to proceed with only the machines that already have patch orchestration set to *Customer Managed Schedules (Preview)*.
+
+ > [!NOTE]
+ > In the **Preview of machines based on above scope** page, you can view only the machines that don't have patch orchestration set to *Customer Managed Schedules (Preview)*.
+
+1. Select **Save** to go back to the Dynamic scopes tab. In this tab, you can view and edit the Dynamic scope that you have created.
++
+## View Dynamic scope (preview)
+
+To view the list of Dynamic scopes (preview) associated to a given maintenance configuration, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update management center (preview)**.
+1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
+1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to view the Dynamic scope.
+1. In the given maintenance configuration page, select **Dynamic scopes** to view all the Dynamic scopes that are associated with the maintenance configuration.
+
+## Edit a Dynamic scope (preview)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
+1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to edit an existing Dynamic scope.
+1. In the given maintenance configuration page > select **Dynamic scopes** and select the scope you want to edit. Under **Actions** column, select the edit icon.
+1. In the **Edit Dynamic scope** page, select the edit icon next to **Filter by**, edit the filters as needed, and then select **Ok**.
+ > [!NOTE]
+ > Subscription is mandatory for the creation of dynamic scope and you can't edit it after the dynamic scope is created.
+1. Select **Save**.
+
+## Delete a Dynamic scope (preview)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
+1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to edit an existing Dynamic scope.
+1. In the given maintenance configuration page > select **Dynamic scopes** and select the scope you want to delete. Select **Remove dynamic scope** and then select **Ok**.
+
+## View patch history of a Dynamic scope (preview)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Select **History** > **Browse maintenance configurations** > **Maintenance configurations** to view the patch history of a dynamic scope.
++
+## Provide consent to apply updates
+
+Obtaining consent to apply updates is an important step in the dynamic scoping workflow. The following are the various ways to provide consent.
+
+#### [From Virtual Machine](#tab/vm)
+
+1. In [Azure portal](https://portal.azure.com), go to **+Create a resource** > **Virtual machine** > **Create**.
+1. In **Create a virtual machine**, select **Management** tab and under the **Guest OS Updates**, in **Patch orchestration options**, you can do the following:
+ 1. Select **Azure-orchestrated with user managed schedules (Preview)** to confirm that:
+
+ - Patch Orchestration is set to *Azure orchestration*
+ - Set the Bypass platform safety checks on user schedule = *True*.
+
+      This selection provides consent to apply the update settings, ensures that automatic patching isn't applied, and ensures that patching on the VMs runs according to the schedule you've defined.
+
+1. Complete the details under **Monitoring**, **Advanced** and **Tags** tabs.
+1. Select **Review + Create** and under the **Management** you can view the values as **Periodic assessment** - *Off* and **Patch orchestration options** - *Azure-orchestrated with user managed schedules (Preview)*.
+1. Select **Create**.
+
+
+#### [From Schedule updates tab](#tab/sc)
+
+1. Follow the steps from 1 to 5 listed in [Add a Dynamic scope (preview)](#add-a-dynamic-scope-preview).
+1. In the **Machines** tab, select **Add machine**. In the **Select resources** page, select the machines and then select **Add**.
+1. In **Configure Azure VMs for schedule updates**, select **Continue to schedule updates** option to confirm that:
+
+ - Patch Orchestration is set to *Azure orchestration*
+ - Set the Bypass platform safety checks on user schedule = *True*.
+
+1. Select **Continue to schedule updates** to set the patch mode to **Azure-orchestrated** and enable scheduled patching for the VMs after obtaining consent.
+
+#### [From Update Settings](#tab/us)
+
+1. In **Update management center**, go to **Overview** > **Update settings**.
+1. In **Change Update settings**, select **+Add machine** to add the machines.
+1. In the list of machines sorted as per the operating system, go to the **Patch orchestration** option and select **Azure-orchestrated with user managed schedules (Preview)** to confirm that:
+
+ - Patch Orchestration is set to *Azure orchestration*
+ - Set the Bypass platform safety checks on user schedule = *True*
+1. Select **Save**.
+
+ The selection made in this workflow automatically applies the update settings and no consent is explicitly obtained.
++
+## Next steps
+
+* [View updates for single machine](view-updates.md)
+* [Deploy updates now (on-demand) for single machine](deploy-updates.md)
+* [Schedule recurring updates](scheduled-patching.md)
+* [Manage update settings via Portal](manage-update-settings.md)
+* [Manage multiple machines using update management center](manage-multiple-machines.md)
update-center Tutorial Dynamic Grouping For Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/tutorial-dynamic-grouping-for-scheduled-patching.md
+
+ Title: Schedule updates on Dynamic scoping (preview).
+description: In this tutorial, you learn how to group machines dynamically and apply updates at scale.
+ Last updated : 07/05/2023+++
+#Customer intent: As an IT admin, I want to dynamically apply patches to machines according to a schedule.
++
+# Tutorial: Schedule updates on Dynamic scopes (preview)
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers.
+
+This tutorial explains how to create a dynamic scope and apply patches based on its criteria.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Create and edit groups
+> - Associate a schedule
++
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- Patch Orchestration must be set to Customer Managed Schedules (Preview). This sets patch mode to AutomaticByPlatform and the **BypassPlatformSafetyChecksOnUserSchedule** = *True*.
+- Associate a Schedule with the VM.
+
+## Create a Dynamic scope
+
+To create a dynamic scope, follow the steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Select **Overview** > **Schedule updates** > **Create a maintenance configuration**.
+1. In the **Create a maintenance configuration** page, enter the details in the **Basics** tab and select **Maintenance scope** as *Guest* (Azure VM, Arc-enabled VMs/servers).
+1. Select **Dynamic Scopes** and follow the steps to [Add Dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope-preview).
+1. In **Machines** tab, select **Add machines** to add any individual machines to the maintenance configuration and select **Updates**.
+1. In the **Updates** tab, select the patch classification that you want to include/exclude and select **Tags**.
+1. Provide the tags in **Tags** tab.
+1. Select **Review** and then **Review + Create**.
+
+>[!NOTE]
+> A dynamic scope exists within the context of a schedule only. You can use one schedule to link to a machine, dynamic scope, or both. One dynamic scope cannot have more than one schedule.
+
+## Provide the consent
+Obtaining consent to apply updates is an important step in the scheduled patching workflow. Follow the steps in [provide the consent](manage-dynamic-scoping.md#provide-consent-to-apply-updates) to learn about the various ways to do so.
+++
+## Next steps
+Learn about [managing multiple machines](manage-multiple-machines.md).
+
update-center Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md
Previously updated : 06/12/2023 Last updated : 07/05/2023 # What's new in Update management center (Preview) [Update management center (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Update management center (Preview).
+## July 2023
+
+### Dynamic scope (preview)
+
+Dynamic scope (preview) is an advanced capability of scheduled patching. You can now create a group of [machines based on a schedule and apply patches](dynamic-scope-overview.md) to those machines at scale. [Learn more](tutorial-dynamic-grouping-for-scheduled-patching.md).
+
+ ## May 2023 ### Customized image support
Update management center (preview) now supports [generalized](../virtual-machine
### Multi-subscription support
-The limit on the number of subscriptions that you can manage using the Update management center (preview) portal has now been removed. You can now manage all your subscriptions using the update management center (preview) portal.
+The limit on the number of subscriptions that you can manage by using the Update management center (preview) portal has now been removed. You can now manage all your subscriptions using the update management center (preview) portal.
## April 2023
virtual-desktop Azure Stack Hci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci.md
Follow these steps for the automated deployment process:
1. Enter a unique name for your host pool.
+ > [!NOTE]
+ > The host pool name must not contain spaces.
+ 1. In **Location**, enter a region where the host pool, workspace, and VMs will be created. The metadata for these objects is stored in the geography associated with the region. For example: East US. > [!NOTE]
virtual-desktop Multimedia Redirection Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection-intro.md
Multimedia redirection has two key components:
- Video playback redirection, which optimizes video playback experience for streaming sites and websites with embedded videos like YouTube and Facebook. For more information about which sites are compatible with this feature, see [Video playback redirection](#video-playback-redirection). - Call redirection (preview), which optimizes audio calls for WebRTC-based calling apps. For more information about which sites are compatible with this feature, see [Call redirection](#call-redirection).
-Call redirection only affects the connection between the local client device and the telephony app server, as shown in the following diagram.
+ Call redirection only affects the connection between the local client device and the telephony app server, as shown in the following diagram.
+ :::image type="content" source="media/multimedia-redirection-intro/call-redirection.png" alt-text="A diagram depicting the relationship between the telephony web app server, the Azure Virtual Desktop user, the web app, and other callers." lightbox="media/multimedia-redirection-intro/call-redirection.png":::
-Call redirection offloads WebRTC calls from session hosts to local client devices to reduce latency and improve call quality. However, after the connection is established, call quality becomes dependent on the website or app providers just as it would with a non-redirected call.
+ Call redirection offloads WebRTC calls from session hosts to local client devices to reduce latency and improve call quality. However, after the connection is established, call quality becomes dependent on the website or app providers just as it would with a non-redirected call.
## Websites that work with multimedia redirection
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
In this release, we've made the following changes:
- Fixed an issue where, in Azure Arc, Connection Information dialog gave inconsistent information about identity verification. - Added heading-level description to subscribe with URL. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.-- Fixed an issue where the Client doesn't auto-reconnect when the Gateway WebSocket connection shuts down normally.
+- Fixed an issue where the client doesn't auto-reconnect when the Gateway WebSocket connection shuts down normally.
## Updates for version 1.2.4419
virtual-machines Classic Vm Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/classic-vm-deprecation.md
Title: We're retiring Azure VMs (classic) on September 1, 2023
+ Title: We're retiring Azure VMs (classic) on September 6, 2023
description: This article provides a high-level overview of the retirement of VMs created using the classic deployment model.
Last updated 02/10/2020
-# Migrate your IaaS resources to Azure Resource Manager by September 1, 2023
+# Migrate your IaaS resources to Azure Resource Manager by September 6, 2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
-In 2014, we launched infrastructure as a service (IaaS) on [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). We've been enhancing capabilities ever since. Because Azure Resource Manager now has full IaaS capabilities and other advancements, we deprecated the management of IaaS virtual machines (VMs) through [Azure Service Manager](./migration-classic-resource-manager-faq.yml) (ASM) on February 28, 2020. This functionality will be fully retired on September 1, 2023.
+In 2014, we launched infrastructure as a service (IaaS) on [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). We've been enhancing capabilities ever since. Because Azure Resource Manager now has full IaaS capabilities and other advancements, we deprecated the management of IaaS virtual machines (VMs) through [Azure Service Manager](./migration-classic-resource-manager-faq.yml) (ASM) on February 28, 2020. This functionality will be fully retired on September 6, 2023.
-Today, about 90 percent of the IaaS VMs are using Azure Resource Manager. If you use IaaS resources through ASM, start planning your migration now. Complete it by September 1, 2023, to take advantage of [Azure Resource Manager](../azure-resource-manager/management/index.yml).
+Today, about 90 percent of the IaaS VMs are using Azure Resource Manager. If you use IaaS resources through ASM, start planning your migration now. Complete it by September 6, 2023, to take advantage of [Azure Resource Manager](../azure-resource-manager/management/index.yml).
VMs created using the classic deployment model will follow the [Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy) for retirement. ## How does this affect me? - As of February 28, 2020, customers who didn't utilize IaaS VMs through ASM in the month of February 2020 can no longer create VMs (classic). -- On September 1, 2023, customers will no longer be able to start IaaS VMs by using ASM. Any that are still running or allocated will be stopped and deallocated. -- On September 1, 2023, subscriptions that are not migrated to Azure Resource Manager will be informed regarding timelines for deleting any remaining VMs (classic).
+- On September 6, 2023, customers will no longer be able to start IaaS VMs by using ASM. Any that are still running or allocated will be stopped and deallocated.
+- On September 6, 2023, subscriptions that are not migrated to Azure Resource Manager will be informed regarding timelines for deleting any remaining VMs (classic). (A query sketch for finding any remaining classic VMs follows this list.)
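+
+To take stock of any VMs (classic) still left in a subscription before the retirement date, a generic resource query can help. This is a minimal sketch; the output columns are chosen for illustration:
+
+```azurecli
+# List remaining VMs (classic) in the current subscription
+az resource list \
+  --resource-type Microsoft.ClassicCompute/virtualMachines \
+  --query "[].{Name:name, ResourceGroup:resourceGroup, Location:location}" \
+  --output table
+```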
This retirement does *not* affect the following Azure services and functionality: - Storage accounts *not* used by VMs (classic)
virtual-machines Quick Cluster Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-cluster-create-terraform.md
+
+ Title: 'Quickstart: Create a Linux VM cluster in Azure using Terraform'
+description: In this article, you learn how to create a Linux VM cluster in Azure using Terraform
+++++ Last updated : 07/24/2023++
+content_well_notification:
+ - AI-contribution
++
+# Quickstart: Create a Linux VM cluster in Azure using Terraform
+
+**Applies to:** :heavy_check_mark: Linux VMs
+
+This article shows you how to create a Linux VM cluster (containing two Linux VM instances) in Azure using Terraform.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Create a virtual network using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network)
+> * Create a subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet)
+> * Create a public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip)
+> * Create a load balancer using [azurerm_lb](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/lb)
+> * Create a load balancer address pool using [azurerm_lb_backend_address_pool](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/lb_backend_address_pool)
+> * Create a network interface using [azurerm_network_interface](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface)
+> * Create a managed disk using [azurerm_managed_disk](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/managed_disk)
+> * Create an availability set using [azurerm_availability_set](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/availability_set)
+> * Create a Linux virtual machine using [azurerm_linux_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine)
+> * Create an AzAPI resource [azapi_resource](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource).
+> * Create an AzAPI resource to generate an SSH key pair using [azapi_resource_action](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource_action).
+
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-cluster-linux). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-cluster-linux/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-linux/providers.tf":::
+
+1. Create a file named `ssh.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-linux/ssh.tf":::
+
+1. Create a file named `main.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-linux/main.tf":::
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-linux/variables.tf":::
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-linux/outputs.tf":::
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Run [az vm list](/cli/azure/vm#az-vm-list) with a [JMESPath](/cli/azure/query-azure-cli) query to display the names of the virtual machines created in the resource group.
+
+ ```azurecli
+ az vm list \
+ --resource-group $resource_group_name \
+ --query "[].{\"VM Name\":name}" -o table
+ ```
+
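+ Optionally, you can confirm that both cluster VMs are running by including instance details in the listing. This is a sketch; the `--show-details` flag adds the power state to the output:
+
+ ```azurecli
+ az vm list \
+ --resource-group $resource_group_name \
+ --show-details \
+ --query "[].{Name:name, PowerState:powerState}" -o table
+ ```
+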
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Run [Get-AzVm](/powershell/module/az.compute/get-azvm) to display the names of all the virtual machines in the resource group.
+
+ ```azurepowershell
+ Get-AzVm -ResourceGroupName $resource_group_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure Linux virtual machine tutorials](./tutorial-manage-vm.md)
virtual-machines Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-terraform.md
Previously updated : 08/31/2022 Last updated : 07/24/2023
+content_well_notification:
+ - AI-contribution
# Quickstart: Use Terraform to create a Linux VM
Article tested with the following Terraform and Terraform provider versions: -- [Terraform v1.2.7](https://releases.hashicorp.com/terraform/)-- [AzureRM Provider v.3.20.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)- This article shows you how to create a complete Linux environment and supporting resources with Terraform. Those resources include a virtual network, subnet, public IP address, and more. [!INCLUDE [Terraform abstract](~/azure-dev-docs-pr/articles/terraform/includes/abstract.md)] In this article, you learn how to: > [!div class="checklist"]-
-> * Create a virtual network
-> * Create a subnet
-> * Create a public IP address
-> * Create a network security group and SSH inbound rule
-> * Create a virtual network interface card
-> * Connect the network security group to the network interface
-> * Create a storage account for boot diagnostics
-> * Create SSH key
-> * Create a virtual machine
-> * Use SSH to connect to virtual machine
-
-> [!NOTE]
-> The example code in this article is located in the [Microsoft Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-with-infrastructure). See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Create a virtual network (VNET) using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network).
+> * Create a subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet).
+> * Create a public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip).
+> * Create a network security group using [azurerm_network_security_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_security_group).
+> * Create a network interface using [azurerm_network_interface](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface).
+> * Create an association between the network security group and the network interface using [azurerm_network_interface_security_group_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_security_group_association).
+> * Generate a random value for a unique storage account name using [random_id](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id).
+> * Create a storage account for boot diagnostics using [azurerm_storage_account](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account).
+> * Create a Linux VM using [azurerm_linux_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine)
+> * Create an AzAPI resource [azapi_resource](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource).
+> * Create an AzAPI resource to generate an SSH key pair using [azapi_resource_action](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource_action).
## Prerequisites - - [Install and configure Terraform](/azure/developer/terraform/quickstart-configure) ## Implement the Terraform code
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-with-infrastructure). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-vm-with-infrastructure/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+ 1. Create a directory in which to test the sample Terraform code and make it the current directory. 1. Create a file named `providers.tf` and insert the following code:
- [!code-terraform[master](~/terraform_samples/quickstart/101-vm-with-infrastructure/providers.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-with-infrastructure/providers.tf":::
+
+1. Create a file named `ssh.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-with-infrastructure/ssh.tf":::
1. Create a file named `main.tf` and insert the following code:
- [!code-terraform[master](~/terraform_samples/quickstart/101-vm-with-infrastructure/main.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-with-infrastructure/main.tf":::
1. Create a file named `variables.tf` and insert the following code:
- [!code-terraform[master](~/terraform_samples/quickstart/101-vm-with-infrastructure/variables.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-with-infrastructure/variables.tf":::
1. Create a file named `outputs.tf` and insert the following code:
- [!code-terraform[master](~/terraform_samples/quickstart/101-vm-with-infrastructure/outputs.tf)]
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-with-infrastructure/outputs.tf":::
## Initialize Terraform
In this article, you learn how to:
## Verify the results
-To use SSH to connect to the virtual machine, do the following steps:
+#### [Azure CLI](#tab/azure-cli)
-1. Run [terraform output](https://www.terraform.io/cli/commands/output) to get the SSH private key and save it to a file.
+1. Get the Azure resource group name.
```console
- terraform output -raw tls_private_key > id_rsa
+ resource_group_name=$(terraform output -raw resource_group_name)
```
-1. Run [terraform output](https://www.terraform.io/cli/commands/output) to get the virtual machine public IP address.
+1. Run [az vm list](/cli/azure/vm#az-vm-list) with a [JMESPath](/cli/azure/query-azure-cli) query to display the names of the virtual machines created in the resource group.
- ```console
- terraform output public_ip_address
+ ```azurecli
+ az vm list \
+ --resource-group $resource_group_name \
+ --query "[].{\"VM Name\":name}" -o table
```
-1. Use SSH to connect to the virtual machine.
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
```console
- ssh -i id_rsa azureuser@<public_ip_address>
+ $resource_group_name=$(terraform output -raw resource_group_name)
```
- **Key points:**
- - Depending on the permissions of your environment, you might get an error when trying to ssh into the virtual machine using the `id_rsa` key file. If you get an error stating that the private key file is unprotected and can't be used, try running the following command: `chmod 600 id_rsa`, which will restrict read and write access to the owner of the file.
+1. Run [Get-AzVm](/powershell/module/az.compute/get-azvm) to display the names of all the virtual machines in the resource group.
+
+ ```azurepowershell
+ Get-AzVm -ResourceGroupName $resource_group_name
+ ```
++ ## Clean up resources
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
This scope is integrated with [update management center](../update-center/overvi
- [Patch orchestration](automatic-vm-guest-patching.md#patch-orchestration-modes) for virtual machines needs to be set to AutomaticByPlatform
- :::image type="content" source="./media/maintenance-configurations/add-schedule-maintenance-window.png" alt-text="Screenshot of the upper maintenance window minimum time specification.":::
+ :::image type="content" source="./media/maintenance-configurations/add-schedule-maintenance-window.png" alt-text="Screenshot of the upper maintenance window time.":::
 - The upper maintenance window is 3 hours 55 minutes. - A minimum of 1 hour and 30 minutes is required for the maintenance window. (A CLI sketch follows this list.)
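
For reference, a maintenance configuration with the guest (in-guest patch) scope can also be created from the command line. The following is a rough sketch only: it assumes the `maintenance` Azure CLI extension is installed, and the flag names and example values (resource group, configuration name, window settings) are illustrative and may differ between extension versions.

```azurecli
# Rough sketch: a daily in-guest patch window of 3 hours 55 minutes (the maximum noted above)
az maintenance configuration create \
  --resource-group myResourceGroup \
  --resource-name myInGuestPatchConfig \
  --location eastus \
  --maintenance-scope InGuestPatch \
  --maintenance-window-duration "03:55" \
  --maintenance-window-recur-every "Day" \
  --maintenance-window-start-date-time "2023-08-01 03:00" \
  --maintenance-window-time-zone "UTC"
```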
virtual-machines Migration Classic Resource Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-cli.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 6, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
These steps show you how to use CLI commands to migrate infrastructure as a service (IaaS) resources from the classic deployment model to the Azure Resource Manager deployment model. The article requires the [Azure classic CLI](/cli/azure/install-classic-cli). Since Azure CLI only applies to Azure Resource Manager resources, it cannot be used for this migration.
virtual-machines Migration Classic Resource Manager Community Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-community-tools.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 6, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
This article catalogs the tools that have been provided by the community to assist with migration of IaaS resources from classic to the Azure Resource Manager deployment model.
virtual-machines Migration Classic Resource Manager Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-deep-dive.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](./classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 6, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](./classic-vm-deprecation.md#how-does-this-affect-me).
Let's take a deep-dive on migrating from the Azure classic deployment model to the Azure Resource Manager deployment model. We look at resources at a resource and feature level to help you understand how the Azure platform migrates resources between the two deployment models. For more information, please read the service announcement article: [Platform-supported migration of IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-overview.md).
virtual-machines Migration Classic Resource Manager Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-errors.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 6, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
This article catalogs the most common errors and mitigations during the migration of IaaS resources from Azure classic deployment model to the Azure Resource Manager stack.
virtual-machines Migration Classic Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-overview.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 6, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
virtual-machines Migration Classic Resource Manager Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-plan.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 6, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
While Azure Resource Manager offers numerous amazing features, it's critical to plan out your migration journey to make sure things go smoothly. Spending time on planning will ensure that you don't encounter issues while executing migration activities.
There are four general phases of the migration journey:
Depending on your technical requirements size, geographies and operational practices, you might want to consider: 1. Why is Azure Resource Manager desired for your organization? What are the business reasons for a migration?
-2. What are the technical reasons for Azure Resource Manager? What (if any) additional Azure services would you like to leverage?
+2. What are the technical reasons for Azure Resource Manager? What (if any) other Azure services would you like to use?
3. Which application (or sets of virtual machines) is included in the migration? 4. Which scenarios are supported with the migration API? Review the [unsupported features and configurations](migration-classic-resource-manager-overview.md). 5. Will your operational teams now support applications/VMs in both Classic and Azure Resource Manager?
Successful customers have detailed plans where the preceding questions are discu
The following were issues discovered in many of the larger migrations. This isn't an exhaustive list and you should refer to the [unsupported features and configurations](migration-classic-resource-manager-overview.md) for more detail. You may or may not encounter these technical issues but if you do solving these before attempting migration will ensure a smoother experience. -- **Do a Validate/Prepare/Abort Dry Run** - This is perhaps the most important step to ensure Classic to Azure Resource Manager migration success. The migration API has three main steps: Validate, Prepare and Commit. Validate will read the state of your classic environment and return a result of all issues. However, because some issues might exist in the Azure Resource Manager stack, Validate won't catch everything. The next step in migration process, Prepare will help expose those issues. Prepare will move the metadata from Classic to Azure Resource Manager, but won't commit the move, and won't remove or change anything on the Classic side. The dry run involves preparing the migration, then aborting (**not committing**) the migrations prepare. The goal of validate/prepare/abort dry run is to see all of the metadata in the Azure Resource Manager stack, examine it (*programmatically or in Portal*), and verify that everything migrates correctly, and work through technical issues. It will also give you a sense of migration duration so you can plan for downtime accordingly. A validate/prepare/abort does not cause any user downtime; therefore, it is non-disruptive to application usage.
- - The items below will need to be solved before the dry run, but a dry run test will also safely flush out these preparation steps if they are missed. During enterprise migration, we've found the dry run to be a safe and invaluable way to ensure migration readiness.
+- **Do a Validate/Prepare/Abort Dry Run** - This is perhaps the most important step to ensure Classic to Azure Resource Manager migration success. The migration API has three main steps: Validate, Prepare and Commit. Validate will read the state of your classic environment and return a result of all issues. However, because some issues might exist in the Azure Resource Manager stack, Validate won't catch everything. The next step in migration process, Prepare will help expose those issues. Prepare will move the metadata from Classic to Azure Resource Manager, but won't commit the move, and won't remove or change anything on the Classic side. The dry run involves preparing the migration, then aborting (**not committing**) the migrations prepare. The goal of validate/prepare/abort dry run is to see all of the metadata in the Azure Resource Manager stack, examine it (*programmatically or in Portal*), and verify that everything migrates correctly, and work through technical issues. It will also give you a sense of migration duration so you can plan for downtime accordingly. A validate/prepare/abort doesn't cause any user downtime; therefore, it's nondisruptive to application usage.
+ - The items below will need to be solved before the dry run, but a dry run test will also safely flush out these preparation steps if they're missed. During enterprise migration, we've found the dry run to be a safe and invaluable way to ensure migration readiness.
- When prepare is running, the control plane (Azure management operations) will be locked for the whole virtual network, so no changes can be made to VM metadata during validate/prepare/abort. But otherwise any application function (RD, VM usage, etc.) will be unaffected. Users of the VMs won't know that the dry run is being executed. -- **Express Route Circuits and VPN**. Currently Express Route Gateways with authorization links cannot be migrated without downtime. For the workaround, see [Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model](../expressroute/expressroute-migration-classic-resource-manager.md).
+- **Express Route Circuits and VPN**. Currently Express Route Gateways with authorization links can't be migrated without downtime. For the workaround, see [Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model](../expressroute/expressroute-migration-classic-resource-manager.md).
-- **VM Extensions** - Virtual Machine extensions are potentially one of the biggest roadblocks to migrating running VMs. Remediation of VM Extensions could take upwards of 1-2 days, so plan accordingly. A working Azure agent is needed to report back VM Extension status of running VMs. If the status comes back as bad for a running VM, this will halt migration. The agent itself does not need to be in working order to enable migration, but if extensions exist on the VM, then both a working agent AND outbound internet connectivity (with DNS) will be needed for migration to move forward.
- - If connectivity to a DNS server is lost during migration, all VM Extensions except BGInfo v1.\* need to first be removed from every VM before migration prepare, and subsequently re-added back to the VM after Azure Resource Manager migration. **This is only for VMs that are running.** If the VMs are stopped deallocated, VM Extensions do not need to be removed. **Note:** Many extensions like Azure diagnostics and Defender for Cloud monitoring will reinstall themselves after migration, so removing them is not a problem.
- - In addition, make sure Network Security Groups are not restricting outbound internet access. This can happen with some Network Security Groups configurations. Outbound internet access (and DNS) is needed for VM Extensions to be migrated to Azure Resource Manager.
- - There are two versions of the BGInfo extension: v1 and v2. If the VM was created using the Azure portal or PowerShell, the VM will likely have the v1 extension on it. This extension does not need to be removed and will be skipped (not migrated) by the migration API. However, if the Classic VM was created with the new Azure portal, it will likely have the JSON-based v2 version of BGInfo, which can be migrated to Azure Resource Manager provided the agent is working and has outbound internet access (and DNS).
+- **VM Extensions** - Virtual Machine extensions are potentially one of the biggest roadblocks to migrating running VMs. Remediation of VM Extensions could take upwards of 1-2 days, so plan accordingly. A working Azure agent is needed to report back VM Extension status of running VMs. If the status comes back as bad for a running VM, this will halt migration. The agent itself doesn't need to be in working order to enable migration, but if extensions exist on the VM, then both a working agent AND outbound internet connectivity (with DNS) will be needed for migration to move forward.
+ - If connectivity to a DNS server is lost during migration, all VM Extensions except BGInfo v1.\* need to first be removed from every VM before migration prepare, and subsequently re-added back to the VM after Azure Resource Manager migration. **This is only for VMs that are running.** If the VMs are stopped deallocated, VM Extensions don't need to be removed. **Note:** Many extensions like Azure diagnostics and Defender for Cloud monitoring will reinstall themselves after migration, so removing them isn't a problem.
+ - In addition, make sure Network Security Groups aren't restricting outbound internet access. This can happen with some Network Security Groups configurations. Outbound internet access (and DNS) is needed for VM Extensions to be migrated to Azure Resource Manager.
+ - There are two versions of the BGInfo extension: v1 and v2. If the VM was created using the Azure portal or PowerShell, the VM will likely have the v1 extension on it. This extension doesn't need to be removed and will be skipped (not migrated) by the migration API. However, if the Classic VM was created with the new Azure portal, it will likely have the JSON-based v2 version of BGInfo, which can be migrated to Azure Resource Manager provided the agent is working and has outbound internet access (and DNS).
- **Remediation Option 1**. If you know your VMs won't have outbound internet access, a working DNS service, and working Azure agents on the VMs, then uninstall all VM extensions as part of the migration before Prepare, then reinstall the VM Extensions after migration. - **Remediation Option 2**. If VM extensions are too big of a hurdle, another option is to shutdown/deallocate all VMs before migration. Migrate the deallocated VMs, then restart them on the Azure Resource Manager side. The benefit here is that VM extensions will migrate. The downside is that all public facing Virtual IPs will be lost (this may be a non-starter), and obviously the VMs will shut down causing a much greater impact on working applications. > [!NOTE] > If a Microsoft Defender for Cloud policy is configured against the running VMs being migrated, the security policy needs to be stopped before removing extensions, otherwise the security monitoring extension will be reinstalled automatically on the VM after removing it. -- **Availability Sets** - For a virtual network (vNet) to be migrated to Azure Resource Manager, the Classic deployment (i.e. cloud service) contained VMs must all be in one availability set, or the VMs must all not be in any availability set. Having more than one availability set in the cloud service is not compatible with Azure Resource Manager and will halt migration. Additionally, there cannot be some VMs in an availability set, and some VMs not in an availability set. To resolve this, you will need to remediate or reshuffle your cloud service. Plan accordingly as this might be time consuming.
+- **Availability Sets** - For a virtual network (vNet) to be migrated to Azure Resource Manager, the Classic deployment (i.e. cloud service) contained VMs must all be in one availability set, or the VMs must all not be in any availability set. Having more than one availability set in the cloud service isn't compatible with Azure Resource Manager and will halt migration. Additionally, there can't be some VMs in an availability set, and some VMs not in an availability set. To resolve this, you'll need to remediate or reshuffle your cloud service. Plan accordingly as this might be time consuming.
-- **Web/Worker Role Deployments** - Cloud Services containing web and worker roles cannot migrate to Azure Resource Manager. The web/worker roles must first be removed from the virtual network before migration can start. A typical solution is to just move web/worker role instances to a separate Classic virtual network that is also linked to an ExpressRoute circuit, or to migrate the code to newer PaaS App Services (this discussion is beyond the scope of this document). In the former redeploy case, create a new Classic virtual network, move/redeploy the web/worker roles to that new virtual network, then delete the deployments from the virtual network being moved. No code changes required. The new [Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md) capability can be used to peer together the classic virtual network containing the web/worker roles and other virtual networks in the same Azure region such as the virtual network being migrated (**after virtual network migration is completed as peered virtual networks cannot be migrated**), hence providing the same capabilities with no performance loss and no latency/bandwidth penalties. Given the addition of [Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md), web/worker role deployments can now easily be mitigated and not block the migration to Azure Resource Manager.
+- **Web/Worker Role Deployments** - Cloud Services containing web and worker roles can't migrate to Azure Resource Manager. The web/worker roles must first be removed from the virtual network before migration can start. A typical solution is to just move web/worker role instances to a separate Classic virtual network that is also linked to an ExpressRoute circuit, or to migrate the code to newer PaaS App Services (this discussion is beyond the scope of this document). In the former redeploy case, create a new Classic virtual network, move/redeploy the web/worker roles to that new virtual network, then delete the deployments from the virtual network being moved. No code changes required. The new [Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md) capability can be used to peer together the classic virtual network containing the web/worker roles and other virtual networks in the same Azure region such as the virtual network being migrated (**after virtual network migration is completed as peered virtual networks cannot be migrated**), hence providing the same capabilities with no performance loss and no latency/bandwidth penalties. Given the addition of [Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md), web/worker role deployments can now easily be mitigated and not block the migration to Azure Resource Manager.
- **Azure Resource Manager Quotas** - Azure regions have separate quotas/limits for both Classic and Azure Resource Manager. Even though in a migration scenario new hardware isn't being consumed *(we're swapping existing VMs from Classic to Azure Resource Manager)*, Azure Resource Manager quotas still need to be in place with enough capacity before migration can start. Listed below are the major limits we've seen cause problems. Open a quota support ticket to raise the limits.
The following were issues discovered in many of the larger migrations. This isn'
- **Provisioning Timed Out VM Status** - If any VM has the status of **provisioning timed out**, this needs to be resolved pre-migration. The only way to do this is with downtime by deprovisioning/reprovisioning the VM (delete it, keep the disk, and recreate the VM). -- **RoleStateUnknown VM Status** - If migration halts due to a **role state unknown** error message, inspect the VM using the portal and ensure it is running. This error will typically go away on its own (no remediation required) after a few minutes and is often a transient type often seen during a Virtual Machine **start**, **stop**, **restart** operations. **Recommended practice:** re-try migration again after a few minutes.
+- **RoleStateUnknown VM Status** - If migration halts due to a **role state unknown** error message, inspect the VM using the portal and ensure it's running. This error will typically go away on its own (no remediation required) after a few minutes and is often a transient type often seen during a Virtual Machine **start**, **stop**, **restart** operations. **Recommended practice:** re-try migration again after a few minutes.
-- **Fabric Cluster does not exist** - In some cases, certain VMs cannot be migrated for various odd reasons. One of these known cases is if the VM was recently created (within the last week or so) and happened to land an Azure cluster that is not yet equipped for Azure Resource Manager workloads. You will get an error that says **fabric cluster does not exist** and the VM cannot be migrated. Waiting a couple of days will usually resolve this particular problem as the cluster will soon get Azure Resource Manager enabled. However, one immediate workaround is to `stop-deallocate` the VM, then continue forward with migration, and start the VM back up in Azure Resource Manager after migrating.
+- **Fabric Cluster does not exist** - In some cases, certain VMs can't be migrated for various odd reasons. One of these known cases is if the VM was recently created (within the last week or so) and happened to land an Azure cluster that isn't yet equipped for Azure Resource Manager workloads. You'll get an error that says **fabric cluster does not exist** and the VM can't be migrated. Waiting a couple of days will usually resolve this particular problem as the cluster will soon get Azure Resource Manager enabled. However, one immediate workaround is to `stop-deallocate` the VM, then continue forward with migration, and start the VM back up in Azure Resource Manager after migrating.
### Pitfalls to avoid -- Do not take shortcuts and omit the validate/prepare/abort dry run migrations.
+- Don't take shortcuts and omit the validate/prepare/abort dry run migrations.
- Most, if not all, of your potential issues will surface during the validate/prepare/abort steps. ## Migration ### Technical considerations and tradeoffs
-Now you are ready because you have worked through the known issues with your environment.
+Now you're ready because you have worked through the known issues with your environment.
For the real migrations, you might want to consider:
virtual-machines Migration Classic Resource Manager Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-ps.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 6, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
These steps show you how to use Azure PowerShell commands to migrate infrastructure as a service (IaaS) resources from the classic deployment model to the Azure Resource Manager deployment model.
virtual-machines Nc Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-series-retirement.md
Title: NC-series retirement
-description: NC-series retirement by August 31, 2023
+description: NC-series retirement by September 6, 2023
Last updated 12/20/2022
-# Migrate your NC and NC_Promo series virtual machines by August 31, 2023
-Based on feedback we've received from customers we're happy to announce that we're extending the retirement date by 1 year to 31 August 2023, for the Azure NC-Series virtual machine to give you more time to plan your migration.
+# Migrate your NC and NC_Promo series virtual machines by September 6, 2023
+Based on feedback we've received from customers, we're happy to announce that we're extending the retirement date by one year, to 6 September 2023, for the Azure NC-Series virtual machine to give you more time to plan your migration.
As we continue to bring modern and optimized virtual machine instances to Azure using the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
-With this in mind, we're retiring our NC (v1) GPU VM sizes, powered by NVIDIA Tesla K80 GPUs on 31 August 2023.
+With this in mind, we're retiring our NC (v1) GPU VM sizes, powered by NVIDIA Tesla K80 GPUs on 6 September 2023.
## How does the NC-series migration affect me?
-After 31 August 2023, any remaining NC size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. These virtual machines will no longer be billed in the deallocated state.
+After 6 September 2023, any NC size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. These virtual machines will no longer be billed in the deallocated state.
This VM size retirement only impacts the VM sizes in the [NC-series](nc-series.md). This does not impact the newer [NCv3](ncv3-series.md), [NCasT4 v3](nct4-v3-series.md), and [NC A100 v4](nc-a100-v4-series.md) series virtual machines.
virtual-machines Nc Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets > [!IMPORTANT]
-> NC and NC_Promo series Azure virtual machines (VMs) will be retired on August 31st, 2023. For more information, see the [NC and NC_Promo retirement information](nc-series-retirement.md). For how to migrate your workloads to other VM sizes, see the [GPU compute migration guide](n-series-migration.md).
+> NC and NC_Promo series Azure virtual machines (VMs) will be retired on September 6, 2023. For more information, see the [NC and NC_Promo retirement information](nc-series-retirement.md). For how to migrate your workloads to other VM sizes, see the [GPU compute migration guide](n-series-migration.md).
> > This retirement announcement doesn't apply to NCv3, NCasT4v3 and NC A100 v4 series VMs.
virtual-machines Ncv2 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv2-series-retirement.md
Title: NCv2-series retirement
-description: NCv2-series retirement by August 31, 2023
+description: NCv2-series retirement by September 6, 2023
Last updated 11/21/2022
-# Migrate your NCv2 series virtual machines by August 31, 2023
-Based on feedback we've received from customers we're happy to announce that we're extending the retirement date by 1 year to August 31, 2023, for the Azure NCv2-Series virtual machine to give you more time to plan your migration.
+# Migrate your NCv2 series virtual machines by September 6, 2023
+We're happy to announce that we're extending the retirement date by one year to September 6, 2023, for the Azure NCv2-Series virtual machine to give you more time to plan your migration.
As we continue to bring modern and optimized virtual machine instances to Azure using the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
-With this in mind, we are retiring our NC (v2) GPU VM sizes, powered by NVIDIA Tesla P100 GPUs on 31 August 2023.
+
+We are retiring our NC (v2) GPU VM sizes, powered by NVIDIA Tesla P100 GPUs on 6 September 2023.
## How does the NCv2-series migration affect me?
-After 31 August 2023, any remaining NCv2 size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. These virtual machines will no longer be billed in the deallocated state.
+After 6 September 2023, any NCv2 size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. These virtual machines will no longer be billed in the deallocated state.
-This VM size retirement only impacts the VM sizes in the [NCv2-series](ncv2-series.md). This does not impact the newer [NCv3](ncv3-series.md), [NCasT4 v3](nct4-v3-series.md), and [NC A100 v4](nc-a100-v4-series.md) series virtual machines.
+This VM size retirement only impacts the VM sizes in the [NCv2-series](ncv2-series.md). This doesn't impact the newer [NCv3](ncv3-series.md), [NCasT4 v3](nct4-v3-series.md), and [NC A100 v4](nc-a100-v4-series.md) series virtual machines.
## What actions should I take? You need to resize or deallocate your NC virtual machines. We recommend moving your GPU workloads to another GPU Virtual Machine size. Learn more about migrating your workloads to another [GPU Accelerated Virtual Machine size](sizes-gpu.md).
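
To move off the retiring sizes, you can resize each VM to a newer GPU size or deallocate it until you migrate. The following Azure CLI sketch is illustrative only; the resource group, VM name, and target size are example values (confirm that the target size is available in your region first):

```azurecli
# Resize to a newer GPU size (example target: NCv3)
az vm resize --resource-group myResourceGroup --name myGpuVM --size Standard_NC6s_v3

# Or deallocate the VM to stop billing until you migrate
az vm deallocate --resource-group myResourceGroup --name myGpuVM
```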
virtual-machines Ncv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv2-series.md
> [!IMPORTANT]
-> NCv2 series Azure virtual machines (VMs) will be retired on August 31st, 2023. For more information, see the [NCv2 retirement information](ncv2-series-retirement.md). For how to migrate your workloads to other VM sizes, see the [GPU compute migration guide](n-series-migration.md).
+> NCv2 series Azure virtual machines (VMs) will be retired on September 6, 2023. For more information, see the [NCv2 retirement information](ncv2-series-retirement.md). For how to migrate your workloads to other VM sizes, see the [GPU compute migration guide](n-series-migration.md).
> > This retirement announcement doesn't apply to NCv3, NCasT4v3 and NC A100 v4 series VMs.
virtual-machines Nd Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-series-retirement.md
Title: ND-series retirement
-description: ND-series retirement by August 31, 2023
+description: ND-series retirement by September 6, 2023
Last updated 02/27/2023
-# Migrate your ND series virtual machines by August 31, 2023
-Based on feedback we've received from customers we're happy to announce that we're extending the retirement date by one year to 31 August 2023, for the Azure ND-Series virtual machine to give you more time to plan your migration.
+# Migrate your ND series virtual machines by September 6, 2023
+Based on feedback we've received from customers, we're happy to announce that we're extending the retirement date by one year, to 6 September 2023, for the Azure ND-Series virtual machine to give you more time to plan your migration.
As we continue to bring modern and optimized virtual machine instances to Azure leveraging the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
-With this in mind, we're retiring our ND GPU VM sizes, powered by NVIDIA Tesla P40 GPUs on 31 August 2023.
+With this in mind, we're retiring our ND GPU VM sizes, powered by NVIDIA Tesla P40 GPUs on 6 September 2023.
## How does the ND-series migration affect me?
-After 31 August 2023, any remaining ND size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. These virtual machines will no longer be billed in the deallocated state.
+After 6 September 2023, any ND size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. They will no longer be billed while in the deallocated state.
This VM size retirement only impacts the VM sizes in the [ND-series](nd-series.md). This retirement doesn't impact the newer [NCv3](ncv3-series.md), [NC T4 v3](nct4-v3-series.md), and [ND v2](ndv2-series.md) series virtual machines.
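To inventory VMs that may be affected, a rough Azure CLI sketch such as the following can help. The JMESPath filter is illustrative: the `Standard_ND` prefix also matches NDv2 sizes (for example, `Standard_ND40rs_v2`), which aren't part of this retirement, so review the results before acting.

```azurecli
# List VMs in the current subscription whose size starts with "Standard_ND".
az vm list \
  --query "[?starts_with(hardwareProfile.vmSize, 'Standard_ND')].{Name:name, ResourceGroup:resourceGroup, Size:hardwareProfile.vmSize}" \
  --output table
```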
virtual-machines Nv Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series-retirement.md
Title: NV series retirement
-description: NV series retirement starting September 1, 2023
+description: NV series retirement starting September 6, 2023
Last updated 02/27/2023
-# Migrate your NV and NV_Promo series virtual machines by August 31, 2023
-Based on feedback we've received from customers we're happy to announce that we are extending the retirement date by 1 year to August 31, 2023, for the Azure NV-Series and NV_Promo Series virtual machine to give you more time to plan your migration.
+# Migrate your NV and NV_Promo series virtual machines by September 6, 2023
+We're happy to announce that we're extending the retirement date by one year to September 6, 2023, for the Azure NV-Series and NV_Promo Series virtual machines, to give you more time to plan your migration.
-We continue to bring modern and optimized virtual machine (VM) instances to Azure by using the latest innovations in datacenter technologies. As we innovate, we also thoughtfully plan how we retire aging hardware. With this context in mind, we're retiring our NV-series Azure VM sizes on August 31, 2023.
+We continue to bring modern and optimized virtual machine (VM) instances to Azure by using the latest innovations in datacenter technologies. As we innovate, we also thoughtfully plan how we retire aging hardware. With this context in mind, we're retiring our NV-series Azure VM sizes on September 6, 2023.
## How does the NV series migration affect me?
-After August 31, 2023, any remaining NV and NV_Promo-size VMs remaining in your subscription will be set to a deallocated state. These VMs will be stopped and removed from the host. These VMs will no longer be billed in the deallocated state.
+After September 6, 2023, any NV and NV_Promo-size VMs remaining in your subscription will be set to a deallocated state. These VMs will be stopped and removed from the host. They will no longer be billed while in the deallocated state.
The current VM size retirement only affects the VM sizes in the [NV series](nv-series.md). This retirement doesn't affect the [NVv3](nvv3-series.md) and [NVv4](nvv4-series.md) series VMs.
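As a sketch of the resize path with Azure CLI (resource names and the NVv3 target size `Standard_NV12s_v3` are examples only), deallocate first if the new size isn't available on the VM's current hardware cluster:

```azurecli
# Stop and deallocate, resize to an NVv3 example size, then start again.
az vm deallocate --resource-group myResourceGroup --name myNVvm
az vm resize --resource-group myResourceGroup --name myNVvm --size Standard_NV12s_v3
az vm start --resource-group myResourceGroup --name myNVvm
```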
virtual-machines Nv Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series.md
> [!IMPORTANT]
-> NV and NV_Promo series Azure virtual machines (VMs) will be retired on August 31st, 2023. For more information, see the [NV and NV_Promo retirement information](nv-series-retirement.md). For how to migrate your workloads to other VM sizes, see the [NV and NV_Promo series migration guide](nv-series-migration-guide.md).
+> NV and NV_Promo series Azure virtual machines (VMs) will be retired on September 6, 2023. For more information, see the [NV and NV_Promo retirement information](nv-series-retirement.md). For how to migrate your workloads to other VM sizes, see the [NV and NV_Promo series migration guide](nv-series-migration-guide.md).
> > This retirement announcement doesn't apply to NVv3 and NVv4 series VMs.
virtual-machines Quick Cluster Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-cluster-create-terraform.md
+
+ Title: 'Quickstart: Create a Windows VM cluster in Azure using Terraform'
+description: In this article, you learn how to create a Windows VM cluster in Azure using Terraform
+++++ Last updated : 07/24/2023++
+content_well_notification:
+ - AI-contribution
++
+# Quickstart: Create a Windows VM cluster in Azure using Terraform
+
+**Applies to:** :heavy_check_mark: Windows VMs
+
+This article shows you how to create a Windows VM cluster (containing three Windows VM instances) in Azure using Terraform.
+
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Create a random value for the Windows VM host name using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string).
+> * Create a random password for the Windows VMs using [random_password](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password).
+> * Create a Windows VM using the [compute module](https://registry.terraform.io/modules/Azure/compute/azurerm).
+> * Create a virtual network along with a subnet using the [network module](https://registry.terraform.io/modules/Azure/network/azurerm).
+
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/UserStory89540/quickstart/101-vm-cluster-windows). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/UserStory89540/quickstart/101-vm-cluster-windows/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-windows/providers.tf":::
+
+1. Create a file named `main.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-windows/main.tf":::
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-windows/variables.tf":::
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-vm-cluster-windows/outputs.tf":::
+
+## Initialize Terraform
++
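+As a minimal sketch of this step, initializing the working directory comes down to running `terraform init`; the `-upgrade` flag shown here is optional and upgrades provider plugins to the newest versions allowed by the configuration.
+
+```console
+terraform init -upgrade
+```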
+## Create a Terraform execution plan
++
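+As a sketch, create an execution plan and save it to a file; the `main.tfplan` file name here is just an example.
+
+```console
+terraform plan -out main.tfplan
+```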
+## Apply a Terraform execution plan
++
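+Applying the saved plan is then a single command, using the same example `main.tfplan` file name:
+
+```console
+terraform apply main.tfplan
+```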
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Run [az vm list](/cli/azure/vm#az-vm-list) with a [JMESPath](/cli/azure/query-azure-cli) query to display the names of the virtual machines created in the resource group.
+
+ ```azurecli
+ az vm list \
+ --resource-group $resource_group_name \
+ --query "[].{\"VM Name\":name}" -o table
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Run [Get-AzVm](/powershell/module/az.compute/get-azvm) to display the names of all the virtual machines in the resource group.
+
+ ```azurepowershell
+ Get-AzVm -ResourceGroupName $resource_group_name
+ ```
+++
+## Clean up resources
++
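+As a sketch of this step, destroy everything the configuration created; Terraform prompts for confirmation unless you pass `-auto-approve`.
+
+```console
+terraform destroy
+```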
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure Windows virtual machine tutorials](./tutorial-manage-vm.md)
virtual-wan How To Palo Alto Cloud Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-palo-alto-cloud-ngfw.md
To create a new virtual WAN, use the steps in the following article:
## Known limitations
-* Palo Alto Networks Cloud NGFW is only available in the following Azure regions: Central US, East US, East US 2, West Europe and Australia East. Other Azure regions are on the roadmap.
+* Palo Alto Networks Cloud NGFW is only available in the following Azure regions: Central US, East US, East US 2, West US, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central and East Asia. Other Azure regions are on the roadmap.
* Palo Alto Networks Cloud NGFW can only be deployed in new Virtual WAN hubs deployed with Azure resource tag **"hubSaaSPreview : true"**. Using existing Virtual Hubs with Palo Alto Networks Cloud NGFW is on the roadmap. * Palo Alto Networks Cloud NGFW can't be deployed with Network Virtual Appliances in the Virtual WAN hub. * For routing between Virtual WAN and Palo Alto Networks Cloud NGFW to work properly, your entire network (on-premises and Virtual Networks) must be within RFC-1918 (subnets within 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). For example, you may not use a subnet such as 40.0.0.0/24 within your Virtual Network or on-premises. Traffic to 40.0.0.0/24 may not be routed properly.
vpn-gateway Create Routebased Vpn Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-routebased-vpn-gateway-powershell.md
Creating a gateway can often take 45 minutes or more, depending on the selected
```azurepowershell-interactive New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 ` -Location "East US" -IpConfigurations $gwipconfig -GatewayType "Vpn" `--VpnType "RouteBased" GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2"
+-VpnType "RouteBased" -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2"
``` ## <a name="viewgw"></a>View the VPN gateway