Updates from: 10/19/2024 01:08:06
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Policy Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/policy-keys-overview.md
The top-level resource for policy keys in Azure AD B2C is the **Keyset** contain
| Attribute | Required | Remarks |
| --- | --- | --- |
| `use` | Yes | Usage: Identifies the intended use of the public key. Encrypting data `enc`, or verifying the signature on data `sig`.|
-| `nbf`| No | Activation date and time. |
-| `exp`| No | Expiration date and time. |
+| `nbf`| No | Activation date and time. An override value can be set manually by admins.|
+| `exp`| No | Expiration date and time. An override value can be set manually by admins.|
We recommend setting the key activation and expiration values according to your PKI standards. You might need to rotate these certificates periodically for security or policy reasons. For example, you might have a policy to rotate all your certificates every year.
If an Azure AD B2C keyset has multiple keys, only one of the keys is active at a
- The key activation is based on the **activation date**.
- The keys are sorted by activation date in ascending order. Keys with activation dates further into the future appear lower in the list. Keys without an activation date are located at the bottom of the list.
- When the current date and time is greater than a key's activation date, Azure AD B2C will activate the key and stop using the prior active key.
-- When the current key's expiration time has elapsed and the key container contains a new key with valid *not before* and *expiration* times, the new key will become active automatically.
+- When the current key's expiration time has elapsed and the key container contains a new key with valid *nbf (not before)* and *exp (expiration)* times, the new key will become active automatically. New tokens will be signed with the newly active key. It is possible to keep an expired key published for token validation until disabled by an admin, but this must be requested by [filing a support request](/azure/active-directory-b2c/find-help-open-support-ticket).
+ - When the current key's expiration time has elapsed and the key container *does not* contain a new key with valid *not before* and *expiration* times, Azure AD B2C won't be able to use the expired key. Azure AD B2C will raise an error message within a dependent component of your custom policy. To avoid this issue, you can create a default key without activation and expiration dates as a safety net.
- The key's endpoint (JWKS URI) of the OpenId Connect well-known configuration endpoint reflects the keys configured in the Key Container, when the Key is referenced in the [JwtIssuer Technical Profile](./jwt-issuer-technical-profile.md). An application using an OIDC library will automatically fetch this metadata to ensure it uses the correct keys to validate tokens. For more information, learn how to use [Microsoft Authentication Library](../active-directory/develop/msal-b2c-overview.md), which always fetches the latest token signing keys automatically.
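
The rotation rules above boil down to: among the keys in a container, the active key is the most recently activated key whose *nbf*/*exp* window covers the current time. The following sketch illustrates that selection logic only; it isn't Azure AD B2C code, and the shape of the key list is assumed for illustration.

```python
from datetime import datetime, timezone

def pick_active_key(keys, now=None):
    """Illustration of the rotation rules above: keys are considered in ascending
    activation (nbf) order, and the last key whose nbf/exp window covers the
    current time wins. Keys without an nbf sort to the bottom of the list."""
    now = now or datetime.now(timezone.utc)
    ordered = sorted(keys, key=lambda k: (k.get("nbf") is None, k.get("nbf") or now))
    active = None
    for key in ordered:
        nbf, exp = key.get("nbf"), key.get("exp")
        if (nbf is None or nbf <= now) and (exp is None or now < exp):
            active = key  # a later valid key supersedes the previously active one
    return active
```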
+## Key caching
+
+When a key is uploaded, the activation flag on the key is set to false by default. You can then set the state of this key to **Enabled**. If a key is enabled and valid (the current time is between NBF and EXP), then the key will be used.
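
As a compact illustration of that rule (a sketch, not Azure AD B2C code; the datetime handling is simplified):

```python
from datetime import datetime, timezone

def key_is_used(enabled: bool, nbf: datetime, exp: datetime, now=None) -> bool:
    # A key is used only when it's enabled and the current time falls between NBF and EXP.
    now = now or datetime.now(timezone.utc)
    return enabled and nbf <= now <= exp
```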
+
+### Key state
+
+The activation flag property is modifiable within the Azure portal UX, allowing admins to disable a key and take it out of rotation.
## Policy key management

To get the current active key within a key container, use the Microsoft Graph API [getActiveKey](/graph/api/trustframeworkkeyset-getactivekey) endpoint.
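
For example, a minimal call to that endpoint might look like the following sketch. It assumes you already have a Graph access token with the appropriate key-set permissions and uses the beta endpoint path from the linked `getActiveKey` reference; the keyset name `B2C_1A_TokenSigningKeyContainer` is a placeholder.

```python
import requests

def get_active_key(access_token, keyset_id="B2C_1A_TokenSigningKeyContainer"):
    # Calls the Microsoft Graph getActiveKey function for a trust framework keyset.
    url = f"https://graph.microsoft.com/beta/trustFramework/keySets/{keyset_id}/getActiveKey"
    response = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
    response.raise_for_status()
    return response.json()  # the active key (kid, use, nbf/exp, and so on)
```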
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
When adding a region, you configure:
* The number of scale [units](upgrade-and-scale.md) that region will host.
-* Optional [zone redundancy](../reliability/migrate-api-mgt.md), if that region supports it.
+* Optional [availability zones](../reliability/migrate-api-mgt.md), if that region supports it.
* [Virtual network](virtual-network-concepts.md) settings in the added region, if networking is configured in the existing region or regions.
When adding a region, you configure:
## Prerequisites * If you haven't created an API Management service instance, see [Create an API Management service instance](get-started-create-service-instance.md). Select the Premium service tier.
-* If your API Management instance is deployed in a virtual network, ensure that you set up a virtual network and subnet in the location that you plan to add, and within the same subscription. To enable zone redundancy, also set up a new public IP. See [virtual network prerequisites](api-management-using-with-vnet.md#prerequisites).
+* If your API Management instance is deployed in a virtual network, ensure that you set up a virtual network and subnet in the location that you plan to add, and within the same subscription. See [virtual network prerequisites](api-management-using-with-vnet.md#prerequisites).
## <a name="add-region"> </a>Deploy API Management service to an additional region
This section provides considerations for multi-region deployments when the API M
* Learn more about configuring API Management for [high availability](high-availability.md).
-* Learn more about [zone redundancy](../reliability/migrate-api-mgt.md) to improve the availability of an API Management instance in a region.
+* Learn more about configuring [availability zones](../reliability/migrate-api-mgt.md) to improve the availability of an API Management instance in a region.
* For more information about virtual networks and API Management, see:
api-management High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/high-availability.md
Enabling [zone redundancy](../reliability/migrate-api-mgt.md) for an API Managem
When you enable zone redundancy in a region, consider the number of API Management scale [units](upgrade-and-scale.md) that need to be distributed. Minimally, configure the same number of units as the number of availability zones, or a multiple so that the units are distributed evenly across the zones. For example, if you select 3 availability zones in a region, you could have 3 units so that each zone hosts one unit. > [!NOTE]
-> Use the [capacity](api-management-capacity.md) metric and your own testing to decide the number of scale units that will provide the gateway performance for your needs. Learn more about [scaling and upgrading](upgrade-and-scale.md) your service instance.
+> Use [capacity metrics](api-management-capacity.md) and your own testing to decide the number of scale units that will provide the gateway performance for your needs. Learn more about [scaling and upgrading](upgrade-and-scale.md) your service instance.
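
The even-distribution guidance above can be checked with simple arithmetic. The following sketch, with hypothetical unit counts, only illustrates the rule that the number of units should be a positive multiple of the number of selected zones.

```python
def evenly_distributed(units: int, zones: int) -> bool:
    # Units spread evenly only when the unit count is a positive multiple of the zone count.
    return units > 0 and units % zones == 0

# Hypothetical example: 3 availability zones selected in a region.
for units in (2, 3, 6):
    print(units, "units:", evenly_distributed(units, zones=3))  # 2: False, 3: True, 6: True
```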
## Multi-region deployment
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-account.md
description: This article tells how to delete your Automation account across
Previously updated : 09/09/2024 Last updated : 10/10/2024
To delete your Automation account linked to a Log Analytics workspace in support
1. Sign in to Azure at [https://portal.azure.com](https://portal.azure.com).
-2. Navigate to your Automation account, and select **Linked workspace** under **Related resources**.
+1. Navigate to your Automation account, and select **Linked workspace**.
-3. Select **Go to workspace**.
+1. Under **Related resources**, select **Linked workspace** and then select **Go to workspace**.
-4. Select **Solutions** under **General**.
+4. Under **Classic**, select **Legacy solutions**.
5. On the Solutions page, select one of the following based on the feature(s) deployed in the account:
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
Title: How to create update deployments for Azure Automation Update Management
description: This article describes how to schedule update deployments and review their status. Previously updated : 09/15/2024 Last updated : 10/18/2024
# How to deploy updates and review results > [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
+> This article references CentOS, a Linux distribution that has reached the End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
[!INCLUDE [./automation-update-management-retirement-announcement.md](../includes/automation-update-management-retirement-announcement.md)]
automation Manage Updates For Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/manage-updates-for-vm.md
Previously updated : 09/15/2024 Last updated : 10/18/2024 # Manage updates and patches for your VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
+> This article references CentOS, a Linux distribution that has reached the End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
[!INCLUDE [./automation-update-management-retirement-announcement.md](../includes/automation-update-management-retirement-announcement.md)]
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
description: This article provides an overview of the Update Management feature
Previously updated : 09/15/2024 Last updated : 10/18/2024
# Update Management overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
+> This article references CentOS, a Linux distribution that has reached the End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
[!INCLUDE [./automation-update-management-retirement-announcement.md](../includes/automation-update-management-retirement-announcement.md)]
automation View Update Assessments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/view-update-assessments.md
Title: View Azure Automation update assessments
description: This article tells how to view update assessments for Update Management deployments. Previously updated : 09/15/2024 Last updated : 10/18/2024
# View update assessments in Update Management > [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
+> This article references CentOS, a Linux distribution that has reached the End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
[!INCLUDE [./automation-update-management-retirement-announcement.md](../includes/automation-update-management-retirement-announcement.md)]
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
# Archive for What's new in Azure Automation? > [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
+> This article references CentOS, a Linux distribution that has reached the End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
The primary [What's new in Azure Automation?](whats-new.md) article contains updates for the last six months, while this article contains all the older information.
azure-functions Durable Functions External Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-external-events.md
import azure.functions as func
import azure.durable_functions as df def orchestrator_function(context: df.DurableOrchestrationContext):
- approved = context.wait_for_external_event('Approval')
+ approved = yield context.wait_for_external_event('Approval')
if approved: # approval granted - do the approved action else:
def orchestrator_function(context: df.DurableOrchestrationContext):
event2 = context.wait_for_external_event('Event2') event3 = context.wait_for_external_event('Event3')
- winner = context.task_any([event1, event2, event3])
+ winner = yield context.task_any([event1, event2, event3])
if winner == event1: # ... elif winner == event2:
In this case, the instance ID is hardcoded as *MyInstanceId*.
> [Learn how to implement error handling](durable-functions-error-handling.md) > [!div class="nextstepaction"]
-> [Run a sample that waits for human interaction](durable-functions-phone-verification.md)
+> [Run a sample that waits for human interaction](durable-functions-phone-verification.md)
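
For reference, a complete orchestrator combining the corrected `wait_for_external_event`/`task_any` pattern from the snippets above could look like the following sketch. It follows the standard Python durable functions programming model; the branch handling is hypothetical and not taken from the article.

```python
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    event1 = context.wait_for_external_event('Event1')
    event2 = context.wait_for_external_event('Event2')
    event3 = context.wait_for_external_event('Event3')
    # task_any completes as soon as any one of the event tasks completes;
    # yielding it suspends the orchestrator until that happens.
    winner = yield context.task_any([event1, event2, event3])
    if winner == event1:
        result = 'Event1 arrived first'   # hypothetical handling
    elif winner == event2:
        result = 'Event2 arrived first'   # hypothetical handling
    else:
        result = 'Event3 arrived first'   # hypothetical handling
    return result

main = df.Orchestrator.create(orchestrator_function)
```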
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
Title: 'Tutorial: Route electric vehicles by using Azure Notebooks (Python) with
description: Tutorial on how to route electric vehicles by using Microsoft Azure Maps routing APIs and Azure Notebooks Previously updated : 04/26/2021 Last updated : 10/11/2024
# Tutorial: Route electric vehicles by using Azure Notebooks (Python)
-Azure Maps is a portfolio of geospatial service APIs that are natively integrated into Azure. These APIs enable developers, enterprises, and ISVs to develop location-aware apps, IoT, mobility, logistics, and asset tracking solutions.
+Azure Maps is a portfolio of geospatial service APIs integrated into Azure, enabling developers to create location-aware applications for various scenarios like IoT, mobility, and asset tracking.
-The Azure Maps REST APIs can be called from languages such as Python and R to enable geospatial data analysis and machine learning scenarios. Azure Maps offers a robust set of [routing APIs] that allow users to calculate routes between several data points. The calculations are based on various conditions, such as vehicle type or reachable area.
+Azure Maps REST APIs support languages like Python and R for geospatial data analysis and machine learning, offering robust [routing APIs] for calculating routes based on conditions such as vehicle type or reachable area.
-In this tutorial, you walk help a driver whose electric vehicle battery is low. The driver needs to find the closest possible charging station from the vehicle's location.
+This tutorial guides users through routing electric vehicles using Azure Maps APIs along with Azure Notebooks and Python to find the closest charging station when the battery is low.
In this tutorial, you will: > [!div class="checklist"]
+>
> * Create and run a Jupyter Notebook file on [Azure Notebooks] in the cloud. > * Call Azure Maps REST APIs in Python. > * Search for a reachable range based on the electric vehicle's consumption model.
-> * Search for electric vehicle charging stations within the reachable range, or isochrone.
+> * Search for electric vehicle charging stations within the reachable range, or [isochrone].
> * Render the reachable range boundary and charging stations on a map. > * Find and visualize a route to the closest electric vehicle charging station based on drive time.
In this tutorial, you will:
* An [Azure Maps account] * A [subscription key]
-* An [Azure storage account]
> [!NOTE] > For more information on authentication in Azure Maps, see [manage authentication in Azure Maps]. ## Create an Azure Notebooks project
-To follow along with this tutorial, you need to create an Azure Notebooks project and download and run the Jupyter Notebook file. The Jupyter Notebook file contains Python code, which implements the scenario in this tutorial. To create an Azure Notebooks project and upload the Jupyter Notebook document to it, do the following steps:
+To proceed with this tutorial, it's necessary to create an Azure Notebooks project and download and execute the Jupyter Notebook file. This file contains Python code that demonstrates the scenario presented in this tutorial.
-1. Go to [Azure Notebooks] and sign in. For more information, see [Quickstart: Sign in and set a user ID].
+Follow these steps to create an Azure Notebooks project and upload the Jupyter Notebook document:
+
+1. Go to [Azure Notebooks] and sign in.
1. At the top of your public profile page, select **My Projects**. ![The My Projects button](./media/tutorial-ev-routing/myproject.png)
To follow along with this tutorial, you need to create an Azure Notebooks projec
1. Upload the file from your computer, and then select **Done**.
-1. After the upload has finished successfully, your file is displayed on your project page. Double-click on the file to open it as a Jupyter Notebook.
+1. Once uploaded successfully, your file is displayed on your project page. Double-click on the file to open it as a Jupyter Notebook.
-Try to understand the functionality that's implemented in the Jupyter Notebook file. Run the code, in the Jupyter Notebook file, one cell at a time. You can run the code in each cell by selecting the **Run** button at the top of the Jupyter Notebook app.
+Familiarize yourself with the functionality implemented in the Jupyter Notebook file. Execute the code within the Jupyter Notebook one cell at a time by selecting the **Run** button located at the top of the Jupyter Notebook application.
![The Run button](./media/tutorial-ev-routing/run.png) ## Install project level packages
-To run the code in Jupyter Notebook, install packages at the project level by doing the following steps:
+To run the code in Jupyter Notebook, install packages at the project level by following these steps:
1. Download the [*requirements.txt*] file from the [Azure Maps Jupyter Notebook repository], and then upload it to your project. 1. On the project dashboard, select **Project Settings**.
To run the code in Jupyter Notebook, install packages at the project level by do
1. Under **Environment Setup Steps**, do the following: a. In the first drop-down list, select **Requirements.txt**. b. In the second drop-down list, select your *requirements.txt* file.
- c. In the third drop-down list, select **Python Version 3.6** as your version.
+ c. In the third drop-down list, select the version of Python. Version 3.11 was used when creating this tutorial.
1. Select **Save**. ![Install packages](./media/tutorial-ev-routing/install-packages.png) ## Load the required modules and frameworks
-To load all the required modules and frameworks, run the following script.
+Run the following script to load all the required modules and frameworks.
```Python import time
from IPython.display import Image, display
## Request the reachable range boundary
-A package delivery company has some electric vehicles in its fleet. During the day, electric vehicles need to be recharged without having to return to the warehouse. Every time the remaining charge drops to less than an hour, you search for a set of charging stations that are within a reachable range. Essentially, you search for a charging station when the battery is low on charge. And, you get the boundary information for that range of charging stations.
-
-Because the company prefers to use routes that require a balance of economy and speed, the requested routeType is *eco*. The following script calls the [Get Route Range API] of the Azure Maps routing service. It uses parameters for the vehicle's consumption model. The script then parses the response to create a polygon object of the geojson format, which represents the car's maximum reachable range.
+A package delivery company operates a fleet that includes some electric vehicles. These vehicles need to be recharged during the day without returning to the warehouse. When the remaining charge drops below an hour, a search is conducted to find charging stations within a reachable range. The boundary information for the range of these charging stations is then obtained.
-To determine the boundaries for the electric vehicle's reachable range, run the script in the following cell:
+The requested routeType is eco to balance economy and speed. The following script calls the [Get Route Range] API of the Azure Maps routing service, using parameters related to the vehicle's consumption model. The script then parses the response to create a polygon object in GeoJSON format, representing the car's maximum reachable range.
```python subscriptionKey = "Your Azure Maps key"
timeBudgetInSec=550
routeType="eco" constantSpeedConsumptionInkWhPerHundredkm="50,8.2:130,21.3" - # Get boundaries for the electric vehicle's reachable range. routeRangeResponse = await (await session.get("https://atlas.microsoft.com/route/range/json?subscription-key={}&api-version=1.0&query={}&travelMode={}&vehicleEngineType={}&currentChargeInkWh={}&maxChargeInkWh={}&timeBudgetInSec={}&routeType={}&constantSpeedConsumptionInkWhPerHundredkm={}" .format(subscriptionKey,str(currentLocation[0])+","+str(currentLocation[1]),travelMode, vehicleEngineType, currentChargeInkWh, maxChargeInkWh, timeBudgetInSec, routeType, constantSpeedConsumptionInkWhPerHundredkm))).json()
boundsData = {
## Search for electric vehicle charging stations within the reachable range
-After you've determined the reachable range (isochrone) for the electric vehicle, you can search for charging stations within that range.
+After determining the electric vehicle's reachable range ([isochrone]), you can search for charging stations within that area.
-The following script calls the Azure Maps [Post Search Inside Geometry API]. It searches for charging stations for electric vehicle, within the boundaries of the car's maximum reachable range. Then, the script parses the response to an array of reachable locations.
-
-To search for electric vehicle charging stations within the reachable range, run the following script:
+The following script uses the Azure Maps [Post Search Inside Geometry] API to find charging stations within the vehicle's maximum reachable range. It then parses the response into an array of reachable locations.
```python # Search for electric vehicle stations within reachable range.
for loc in range(len(searchPolyResponse["results"])):
reachableLocations.append(location) ```
-## Upload the reachable range and charging points
-
-It's helpful to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle on a map. Follow the steps outlined in the [How to create data registry] article to upload the boundary data and charging stations data as geojson objects to your [Azure storage account] then register them in your Azure Maps account. Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is how you reference the geojson objects you uploaded into your Azure storage account from your source code.
-
-<!
-To upload the boundary and charging point data to Azure Maps Data service, run the following two cells:
-
-```python
-rangeData = {
- "type": "FeatureCollection",
- "features": [
- {
- "type": "Feature",
- "properties": {},
- "geometry": {
- "type": "Polygon",
- "coordinates": [
- polyBounds
- ]
- }
- }
- ]
-}
-
-# Upload the range data to Azure Maps Data service.
-uploadRangeResponse = await session.post("https://us.atlas.microsoft.com/mapData?subscription-key={}&api-version=2.0&dataFormat=geojson".format(subscriptionKey), json = rangeData)
-
-rangeUdidRequest = uploadRangeResponse.headers["Location"]+"&subscription-key={}".format(subscriptionKey)
-
-while True:
- getRangeUdid = await (await session.get(rangeUdidRequest)).json()
- if 'udid' in getRangeUdid:
- break
- else:
- time.sleep(0.2)
-rangeUdid = getRangeUdid["udid"]
-```
-
-```python
-poiData = {
- "type": "FeatureCollection",
- "features": [
- {
- "type": "Feature",
- "properties": {},
- "geometry": {
- "type": "MultiPoint",
- "coordinates": reachableLocations
- }
- }
- ]
-}
-
-# Upload the electric vehicle charging station data to Azure Maps Data service.
-uploadPOIsResponse = await session.post("https://us.atlas.microsoft.com/mapData?subscription-key={}&api-version=2.0&dataFormat=geojson".format(subscriptionKey), json = poiData)
-
-poiUdidRequest = uploadPOIsResponse.headers["Location"]+"&subscription-key={}".format(subscriptionKey)
-
-while True:
- getPoiUdid = await (await session.get(poiUdidRequest)).json()
- if 'udid' in getPoiUdid:
- break
- else:
- time.sleep(0.2)
-poiUdid = getPoiUdid["udid"]
-```
->
- ## Render the charging stations and reachable range on a map
-After you've uploaded the data to the Azure storage account, call the Azure Maps [Get Map Image service]. This service is used to render the charging points and maximum reachable boundary on the static map image by running the following script:
+Call the Azure Maps [Get Map Image service] to render the charging points and maximum reachable boundary on the static map image by running the following script:
```python # Get boundaries for the bounding box.
def getBounds(polyBounds):
return [minLon, maxLon, minLat, maxLat] minLon, maxLon, minLat, maxLat = getBounds(polyBounds)
+polyBoundsFormatted = ('|'.join(map(str, polyBounds))).replace('[','').replace(']','').replace(',','')
+reachableLocationsFormatted = ('|'.join(map(str, reachableLocations))).replace('[','').replace(']','').replace(',','')
-path = "lcff3333|lw3|la0.80|fa0.35||udid-{}".format(rangeUdid)
-pins = "custom|an15 53||udid-{}||https://raw.githubusercontent.com/Azure-Samples/AzureMapsCodeSamples/master/AzureMapsCodeSamples/Common/images/icons/ev_pin.png".format(poiUdid)
+path = "lcff3333|lw3|la0.80|fa0.35||{}".format(polyBoundsFormatted)
+pins = "custom|an15 53||{}||https://raw.githubusercontent.com/Azure-Samples/AzureMapsCodeSamples/e3a684e7423075129a0857c63011e7cfdda213b7/Static/images/icons/ev_pin.png".format(reachableLocationsFormatted)
encodedPins = urllib.parse.quote(pins, safe='')
display(Image(poiRangeMap))
## Find the optimal charging station
-First, you want to determine all the potential charging stations within the reachable range. Then, you want to know which of them can be reached in a minimum amount of time.
-
-The following script calls the Azure Maps [Matrix Routing API]. It returns the specified vehicle location, the travel time, and the distance to each charging station. The script in the next cell parses the response to locate the closest reachable charging station with respect to time.
+First, identify all the potential charging stations within the vehicle's reachable range. Next, determine which of these stations can be accessed in the shortest possible time.
-To find the closest reachable charging station that can be reached in the least amount of time, run the script in the following cell:
+The following script calls the Azure Maps [Matrix Routing] API. It returns the vehicle's location, travel time, and distance to each charging station. The subsequent script parses this response to identify the closest charging station that can be reached in the least amount of time.
```python locationData = {
closestChargeLoc = ",".join(str(i) for i in minDistLoc)
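# To make the parsing step concrete, here's a minimal sketch of picking the destination
# with the smallest travel time. It isn't the notebook's code: the response shape
# (matrix[origin][destination].response.routeSummary) is assumed from the Post Route Matrix
# reference, and matrixResponse is a hypothetical variable name.
def closest_station_index(matrixResponse):
    best_index, best_time = None, float("inf")
    for i, cell in enumerate(matrixResponse["matrix"][0]):   # single origin: the vehicle
        summary = cell.get("response", {}).get("routeSummary")
        if summary and summary["travelTimeInSeconds"] < best_time:
            best_time = summary["travelTimeInSeconds"]
            best_index = i
    return best_index, best_time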
## Calculate the route to the closest charging station
-Now that you've found the closest charging station, you can call the [Get Route Directions API] to request the detailed route from the electric vehicle's current location to the charging station.
-
-To get the route to the charging station and to parse the response to create a geojson object that represents the route, run the script in the following cell:
+After locating the nearest charging station, use the [Get Route Directions] API to obtain detailed directions from the vehicle's current location. Run the script in the next cell to generate and parse a GeoJSON object representing the route.
```python # Get the route from the electric vehicle's current location to the closest charging station.
routeData = {
## Visualize the route
-To help visualize the route, follow the steps outlined in the [How to create data registry] article to upload the route data as a geojson object to your [Azure storage account] then register it in your Azure Maps account. Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is how you reference the geojson objects you uploaded into your Azure storage account from your source code. Then, call the rendering service, [Get Map Image API], to render the route on the map, and visualize it.
-
-To get an image for the rendered route on the map, run the following script:
+To visualize the route, use the [Get Map Image] API to render it on the map.
```python
-# Upload the route data to Azure Maps Data service .
-routeUploadRequest = await session.post("https://atlas.microsoft.com/mapData?subscription-key={}&api-version=2.0&dataFormat=geojson".format(subscriptionKey), json = routeData)
-
-udidRequestURI = routeUploadRequest.headers["Location"]+"&subscription-key={}".format(subscriptionKey)
-
-while True:
- udidRequest = await (await session.get(udidRequestURI)).json()
- if 'udid' in udidRequest:
- break
- else:
- time.sleep(0.2)
-
-udid = udidRequest["udid"]
- destination = route[-1]
-destination[1], destination[0] = destination[0], destination[1]
+#destination[1], destination[0] = destination[0], destination[1]
-path = "lc0f6dd9|lw6||udid-{}".format(udid)
-pins = "default|codb1818||{} {}|{} {}".format(str(currentLocation[1]),str(currentLocation[0]),destination[1],destination[0])
+routeFormatted = ('|'.join(map(str, route))).replace('[','').replace(']','').replace(',','')
+path = "lc0f6dd9|lw6||{}".format(routeFormatted)
+pins = "default|codb1818||{} {}|{} {}".format(str(currentLocation[1]),str(currentLocation[0]),destination[0],destination[1])
# Get boundaries for the bounding box.
-minLat, maxLat = (float(destination[0]),currentLocation[0]) if float(destination[0])<currentLocation[0] else (currentLocation[0], float(destination[0]))
-minLon, maxLon = (float(destination[1]),currentLocation[1]) if float(destination[1])<currentLocation[1] else (currentLocation[1], float(destination[1]))
+minLon, maxLon = (float(destination[0]),currentLocation[1]) if float(destination[0])<currentLocation[1] else (currentLocation[1], float(destination[0]))
+minLat, maxLat = (float(destination[1]),currentLocation[0]) if float(destination[1])<currentLocation[0] else (currentLocation[0], float(destination[1]))
# Buffer the bounding box by 10 percent to account for the pixel size of pins at the ends of the route. lonBuffer = (maxLon-minLon)*0.1
To explore the Azure Maps APIs that are used in this tutorial, see:
* [Get Route Directions] * [Azure Maps REST APIs]
-## Clean up resources
-
-There are no resources that require cleanup.
- ## Next steps To learn more about Azure Notebooks, see
To learn more about Azure Notebooks, see
[Azure Maps Jupyter Notebook repository]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook [Azure Maps REST APIs]: /rest/api/maps [Azure Notebooks]: https://notebooks.azure.com
-[Azure storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
-[Get Map Image API]: /rest/api/maps/render/get-map-static-image
+[Get Map Image]: /rest/api/maps/render/get-map-static-image
[Get Map Image service]: /rest/api/maps/render/get-map-static-image
-[Get Route Directions API]: /rest/api/maps/route/getroutedirections
[Get Route Directions]: /rest/api/maps/route/getroutedirections
-[Get Route Range API]: /rest/api/maps/route/getrouterange
[Get Route Range]: /rest/api/maps/route/getrouterange
-[How to create data registry]: how-to-create-data-registries.md
+[isochrone]: glossary.md#isochrone
[Jupyter Notebook document file]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/EVrouting.ipynb [manage authentication in Azure Maps]: how-to-manage-authentication.md
-[Matrix Routing API]: /rest/api/maps/route/postroutematrix
+[Matrix Routing]: /rest/api/maps/route/postroutematrix
[Post Route Matrix]: /rest/api/maps/route/postroutematrix
-[Post Search Inside Geometry API]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true
[Post Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true
-[Quickstart: Sign in and set a user ID]: https://notebooks.azure.com
[Render - Get Map Image]: /rest/api/maps/render/get-map-static-image [*requirements.txt*]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/requirements.txt [routing APIs]: /rest/api/maps/route
azure-netapp-files Access Smb Volume From Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/access-smb-volume-from-windows-client.md
Previously updated : 08/20/2024 Last updated : 10/18/2024 # Access SMB volumes from Microsoft Entra joined Windows virtual machines
The configuration process takes you through five processes:
* `$servicePrincipalName`: The SPN details from mounting the Azure NetApp Files volume. Use the CIFS/FQDN format. For example: `CIFS/NETBIOS-1234.CONTOSO.COM` * `$targetApplicationID`: Application (client) ID of the Microsoft Entra application. * `$domainCred`: use `Get-Credential` (should be an AD DS domain administrator)
- * `$cloudCred`: use `Get-Credential` (should be a Microsoft Entra Global Administrator)
+ * `$cloudCred`: use `Get-Credential` (likely a [Hybrid Identity Administrator](/entra/identity/role-based-access-control/permissions-reference#hybrid-identity-administrator))
```powershell $servicePrincipalName = "CIFS/NETBIOS-1234.CONTOSO.COM"
azure-netapp-files Azure Netapp Files Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-considerations.md
Title: Performance considerations for Azure NetApp Files | Microsoft Docs
+ Title: General performance considerations for Azure NetApp Files | Microsoft Docs
description: Learn about performance for Azure NetApp Files, including the relationship of quota and throughput limit and how to dynamically increase/decrease volume quota. Previously updated : 08/31/2023 Last updated : 10/17/2024
-# Performance considerations for Azure NetApp Files
+# General performance considerations for Azure NetApp Files
> [!IMPORTANT] > This article addresses performance considerations for *regular volumes* only.
azure-netapp-files Performance Benchmarks Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-linux.md
Title: Azure NetApp Files performance benchmarks for Linux | Microsoft Docs
-description: Describes performance benchmarks Azure NetApp Files delivers for Linux.
+description: Describes performance benchmarks Azure NetApp Files delivers for Linux with a regular volume.
Last updated 03/24/2024
-# Azure NetApp Files performance benchmarks for Linux
+# Azure NetApp Files regular volume performance benchmarks for Linux
-This article describes performance benchmarks Azure NetApp Files delivers for Linux.
+This article describes performance benchmarks Azure NetApp Files delivers for Linux with a [regular volume](azure-netapp-files-understand-storage-hierarchy.md#volumes).
## Linux scale-out
azure-netapp-files Performance Virtual Machine Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-virtual-machine-sku.md
Title: Azure virtual machine SKUs best practices for Azure NetApp Files | Microsoft Docs
-description: Describes Azure NetApp Files best practices about Azure virtual machine SKUs, including differences within and between SKUs.
+ Title: Azure virtual machine stock-keeping units (SKUs) best practices for Azure NetApp Files | Microsoft Docs
+description: Describes Azure NetApp Files best practices about Azure virtual machine stock-keeping units (SKUs), including differences within and between SKUs.
Last updated 07/02/2021
-# Azure virtual machine SKUs best practices for Azure NetApp Files
+# Azure virtual machine stock-keeping unit best practices for Azure NetApp Files
-This article describes Azure NetApp Files best practices about Azure virtual machine SKUs, including differences within and between SKUs.
+This article describes Azure NetApp Files best practices about Azure virtual machine stock-keeping units (SKUs), including differences within and between SKUs.
## SKU selection considerations Storage performance involves more than the speed of the storage itself. The processor speed and architecture have a lot to do with the overall experience from any particular compute node. As part of the selection process for a given SKU, you should consider the following factors:
-* AMD or Intel: For example, SAS uses a math kernel library designed specifically for Intel processors. In this case, Intel SKUs are preferred over AMD SKU.
-* The F2, E_v3, and D_v3 machine types are each based on more than one chipset. In using Azure Dedicated Hosts, you might select specific models (Broadwell, Cascade Lake, or Skylake when selecting the E type for example). Otherwise, the chipset selection is non-deterministic. If you are deploying an HPC cluster and a consistent experience across the inventory is important, then you can consider single Azure Dedicated Hosts or go with single chipset SKUs such as the E_v4 or D_v4.
+* AMD or Intel: For example, SAS uses a math kernel library designed specifically for Intel processors. In this case, Intel SKUs are preferred over AMD SKUs.
+* The F2, E_v3, and D_v3 machine types are each based on more than one chipset. In using Azure Dedicated Hosts, you might select specific models (Broadwell, Cascade Lake, or Skylake when selecting the E type for example). Otherwise, the chipset selection is nondeterministic. If you're deploying an HPC cluster and a consistent experience across the inventory is important, then you can consider single Azure Dedicated Hosts or go with single chipset SKUs such as the E_v4 or D_v4.
* Performance variability with network-attached storage (NAS) has been observed in testing with both the Intel Broadwell based SKUs and the AMD EPYC™ 7551 based SKUs. Two issues have been observed:
- * When the accelerated network interface is inappropriately mapped to a sub optimal NUMA Node, read performance decreases significantly. Although mapping the accelerated networking interface to a specific NUMA node is beneficial on newer SKUs, it must be considered a requirement on SKUs with these chipsets (Lv2|E_v3|D_v3).
- * Virtual machines running on the Lv2, or either E_v3 or D_v3 running on a Broadwell chipset are more susceptible to resource contention than when running on other SKUs. When testing using multiple virtual machines running within a single Azure Dedicated Host, running network-based storage workload from one virtual machine has been seen to decrease the performance of network-based storage workloads running from a second virtual machine. The decrease is more pronounced when any of the virtual machines on the node have not had their accelerated network interface/NUMA node optimally mapped. Keep in mind that the E_v3 and D_V3 may between them land on Haswell, Broadwell, Cascade Lake, or Skylake.
+ * When the accelerated network interface is inappropriately mapped to a sub optimal NUMA Node, read performance decreases significantly. Although mapping the accelerated networking interface to a specific NUMA node is beneficial on newer SKUs, it must be considered a requirement on SKUs with these chipsets (Lv2|E_v3|D_v3).
+ * Virtual machines running on the Lv2, or either E_v3 or D_v3 running on a Broadwell chipset are more susceptible to resource contention than when running on other SKUs. When testing using multiple virtual machines running within a single Azure Dedicated Host, running network-based storage workload from one virtual machine has been seen to decrease the performance of network-based storage workloads running from a second virtual machine. The decrease is more pronounced when any of the virtual machines on the node haven't had their accelerated network interface/NUMA node optimally mapped. Keep in mind that the E_v3 and D_V3 may between them land on Haswell, Broadwell, Cascade Lake, or Skylake.
-For the most consistent performance when selecting virtual machines, select from SKUs with a single type of chipset ΓÇô newer SKUs are preferred over the older models where available. Keep in mind that, aside from using a dedicated host, predicting correctly which type of hardware the E_v3 or D_v3 virtual machines land on is unlikely. When using the E_v3 or D_v3 SKU:
+For the most consistent performance when selecting virtual machines, select from SKUs with a single type of chipset – newer SKUs are preferred over the older models where available. Keep in mind that, aside from using a dedicated host, predicting correctly which type of hardware the E_v3 or D_v3 virtual machines land on is unlikely. When using the E_v3 or D_v3 SKU:
-* When a virtual machine is turned off, de-allocated, and then turned on again, the virtual machine is likely to change hosts and as such hardware models.
+* When a virtual machine is turned off, deallocated, and then turned on again, the virtual machine is likely to change hosts and as such hardware models.
* When applications are deployed across multiple virtual machines, expect the virtual machines to run on heterogenous hardware. ## Differences within and between SKUs
-The following table highlights the differences within and between SKUs. Note, for example, that the chipset of the underlying E_v3 and D_v3 vary between the Broadwell, Cascade Lake, Skylake, and also in the case of the D_v3.
+The following table highlights the differences within and between SKUs. Note, for example, that the chipset of the underlying E_v3 and D_v3 vary between the Broadwell, Cascade Lake, Skylake, and also in the case of the D_v3.
| Family | Version | Description | Frequency (GHz) | |-|-|-|-|
The following table highlights the differences within and between SKUs. Note,
| F | 2 | Intel® Xeon® Platinum 8168M (Cascade Lake) | 2.7 (3.7) | | F | 2 | Gen 2 Intel® Xeon® Platinum 8272CL (Skylake) | 2.1 (3.8) |
-When preparing a multi-node SAS GRID environment for production, you might notice a repeatable one-hour-and-fifteen-minute variance between analytics runs with no other difference than underlying hardware.
+When preparing a multi-node SAS GRID environment for production, you might notice a repeatable one-hour-and-fifteen-minute variance between analytics runs with no other difference than underlying hardware.
| SKU and hardware platform | Job run times | |-|-|
In both sets of tests, an E32-8_v3 SKU was selected, and RHEL 8.3 was used along
## Best practices
-* Whenever possible, select the E_v4, D_v4, or newer rather than the E_v3 or D_v3 SKUs.
+* Whenever possible, select the E_v4, D_v4, or newer rather than the E_v3 or D_v3 SKUs.
* Whenever possible, select the Ed_v4, Dd_v4, or newer rather than the L2 SKU. ## Next steps
azure-netapp-files Solutions Benefits Azure Netapp Files Electronic Design Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-electronic-design-automation.md
na Previously updated : 03/19/2024 Last updated : 10/18/2024 # Benefits of using Azure NetApp Files for Electronic Design Automation (EDA)
The Azure NetApp Files large volumes feature is ideal for the storage needs of t
* **At 826,000 operations per second:** the performance edge of a single large volume - the application layer peaked at 7ms of latency in our tests, which shows that more operations are possible in a single large volume at a slight cost of latency.
-Tests conducted internally using an EDA benchmark in 2020 found that with a single regular Azure NetApp Files volume, workload as high as 40,000 IOPS could be achieved at the 2ms mark, and 50,000 at the edge.
+Tests conducted using an EDA benchmark found that with a single regular Azure NetApp Files volume, workloads as high as 40,000 IOPS could be achieved at the 2ms mark, and 50,000 at the edge. See the following table and chart for a side-by-side overview of regular and large volumes.
| Scenario | I/O Rate at 2ms latency | I/O Rate at performance edge (~7 ms) | MiB/s at 2ms latency | MiB/s performance edge (~7 ms) |
The following chart illustrates the test results.
:::image type="content" source="./media/solutions-benefits-azure-netapp-files-electronic-design-automation/latency-throughput-graph.png" alt-text="Chart comparing latency and throughput between large and regular volumes." lightbox="./media/solutions-benefits-azure-netapp-files-electronic-design-automation/latency-throughput-graph.png":::
-The 2020 internal testing also explored single endpoint limits, the limits were reached with six volumes. Large Volume outperforms the scenario with six regular volumes by 260%.
+The regular volume testing also explored single-endpoint limits; the limits were reached with six volumes. Large Volume outperforms the scenario with six regular volumes by 260%. The following table illustrates these results.
| Scenario | I/O Rate at 2ms latency | I/O Rate at performance edge (~7ms) | MiB/s at 2ms latency | MiB/s performance edge (~7ms) | | - | - | - | - | - |
azure-resource-manager Create Visual Studio Deployment Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/create-visual-studio-deployment-project.md
Last updated 03/20/2024
# Creating and deploying Azure resource groups through Visual Studio
+> [!NOTE]
+> The Azure Resource Group project is now in extended support, meaning we will continue to support existing features and capabilities but won't prioritize adding new features.
+
+> [!NOTE]
+> For the best and most secure experience, we strongly recommend updating your Visual Studio installation to the [latest Long-Term Support (LTS) version](/visualstudio/install/update-visual-studio?view=vs-2022). Upgrading will improve both the reliability and overall performance of your Visual Studio environment.
+ With Visual Studio, you can create a project that deploys your infrastructure and code to Azure. For example, you can deploy the web host, website, and code for the website. Visual Studio provides many different starter templates for deploying common scenarios. In this article, you deploy a web app. This article shows how to use [Visual Studio 2019 or later with the Azure development and ASP.NET workloads installed](/visualstudio/install/install-visual-studio). If you use Visual Studio 2017, your experience is largely the same.
You can customize a deployment project by modifying the Resource Manager templat
1. The parameter for the type of storage account is pre-defined with allowed types and a default type. You can leave these values or edit them for your scenario. If you don't want anyone to deploy a **Premium_LRS** storage account through this template, remove it from the allowed types. ```json
- "demoaccountType": {
+ "demoAccountType": {
"type": "string", "defaultValue": "Standard_LRS", "allowedValues": [ "Standard_LRS", "Standard_ZRS", "Standard_GRS",
- "Standard_RAGRS"
+ "Standard_RAGRS",
+ "Premium_LRS"
] } ```
-1. Visual Studio also provides intellisense to help you understand the properties that are available when editing the template. For example, to edit the properties for your App Service plan, navigate to the **HostingPlan** resource, and add a value for the **properties**. Notice that intellisense shows the available values and provides a description of that value.
-
- :::image type="content" source="./media/create-visual-studio-deployment-project/show-intellisense.png" alt-text="Screenshot of Visual Studio editor showing intellisense suggestions for Resource Manager template.":::
-
- You can set **numberOfWorkers** to 1, and save the file.
+1. Navigate to the **HostingPlan** resource, and add some values under **properties**, as shown in the following example.
```json "properties": {
You can customize a deployment project by modifying the Resource Manager templat
} ```
+ You also need to define the `hostingPlanName` parameter:
+
+ ```json
+ "hostingPlanName": {
+ "type": "string",
+ "metadata": {
+ "description": "Hosting paln name."
+ }
+ }
+ ```
+ 1. Open the **WebSite.parameters.json** file. You use the parameters file to pass in values during deployment that customize the resource being deployed. Give the hosting plan a name, and save the file. ```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "hostingPlanName": {
At this point, you've deployed the infrastructure for your app, but there's no a
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "hostingPlanName": {
azure-resource-manager Update Visual Studio Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/update-visual-studio-deployment-script.md
Title: Update Visual Studio's template deployment script to use Az PowerShell description: Update the Visual Studio template deployment script from AzureRM to Az PowerShell Previously updated : 09/26/2024 Last updated : 10/18/2024 # Update Visual Studio template deployment script to use Az PowerShell module
+> [!NOTE]
+> The Azure Resource Group project is now in extended support, meaning we will continue to support existing features and capabilities but won't prioritize adding new features.
+
+> [!NOTE]
+> For the best and most secure experience, we strongly recommend updating your Visual Studio installation to the [latest Long-Term Support (LTS) version](/visualstudio/install/update-visual-studio?view=vs-2022). Upgrading will improve both the reliability and overall performance of your Visual Studio environment.
+ Visual Studio 16.4 supports using the Az PowerShell module in the template deployment script. However, Visual Studio doesn't automatically install that module. To use the Az module, you need to take four steps: 1. [Uninstall AzureRM module](/powershell/azure/uninstall-az-ps#uninstall-the-azurerm-module)
azure-vmware Configure Azure Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md
Last updated 3/22/2024
-# Use Azure VMware Solution with Azure Elastic SAN (Integration in Preview)
+# Use Azure VMware Solution with Azure Elastic SAN
This article explains how to use Azure Elastic SAN as backing storage for Azure VMware Solution. [Azure VMware Solution](introduction.md) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters.
The following prerequisites are required to continue.
> The host exposes its Availability Zone. You should use that AZ when deploying other Azure resources for the same subscription. - You have permission to set up new resources in the subscription your private cloud is in.
+- Register the following feature flags for your subscription:
+
+ - iSCSIMultipath
+
+ - ElasticSanDatastore
+
- Reserve a dedicated address block for your external storage. ## Supported host types
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/architecture.md
When planning your NC2 on Azure design, use the following table to understand wh
| Japan East | AN36P | | North Central US | AN36P | | Southeast Asia | AN36P |
+| UAE North | AN36P |
| UK South | AN36P | | West Europe | AN36P | | West US 2 | AN36 |
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-map-content-to-custom-domain.md
This tutorial shows how to add a custom domain to an Azure Content Delivery Netw
The endpoint name in your content delivery network profile is a subdomain of azureedge.net. By default when delivering content, the content delivery network profile domain gets included in the URL.
-For example, `https://contoso.azureedge.net/photo.png`.
+For example, `https://*.azureedge.net/photo.png`.
Azure Content Delivery Network provides the option of associating a custom domain with a content delivery network endpoint. This option delivers content with a custom domain in your URL instead of the default domain.
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
The service principal of the Contoso application in the Fabrikam tenant is creat
You can see that the status of the Communication Services Teams.ManageCalls and Teams.ManageChats permissions is *Granted for {Directory_name}*.
-If you run into the issue "The app is trying to access a service '1fd5118e-2576-4263-8130-9503064c837a'(Azure Communication Services) that your organization '{GUID}' lacks a service principal for. Contact your IT Admin to review the configuration of your service subscriptions or consent to the application to create the required service principal." your Microsoft Entra tenant lacks a service principal for the Azure Communication Services application. To fix this issue, use PowerShell as a Microsoft Entra administrator to connect to your tenant. Replace `Tenant_ID` with an ID of your Microsoft Entra tenancy.
+If you run into the issue "The app is trying to access a service '00001111-aaaa-2222-bbbb-3333cccc4444'(Azure Communication Services) that your organization '{GUID}' lacks a service principal for. Contact your IT Admin to review the configuration of your service subscriptions or consent to the application to create the required service principal." your Microsoft Entra tenant lacks a service principal for the Azure Communication Services application. To fix this issue, use PowerShell as a Microsoft Entra administrator to connect to your tenant. Replace `Tenant_ID` with an ID of your Microsoft Entra tenancy.
You will require **Application.ReadWrite.All** as shown below.
Install-Module Microsoft.Graph
Then execute the following command to add a service principal to your tenant. Do not modify the GUID of the App ID. ```script
-New-MgServicePrincipal -AppId "1fd5118e-2576-4263-8130-9503064c837a"
+New-MgServicePrincipal -AppId "00001111-aaaa-2222-bbbb-3333cccc4444"
```
communication-services Receive Sms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/receive-sms.md
The `SMSReceived` event generated when an SMS is sent to an Azure Communication
```json [{ "id": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e",
- "topic": "/subscriptions/50ad1522-5c2c-4d9a-a6c8-67c11ecb75b8/resourcegroups/acse2e/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "topic": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/acse2e/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
"subject": "/phonenumber/15555555555", "data": { "MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e",
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
To add the Project Synergy application:
1. Run the following cmdlet, replacing *`<TenantID>`* with the tenant ID you noted down in step 5. ```powershell Connect-AzureAD -TenantId "<TenantID>"
- New-AzureADServicePrincipal -AppId eb63d611-525e-4a31-abd7-0cb33f679599 -DisplayName "Operator Connect"
+ New-AzureADServicePrincipal -AppId 00001111-aaaa-2222-bbbb-3333cccc4444 -DisplayName "Operator Connect"
``` ## Assign an Admin user to the Project Synergy application
Do the following steps in the tenant that contains your Project Synergy applicat
1. Run the following PowerShell commands. These commands add the following roles for Azure Communications Gateway: `TrunkManagement.Read`, `TrunkManagement.Write`, `partnerSettings.Read`, `NumberManagement.Read`, `NumberManagement.Write`, `Data.Read`, `Data.Write`. ```powershell # Get the Service Principal ID for Project Synergy (Operator Connect)
- $projectSynergyApplicationId = "eb63d611-525e-4a31-abd7-0cb33f679599"
+ $projectSynergyApplicationId = "00001111-aaaa-2222-bbbb-3333cccc4444"
$projectSynergyEnterpriseApplication = Get-MgServicePrincipal -Filter "AppId eq '$projectSynergyApplicationId'" # "Application.Read.All" # Required Operator Connect - Project Synergy Roles
Do the following steps in the tenant that contains your Project Synergy applicat
$partnerSettingsRead = "d6b0de4a-aab5-4261-be1b-0e1800746fb2" $numberManagementRead = "130ecbe2-d1e6-4bbd-9a8d-9a7a909b876e" $numberManagementWrite = "752b4e79-4b85-4e33-a6ef-5949f0d7d553"
- $dataRead = "eb63d611-525e-4a31-abd7-0cb33f679599"
+ $dataRead = "00001111-aaaa-2222-bbbb-3333cccc4444"
$dataWrite = "98d32f93-eaa7-4657-b443-090c23e69f27" $requiredRoles = $trunkManagementRead, $trunkManagementWrite, $partnerSettingsRead, $numberManagementRead, $numberManagementWrite, $dataRead, $dataWrite
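The assignment of those roles isn't shown in this excerpt. A hedged sketch of how the collected role IDs might be granted to the Azure Communications Gateway managed identity with the Microsoft Graph PowerShell SDK follows; `$acgObjectId` is a placeholder for the object ID of that managed identity's service principal, not a value taken from the article.

```powershell
# Hedged sketch: grant each required Project Synergy app role to the
# Azure Communications Gateway managed identity's service principal.
$acgObjectId = "<object ID of the Azure Communications Gateway managed identity>"
foreach ($roleId in $requiredRoles) {
    New-MgServicePrincipalAppRoleAssignment `
        -ServicePrincipalId $acgObjectId `
        -PrincipalId $acgObjectId `
        -ResourceId $projectSynergyEnterpriseApplication.Id `
        -AppRoleId $roleId
}
```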
Go to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) an
You must enable Azure Communications Gateway within the Operator Connect or Teams Phone Mobile environment. This process requires configuring your environment with two Application IDs: - The Application ID of the system-assigned managed identity that you found in [Find the Application ID for your Azure Communication Gateway resource](#find-the-application-id-for-your-azure-communication-gateway-resource). This Application ID allows Azure Communications Gateway to use the roles that you set up in [Set up application roles for Azure Communications Gateway](#set-up-application-roles-for-azure-communications-gateway).-- A standard Application ID for an automatically created AzureCommunicationsGateway enterprise application. This ID is always `8502a0ec-c76d-412f-836c-398018e2312b`.
+- A standard Application ID for an automatically created AzureCommunicationsGateway enterprise application. This ID is always `11112222-bbbb-3333-cccc-4444dddd5555`.
To add the Application IDs: 1. Log into the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration). 1. Add a new **Application Id** for the Application ID that you found for the managed identity.
-1. Add a second **Application Id** for the value `8502a0ec-c76d-412f-836c-398018e2312b`.
+1. Add a second **Application Id** for the value `11112222-bbbb-3333-cccc-4444dddd5555`.
## Register your deployment's domain name in Microsoft Entra
confidential-computing Quick Create Confidential Vm Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm.md
Use this example to create a custom parameter file for a Linux-based confidentia
```Powershell Connect-Graph -Tenant "your tenant ID" Application.ReadWrite.All
- New-MgServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-MgServicePrincipal -AppId 00001111-aaaa-2222-bbbb-3333cccc4444 -DisplayName "Confidential VM Orchestrator"
``` 1. Set up your Azure key vault. For how to use an Azure Key Vault Managed HSM instead, see the next step.
Use this example to create a custom parameter file for a Linux-based confidentia
1. Give `Confidential VM Orchestrator` permissions to `get` and `release` the key vault. ```azurecli-interactive
- $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json
+ $cvmAgent = az ad sp show --id "00001111-aaaa-2222-bbbb-3333cccc4444" | Out-String | ConvertFrom-Json
az keyvault set-policy --name $KeyVault --object-id $cvmAgent.Id --key-permissions get release ```
Use this example to create a custom parameter file for a Linux-based confidentia
1. Give `Confidential VM Orchestrator` permissions to managed HSM. ```azurecli-interactive
- $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json
+ $cvmAgent = az ad sp show --id "00001111-aaaa-2222-bbbb-3333cccc4444" | Out-String | ConvertFrom-Json
az keyvault role assignment create --hsm-name $hsm --assignee $cvmAgent.Id --role "Managed HSM Crypto Service Release User" --scope /keys/$KeyName ```
confidential-computing Quick Create Confidential Vm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli.md
To create a confidential [disk encryption set](/azure/virtual-machines/linux/dis
For this step you need to be a Global Admin or you need to have the User Access Administrator RBAC role. [Install Microsoft Graph SDK](/powershell/microsoftgraph/installation) to execute the commands below. ```Powershell Connect-Graph -Tenant "your tenant ID" Application.ReadWrite.All
- New-MgServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-MgServicePrincipal -AppId 00001111-aaaa-2222-bbbb-3333cccc4444 -DisplayName "Confidential VM Orchestrator"
``` 2. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault. ```azurecli-interactive
For this step you need to be a Global Admin or you need to have the User Access
``` 3. Give `Confidential VM Orchestrator` permissions to `get` and `release` the key vault. ```Powershell
- $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json
+ $cvmAgent = az ad sp show --id "00001111-aaaa-2222-bbbb-3333cccc4444" | Out-String | ConvertFrom-Json
az keyvault set-policy --name keyVaultName --object-id $cvmAgent.Id --key-permissions get release ``` 4. Create a key in the key vault using [az keyvault key create](/cli/azure/keyvault). For the key type, use RSA-HSM.
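Step 4 is summarized without a snippet. A hedged sketch of creating an exportable RSA-HSM key bound to a secure key release policy (the key name and the `skr-policy.json` file name are assumptions, not values from the article):

```powershell
# Sketch with placeholder names: create an exportable RSA-HSM key whose release
# is governed by a local secure key release policy file.
az keyvault key create `
    --vault-name keyVaultName `
    --name myCvmKey `
    --kty RSA-HSM `
    --size 3072 `
    --ops wrapKey unwrapKey `
    --exportable true `
    --policy "@.\skr-policy.json"
```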
confidential-computing Quick Create Confidential Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal.md
You can use the Azure portal to create a [confidential VM](confidential-vm-overv
```Powershell Connect-Graph -Tenant "your tenant ID" Application.ReadWrite.All
- New-MgServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-MgServicePrincipal -AppId 00001111-aaaa-2222-bbbb-3333cccc4444 -DisplayName "Confidential VM Orchestrator"
``` ## Create confidential VM
connectors Connectors Integrate Security Operations Create Api Microsoft Graph Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md
To learn more about Microsoft Graph Security, see the [Microsoft Graph Security
| Property | Value | |-|-| | **Application Name** | `MicrosoftGraphSecurityConnector` |
- | **Application ID** | `c4829704-0edc-4c3d-a347-7c4a67586f3c` |
+ | **Application ID** | `00001111-aaaa-2222-bbbb-3333cccc4444` |
||| To grant consent for the connector, your Microsoft Entra tenant administrator can follow either of these steps:
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
When you create a container app, secrets are defined using the `--secrets` param
- The parameter accepts a space-delimited set of name/value pairs. - Each pair is delimited by an equals sign (`=`).-- To specify a Key Vault reference, use the format `<SECRET_NAME>=keyvaultref:<KEY_VAULT_SECRET_URI>,identityref:<MANAGED_IDENTITY_ID>`. For example, `queue-connection-string=keyvaultref:https://mykeyvault.vault.azure.net/secrets/queuereader,identityref:/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity`.
+- To specify a Key Vault reference, use the format `<SECRET_NAME>=keyvaultref:<KEY_VAULT_SECRET_URI>,identityref:<MANAGED_IDENTITY_ID>`. For example, `queue-connection-string=keyvaultref:https://mykeyvault.vault.azure.net/secrets/queuereader,identityref:/subscriptions/ffffffff-eeee-dddd-cccc-bbbbbbbbbbb0/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity`.
```azurecli-interactive az containerapp create \
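A fuller hedged sketch of that command, assuming placeholder resource names, an existing Container Apps environment, and a user-assigned identity that matches the Key Vault reference format described above:

```powershell
# Sketch: create a container app whose secret is a Key Vault reference resolved
# through a user-assigned managed identity. All names and IDs are placeholders.
az containerapp create `
    --name my-container-app `
    --resource-group my-resource-group `
    --environment my-environment `
    --image myregistry.azurecr.io/myapp:latest `
    --user-assigned "/subscriptions/ffffffff-eeee-dddd-cccc-bbbbbbbbbbb0/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity" `
    --secrets "queue-connection-string=keyvaultref:https://mykeyvault.vault.azure.net/secrets/queuereader,identityref:/subscriptions/ffffffff-eeee-dddd-cccc-bbbbbbbbbbb0/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity"
```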
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
Later in this article, you give permission to the Microsoft Entra app to act by
| Role | Actions allowed | Role definition ID | | | | |
-| EnrollmentReader | Enrollment readers can view data at the enrollment, department, and account scopes. The data contains charges for all of the subscriptions under the scopes, including across tenants. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | 24f8edb6-1668-4659-b5e2-40bb5f3a7d7e |
-| EA purchaser | Purchase reservation orders and view reservation transactions. It has all the permissions of EnrollmentReader, which have all the permissions of DepartmentReader. It can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | da6647fb-7651-49ee-be91-c43c4877f0c4 |
+| EnrollmentReader | Enrollment readers can view data at the enrollment, department, and account scopes. The data contains charges for all of the subscriptions under the scopes, including across tenants. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e |
+| EA purchaser | Purchase reservation orders and view reservation transactions. It has all the permissions of EnrollmentReader, which in turn has all the permissions of DepartmentReader. It can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f |
| DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a |
-| SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 |
+| SubscriptionCreator | Create new subscriptions in the given scope of Account. | cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a |
- An EnrollmentReader role can be assigned to a service principal only by a user who has an enrollment writer role. The EnrollmentReader role assigned to a service principal isn't shown in the Azure portal. It gets created by programmatic means and is only for programmatic use. - A DepartmentReader role can be assigned to a service principal only by a user who has an enrollment writer or department writer role.
A service principal can have only one role.
| | | | `properties.principalId` | It's the value of Object ID. See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). | | `properties.principalTenantId` | See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). |
- | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` |
+ | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e` |
The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the Azure portal.
- Notice that `24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` is a billing role definition ID for an EnrollmentReader.
+ Notice that `aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e` is a billing role definition ID for an EnrollmentReader.
1. Select **Run** to start the command.
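For a scripted alternative to the **Run** button, here's a hedged sketch using `Invoke-AzRestMethod`; the billing account name, principal IDs, and the `2019-10-01-preview` API version are assumptions to adapt to your environment:

```powershell
# Hedged sketch: assign the EnrollmentReader role definition to a service principal.
# All IDs are placeholders; the role assignment name is a new GUID that you choose.
$billingAccountName = "1111111"
$roleAssignmentName = [guid]::NewGuid().ToString()
$body = @{
    properties = @{
        principalId       = "<service principal object ID>"
        principalTenantId = "<tenant ID>"
        roleDefinitionId  = "/providers/Microsoft.Billing/billingAccounts/$billingAccountName/billingRoleDefinitions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT `
    -Path "/providers/Microsoft.Billing/billingAccounts/$billingAccountName/billingRoleAssignments/$roleAssignmentName?api-version=2019-10-01-preview" `
    -Payload $body
```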
Now you can use the service principal to automatically access EA APIs. The servi
For the EA purchaser role, use the same steps for the enrollment reader. Specify the `roleDefinitionId`, using the following example:
-`"/providers/Microsoft.Billing/billingAccounts/1111111/billingRoleDefinitions/ da6647fb-7651-49ee-be91-c43c4877f0c4"`
+`"/providers/Microsoft.Billing/billingAccounts/1111111/billingRoleDefinitions/ bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f"`
## Assign the department reader role to the service principal
Now you can use the service principal to automatically access EA APIs. The servi
| | | | `properties.principalId` | It's the value of Object ID. See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). | | `properties.principalTenantId` | See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). |
- | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/{enrollmentAccountID}/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71` |
+ | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/{enrollmentAccountID}/billingRoleDefinitions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a` |
The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the Azure portal.
- The billing role definition ID of `a0bcee42-bf30-4d1b-926a-48d21664ef71` is for the subscription creator role.
+ The billing role definition ID of `cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a` is for the subscription creator role.
1. Select **Run** to start the command.
cost-management-billing Grant Access To Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/grant-access-to-create-subscription.md
To [create subscriptions under an enrollment account](programmatically-create-su
{ "value": [ {
- "id": "/providers/Microsoft.Billing/enrollmentAccounts/747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "name": "747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/enrollmentAccounts/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
+ "name": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"type": "Microsoft.Billing/enrollmentAccounts", "properties": { "principalName": "SignUpEngineering@contoso.com"
To [create subscriptions under an enrollment account](programmatically-create-su
} ```
- Use the `principalName` property to identify the account that you want to grant Azure RBAC Owner access to. Copy the `name` of that account. For example, if you wanted to grant Azure RBAC Owner access to the SignUpEngineering@contoso.com enrollment account, you'd copy ```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```. It's the object ID of the enrollment account. Paste this value somewhere so that you can use it in the next step as `enrollmentAccountObjectId`.
+ Use the `principalName` property to identify the account that you want to grant Azure RBAC Owner access to. Copy the `name` of that account. For example, if you wanted to grant Azure RBAC Owner access to the SignUpEngineering@contoso.com enrollment account, you'd copy ```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```. It's the object ID of the enrollment account. Paste this value somewhere so that you can use it in the next step as `enrollmentAccountObjectId`.
# [PowerShell](#tab/azure-powershell)
To [create subscriptions under an enrollment account](programmatically-create-su
```azurepowershell ObjectId | PrincipalName
- 747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx | SignUpEngineering@contoso.com
+ aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb | SignUpEngineering@contoso.com
4cd2fcf6-xxxx-xxxx-xxxx-xxxxxxxxxxxx | BillingPlatformTeam@contoso.com ```
- Use the `principalName` property to identify the account you want to grant Azure RBAC Owner access to. Copy the `ObjectId` of that account. For example, if you wanted to grant Azure RBAC Owner access to the SignUpEngineering@contoso.com enrollment account, you'd copy ```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```. Paste this object ID somewhere so that you can use it in the next step as the `enrollmentAccountObjectId`.
+ Use the `principalName` property to identify the account you want to grant Azure RBAC Owner access to. Copy the `ObjectId` of that account. For example, if you wanted to grant Azure RBAC Owner access to the SignUpEngineering@contoso.com enrollment account, you'd copy ```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```. Paste this object ID somewhere so that you can use it in the next step as the `enrollmentAccountObjectId`.
# [Azure CLI](#tab/azure-cli)
To [create subscriptions under an enrollment account](programmatically-create-su
```json [ {
- "id": "/providers/Microsoft.Billing/enrollmentAccounts/747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "name": "747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/enrollmentAccounts/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
+ "name": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"principalName": "SignUpEngineering@contoso.com", "type": "Microsoft.Billing/enrollmentAccounts", }, {
- "id": "/providers/Microsoft.Billing/enrollmentAccounts/747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/enrollmentAccounts/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"name": "4cd2fcf6-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "principalName": "BillingPlatformTeam@contoso.com", "type": "Microsoft.Billing/enrollmentAccounts",
To [create subscriptions under an enrollment account](programmatically-create-su
- Use the `principalName` property to identify the account that you want to grant Azure RBAC Owner access to. Copy the `name` of that account. For example, if you wanted to grant Azure RBAC Owner access to the SignUpEngineering@contoso.com enrollment account, you'd copy ```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```. It's the object ID of the enrollment account. Paste this value somewhere so that you can use it in the next step as `enrollmentAccountObjectId`.
+ Use the `principalName` property to identify the account that you want to grant Azure RBAC Owner access to. Copy the `name` of that account. For example, if you wanted to grant Azure RBAC Owner access to the SignUpEngineering@contoso.com enrollment account, you'd copy ```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```. It's the object ID of the enrollment account. Paste this value somewhere so that you can use it in the next step as `enrollmentAccountObjectId`.
1. <a id="userObjectId"></a>Get object ID of the user or group you want to give the Azure RBAC Owner role to
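As a quick illustration of this lookup (a sketch assuming an Azure PowerShell session; the user principal name and group name are placeholders):

```powershell
# Sketch: look up the object ID to use as <userObjectId> for a user or a group.
(Get-AzADUser -UserPrincipalName "alice@contoso.com").Id
(Get-AzADGroup -DisplayName "Subscription Owners").Id
```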
To [create subscriptions under an enrollment account](programmatically-create-su
# [REST](#tab/rest-2)
- Run the following command, replacing ```<enrollmentAccountObjectId>``` with the `name` you copied in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). Replace ```<userObjectId>``` with the object ID you copied from the second step.
+ Run the following command, replacing ```<enrollmentAccountObjectId>``` with the `name` you copied in the first step (```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```). Replace ```<userObjectId>``` with the object ID you copied from the second step.
```json PUT https://management.azure.com/providers/Microsoft.Billing/enrollmentAccounts/<enrollmentAccountObjectId>/providers/Microsoft.Authorization/roleAssignments/<roleAssignmentGuid>?api-version=2015-07-01
To [create subscriptions under an enrollment account](programmatically-create-su
"properties": { "roleDefinitionId": "/providers/Microsoft.Billing/enrollmentAccounts/providers/Microsoft.Authorization/roleDefinitions/<ownerRoleDefinitionId>", "principalId": "<userObjectId>",
- "scope": "/providers/Microsoft.Billing/enrollmentAccounts/747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "scope": "/providers/Microsoft.Billing/enrollmentAccounts/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"createdOn": "2018-03-05T08:36:26.4014813Z", "updatedOn": "2018-03-05T08:36:26.4014813Z", "createdBy": "<assignerObjectId>",
To [create subscriptions under an enrollment account](programmatically-create-su
# [PowerShell](#tab/azure-powershell-2)
- Run the following [New-AzRoleAssignment](../../role-based-access-control/role-assignments-powershell.md) command, replacing ```<enrollmentAccountObjectId>``` with the `ObjectId` collected in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). Replace ```<userObjectId>``` with the object ID collected in the second step.
+ Run the following [New-AzRoleAssignment](../../role-based-access-control/role-assignments-powershell.md) command, replacing ```<enrollmentAccountObjectId>``` with the `ObjectId` collected in the first step (```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```). Replace ```<userObjectId>``` with the object ID collected in the second step.
```azurepowershell-interactive New-AzRoleAssignment -RoleDefinitionName Owner -ObjectId <userObjectId> -Scope /providers/Microsoft.Billing/enrollmentAccounts/<enrollmentAccountObjectId>
To [create subscriptions under an enrollment account](programmatically-create-su
# [Azure CLI](#tab/azure-cli-2)
- Run the following [az role assignment create](../../role-based-access-control/role-assignments-cli.md) command, replacing ```<enrollmentAccountObjectId>``` with the `name` you copied in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). Replace ```<userObjectId>``` with the object ID collected in the second step.
+ Run the following [az role assignment create](../../role-based-access-control/role-assignments-cli.md) command, replacing ```<enrollmentAccountObjectId>``` with the `name` you copied in the first step (```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```). Replace ```<userObjectId>``` with the object ID collected in the second step.
```azurecli-interactive az role assignment create --role Owner --assignee-object-id <userObjectId> --scope /providers/Microsoft.Billing/enrollmentAccounts/<enrollmentAccountObjectId>
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement-across-tenants.md
az ad sp show --id aaaaaaaa-bbbb-cccc-1111-222222222222 --query 'id'
Sign in to Azure PowerShell and use the [Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal) cmdlet: ```sh
-Get-AzADServicePrincipal -ApplicationId aaaaaaaa-bbbb-cccc-1111-222222222222 | Select-Object -Property Id
+Get-AzADServicePrincipal -ApplicationId 00001111-aaaa-2222-bbbb-3333cccc4444 | Select-Object -Property Id
``` Save the `Id` value returned by the command.
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
The API response lists the billing accounts that you have access to.
{ "value": [ {
- "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
- "name": "5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "name": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
"properties": { "accountStatus": "Active", "accountType": "Enterprise",
The API response lists the billing accounts that you have access to.
} ```
-Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure, the agreementType of the account is *MicrosoftCustomerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
+Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure that the agreementType of the account is *MicrosoftCustomerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
### [PowerShell](#tab/azure-powershell)
Get-AzBillingAccount
You'll get back a list of all billing accounts that you have access to ```json
-Name : 5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx
+Name : aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx
DisplayName : Contoso AccountStatus : Active AccountType : Enterprise AgreementType : MicrosoftCustomerAgreement HasReadAccess : True ```
-Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure, the agreementType of the account is *MicrosoftCustomerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
+Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure that the agreementType of the account is *MicrosoftCustomerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
### [Azure CLI](#tab/azure-cli)
You'll get back a list of all billing accounts that you have access to.
"enrollmentAccounts": null, "enrollmentDetails": null, "hasReadAccess": true,
- "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
- "name": "5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "name": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
"soldTo": null, "type": "Microsoft.Billing/billingAccounts" } ] ```
-Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure, the agreementType of the account is *MicrosoftCustomerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
+Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure that the agreementType of the account is *MicrosoftCustomerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
First you get the list of billing profiles under the billing account that you ha
### [REST](#tab/rest) ```json
-GET https://management.azure.com/providers/Microsoft.Billing/billingaccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingprofiles/?api-version=2020-05-01
+GET https://management.azure.com/providers/Microsoft.Billing/billingaccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingprofiles/?api-version=2020-05-01
``` The API response lists all the billing profiles on which you have access to create subscriptions:
The API response lists all the billing profiles on which you have access to crea
{ "value": [ {
- "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx",
"name": "AW4F-xxxx-xxx-xxx", "properties": { "billingRelationshipType": "Direct",
The API response lists all the billing profiles on which you have access to crea
} ```
- Copy the `id` to next identify the invoice sections underneath the billing profile. For example, copy `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx` and call the following API.
+ Next, copy the `id` to identify the invoice sections underneath the billing profile. For example, copy `/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx` and call the following API.
```json
-GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoicesections?api-version=2020-05-01
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoicesections?api-version=2020-05-01
``` ### Response
GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/5e9
"totalCount": 1, "value": [ {
- "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx",
"name": "SH3V-xxxx-xxx-xxx", "properties": { "displayName": "Development",
GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/5e9
} ```
-Use the `id` property to identify the invoice section for which you want to create subscriptions. Copy the entire string. For example, `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx`.
+Use the `id` property to identify the invoice section for which you want to create subscriptions. Copy the entire string. For example, `/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx`.
### [PowerShell](#tab/azure-powershell) ```azurepowershell
-Get-AzBillingProfile -BillingAccountName 5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx
+Get-AzBillingProfile -BillingAccountName aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx
``` You'll get the list of billing profiles under this account as part of the response.
PostalCode : 98052
Note the `name` of the billing profile from the above response. The next step is to get the invoice section that you have access to underneath this billing profile. You'll need the `name` of the billing account and billing profile. ```azurepowershell
-Get-AzInvoiceSection -BillingAccountName 5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx -BillingProfileName AW4F-xxxx-xxx-xxx
+Get-AzInvoiceSection -BillingAccountName aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx -BillingProfileName AW4F-xxxx-xxx-xxx
``` You'll get the invoice section returned.
Name : SH3V-xxxx-xxx-xxx
DisplayName : Development ```
-The `name` above is the Invoice section name you need to create a subscription under. Construct your billing scope using the format `/providers/Microsoft.Billing/billingAccounts/<BillingAccountName>/billingProfiles/<BillingProfileName>/invoiceSections/<InvoiceSectionName>`. In this example, this value equates to `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
+The `name` above is the Invoice section name you need to create a subscription under. Construct your billing scope using the format `/providers/Microsoft.Billing/billingAccounts/<BillingAccountName>/billingProfiles/<BillingProfileName>/invoiceSections/<InvoiceSectionName>`. In this example, this value equates to `"/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
### [Azure CLI](#tab/azure-cli) ```azurecli
-az billing profile list --account-name "5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx" --expand "InvoiceSections"
+az billing profile list --account-name "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx" --expand "InvoiceSections"
``` This API returns the list of billing profiles and invoice sections under the provided billing account.
This API returns the list of billing profiles and invoice sections under the pro
} ], "hasReadAccess": true,
- "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx",
"indirectRelationshipInfo": null, "invoiceDay": 5, "invoiceEmailOptIn": true,
This API returns the list of billing profiles and invoice sections under the pro
"value": [ { "displayName": "Field_Led_Test_Ace",
- "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx",
"labels": null, "name": "SH3V-xxxx-xxx-xxx", "state": "Active",
This API returns the list of billing profiles and invoice sections under the pro
} ] ```
-Use the `id` property under the invoice section object to identify the invoice section for which you want to create subscriptions. Copy the entire string. For example, /providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx.
+Use the `id` property under the invoice section object to identify the invoice section for which you want to create subscriptions. Copy the entire string. For example, /providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx.
## Create a subscription for an invoice section
-The following example creates a subscription named *Dev Team subscription* for the *Development* invoice section. The subscription is billed to the *Contoso Billing Profile* billing profile and appears on the *Development* section of its invoice. You use the copied billing scope from the previous step: `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx`.
+The following example creates a subscription named *Dev Team subscription* for the *Development* invoice section. The subscription is billed to the *Contoso Billing Profile* billing profile and appears on the *Development* section of its invoice. You use the copied billing scope from the previous step: `/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx`.
### [REST](#tab/rest)
PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid
{ "properties": {
- "billingScope": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx",
+ "billingScope": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx",
"DisplayName": "Dev Team subscription", "Workload": "Production" }
An in-progress status is returned as an `Accepted` state under `provisioningStat
To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`, as shown in the example below. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
-Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscriptionalias) command and the billing scope `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
+Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscriptionalias) command with the billing scope `"/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
```azurepowershell
-New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" -Workload "Production"
+New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" -Workload "Production"
``` You get the subscriptionId as part of the response from the command.
First, install the extension by running `az extension add --name account` and `a
Run the [az account alias create](/cli/azure/account/alias#az-account-alias-create) following command. ```azurecli
-az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" --display-name "Dev Team Subscription" --workload "Production"
+az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" --display-name "Dev Team Subscription" --workload "Production"
``` You get the subscriptionId as part of the response from the command.
With a request body:
"value": "sampleAlias" }, "billingScope": {
- "value": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"
+ "value": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"
} }, "mode": "Incremental"
New-AzManagementGroupDeployment `
-ManagementGroupId mg1 ` -TemplateFile azuredeploy.json ` -subscriptionAliasName sampleAlias `
- -billingScope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"
+ -billingScope "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"
``` ### [Azure CLI](#tab/azure-cli)
az deployment mg create \
--location eastus \ --management-group-id mg1 \ --template-file azuredeploy.json \
- --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx'
+ --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx'
```
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
The API response lists the billing accounts.
{ "value": [ {
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
- "name": "99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "name": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
"properties": { "accountStatus": "Active", "accountType": "Partner",
The API response lists the billing accounts.
} ```
-Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure, the agreementType of the account is *MicrosoftPartnerAgreement*. Copy the `name` for the account. For example, to create a subscription for the `Contoso` billing account, copy `99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
+Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure that the agreementType of the account is *MicrosoftPartnerAgreement*. Copy the `name` for the account. For example, to create a subscription for the `Contoso` billing account, copy `aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
### [PowerShell](#tab/azure-powershell)
You will get back a list of all billing accounts that you have access to.
"enrollmentAccounts": null, "enrollmentDetails": null, "hasReadAccess": true,
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
- "name": "99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "name": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
"soldTo": null, "type": "Microsoft.Billing/billingAccounts" } ] ```
-Use the displayName property to identify the billing account for which you want to create subscriptions. Ensure, the agreementType of the account is MicrosoftPartnerAgreement. Copy the name for the account. For example, to create a subscription for the Contoso billing account, copy 99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx. Paste the value somewhere so that you can use it in the next step.
+Use the displayName property to identify the billing account for which you want to create subscriptions. Ensure that the agreementType of the account is MicrosoftPartnerAgreement. Copy the name for the account. For example, to create a subscription for the Contoso billing account, copy aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx. Paste the value somewhere so that you can use it in the next step.
## Find customers that have Azure plans
-Make the following request, with the `name` copied from the first step (```99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx```) to list all customers in the billing account for whom you can create Azure subscriptions.
+Make the following request, with the `name` copied from the first step (```aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx```) to list all customers in the billing account for whom you can create Azure subscriptions.
### [REST](#tab/rest) ```json
-GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers?api-version=2020-05-01
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers?api-version=2020-05-01
``` The API response lists the customers in the billing account with Azure plans. You can create subscriptions for these customers.
The API response lists the customers in the billing account with Azure plans. Yo
"totalCount": 2, "value": [ {
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/7d15644f-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "name": "7d15644f-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f",
+ "name": "bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f",
"properties": { "billingProfileDisplayName": "Fabrikam toys Billing Profile",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/YL4M-xxxx-xxx-xxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/YL4M-xxxx-xxx-xxx",
"displayName": "Fabrikam toys" }, "type": "Microsoft.Billing/billingAccounts/customers" }, {
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "properties": { "billingProfileDisplayName": "Contoso toys Billing Profile",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/YL4M-xxxx-xxx-xxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/YL4M-xxxx-xxx-xxx",
"displayName": "Contoso toys" }, "type": "Microsoft.Billing/billingAccounts/customers"
The API response lists the customers in the billing account with Azure plans. Yo
```
-Use the `displayName` property to identify the customer for which you want to create subscriptions. Copy the `id` for the customer. For example, to create a subscription for `Fabrikam toys`, copy `/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/7d15644f-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. Paste the value somewhere to use it in later steps.
+Use the `displayName` property to identify the customer for which you want to create subscriptions. Copy the `id` for the customer. For example, to create a subscription for `Fabrikam toys`, copy `/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f`. Paste the value somewhere to use it in later steps.
### [PowerShell](#tab/azure-powershell)
Please use either Azure CLI or REST API to get this value.
### [Azure CLI](#tab/azure-cli) ```azurecli
-az billing customer list --account-name 99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx
+az billing customer list --account-name aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx
``` The API response lists the customers in the billing account with Azure plans. You can create subscriptions for these customers.
The API response lists the customers in the billing account with Azure plans. Yo
[ { "billingProfileDisplayName": "Fabrikam toys Billing Profile",
- "billingProfileId": "providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/7d15644f-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "billingProfileId": "providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f",
"displayName": "Fabrikam toys",
- "id": "providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/7d15644f-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f",
"name": "acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "resellers": null, "type": "Microsoft.Billing/billingAccounts/customers" }, { "billingProfileDisplayName": "Contoso toys Billing Profile",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"displayName": "Contoso toys",
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resellers": null, "type": "Microsoft.Billing/billingAccounts/customers"
The API response lists the customers in the billing account with Azure plans. Yo
```
-Use the `displayName` property to identify the customer for which you want to create subscriptions. Copy the `id` for the customer. For example, to create a subscription for `Fabrikam toys`, copy `/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/7d15644f-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. Paste the value somewhere to use it in later steps.
+Use the `displayName` property to identify the customer for which you want to create subscriptions. Copy the `id` for the customer. For example, to create a subscription for `Fabrikam toys`, copy `/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f`. Paste the value somewhere to use it in later steps.
If you're an Indirect provider in the CSP two-tier model, you can specify a rese
### [REST](#tab/rest)
-Make the following request, with the `id` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx```) to list all resellers that are available for a customer.
+Make the following request, with the `id` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx```) to list all resellers that are available for a customer.
```json GET "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx?$expand=resellers&api-version=2020-05-01"
The API response lists the resellers for the customer:
```json { "value": [{
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "type": "Microsoft.Billing/billingAccounts/customers", "properties": { "billingProfileDisplayName": "Fabrikam toys Billing Profile",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/YL4M-xxxx-xxx-xxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/YL4M-xxxx-xxx-xxx",
"displayName": "Fabrikam toys", "resellers": [ {
Please use either Azure CLI or REST API to get this value.
### [Azure CLI](#tab/azure-cli)
-Make the following request, with the `name` copied from the first step (```99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx```) and customer `name` copied from the previous step (```acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx```).
+Make the following request, with the `name` copied from the first step (```aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx```) and customer `name` copied from the previous step (```acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx```).
```azurecli
- az billing customer show --expand "enabledAzurePlans,resellers" --account-name "99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx" --name "acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ az billing customer show --expand "enabledAzurePlans,resellers" --account-name "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx" --name "acba85c9-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
``` The API response lists the resellers for the customer:
The API response lists the resellers for the customer:
```json { "billingProfileDisplayName": "Fabrikam toys Billing Profile",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/YL4M-xxxx-xxx-xxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/YL4M-xxxx-xxx-xxx",
"displayName": "Fabrikam toys", "enabledAzurePlans": [ {
The API response lists the resellers for the customer:
"skuId": "0001" } ],
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "resellers": [ {
Use the `description` property to identify the reseller who is associated with t
## Create a subscription for a customer
-The following example creates a subscription named *Dev Team subscription* for *Fabrikam toys* and associate *Wingtip* reseller to the subscription. You use the copied billing scope from previous step: `/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+The following example creates a subscription named *Dev Team subscription* for *Fabrikam toys* and associates the *Wingtip* reseller with the subscription. You use the billing scope copied in the previous step: `/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
### [REST](#tab/rest)
PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid
{ "properties": {
- "billingScope": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "billingScope": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"DisplayName": "Dev Team subscription", "Workload": "Production" }
GET https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid
"name": "sampleAlias", "type": "Microsoft.Subscription/aliases", "properties": {
- "subscriptionId": "b5bab918-e8a9-4c34-a2e2-ebc1b75b9d74",
+ "subscriptionId": "cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a",
"provisioningState": "Succeeded" } }
Pass the optional *resellerId* copied from the second step in the request body o
To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet, in below example run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
-Run the following New-AzSubscriptionAlias command, using the billing scope `"/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"`.
+Run the following New-AzSubscriptionAlias command, using the billing scope `"/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"`.
```azurepowershell
-New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -Workload 'Production"
+New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -Workload "Production"
``` You get the subscriptionId as part of the response from the command.
First, install the extension by running `az extension add --name account` and `a
Run the following [az account alias create](/cli/azure/account/alias#az-account-alias-create) command. ```azurecli
-az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx" --display-name "Dev Team Subscription" --workload "Production"
+az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx" --display-name "Dev Team Subscription" --workload "Production"
``` You get the subscriptionId as part of the response from the command.
With a request body:
"value": "sampleAlias" }, "billingScope": {
- "value": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ "value": "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
} }, "mode": "Incremental"
New-AzManagementGroupDeployment `
-ManagementGroupId mg1 ` -TemplateFile azuredeploy.json ` -subscriptionAliasName sampleAlias `
- -billingScope "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ -billingScope "/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
``` ### [Azure CLI](#tab/azure-cli)
az deployment mg create \
--location eastus \ --management-group-id mg1 \ --template-file azuredeploy.json \
- --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
+ --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/billingAccounts/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
```
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
The API response lists all enrollment accounts you have access to:
{ "value": [ {
- "id": "/providers/Microsoft.Billing/enrollmentAccounts/747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "name": "747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/enrollmentAccounts/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
+ "name": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"type": "Microsoft.Billing/enrollmentAccounts", "properties": { "principalName": "SignUpEngineering@contoso.com"
The API response lists all enrollment accounts you have access to:
} ```
-Use the `principalName` property to identify the account that you want subscriptions to be billed to. Copy the `name` of that account. For example, create subscriptions under the SignUpEngineering@contoso.com enrollment account, copy ```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```. The identifier is the object ID of the enrollment account. Paste the value somewhere so that you can use it in the next step as `enrollmentAccountObjectId`.
+Use the `principalName` property to identify the account that you want subscriptions to be billed to. Copy the `name` of that account. For example, to create subscriptions under the SignUpEngineering@contoso.com enrollment account, copy ```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```. The identifier is the object ID of the enrollment account. Paste the value somewhere so that you can use it in the next step as `enrollmentAccountObjectId`.
### [PowerShell](#tab/azure-powershell)
Azure responds with a list of enrollment accounts you have access to:
```azurepowershell ObjectId | PrincipalName
-747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx | SignUpEngineering@contoso.com
+aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb | SignUpEngineering@contoso.com
4cd2fcf6-xxxx-xxxx-xxxx-xxxxxxxxxxxx | BillingPlatformTeam@contoso.com ```
-Use the `principalName` property to identify the account that you want subscriptions to be billed to. Copy the `ObjectId` of that account. For example, to create subscriptions under the SignUpEngineering@contoso.com enrollment account, copy ```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```. Paste the object ID somewhere so that you can use it in the next step as the `enrollmentAccountObjectId`.
+Use the `principalName` property to identify the account that you want subscriptions to be billed to. Copy the `ObjectId` of that account. For example, to create subscriptions under the SignUpEngineering@contoso.com enrollment account, copy ```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```. Paste the object ID somewhere so that you can use it in the next step as the `enrollmentAccountObjectId`.
### [Azure CLI](#tab/azure-cli)
Azure responds with a list of enrollment accounts you have access to:
```json [ {
- "id": "/providers/Microsoft.Billing/enrollmentAccounts/747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "name": "747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/enrollmentAccounts/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
+ "name": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"principalName": "SignUpEngineering@contoso.com", "type": "Microsoft.Billing/enrollmentAccounts", }, {
- "id": "/providers/Microsoft.Billing/enrollmentAccounts/747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/enrollmentAccounts/aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
"name": "4cd2fcf6-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "principalName": "BillingPlatformTeam@contoso.com", "type": "Microsoft.Billing/enrollmentAccounts",
Azure responds with a list of enrollment accounts you have access to:
] ```
-Use the `principalName` property to identify the account that you want subscriptions to be billed to. Copy the `name` of that account. For example, to create subscriptions under the SignUpEngineering@contoso.com enrollment account, copy ```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```. The identifier is the object ID of the enrollment account. Paste the value somewhere so that you can use it in the next step as `enrollmentAccountObjectId`.
+Use the `principalName` property to identify the account that you want subscriptions to be billed to. Copy the `name` of that account. For example, to create subscriptions under the SignUpEngineering@contoso.com enrollment account, copy ```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```. The identifier is the object ID of the enrollment account. Paste the value somewhere so that you can use it in the next step as `enrollmentAccountObjectId`.
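Rather than scanning the JSON by eye, you can filter the same list with a JMESPath query. This is a small sketch that assumes the `principalName` field shown in the CLI output above.
```azurecli
# Return the enrollment account object ID (its `name`) for a given principal name.
az billing enrollment-account list \
    --query "[?principalName=='SignUpEngineering@contoso.com'].name" \
    --output tsv
```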
The following example creates a subscription named *Dev Team Subscription* in th
### [REST](#tab/rest)
-Make the following request, replacing `<enrollmentAccountObjectId>` with the `name` copied from the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
+Make the following request, replacing `<enrollmentAccountObjectId>` with the `name` copied from the first step (```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
```json POST https://management.azure.com/providers/Microsoft.Billing/enrollmentAccounts/<enrollmentAccountObjectId>/providers/Microsoft.Subscription/createSubscription?api-version=2018-03-01-preview
In the response, as part of the header `Location`, you get back a url that you c
To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet, in below example run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
-Run the [New-AzSubscription](/powershell/module/az.subscription) command below, replacing `<enrollmentAccountObjectId>` with the `ObjectId` collected in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
+Run the [New-AzSubscription](/powershell/module/az.subscription) command below, replacing `<enrollmentAccountObjectId>` with the `ObjectId` collected in the first step (```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
```azurepowershell-interactive New-AzSubscription -OfferType MS-AZR-0017P -Name "Dev Team Subscription" -EnrollmentAccountObjectId <enrollmentAccountObjectId> -OwnerObjectId <userObjectId1>,<servicePrincipalObjectId>
New-AzSubscription -OfferType MS-AZR-0017P -Name "Dev Team Subscription" -Enroll
First, install the preview extension by running `az extension add --name subscription`.
-Run the [az account create](/cli/azure/account#-ext-subscription-az-account-create) command below, replacing `<enrollmentAccountObjectId>` with the `name` you copied in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
+Run the [az account create](/cli/azure/account#-ext-subscription-az-account-create) command below, replacing `<enrollmentAccountObjectId>` with the `name` you copied in the first step (```aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
```azurecli-interactive az account create --offer-type "MS-AZR-0017P" --display-name "Dev Team Subscription" --enrollment-account-object-id "<enrollmentAccountObjectId>" --owner-object-id "<userObjectId>","<servicePrincipalObjectId>"
The API response lists the billing accounts that you have access to.
{ "value": [ {
- "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
- "name": "5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "name": "bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
"properties": {
- "accountId": "5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "accountId": "bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f",
"accountStatus": "Active", "accountType": "Enterprise", "agreementType": "MicrosoftCustomerAgreement",
The API response lists the billing accounts that you have access to.
} ```
-Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure, the agreementType of the account is *MicrosoftCustomerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
+Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure that the `agreementType` of the account is *MicrosoftCustomerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
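A hedged sketch for pulling the billing account `name` without copying it by hand; it assumes the Azure CLI `az billing account list` command and that the CLI flattens the `properties` shown in the REST response into top-level fields such as `displayName` and `agreementType`.
```azurecli
# Show the name and agreement type of the "Contoso" billing account.
az billing account list \
    --query "[?displayName=='Contoso'].{name:name, agreementType:agreementType}" \
    --output table
```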
### Find invoice sections to create subscriptions The charges for your subscription appear on a section of a billing profile's invoice. Use the following API to get the list of invoice sections and billing profiles on which you have permission to create Azure subscriptions.
-Make the following request, replacing `<billingAccountName>` with the `name` copied from the first step (```5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx```).
+Make the following request, replacing `<billingAccountName>` with the `name` copied from the first step (```bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx```).
```json POST https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/listInvoiceSectionsWithCreateSubscriptionPermission?api-version=2019-10-01-preview
The API response lists all the invoice sections and their billing profiles on wh
{ "value": [{ "billingProfileDisplayName": "Contoso finance",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/PBFV-xxxx-xxx-xxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/PBFV-xxxx-xxx-xxx",
"enabledAzurePlans": [{ "productId": "DZH318Z0BPS6", "skuId": "0001",
The API response lists all the invoice sections and their billing profiles on wh
"skuDescription": "Microsoft Azure Plan for DevTest" }], "invoiceSectionDisplayName": "Development",
- "invoiceSectionId": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/PBFV-xxxx-xxx-xxx/invoiceSections/GJ77-xxxx-xxx-xxx"
+ "invoiceSectionId": "/providers/Microsoft.Billing/billingAccounts/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/PBFV-xxxx-xxx-xxx/invoiceSections/GJ77-xxxx-xxx-xxx"
}, { "billingProfileDisplayName": "Contoso finance",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/PBFV-xxxx-xxx-xxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/PBFV-xxxx-xxx-xxx",
"enabledAzurePlans": [{ "productId": "DZH318Z0BPS6", "skuId": "0001",
The API response lists all the invoice sections and their billing profiles on wh
"skuDescription": "Microsoft Azure Plan for DevTest" }], "invoiceSectionDisplayName": "Testing",
- "invoiceSectionId": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/PBFV-XXXX-XXX-XXX/invoiceSections/GJGR-XXXX-XXX-XXX"
+ "invoiceSectionId": "/providers/Microsoft.Billing/billingAccounts/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/PBFV-XXXX-XXX-XXX/invoiceSections/GJGR-XXXX-XXX-XXX"
}] } ```
-Use the `invoiceSectionDisplayName` property to identify the invoice section for which you want to create subscriptions. Copy `invoiceSectionId`, `billingProfileId`, and one of the `skuId` for the invoice section. For example, to create a subscription of type `Microsoft Azure plan` for `Development` invoice section, copy `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_2019-05-31/billingProfiles/PBFV-XXXX-XXX-XXX/invoiceSections/GJGR-XXXX-XXX-XXX`, `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_2019-05-31/billingProfiles/PBFV-xxxx-xxx-xxx`, and `0001`. Paste the values somewhere so that you can use them in the next step.
+Use the `invoiceSectionDisplayName` property to identify the invoice section for which you want to create subscriptions. Copy `invoiceSectionId`, `billingProfileId`, and one of the `skuId` for the invoice section. For example, to create a subscription of type `Microsoft Azure plan` for `Development` invoice section, copy `/providers/Microsoft.Billing/billingAccounts/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_2019-05-31/billingProfiles/PBFV-XXXX-XXX-XXX/invoiceSections/GJGR-XXXX-XXX-XXX`, `/providers/Microsoft.Billing/billingAccounts/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_2019-05-31/billingProfiles/PBFV-xxxx-xxx-xxx`, and `0001`. Paste the values somewhere so that you can use them in the next step.
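You can also call the same endpoint through `az rest` and extract the values with a JMESPath query instead of copying them from the raw response. This sketch uses the `<billingAccountName>` placeholder from the request above; verify the property paths against your own response.
```azurecli
# Call the permission-aware endpoint and pull the IDs and SKU for the
# "Development" invoice section.
az rest --method post \
    --url "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/listInvoiceSectionsWithCreateSubscriptionPermission?api-version=2019-10-01-preview" \
    --query "value[?invoiceSectionDisplayName=='Development'].{invoiceSectionId:invoiceSectionId, billingProfileId:billingProfileId, skuId:enabledAzurePlans[0].skuId}"
```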
### Create a subscription for an invoice section The following example creates a subscription named *Dev Team subscription* of type *Microsoft Azure Plan* for the *Development* invoice section. The subscription is billed to the *Contoso finance's* billing profile and appear on the *Development* section of its invoice.
-Make the following request, replacing `<invoiceSectionId>` with the `invoiceSectionId` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_2019-05-31/billingProfiles/PBFV-XXXX-XXX-XXX/invoiceSections/GJGR-XXXX-XXX-XXX```). Pass the `billingProfileId` and `skuId` copied from the second step in the request parameters of the API. To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
+Make the following request, replacing `<invoiceSectionId>` with the `invoiceSectionId` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_2019-05-31/billingProfiles/PBFV-XXXX-XXX-XXX/invoiceSections/GJGR-XXXX-XXX-XXX```). Pass the `billingProfileId` and `skuId` copied from the second step in the request parameters of the API. To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
```json POST https://management.azure.com<invoiceSectionId>/providers/Microsoft.Subscription/createSubscription?api-version=2018-11-01-preview
The API response lists the billing accounts.
{ "value": [ {
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
- "name": "99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
+ "name": "cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx",
"properties": {
- "accountId": "5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "accountId": "bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f",
"accountStatus": "Active", "accountType": "Enterprise", "agreementType": "MicrosoftPartnerAgreement",
The API response lists the billing accounts.
} ```
-Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure, the agreementType of the account is *MicrosoftPartnerAgreement*. Copy the `name` for the account. For example, to create a subscription for the `Contoso` billing account, copy `99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
+Use the `displayName` property to identify the billing account for which you want to create subscriptions. Ensure that the `agreementType` of the account is *MicrosoftPartnerAgreement*. Copy the `name` of the account. For example, to create a subscription for the `Contoso` billing account, copy `cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx`. Paste the value somewhere so that you can use it in the next step.
### Find customers that have Azure plans
-Make the following request, replacing `<billingAccountName>` with the `name` copied from the first step (```5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx```) to list all customers in the billing account for whom you can create Azure subscriptions.
+Make the following request, replacing `<billingAccountName>` with the `name` copied from the first step (```cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx```) to list all customers in the billing account for whom you can create Azure subscriptions.
```json GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/customers?api-version=2019-10-01-preview
The API response lists the customers in the billing account with Azure plans. Yo
{ "value": [ {
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "name": "2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b",
+ "name": "dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b",
"properties": { "billingProfileDisplayName": "Contoso USD",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/JUT6-xxxx-xxxx-xxxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/JUT6-xxxx-xxxx-xxxx",
"displayName": "Fabrikam toys" }, "type": "Microsoft.Billing/billingAccounts/customers" }, {
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/97c3fac4-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/97c3fac4-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "97c3fac4-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "properties": { "billingProfileDisplayName": "Fabrikam sports",
- "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/JUT6-xxxx-xxxx-xxxx",
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/JUT6-xxxx-xxxx-xxxx",
"displayName": "Fabrikam bakery" }, "type": "Microsoft.Billing/billingAccounts/customers"
The API response lists the customers in the billing account with Azure plans. Yo
```
-Use the `displayName` property to identify the customer for which you want to create subscriptions. Copy the `id` for the customer. For example, to create a subscription for `Fabrikam toys`, copy `/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. Paste the value somewhere to use it in later steps.
+Use the `displayName` property to identify the customer for which you want to create subscriptions. Copy the `id` for the customer. For example, to create a subscription for `Fabrikam toys`, copy `/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b`. Paste the value somewhere to use it in later steps.
### Optional for Indirect providers: Get the resellers for a customer If you're an Indirect provider in the CSP two-tier model, you can specify a reseller while creating subscriptions for customers.
-Make the following request, replacing `<customerId>` with the `id` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx```) to list all resellers that are available for a customer.
+Make the following request, replacing `<customerId>` with the `id` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b```) to list all resellers that are available for a customer.
```json GET https://management.azure.com<customerId>?$expand=resellers&api-version=2019-10-01-preview
The API response lists the resellers for the customer:
```json { "value": [{
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "2ed2c490-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "type": "Microsoft.Billing/billingAccounts/customers", "properties": {
The API response lists the resellers for the customer:
} }, {
- "id": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/4ed2c793-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/4ed2c793-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "4ed2c793-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "type": "Microsoft.Billing/billingAccounts/customers", "properties": {
Use the `description` property to identify the reseller to associate with the su
The following example creates a subscription named *Dev Team subscription* for *Fabrikam toys* and associate *Wingtip* reseller to the subscription.
-Make the following request, replacing `<customerId>` with the `id` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). Pass the optional *resellerId* copied from the second step in the request parameters of the API.
+Make the following request, replacing `<customerId>` with the `id` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b```). Pass the optional *resellerId* copied from the second step in the request parameters of the API.
```json POST https://management.azure.com<customerId>/providers/Microsoft.Subscription/createSubscription?api-version=2018-11-01-preview
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Title: Understand admin roles for Enterprise Agreements (EA) in Azure
-description: Learn about Enterprise administrator roles in Azure. You can assign five distinct administrative roles.
+ Title: Understand admin roles for Enterprise Agreements in Azure
+description: Learn about the administrative roles available to manage Azure Enterprise Agreements (EA), including permissions and how to assign them.
- Previously updated : 09/04/2024+ Last updated : 10/18/2024
+#customer intent: As an enterprise administrator, I want to learn about the administrative roles available to manage Azure Enterprise Agreements so that I can manage my enterprise agreement.
-# Managing Azure Enterprise Agreement roles
-
-> [!NOTE]
-> Enterprise administrators have permissions to create new subscriptions under active enrollment accounts. For more information about creating new subscriptions, see [Add a new subscription](direct-ea-administration.md#add-a-subscription).
+# Manage Azure Enterprise Agreement roles
-
-To help manage your organization's usage and spend, Azure customers with an Enterprise Agreement can assign six distinct administrative roles:
+To help manage your organization's usage and spend, Azure customers with an Enterprise Agreement can assign the following six distinct administrative roles.
- Enterprise Administrator - Enterprise Administrator (read only)¹
To help manage your organization's usage and spend, Azure customers with an Ente
² The Bill-To contact can't be added or changed in the Azure portal. It gets added to the EA enrollment based on the user who is set up as the Bill-To contact at the agreement level. To change the Bill-To contact, a request needs to be made through a partner/software advisor to the Regional Operations Center (ROC).
+> [!NOTE]
+> Enterprise administrators have permissions to create new subscriptions under active enrollment accounts. For more information about creating new subscriptions, see [Add a new subscription](direct-ea-administration.md#add-a-subscription).
+ The first enrollment administrator that is set up during enrollment provisioning determines the authentication type of the Bill-to contact account. When the Bill-to contact gets added to the Azure portal as a read-only administrator, they're given Microsoft account authentication. For example, if the initial authentication type is set to Mixed, the EA is added as a Microsoft account and the Bill-to contact has read-only EA admin privileges. If the EA admin doesn't approve Microsoft account authorization for an existing Bill-to contact, the EA admin can delete the user in question. Then they can ask the customer to add the user back as a read-only administrator with the Work or School Account Only authentication type set at the enrollment level in the Azure portal.
The Azure portal hierarchy for Cost Management consists of:
The following diagram illustrates simple Azure EA hierarchies. ## Enterprise user roles
The following sections describe the limitations and capabilities of each role.
## User limit for admin roles
+The following table outlines the user limit for each administrative role in an Enterprise Agreement.
+ |Role| User limit| ||| |Enterprise Administrator|Unlimited|
The following sections describe the limitations and capabilities of each role.
## Organization structure and permissions by role
+The following table shows the organization structure and the permissions associated with each administrative role.
|Tasks| Enterprise Administrator|Enterprise Administrator (read only)| EA Purchaser | Department Administrator|Department Administrator (read only)|Account Owner| Partner| ||||||||| |View Enterprise Administrators|✔|✔| ✔|✘|✘|✘|✔|
The following sections describe the limitations and capabilities of each role.
- ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement. - ⁵ Task is limited to accounts in your department.-- ⁶ A subscription owner, reservation purchaser or savings plan purchaser can purchase and manage reservations and savings plans within the subscription, and only if permitted by the reservation/savings plan purchase-enabled flags. Enterprise administrators can purchase and manage reservations and savings plans across the billing account. Enterprise administrators (read-only) can view all purchased reservations and savings plans. The reservation/savings plan purchase-enabled flags don't affect the EA administrator roles. The Enterprise Admin (read-only) role holder isn't permitted to make purchases. However, if a user with that role also holds either a subscription owner, reservation purchaser or savings plan purchaser permission, the user can purchase reservations and/or savings plans, regardless of the flags.
+- ⁶ A subscription owner, reservation purchaser, or savings plan purchaser can purchase and manage reservations and savings plans within the subscription, and only if permitted by the reservation/savings plan purchase-enabled flags. Enterprise administrators can purchase and manage reservations and savings plans across the billing account. Enterprise administrators (read-only) can view all purchased reservations and savings plans. The reservation/savings plan purchase-enabled flags don't affect the EA administrator roles. The Enterprise Admin (read-only) role holder isn't permitted to make purchases. However, if a user with that role also holds either a subscription owner, reservation purchaser or savings plan purchaser permission, the user can purchase reservations and/or savings plans, regardless of the flags.
## Add a new enterprise administrator
When new Account Owners (AO) are added to an Azure EA enrollment for the first t
> [!NOTE] > If the Account Owner is a service account and doesn't have an email, use an In-Private session to sign in to the Azure portal and navigate to Cost Management to be prompted to accept the activation welcome email.
-Once they activate their account, the account status is updated from _pending_ to _active_. The account owner needs to read the `Warning` message and select **Continue**. New users might get prompted to enter their first and last name to create a Commerce Account. If so, they must add the required information to continue and then the account is activated.
+Once they activate their account, the account status is updated from **Pending** to **Active**. The account owner needs to read the content and select **Yes, I wish to continue**. New users might get prompted to enter their first and family name to create a Commerce Account. If so, they must add the required information to continue and then the account is activated.
> [!NOTE] > A subscription is associated with one and only one account. The warning message includes details that warn the Account Owner that accepting the offer will move the subscriptions associated with the Account to the new Enrollment.
Direct EA admins can add department admins in the Azure portal. For more informa
## Usage and costs access by role
+The following table shows usage and costs access by administrative role.
|Tasks| Enterprise Administrator|Enterprise Administrator (read only)|EA Purchaser|Department Administrator|Department Administrator (read only) |Account Owner| Partner| ||||||||| |View credit balance including Azure Prepayment|✔|✔|✔|✘|✘|✘|✔|
cost-management-billing Nutanix Bare Metal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/nutanix-bare-metal.md
+
+ Title: Save costs with reservations for Nutanix Cloud Clusters on Azure BareMetal infrastructure
+description: Save costs with Nutanix on Azure BareMetal reservations by committing to a reservation for your NC2 on Azure BareMetal hosts.
+++++ Last updated : 10/18/2024+
+# customer intent: As a billing administrator, I want to learn about saving costs with Nutanix Cloud Clusters on Azure BareMetal Infrastructure Reservations so that I can buy one.
++
+# Save costs with reservations for Nutanix Cloud Clusters on Azure BareMetal infrastructure
+
+You can save money on [Nutanix Cloud Clusters (NC2) on Azure](../../baremetal-infrastructure/workloads/nc2-on-azure/nc2-baremetal-overview.md) with reservations. The reservation discount automatically applies to the running NC2 workload on Azure hosts that match the reservation scope and attributes. A reservation purchase covers only the compute part of your usage and doesn't include software licensing costs.
+
+## Purchase restriction considerations
+
+Reservations for NC2 on Azure BareMetal Infrastructure are available with some exceptions.
+
+- **Clouds** - Reservations are available only in the regions listed on the [Supported regions](../../baremetal-infrastructure/workloads/nc2-on-azure/architecture.md#supported-regions) page.
+- **Capacity restrictions** - In rare circumstances, Azure limits the purchase of new reservations for NC2 on Azure host SKUs because of low capacity in a region.
+
+## Reservation scope
+
+When you purchase a reservation, you choose a scope that determines which resources get the reservation discount. A reservation applies to your usage within the purchased scope.
+
+To choose a subscription scope, use the Scope list at the time of purchase. You can change the reservation scope after purchase.
+
+- **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only.
+- **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription.
+- **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. If a subscription is moved to a different billing context, the benefit no longer applies to the subscription. It continues to apply to other subscriptions in the billing context.
+ - For enterprise customers, the billing context is the EA enrollment. The reservation shared scope would include multiple Microsoft Entra tenants in an enrollment.
+ - For Microsoft Customer Agreement customers, the billing scope is the billing profile.
+ - For pay-as-you-go customers, the shared scope is all pay-as-you-go subscriptions created by the account administrator.
+- **Management group** - Applies the reservation discount to the matching resources in the list of subscriptions that are a part of both the management group and billing scope. The management group scope applies to all subscriptions throughout the entire management group hierarchy. To buy a reservation for a management group, you must have at least read permission on the management group and be a reservation owner or reservation purchaser on the billing subscription.
+
+For more information on Azure reservations, see [What are Azure Reservations](save-compute-costs-reservations.md).
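If you prefer to script scope changes instead of using the portal, the optional `reservations` CLI extension can do it. The following is only a sketch: the `--applied-scope-type` and `--applied-scopes` parameter names and the IDs shown are assumptions to verify against your installed extension version.
```azurecli
# One-time: install the reservations extension.
az extension add --name reservations

# Find the reservation order and reservation IDs to modify.
az reservations reservation-order list --output table

# Move a reservation to a single-subscription scope (all IDs are placeholders).
az reservations reservation update \
    --reservation-order-id "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" \
    --reservation-id "bbbbbbbb-1111-2222-3333-cccccccccccc" \
    --applied-scope-type Single \
    --applied-scopes "/subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a"
```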
+
+## Purchase requirements
+
+To purchase a reservation:
+
+- You must be in the **Owner** role for at least one Enterprise or individual subscription with pay-as-you-go rates.
+- For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the EA portal. Or, if that setting is disabled, you must be an EA Admin on the subscription.
+- For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy an Azure Nutanix reservation.
+
+## Purchase Nutanix on Azure BareMetal reservation
+
+You can purchase a Nutanix on Azure BareMetal reservation through the [Azure portal](https://portal.azure.com/). You can pay for the reservation up front or with monthly payments. For more information about purchasing with monthly payments, see [Purchase Azure reservations with up front or monthly payments](prepare-buy-reservation.md).
+
+To purchase reserved capacity:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Select **All services** > **Reservations** and then select **Nutanix on Azure BareMetal** to buy a new reservation.
+3. Select a subscription. Use the Subscription list to choose the subscription that gets used to pay for the reservation. The payment method of the subscription is charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreement, or pay-as-you-go (offer numbers: MS-AZR-0003P or MS-AZR-0023P).
+ 1. For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage.
+ 2. For a pay-as-you-go subscription, the charges are billed to the credit card or invoice payment method on the subscription.
+4. Select a scope.
+5. Select AN36 or AN36P, select the Azure region to be covered by the reservation, and then select **Add to cart**.
+6. Select the number of instances to purchase within the reservation. The quantity is the number of running NC2 hosts that can get the billing discount.
+7. Select **Next: Review + Buy** and review your purchase choices and their prices.
+8. Select **Buy now**.
+9. After purchase, you can select **View this Reservation** to see your purchase status.
+
+After you purchase a reservation, it gets applied automatically to any existing usage that matches the terms of the reservation.
+
+## Usage data and reservation usage
+
+Your usage that gets a reservation discount has an effective price of zero. You can see which NC2 instance received the reservation discount for each reservation.
+
+For more information about how reservation discounts appear in usage data:
+
+- For EA customers, see [Understand Azure reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
+- For individual subscriptions, see [Understand Azure reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md)
+
+## Exchange or refund a reservation
+
+Exchange is allowed between NC2 AN36, AN36P, and AN64 SKUs. You can exchange or refund a reservation, with certain limitations. For more information about Azure Reservations policies, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md).
+
+## Reservation expiration
+
+When a reservation expires, any usage that was covered under that reservation is billed at the pay-as-you-go rate. Automatic renewal is turned on at the time of purchase, and you can change that option at the time of purchase or afterward.
+
+An email notification is sent 30 days before the reservation expires, and again on the expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later than the expiration date.
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Related content
+
+- To learn more about Azure reservations, see the following articles:
+ - [What are Azure Reservations?](save-compute-costs-reservations.md)
+ - [Manage Azure Reservations](manage-reserved-vm-instance.md)
+ - [Understand Azure Reservations discount](understand-reservation-charges.md)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Disk Storage](/azure/virtual-machines/disks-reserved-capacity) - [Microsoft Fabric](fabric-capacity.md) - [Microsoft Sentinel - Pre-Purchase](../../sentinel/billing-pre-purchase-plan.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Nutanix on Azure BareMetal](nutanix-bare-metal.md)
- [SAP HANA Large Instances](prepay-hana-large-instances-reserved-capacity.md) - [Software plans](/azure/virtual-machines/linux/prepay-suse-software-charges?toc=/azure/cost-management-billing/reservations/toc.json) - [SQL Database](/azure/azure-sql/database/reserved-capacity-overview?toc=/azure/cost-management-billing/reservations/toc.json)
data-factory Airflow Sync Github Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-sync-github-repository.md
Sample request:
```rest HTTP
-PUT https://management.azure.com/subscriptions/222f1459-6ebd-4896-82ab-652d5f6883cf/resourcegroups/abnarain-rg/providers/Microsoft.DataFactory/factories/ambika-df/integrationruntimes/sample-2?api-version=2018-06-01
+PUT https://management.azure.com/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/abnarain-rg/providers/Microsoft.DataFactory/factories/ambika-df/integrationruntimes/sample-2?api-version=2018-06-01
``` Sample body:
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Follow these steps to get started:
workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder customCommand: 'run build export $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/<Your-ResourceGroup-Name>/providers/Microsoft.DataFactory/factories/<Your-Factory-Name> "ArmTemplate"' #For using preview that allows you to only stop/ start triggers that are modified, please comment out the above line and uncomment the below line. Make sure the package.json contains the build-preview command.
- #customCommand: 'run build-preview export $(Build.Repository.LocalPath) /subscriptions/222f1459-6ebd-4896-82ab-652d5f6883cf/resourceGroups/GartnerMQ2021/providers/Microsoft.DataFactory/factories/Dev-GartnerMQ2021-DataFactory "ArmTemplate"'
+ #customCommand: 'run build-preview export $(Build.Repository.LocalPath) /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/GartnerMQ2021/providers/Microsoft.DataFactory/factories/Dev-GartnerMQ2021-DataFactory "ArmTemplate"'
displayName: 'Validate and Generate ARM template' # Publish the artifact to be used as a source for a release pipeline.
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-service-identity.md
PATCH https://management.azure.com/subscriptions/<subsID>/resourceGroups/<resour
}, "identity": { "type": "SystemAssigned",
- "principalId": "765ad4ab-XXXX-XXXX-XXXX-51ed985819dc",
- "tenantId": "72f988bf-XXXX-XXXX-XXXX-2d7cd011db47"
+ "principalId": "aaaaaaaa-bbbb-cccc-1111-222222222222",
+ "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee"
}, "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<dataFactoryName>", "type": "Microsoft.DataFactory/factories",
PS C:\> (Get-AzDataFactoryV2 -ResourceGroupName <resourceGroupName> -Name <dataF
PrincipalId TenantId -- --
-765ad4ab-XXXX-XXXX-XXXX-51ed985819dc 72f988bf-XXXX-XXXX-XXXX-2d7cd011db47
+aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb aaaabbbb-0000-cccc-1111-dddd2222eeee
``` You can get the application ID by copying above principal ID, then running below Microsoft Entra ID command with principal ID as parameter. ```powershell
-PS C:\> Get-AzADServicePrincipal -ObjectId 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc
+PS C:\> Get-AzADServicePrincipal -ObjectId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb
-ServicePrincipalNames : {76f668b3-XXXX-XXXX-XXXX-1b3348c75e02, https://identity.azure.net/P86P8g6nt1QxfPJx22om8MOooMf/Ag0Qf/nnREppHkU=}
-ApplicationId : 76f668b3-XXXX-XXXX-XXXX-1b3348c75e02
+ServicePrincipalNames : {00001111-aaaa-2222-bbbb-3333cccc4444, https://identity.azure.net/P86P8g6nt1QxfPJx22om8MOooMf/Ag0Qf/nnREppHkU=}
+ApplicationId : 00001111-aaaa-2222-bbbb-3333cccc4444
DisplayName : ADFV2DemoFactory
-Id : 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc
+Id : aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb
Type : ServicePrincipal ```
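If you use the Azure CLI instead of Azure PowerShell, an equivalent lookup of the application ID from the managed identity's object ID might look like this sketch (the object ID is the same placeholder shown above):
```azurecli
# Look up the service principal by its object ID and return the application ID.
az ad sp show --id "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" --query appId --output tsv
```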
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
"name":"<dataFactoryName>", "identity":{ "type":"SystemAssigned",
- "principalId":"554cff9e-XXXX-XXXX-XXXX-90c7d9ff2ead",
- "tenantId":"72f988bf-XXXX-XXXX-XXXX-2d7cd011db47"
+ "principalId":"bbbbbbbb-cccc-dddd-2222-333333333333",
+ "tenantId":"aaaabbbb-0000-cccc-1111-dddd2222eeee"
}, "id":"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<dataFactoryName>", "type":"Microsoft.DataFactory/factories",
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-connector-format.md
The RWX permission or the dataset property isn't set correctly.
When you use the ADLS Gen2 as a sink in the data flow (to preview data, debug/trigger run, etc.) and the partition setting in **Optimize** tab in the **Sink** stage isn't default, you might find the job fails with the following error message:
-`Job failed due to reason: Error while reading file abfss:REDACTED_LOCAL_PART@prod.dfs.core.windows.net/import/data/e3342084-930c-4f08-9975-558a3116a1a9/part-00000-tid-7848242374008877624-5df7454e-7b14-4253-a20b-d20b63fe9983-1-1-c000.csv. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.`
+`Job failed due to reason: Error while reading file abfss:REDACTED_LOCAL_PART@prod.dfs.core.windows.net/import/data/e3342084-930c-4f08-9975-558a3116a1a9/part-00000-tid-7848242374008877624-aaaabbbb-0000-cccc-1111-dddd2222eeee-1-1-c000.csv. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.`
#### Cause
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
You can use an existing Microsoft Entra group or create a new one using Azure AD
6de75f3c-8b2f-4bf4-b9f8-78cc60a18050 SSISIrGroup ```
-3. Add the specified system/user-assigned managed identity for your ADF to the group. You can follow the [Managed identity for Data Factory or Azure Synapse](./data-factory-service-identity.md) article to get the Object ID of specified system/user-assigned managed identity for your ADF (e.g. 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc, but do not use the Application ID for this purpose).
+3. Add the specified system/user-assigned managed identity for your ADF to the group. You can follow the [Managed identity for Data Factory or Azure Synapse](./data-factory-service-identity.md) article to get the Object ID of the specified system/user-assigned managed identity for your ADF (for example, aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb; don't use the Application ID for this purpose).
```powershell
- Add-AzureAdGroupMember -ObjectId $Group.ObjectId -RefObjectId 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc
+ Add-AzureAdGroupMember -ObjectId $Group.ObjectId -RefObjectId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb
``` You can also check the group membership afterwards.
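If you manage the group with the Azure CLI rather than the AzureAD PowerShell module, a membership check might look like the following sketch, using the group name and placeholder object ID from the steps above:
```azurecli
# Returns true when the data factory's managed identity is a member of the group.
az ad group member check \
    --group "SSISIrGroup" \
    --member-id "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"
```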
data-factory Monitor Data Factory Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-data-factory-reference.md
Log Analytics inherits the schema from Azure Monitor with the following exceptio
| Property | Type | Description | Example | | | | | | | **Level** |String | The level of the diagnostic logs. For activity-run logs, set the property value to 4. | `4` |
-| **correlationId** |String | The unique ID for tracking a particular request. | `319dc6b4-f348-405e-b8d7-aafc77b73e77` |
+| **correlationId** |String | The unique ID for tracking a particular request. | `aaaa0000-bb11-2222-33cc-444444dddddd` |
| **time** | String | The time of the event in the timespan UTC format `YYYY-MM-DDTHH:MM:SS.00000Z`. | `2017-06-28T21:00:27.3534352Z` | |**activityRunId**| String| The ID of the activity run. | `3a171e1f-b36e-4b80-8a54-5625394f4354` | |**pipelineRunId**| String| The ID of the pipeline run. | `9f6069d6-e522-4608-9f99-21807bfc3c70` |
Log Analytics inherits the schema from Azure Monitor with the following exceptio
| Property | Type | Description | Example | | | | | | | **Level** |String | The level of the diagnostic logs. For activity-run logs, set the property value to 4. | `4` |
-| **correlationId** |String | The unique ID for tracking a particular request. | `319dc6b4-f348-405e-b8d7-aafc77b73e77` |
+| **correlationId** |String | The unique ID for tracking a particular request. | `aaaa0000-bb11-2222-33cc-444444dddddd` |
| **time** | String | The time of the event in the timespan UTC format `YYYY-MM-DDTHH:MM:SS.00000Z`. | `2017-06-28T21:00:27.3534352Z` | |**runId**| String| The ID of the pipeline run. | `9f6069d6-e522-4608-9f99-21807bfc3c70` | |**resourceId**| String | The ID associated with the data factory resource. | `/SUBSCRIPTIONS/<subID>/RESOURCEGROUPS/<resourceGroupName>/PROVIDERS/MICROSOFT.DATAFACTORY/FACTORIES/<dataFactoryName>` |
Log Analytics inherits the schema from Azure Monitor with the following exceptio
| Property | Type | Description | Example | | | | | | | **Level** |String | The level of the diagnostic logs. For activity-run logs, set the property value to 4. | `4` |
-| **correlationId** |String | The unique ID for tracking a particular request. | `319dc6b4-f348-405e-b8d7-aafc77b73e77` |
+| **correlationId** |String | The unique ID for tracking a particular request. | `aaaa0000-bb11-2222-33cc-444444dddddd` |
| **time** | String | The time of the event in the timespan UTC format `YYYY-MM-DDTHH:MM:SS.00000Z`. | `2017-06-28T21:00:27.3534352Z` | |**triggerId**| String| The ID of the trigger run. | `08587023010602533858661257311` | |**resourceId**| String | The ID associated with the data factory resource. | `/SUBSCRIPTIONS/<subID>/RESOURCEGROUPS/<resourceGroupName>/PROVIDERS/MICROSOFT.DATAFACTORY/FACTORIES/<dataFactoryName>` |
Here are the log attributes of conditions related to event messages that are gen
| **time** | String | The time of event in UTC format: `YYYY-MM-DDTHH:MM:SS.00000Z` | `2017-06-28T21:00:27.3534352Z` | | **operationName** | String | Set to `YourSSISIRName-SSISPackageEventMessageContext` | `mysqlmissisir-SSISPackageEventMessageContext` | | **category** | String | The category of diagnostic logs | `SSISPackageEventMessageContext` |
-| **correlationId** | String | The unique ID for tracking a particular operation | `e55700df-4caf-4e7c-bfb8-78ac7d2f28a0` |
+| **correlationId** | String | The unique ID for tracking a particular operation | `bbbb1111-cc22-3333-44dd-555555eeeeee` |
| **dataFactoryName** | String | The name of your data factory | `MyADFv2` | | **integrationRuntimeName** | String | The name of your SSIS IR | `MySSISIR` | | **level** | String | The level of diagnostic logs | `Informational` |
Here are the log attributes of event messages that are generated by SSIS package
| **time** | String | The time of event in UTC format: `YYYY-MM-DDTHH:MM:SS.00000Z` | `2017-06-28T21:00:27.3534352Z` | | **operationName** | String | Set to `YourSSISIRName-SSISPackageEventMessages` | `mysqlmissisir-SSISPackageEventMessages` | | **category** | String | The category of diagnostic logs | `SSISPackageEventMessages` |
-| **correlationId** | String | The unique ID for tracking a particular operation | `e55700df-4caf-4e7c-bfb8-78ac7d2f28a0` |
+| **correlationId** | String | The unique ID for tracking a particular operation | `bbbb1111-cc22-3333-44dd-555555eeeeee` |
| **dataFactoryName** | String | The name of your data factory | `MyADFv2` | | **integrationRuntimeName** | String | The name of your SSIS IR | `MySSISIR` | | **level** | String | The level of diagnostic logs | `Informational` |
Here are the log attributes of executable statistics that are generated by SSIS
| **time** | String | The time of event in UTC format: `YYYY-MM-DDTHH:MM:SS.00000Z` | `2017-06-28T21:00:27.3534352Z` | | **operationName** | String | Set to `YourSSISIRName-SSISPackageExecutableStatistics` | `mysqlmissisir-SSISPackageExecutableStatistics` | | **category** | String | The category of diagnostic logs | `SSISPackageExecutableStatistics` |
-| **correlationId** | String | The unique ID for tracking a particular operation | `e55700df-4caf-4e7c-bfb8-78ac7d2f28a0` |
+| **correlationId** | String | The unique ID for tracking a particular operation | `bbbb1111-cc22-3333-44dd-555555eeeeee` |
| **dataFactoryName** | String | The name of your data factory | `MyADFv2` | | **integrationRuntimeName** | String | The name of your SSIS IR | `MySSISIR` | | **level** | String | The level of diagnostic logs | `Informational` |
Here are the log attributes of runtime statistics for data flow components that
| **time** | String | The time of event in UTC format: `YYYY-MM-DDTHH:MM:SS.00000Z` | `2017-06-28T21:00:27.3534352Z` | | **operationName** | String | Set to `YourSSISIRName-SSISPackageExecutionComponentPhases` | `mysqlmissisir-SSISPackageExecutionComponentPhases` | | **category** | String | The category of diagnostic logs | `SSISPackageExecutionComponentPhases` |
-| **correlationId** | String | The unique ID for tracking a particular operation | `e55700df-4caf-4e7c-bfb8-78ac7d2f28a0` |
+| **correlationId** | String | The unique ID for tracking a particular operation | `bbbb1111-cc22-3333-44dd-555555eeeeee` |
| **dataFactoryName** | String | The name of your data factory | `MyADFv2` | | **integrationRuntimeName** | String | The name of your SSIS IR | `MySSISIR` | | **level** | String | The level of diagnostic logs | `Informational` |
Here are the log attributes of data movements through each leg of data flow pipe
| **time** | String | The time of event in UTC format: `YYYY-MM-DDTHH:MM:SS.00000Z` | `2017-06-28T21:00:27.3534352Z` | | **operationName** | String | Set to `YourSSISIRName-SSISPackageExecutionDataStatistics` | `mysqlmissisir-SSISPackageExecutionDataStatistics` | | **category** | String | The category of diagnostic logs | `SSISPackageExecutionDataStatistics` |
-| **correlationId** | String | The unique ID for tracking a particular operation | `e55700df-4caf-4e7c-bfb8-78ac7d2f28a0` |
+| **correlationId** | String | The unique ID for tracking a particular operation | `bbbb1111-cc22-3333-44dd-555555eeeeee` |
| **dataFactoryName** | String | The name of your data factory | `MyADFv2` | | **integrationRuntimeName** | String | The name of your SSIS IR | `MySSISIR` | | **level** | String | The level of diagnostic logs | `Informational` |
deployment-environments How To Configure Extensibility Model Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-extensibility-model-custom-image.md
The main steps you'll follow when using a container image are:
- For a private registry, give the DevCenter ACR permissions. 1. Add your image location to the `runner` parameter in your environment definition 1. Deploy environments that use your custom image.
-
++ The first step in the process is to choose the type of image you want to use. Select the corresponding tab to see the process. ### [Use a sample container image](#tab/sample/)
You can see the sample Bicep container image in the ADE sample repository under
For more information about how to create environment definitions that use the ADE container images to deploy your Azure resources, see [Add and configure an environment definition](configure-environment-definition.md). ::: zone-end
+To configure an image for Terraform deployments, use a custom container image.
+ ::: zone pivot="pulumi" ### Use a sample container image provided by Pulumi
if [ -z "$deploymentOutput" ]; then
fi echo "{\"outputs\": $deploymentOutput}" > $ADE_OUTPUTS ```- ::: zone-end ::: zone pivot="terraform"
stackout=$(pulumi stack output --json | jq -r 'to_entries|.[]|{(.key): {type: "s
echo "{\"outputs\": ${stackout:-{\}}}" > $ADE_OUTPUTS ``` ::: zone-end+ + ## Build an image You can build your image using the Docker CLI. Ensure the [Docker Engine is installed](https://docs.docker.com/desktop/) on your computer. Then, navigate to the directory of your Dockerfile, and run the following command:
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-blob-storage.md
These events are triggered when a client creates, replaces, or deletes a blob by
"contentType": "image/jpeg", "contentLength": 105891, "blobType": "BlockBlob",
+ "accessTier": "Archive",
+ "previousTier": "Cool",
"url": "https://my-storage-account.blob.core.windows.net/testcontainer/Auto.jpg", "sequencer": "000000000000000000000000000089A4000000000018d6ea", "storageDiagnostics": {
These events are triggered when a client creates, replaces, or deletes a blob by
"contentType": "image/jpeg", "contentLength": 105891, "blobType": "BlockBlob",
+ "accessTier": "Archive",
+ "previousTier": "Cool",
"url": "https://my-storage-account.blob.core.windows.net/testcontainer/Auto.jpg", "sequencer": "000000000000000000000000000089A4000000000018d6ea", "storageDiagnostics": {
The data object has the following properties:
| `contentType` | string | The content type specified for the blob. | | `contentLength` | integer | The size of the blob in bytes. | | `blobType` | string | The type of blob. Valid values are either "BlockBlob" or "PageBlob". |
+| `accessTier` | string | The target tier of the blob. Appears only for the event BlobTierChanged. |
+| `previousTier` | string | The source tier of the blob. Appears only for the event BlobTierChanged. If the blob's tier is inferred from the storage account's default access tier, this field doesn't appear. |
| `contentOffset` | number | The offset in bytes of a write operation taken at the point where the event-triggering application completed writing to the file. <br>Appears only for events triggered on blob storage accounts that have a hierarchical namespace.| | `destinationUrl` |string | The url of the file that will exist after the operation completes. For example, if a file is renamed, the `destinationUrl` property contains the url of the new file name. <br>Appears only for events triggered on blob storage accounts that have a hierarchical namespace.| | `sourceUrl` |string | The url of the file that exists before the operation is done. For example, if a file is renamed, the `sourceUrl` contains the url of the original file name before the rename operation. <br>Appears only for events triggered on blob storage accounts that have a hierarchical namespace. |
event-hubs Event Hubs Go Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-go-get-started-send.md
Here's the code to send events to an event hub. The main steps in the code are:
package main import (
- "context"
+ "context"
- "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
) func main() {
+ // create an Event Hubs producer client using a connection string to the namespace and the event hub
+ producerClient, err := azeventhubs.NewProducerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", nil)
- // create an Event Hubs producer client using a connection string to the namespace and the event hub
- producerClient, err := azeventhubs.NewProducerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", nil)
+ if err != nil {
+ panic(err)
+ }
- if err != nil {
- panic(err)
- }
+ defer producerClient.Close(context.TODO())
- defer producerClient.Close(context.TODO())
+ // create sample events
+ events := createEventsForSample()
- // create sample events
- events := createEventsForSample()
+ // create a batch object and add sample events to the batch
+ newBatchOptions := &azeventhubs.EventDataBatchOptions{}
- // create a batch object and add sample events to the batch
- newBatchOptions := &azeventhubs.EventDataBatchOptions{}
+ batch, err := producerClient.NewEventDataBatch(context.TODO(), newBatchOptions)
- batch, err := producerClient.NewEventDataBatch(context.TODO(), newBatchOptions)
+ if err != nil {
+ panic(err)
+ }
- for i := 0; i < len(events); i++ {
- err = batch.AddEventData(events[i], nil)
- }
+ for i := 0; i < len(events); i++ {
+ err = batch.AddEventData(events[i], nil)
- // send the batch of events to the event hub
- producerClient.SendEventDataBatch(context.TODO(), batch, nil)
+ if err != nil {
+ panic(err)
+ }
+ }
+
+ // send the batch of events to the event hub
+ err = producerClient.SendEventDataBatch(context.TODO(), batch, nil)
+
+ if err != nil {
+ panic(err)
+ }
} func createEventsForSample() []*azeventhubs.EventData {
- return []*azeventhubs.EventData{
- {
- Body: []byte("hello"),
- },
- {
- Body: []byte("world"),
- },
- }
+ return []*azeventhubs.EventData{
+ {
+ Body: []byte("hello"),
+ },
+ {
+ Body: []byte("world"),
+ },
+ }
} ```
Here's the code to receive events from an event hub. The main steps in the code
package main import (
- "context"
- "errors"
- "fmt"
- "time"
-
- "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
- "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs/checkpoints"
- "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
+ "context"
+ "errors"
+ "fmt"
+ "time"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs/checkpoints"
+ "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
) func main() {
- // create a container client using a connection string and container name
- checkClient, err := container.NewClientFromConnectionString("AZURE STORAGE CONNECTION STRING", "CONTAINER NAME", nil)
-
- // create a checkpoint store that will be used by the event hub
- checkpointStore, err := checkpoints.NewBlobStore(checkClient, nil)
+ // create a container client using a connection string and container name
+ checkClient, err := container.NewClientFromConnectionString("AZURE STORAGE CONNECTION STRING", "CONTAINER NAME", nil)
+
+ if err != nil {
+ panic(err)
+ }
- if err != nil {
- panic(err)
- }
+ // create a checkpoint store that will be used by the event hub
+ checkpointStore, err := checkpoints.NewBlobStore(checkClient, nil)
- // create a consumer client using a connection string to the namespace and the event hub
- consumerClient, err := azeventhubs.NewConsumerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", azeventhubs.DefaultConsumerGroup, nil)
+ if err != nil {
+ panic(err)
+ }
- if err != nil {
- panic(err)
- }
+ // create a consumer client using a connection string to the namespace and the event hub
+ consumerClient, err := azeventhubs.NewConsumerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", azeventhubs.DefaultConsumerGroup, nil)
- defer consumerClient.Close(context.TODO())
+ if err != nil {
+ panic(err)
+ }
- // create a processor to receive and process events
- processor, err := azeventhubs.NewProcessor(consumerClient, checkpointStore, nil)
+ defer consumerClient.Close(context.TODO())
- if err != nil {
- panic(err)
- }
+ // create a processor to receive and process events
+ processor, err := azeventhubs.NewProcessor(consumerClient, checkpointStore, nil)
- // for each partition in the event hub, create a partition client with processEvents as the function to process events
- dispatchPartitionClients := func() {
- for {
- partitionClient := processor.NextPartitionClient(context.TODO())
+ if err != nil {
+ panic(err)
+ }
- if partitionClient == nil {
- break
- }
+ // for each partition in the event hub, create a partition client with processEvents as the function to process events
+ dispatchPartitionClients := func() {
+ for {
+ partitionClient := processor.NextPartitionClient(context.TODO())
- go func() {
- if err := processEvents(partitionClient); err != nil {
- panic(err)
- }
- }()
- }
- }
+ if partitionClient == nil {
+ break
+ }
- // run all partition clients
- go dispatchPartitionClients()
+ go func() {
+ if err := processEvents(partitionClient); err != nil {
+ panic(err)
+ }
+ }()
+ }
+ }
- processorCtx, processorCancel := context.WithCancel(context.TODO())
- defer processorCancel()
+ // run all partition clients
+ go dispatchPartitionClients()
- if err := processor.Run(processorCtx); err != nil {
- panic(err)
- }
+ processorCtx, processorCancel := context.WithCancel(context.TODO())
+ defer processorCancel()
+
+ if err := processor.Run(processorCtx); err != nil {
+ panic(err)
+ }
} func processEvents(partitionClient *azeventhubs.ProcessorPartitionClient) error {
- defer closePartitionResources(partitionClient)
- for {
- receiveCtx, receiveCtxCancel := context.WithTimeout(context.TODO(), time.Minute)
- events, err := partitionClient.ReceiveEvents(receiveCtx, 100, nil)
- receiveCtxCancel()
-
- if err != nil && !errors.Is(err, context.DeadlineExceeded) {
- return err
- }
-
- fmt.Printf("Processing %d event(s)\n", len(events))
-
- for _, event := range events {
- fmt.Printf("Event received with body %v\n", string(event.Body))
- }
-
- if len(events) != 0 {
- if err := partitionClient.UpdateCheckpoint(context.TODO(), events[len(events)-1]); err != nil {
- return err
- }
- }
- }
+ defer closePartitionResources(partitionClient)
+ for {
+ receiveCtx, receiveCtxCancel := context.WithTimeout(context.TODO(), time.Minute)
+ events, err := partitionClient.ReceiveEvents(receiveCtx, 100, nil)
+ receiveCtxCancel()
+
+ if err != nil && !errors.Is(err, context.DeadlineExceeded) {
+ return err
+ }
+
+ fmt.Printf("Processing %d event(s)\n", len(events))
+
+ for _, event := range events {
+ fmt.Printf("Event received with body %v\n", string(event.Body))
+ }
+
+ if len(events) != 0 {
+ if err := partitionClient.UpdateCheckpoint(context.TODO(), events[len(events)-1], nil); err != nil {
+ return err
+ }
+ }
+ }
} func closePartitionResources(partitionClient *azeventhubs.ProcessorPartitionClient) {
- defer partitionClient.Close(context.TODO())
+ defer partitionClient.Close(context.TODO())
}- ``` ## Run receiver and sender apps
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
In addition to these storage-related features and all capabilities and protocol
> - Event Hubs Premium supports TLS 1.2 or greater. > - The Premium tier isn't available in all regions. Try to create a namespace in the Azure portal. See the supported regions in the **Location** dropdown list on the **Create Namespace** page.
-You can purchase 1, 2, 4, 8, and 16 processing units (PUs) for each namespace. Because the Premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle like it is in the Standard tier. The throughput depends on the work you ask Event Hubs to do, which is similar to the Dedicated tier. The effective ingest and stream throughput per PU depends on various factors, such as the:
+You can purchase 1, 2, 4, 6, 8, 10, 12, and 16 processing units (PUs) for each namespace. Because the Premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle like it is in the Standard tier. The throughput depends on the work you ask Event Hubs to do, which is similar to the Dedicated tier. The effective ingest and stream throughput per PU depends on various factors, such as the:
* Number of producers and consumers. * Payload size.
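For illustration, a Premium namespace with a chosen PU count can be created with the Azure CLI; the resource group, namespace name, and region below are placeholders:

```azurecli
# Create an Event Hubs Premium namespace with 2 processing units (PUs)
az eventhubs namespace create \
  --resource-group <RESOURCE_GROUP> \
  --name <NAMESPACE_NAME> \
  --location <REGION> \
  --sku Premium \
  --capacity 2
```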
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md
For more information about the autoinflate feature, see [Automatically scale thr
## Processing units
- [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a **Processing Unit** (PU). You can purchase 1, 2, 4, 8 or 16 processing Units for each Event Hubs Premium namespace.
+ [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a **Processing Unit** (PU). You can purchase 1, 2, 4, 6, 8, 10, 12, or 16 processing Units for each Event Hubs Premium namespace.
How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more.
expressroute Expressroute Connectivity Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-connectivity-models.md
If you're colocated in a facility with a cloud exchange, you can request for vir
## <a name="Ethernet"></a>Point-to-point Ethernet connections
-You can connect your on-premises datacenters or offices to the Microsoft cloud through point-to-point Ethernet links. Point-to-point Ethernet providers can offer Layer 2 connections, or managed Layer 3 connections between your site and the Microsoft cloud.
+You can connect your on-premises datacenters or offices to the Microsoft cloud through point-to-point Ethernet links. Point-to-point Ethernet providers can offer Layer 2 connections.
## <a name="IPVPN"></a>Any-to-any (IPVPN) networks
You can connect directly into the Microsoft global network at a peering location
* Configure your ExpressRoute connection. * [Create an ExpressRoute circuit](expressroute-howto-circuit-portal-resource-manager.md) * [Configure routing](expressroute-howto-routing-portal-resource-manager.md)
- * [Link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-portal-resource-manager.md)
+ * [Link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-portal-resource-manager.md)
expressroute Expressroute For Cloud Solution Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-for-cloud-solution-providers.md
Previously updated : 06/30/2023 Last updated : 10/18/2024 # ExpressRoute for Cloud Solution Providers (CSP)
Microsoft provides hyper-scale services for traditional resellers and distributo
ExpressRoute is composed of a pair of circuits for high availability that are attached to a single customer's subscription(s) and can't be shared by multiple customers. Each circuit should be terminated in a different router to maintain the high availability. > [!NOTE]
-> There are limits to the bandwidth and number of connections possible on each ExpressRoute circuit. If a single customer's needs exceed these limits, they will require multiple ExpressRoute circuits for their hybrid network implementation.
+> There are limits to the bandwidth and number of connections possible on each ExpressRoute circuit. If a single customer's needs exceed these limits, they will require multiple ExpressRoute circuits for their hybrid network implementation. For more information, see [ExpressRoute limits](../azure-resource-manager/management/azure-subscription-service-limits.md#expressroute-limits).
> Microsoft Azure provides a growing number of services that you can offer to your customers. ExpressRoute helps you and your customers take advantage of these services by providing high-speed low latency access to the Microsoft Azure environment.
The choices between these two options are based on your customer's needs and y
ExpressRoute supports network speeds from 50 Mb/s to 10 Gb/s. This allows customers to purchase the amount of network bandwidth needed for their unique environment. > [!NOTE]
-> Network bandwidth can be increased as needed without disrupting communications, but to reduce the network speed requires tearing down the circuit and recreating it at the lower network speed.
->
->
+> Network bandwidth can be increased as needed without disrupting communications, but to reduce the network speed requires tearing down the circuit and recreating it at the lower network speed. For more information, see [Modify an ExpressRoute circuit](expressroute-howto-circuit-portal-resource-manager.md#modify)
+>
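For example, increasing the bandwidth in place might look like the following Azure CLI sketch; the circuit and resource group names are assumed values, and the bandwidth is given in Mbps:

```azurecli
# Increase circuit bandwidth in place; decreasing requires deleting and recreating the circuit
az network express-route update \
  --resource-group ExpressRouteResourceGroup \
  --name MyCircuit \
  --bandwidth 200
```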
ExpressRoute supports the connection of multiple VNets to a single ExpressRoute circuit for better utilization of the higher-speed connections. A single ExpressRoute circuit can be shared among multiple Azure subscriptions owned by the same customer.
expressroute Expressroute Howto Linkvnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-cli.md
The response contains the authorization key and status:
"authorizationKey": "0a7f3020-541f-4b4b-844a-5fb43472e3d7", "authorizationUseStatus": "Available", "etag": "W/\"010353d4-8955-4984-807a-585c21a22ae0\"",
-"id": "/subscriptions/81ab786c-56eb-4a4d-bb5f-f60329772466/resourceGroups/ExpressRouteResourceGroup/providers/Microsoft.Network/expressRouteCircuits/MyCircuit/authorizations/MyAuthorization1",
+"id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ExpressRouteResourceGroup/providers/Microsoft.Network/expressRouteCircuits/MyCircuit/authorizations/MyAuthorization1",
"name": "MyAuthorization1", "provisioningState": "Succeeded", "resourceGroup": "ExpressRouteResourceGroup"
expressroute Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/gateway-migration.md
Historically, users had to use the Resize-AzVirtualNetworkGateway PowerShell com
With the guided gateway migration experience, you can deploy a second virtual network gateway in the same GatewaySubnet, and Azure automatically transfers the control plane and data path configuration from the old gateway to the new one. During the migration, two virtual network gateways operate within the same GatewaySubnet. This feature is designed to support migrations without downtime; however, users might experience brief connectivity issues or interruptions during the migration process.
+> [!NOTE]
+> The total time required for the migration to complete can take up to one hour. During this period, the gateway will remain locked, and no changes will be permitted.
+ Gateway migration is recommended if you have a non-Az enabled Gateway SKU or a non-Az enabled Gateway Basic IP Gateway SKU. | Migrate from Non-Az enabled Gateway SKU | Migrate to Az-enabled Gateway SKU |
frontdoor Front Door Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-diagnostics.md
The following example JSON snippet shows a health probe log entry for a failed h
"records": [ { "time": "2021-02-02T07:15:37.3640748Z",
- "resourceId": "/SUBSCRIPTIONS/27CAFCA8-B9A4-4264-B399-45D0C9CCA1AB/RESOURCEGROUPS/AFDXPRIVATEPREVIEW/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDXPRIVATEPREVIEW-JESSIE",
+ "resourceId": "/SUBSCRIPTIONS/mySubscriptionID/RESOURCEGROUPS/myResourceGroup/PROVIDERS/MICROSOFT.CDN/PROFILES/MyProfile",
"category": "FrontDoorHealthProbeLog", "operationName": "Microsoft.Cdn/Profiles/FrontDoorHealthProbeLog/Write", "properties": {
The following example JSON snippet shows a health probe log entry for a failed h
"httpVerb": "HEAD", "result": "OriginError", "httpStatusCode": "400",
- "probeURL": "http://afdxprivatepreview.blob.core.windows.net:80/",
- "originName": "afdxprivatepreview.blob.core.windows.net",
- "originIP": "52.239.224.228:80",
+ "probeURL": "http://www.example.com:80/",
+ "originName": "www.example.com",
+ "originIP": "PublicI:Port",
"totalLatencyMilliseconds": "141", "connectionLatencyMilliseconds": "68", "DNSLatencyMicroseconds": "1814"
frontdoor Front Door Route Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-route-matching.md
The following table shows which routing rule the incoming request gets matched t
| www\.contoso.com/path/zzz | B | >[!WARNING]
-> If there are no routing rules for an exact-match frontend host with a catch-all route Path (`/*`), then there will not be a match to any routing rule.
+> If there are no routing rules for an exact-match frontend host without a catch-all route path (`/*`), then no routing rule will be matched.
> > Example configuration: >
healthcare-apis Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/availability-zones.md
+
+ Title: Azure Health Data Services Availability Zones
+description: Overview of Availability Zones for Azure Health Data Services
++++++ Last updated : 10/15/2024+++
+# Availability Zones for Azure Health Data Services
+
+The goal of high availability in Azure Health Data Services is to minimize impact on customer workloads from service maintenance operations and outages. Azure Health Data Services provides zone redundant availability using availability zones (AZs) for high availability and business continuity. To understand more about availability zones, visit [What are Azure availability zones?](/azure/reliability/availability-zones-overview?tabs=azure-cli).
+
+Zone redundant availability provides resiliency by protecting against outages within a region. This is achieved using zone-redundant storage (ZRS), which replicates your data across three availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. Zone redundant availability minimizes the risk of data loss if there are zone failures within the primary region.<br>
+For information on regions, see [Products availability by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+
+> [!NOTE]
+> Currently, the availability zone feature is provided to customers at no additional charge. In the future, the availability zone feature will incur charges.
+
+## Region availability
+
+Here's a list of the regions that support availability zones for Azure Health Data Services.
+
+- Australia East
+- Central India
+- Japan East
+- Korea Central
+- Southeast Asia
+- France Central
+- North Europe
+- West Europe
+- UK South
+- Sweden Central
+- Germany West Central*
+- Qatar Central*
+- East US*
+- East US 2
+- South Central US*
+- West US 2*
+- West US 3*
+- Canada Central
+
+Regions marked with an asterisk (*) have quota constraints due to high demand. Enabling AZ features in these regions might take longer.
+
+### Limitations
+
+Consider the following limitations when configuring an availability zone.
+
+- Azure Health Data Services FHIR&reg; service instances allow customers to set AZ settings only once; the settings can't be modified afterward.
+- A FHIR service with data volume support beyond 4 TB must specify the AZ configuration during service instance creation.
+- When this feature becomes available as self-service, any FHIR service instance created in Azure Health Data Services must specify the AZ configuration during service instance creation.
+
+## Recovery Time Objective and Recovery Point Objective
+
+The time required for an application to fully recover is known as the Recovery Time Objective (RTO). The maximum period (time interval) of recent data updates the application can tolerate losing when recovering from an unplanned disruptive event is known as the Recovery Point Objective (RPO).<br>
+With zone redundant availability, the Azure Health Data Services FHIR service provides an RTO of less than 10 minutes and an RPO of 0.
+
+## Impact during zone-wide outages
+
+During a zone-wide outage, no customer action is needed while the zone recovers. Customers should be prepared for a brief interruption of communication to provisioned resources. Impact to regions due to a zone outage is communicated on [Azure status history](https://azure.status.microsoft/status/history/).
+
+## Enabling an availability zone
+
+To enable the availability zone on a specific instance, customers need to submit a support ticket with the following details.
+
+- Name of the subscription
+- Name of the FHIR service instance
+- Name of the resource group
+
+More information can be found at [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Azure API for FHIR&reg; provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## **October 2024**
+
+### FHIR service
+
+**Bug fixes**
+
+- Export Validation: An issue was identified where exports proceeded despite invalid search parameters. We're introducing a change that prevents exports under these conditions. This feature is currently behind a strict validation flag and will become the default behavior on or after October 30.
+- Search Parameter Inclusion: We resolved an issue where additional search parameters (for instance, `_include`, `_has`) didn't return all expected results, sometimes omitting the next link.
+- Export Job Execution: A rare occurrence of `System.ObjectDisposedException` during export job completion has been addressed by preventing premature exits.
+- HTTP Status Code Update: The HTTP status code for invalid parameters during `$reindex` job creation is now updated to 400, ensuring better error handling.
+- Search Parameter Cleanup: A fix has been implemented to ensure complete cleanup of search parameters in the database when triggered with delete API calls, addressing issues related to incomplete deletions.
+ ## **August 2024** ### FHIR service
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Refer to the table for details about resolution dates or possible workarounds.
|Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- |
-|For FHIR instances created after August 19,2024, diagnostic logs aren't available in log analytics workspace. |September 19,2024 9:00 am PST| -- | -- |
+|For FHIR instances created after August 19,2024, diagnostic logs aren't available in log analytics workspace. |September 19,2024 9:00 am PST| -- | October 17,2024 9:00 am PST |
|For FHIR instances created after August 19,2024, in metrics blade - Total requests, Total latency, and Total errors metrics are not being populated. |September 19,2024 9:00 am PST| -- | -- | |For FHIR instances created after August 19,2024, changes in private link configuration at the workspace level causes FHIR service to be stuck in 'Updating' state. |September 24,2024 9:00 am PST| Accounts deployed prior to September 27,2024 and facing this issue can follow the steps: <br> 1. Remove private endpoint from the Azure Health Data Services workspace having this issue. On Azure blade, go to Workspace and then click on Networking blade. In networking blade, select existing private link connection and click on 'Remove' <br> 2. Create new private connection to link to the workspace.| September 27,2024 9:00 am PST | |Changes in private link configuration at the workspace level don't propagate to the child services.|September 4,2024 9:00 am PST| To fix this issue a service reprovisioning is required. To reprovision the service, reach out to FHIR service team| September 17,2024 9:00am PST|
healthcare-apis Release Notes 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2024.md
This article describes features, enhancements, and bug fixes released in 2024 for the FHIR&reg; service, DICOM&reg; service, and MedTech service in Azure Health Data Services.
+## October 2024
+
+### Azure Health Data Services
+
+### FHIR service
+
+#### Bug fixes
+
+- Export Validation: An issue was identified where exports proceeded despite invalid search parameters. We're introducing a change that prevents exports under these conditions. This feature is currently behind a strict validation flag and will become the default behavior on or after October 30.
+- Search Parameter Inclusion: We resolved an issue where additional search parameters (for instance, `_include`, `_has`) didn't return all expected results, sometimes omitting the next link.
+- Export Job Execution: A rare occurrence of `System.ObjectDisposedException` during export job completion has been addressed by preventing premature exits.
+- HTTP Status Code Update: The HTTP status code for invalid parameters during `$reindex` job creation is now updated to 400, ensuring better error handling.
+- Search Parameter Cleanup: A fix has been implemented to ensure complete cleanup of search parameters in the database when triggered with delete API calls, addressing issues related to incomplete deletions.
+- Descending Sort Issue: Resolved an issue where descending sort operations returned no resources if the sorted field had no data in the database, even when relevant resources existed.
+- Authentication Failure Handling: Added a new catch block to manage authentication failures when import requests are executed with managed identity turned off.
+ ## September 2024 ### Azure Health Data Services
This article describes features, enhancements, and bug fixes released in 2024 fo
### FHIR service #### Enhanced Export Efficiency
-The export functionality has been improved to optimize memory usage. With this change the export process now pushes data to blob storage one resource at a time, reducing memory consumption.
+The export functionality has been improved to optimize memory usage. With this change, the export process now pushes data to blob storage one resource at a time, reducing memory consumption.
## August 2024
The export functionality has been improved to optimize memory usage. With this c
### FHIR service #### Import operation error handling
-1. The import operation returns a HTTP 400 error when a search parameter resource is ingested via the import process. This change is intended to prevent search parameters from being placed in an invalid state when ingested with an import operation.
-2. The import operation will return a HTTP 400 status code, as opposed to the previous HTTP 500 status code, in cases where configuration issues with the storage account occur. This update aims to improve error handling associated with managed identities during import operations.
+1. The import operation returns an HTTP 400 error when a search parameter resource is ingested via the import process. This change is intended to prevent search parameters from being placed in an invalid state when ingested with an import operation.
+2. The import operation returns an HTTP 400 status code, as opposed to the previous HTTP 500 status code, in cases where configuration issues with the storage account occur. This update aims to improve error handling associated with managed identities during import operations.
## July 2024
Updating Status Code from HTTP 500 to HTTP 400
During a patch operation, if the payload requested an update for a resource type other than parameter, an internal server error (HTTP 500) was initially thrown. This has been updated to throw an HTTP 400 error instead. #### Performance enhancement
-Query optimization is added when searching FHIR resources with a data range. This query optimization will help with efficient querying as one combined CTE is generated.
+Query optimization is added when searching FHIR resources with a date range. This query optimization helps with efficient querying as one combined CTE is generated.
## May 2024
The scaling logic for import operations is improved, enabling multiple jobs to b
#### Bug fixes - **Fixed: HTTP status code for long-running requests**. FHIR requests that take longer than 100 seconds to execute return an HTTP 408 status code instead of HTTP 500. -- **Fixed: History request in bundle**. Prior to the fix, history request in a bundle returned HTTP status code 404.
+- **Fixed: History request in bundle**. Before the fix, history request in a bundle returned HTTP status code 404.
#### Stand-alone FHIR converter (preview)
iot-operations Concept Dataflow Conversions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-dataflow-conversions.md
Last updated 08/03/2024 #CustomerIntent: As an operator, I want to understand how to use dataflow conversions to transform data.+ # Convert data by using dataflow conversions
iot-operations Concept Dataflow Enrich https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-dataflow-enrich.md
Last updated 08/13/2024 #CustomerIntent: As an operator, I want to understand how to create a dataflow to enrich data sent to endpoints.+ # Enrich data by using dataflows
iot-operations Concept Dataflow Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-dataflow-mapping.md
Last updated 09/24/2024
ai-usage: ai-assisted #CustomerIntent: As an operator, I want to understand how to use the dataflow mapping language to transform data.+ # Map data by using dataflows
iot-operations Concept Schema Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-schema-registry.md
Output schemas are associated with dataflow destinations and are only used for dataf
Note: The Delta schema format is used for both Parquet and Delta output. For these dataflows, the operations experience applies any transformations to the input schema then creates a new schema in Delta format. When the dataflow custom resource (CR) is created, it includes a `schemaRef` value that points to the generated schema stored in the schema registry.+
+To upload an output schema, see [Upload schema](#upload-schema).
+
+## Upload schema
+
+Input schemas can be uploaded in the operations experience portal, as [mentioned previously](#input-schema). You can also upload a schema by using a Bicep template.
+
+### Example with Bicep template
+
+Create a Bicep `.bicep` file, and add the schema content to it at the top as a variable. This example is a Delta schema that corresponds to the OPC UA data from the [quickstart](../get-started-end-to-end-sample/quickstart-add-assets.md).
+
+```bicep
+// Delta schema content matching OPC UA data from quickstart
+// For ADLS Gen2, ADX, and Fabric destinations
+var opcuaSchemaContent = '''
+{
+ "$schema": "Delta/1.0",
+ "type": "object",
+ "properties": {
+ "type": "struct",
+ "fields": [
+ {
+ "name": "temperature",
+ "type": {
+ "type": "struct",
+ "fields": [
+ {
+ "name": "SourceTimestamp",
+ "type": "string",
+ "nullable": true,
+ "metadata": {}
+ },
+ {
+ "name": "Value",
+ "type": "integer",
+ "nullable": true,
+ "metadata": {}
+ },
+ {
+ "name": "StatusCode",
+ "type": {
+ "type": "struct",
+ "fields": [
+ {
+ "name": "Code",
+ "type": "integer",
+ "nullable": true,
+ "metadata": {}
+ },
+ {
+ "name": "Symbol",
+ "type": "string",
+ "nullable": true,
+ "metadata": {}
+ }
+ ]
+ },
+ "nullable": true,
+ "metadata": {}
+ }
+ ]
+ },
+ "nullable": true,
+ "metadata": {}
+ },
+ {
+ "name": "Tag 10",
+ "type": {
+ "type": "struct",
+ "fields": [
+ {
+ "name": "SourceTimestamp",
+ "type": "string",
+ "nullable": true,
+ "metadata": {}
+ },
+ {
+ "name": "Value",
+ "type": "integer",
+ "nullable": true,
+ "metadata": {}
+ },
+ {
+ "name": "StatusCode",
+ "type": {
+ "type": "struct",
+ "fields": [
+ {
+ "name": "Code",
+ "type": "integer",
+ "nullable": true,
+ "metadata": {}
+ },
+ {
+ "name": "Symbol",
+ "type": "string",
+ "nullable": true,
+ "metadata": {}
+ }
+ ]
+ },
+ "nullable": true,
+ "metadata": {}
+ }
+ ]
+ },
+ "nullable": true,
+ "metadata": {}
+ }
+ ]
+ }
+}
+'''
+```
+
+Then, define the schema resource along with pointers to the existing Azure IoT Operations instance, custom location, and schema registry resources that you have from deploying Azure IoT Operations.
+
+```bicep
+// Replace placeholder values with your actual resource names
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param schemaRegistryName string = '<SCHEMA_REGISTRY_NAME>'
+
+// Pointers to existing resources from AIO deployment
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource schemaRegistry 'Microsoft.DeviceRegistry/schemaRegistries@2024-09-01-preview' existing = {
+ name: schemaRegistryName
+}
+
+// Name and version of the schema
+param opcuaSchemaName string = 'opcua-output-delta'
+param opcuaSchemaVer string = '1'
+
+// Define the schema resource to be created and instantiate a version
+resource opcSchema 'Microsoft.DeviceRegistry/schemaRegistries/schemas@2024-09-01-preview' = {
+ parent: schemaRegistry
+ name: opcuaSchemaName
+ properties: {
+ displayName: 'OPC UA Delta Schema'
+ description: 'This is a OPC UA delta Schema'
+ format: 'Delta/1.0'
+ schemaType: 'MessageSchema'
+ }
+}
+resource opcuaSchemaVersion 'Microsoft.DeviceRegistry/schemaRegistries/schemas/schemaVersions@2024-09-01-preview' = {
+ parent: opcSchema
+ name: opcuaSchemaVer
+ properties: {
+ description: 'Schema version'
+ schemaContent: opcuaSchemaContent
+ }
+}
+```
+
+After you've defined the schema content and resources, you can deploy the Bicep template to create the schema in the schema registry.
+
+```azurecli
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
iot-operations Howto Configure Adlsv2 Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-adlsv2-endpoint.md
Title: Configure dataflow endpoints for Azure Data Lake Storage Gen2
description: Learn how to configure dataflow endpoints for Azure Data Lake Storage Gen2 in Azure IoT Operations. + Previously updated : 10/02/2024 Last updated : 10/16/2024 ai-usage: ai-assisted #CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for Azure Data Lake Storage Gen2 in Azure IoT Operations so that I can send data to Azure Data Lake Storage Gen2.
To configure a dataflow endpoint for Azure Data Lake Storage Gen2, we suggest us
### Use managed identity authentication
-1. Get the managed identity of the Azure IoT Operations Preview Arc extension.
+First, in Azure portal, go to the Arc-connected Kubernetes cluster and select **Settings** > **Extensions**. In the extension list, find the name of your Azure IoT Operations extension. Copy the name of the extension.
-1. Assign a role to the managed identity that grants permission to write to the storage account, such as *Storage Blob Data Contributor*. To learn more, see [Authorize access to blobs using Microsoft Entra ID](../../storage/blobs/authorize-access-azure-active-directory.md).
+Then, assign a role to the managed identity that grants permission to write to the storage account, such as *Storage Blob Data Contributor*. To learn more, see [Authorize access to blobs using Microsoft Entra ID](../../storage/blobs/authorize-access-azure-active-directory.md).
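As an illustration, the role assignment can be done with the Azure CLI; the cluster, extension, and storage account names are placeholders, and the `identity.principalId` query assumes the extension exposes a system-assigned identity:

```azurecli
# Look up the principal ID of the Azure IoT Operations extension's managed identity
PRINCIPAL_ID=$(az k8s-extension show \
  --name <AIO_EXTENSION_NAME> \
  --cluster-name <CLUSTER_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --cluster-type connectedClusters \
  --query identity.principalId -o tsv)

# Grant the identity write access to the storage account
STORAGE_ID=$(az storage account show --name <STORAGE_ACCOUNT> --resource-group <RESOURCE_GROUP> --query id -o tsv)
az role assignment create --assignee "$PRINCIPAL_ID" --role "Storage Blob Data Contributor" --scope "$STORAGE_ID"
```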
-1. Create the *DataflowEndpoint* resource and specify the managed identity authentication method.
+Finally, create the *DataflowEndpoint* resource and specify the managed identity authentication method. Replace the placeholder values like `<ENDPOINT_NAME>` with your own.
- ```yaml
- apiVersion: connectivity.iotoperations.azure.com/v1beta1
- kind: DataflowEndpoint
- metadata:
- name: adls
- spec:
- endpointType: DataLakeStorage
- dataLakeStorageSettings:
- host: https://<account>.blob.core.windows.net
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
- ```
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file with the following content.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
+spec:
+ endpointType: DataLakeStorage
+ dataLakeStorageSettings:
+ host: https://<ACCOUNT>.blob.core.windows.net
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+```
+
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
+```
+
+# [Bicep](#tab/bicep)
+
+Create a Bicep `.bicep` file with the following content.
+
+```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param endpointName string = '<ENDPOINT_NAME>'
+param host string = 'https://<ACCOUNT>.blob.core.windows.net'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+resource adlsGen2Endpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: endpointName
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'DataLakeStorage'
+ dataLakeStorageSettings: {
+ host: host
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+ }
+ }
+}
+```
+
+Then, deploy via Azure CLI.
+
+```azurecli
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
++ If you need to override the system-assigned managed identity audience, see the [System-assigned managed identity](#system-assigned-managed-identity) section. ### Use access token authentication
-1. Follow the steps in the [access token](#access-token) section to get a SAS token for the storage account and store it in a Kubernetes secret.
+Follow the steps in the [access token](#access-token) section to get a SAS token for the storage account and store it in a Kubernetes secret.
-1. Create the *DataflowEndpoint* resource and specify the access token authentication method.
+Then, create the *DataflowEndpoint* resource and specify the access token authentication method. Here, replace `<SAS_SECRET_NAME>` with the name of the secret containing the SAS token, along with the other placeholder values.
- ```yaml
- apiVersion: connectivity.iotoperations.azure.com/v1beta1
- kind: DataflowEndpoint
- metadata:
- name: adls
- spec:
- endpointType: DataLakeStorage
- dataLakeStorageSettings:
- host: https://<account>.blob.core.windows.net
- authentication:
- method: AccessToken
- accessTokenSettings:
- secretRef: my-sas
- ```
+# [Kubernetes](#tab/kubernetes)
-## Configure dataflow destination
-
-Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's destination settings. The following example is a dataflow configuration that uses the MQTT endpoint for the source and Azure Data Lake Storage Gen2 as the destination. The source data is from the MQTT topics `thermostats/+/telemetry/temperature/#` and `humidifiers/+/telemetry/humidity/#`. The destination sends the data to Azure Data Lake Storage table `telemetryTable`.
+Create a Kubernetes manifest `.yaml` file with the following content.
```yaml apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
+kind: DataflowEndpoint
metadata:
- name: my-dataflow
+ name: <ENDPOINT_NAME>
namespace: azure-iot-operations spec:
- profileRef: default
- mode: Enabled
- operations:
- - operationType: Source
- sourceSettings:
- endpointRef: mq
- dataSources:
- - thermostats/+/telemetry/temperature/#
- - humidifiers/+/telemetry/humidity/#
- - operationType: Destination
- destinationSettings:
- endpointRef: adls
- # dataDestination should be the storage container name
- dataDestination: telemetryTable
+ endpointType: DataLakeStorage
+ dataLakeStorageSettings:
+ host: https://<ACCOUNT>.blob.core.windows.net
+ authentication:
+ method: AccessToken
+ accessTokenSettings:
+ secretRef: <SAS_SECRET_NAME>
+```
+
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
```
-For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
+# [Bicep](#tab/bicep)
+
+Create a Bicep `.bicep` file with the following content.
+
+```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param endpointName string = '<ENDPOINT_NAME>'
+param host string = 'https://<ACCOUNT>.blob.core.windows.net'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+resource adlsGen2Endpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: endpointName
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'DataLakeStorage'
+ dataLakeStorageSettings: {
+ host: host
+ authentication: {
+ method: 'AccessToken'
+ accessTokenSettings: {
+ secretRef: '<SAS_SECRET_NAME>'
+ }
+ }
+ }
+ }
+}
+```
-> [!NOTE]
-> Using the ADLSv2 endpoint as a source in a dataflow isn't supported. You can use the endpoint as a destination only.
+Then, deploy via Azure CLI.
-To customize the endpoint settings, see the following sections for more information.
+```azurecli
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
++ ### Available authentication methods
Using the system-assigned managed identity is the recommended authentication met
Before creating the dataflow endpoint, assign a role to the managed identity that has write permission to the storage account. For example, you can assign the *Storage Blob Data Contributor* role. To learn more about assigning roles to blobs, see [Authorize access to blobs using Microsoft Entra ID](../../storage/blobs/authorize-access-azure-active-directory.md).
-In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. Not specifying an audience creates a managed identity with the default audience scoped to your storage account.
+To use system-assigned managed identity, specify the managed identity authentication method in the *DataflowEndpoint* resource. In most cases, you don't need to specify other settings. Not specifying an audience creates a managed identity with the default audience scoped to your storage account.
+
+# [Kubernetes](#tab/kubernetes)
```yaml dataLakeStorageSettings:
dataLakeStorageSettings:
systemAssignedManagedIdentitySettings: {} ```
+# [Bicep](#tab/bicep)
+
+```bicep
+dataLakeStorageSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+}
+```
+++ If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+# [Kubernetes](#tab/kubernetes)
+ ```yaml dataLakeStorageSettings: authentication: method: SystemAssignedManagedIdentity systemAssignedManagedIdentitySettings:
- audience: https://<account>.blob.core.windows.net
+ audience: https://<ACCOUNT>.blob.core.windows.net
```
+# [Bicep](#tab/bicep)
+
+```bicep
+dataLakeStorageSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {
+ audience: 'https://<ACCOUNT>.blob.core.windows.net'
+ }
+ }
+}
+```
+++ #### Access token Using an access token is an alternative authentication method. This method requires you to create a Kubernetes secret with the SAS token and reference the secret in the *DataflowEndpoint* resource.
To enhance security and follow the principle of least privilege, you can generat
Create a Kubernetes secret with the SAS token. Don't include the question mark `?` that might be at the beginning of the token. ```bash
-kubectl create secret generic my-sas \
+kubectl create secret generic <SAS_SECRET_NAME> \
--from-literal=accessToken='sv=2022-11-02&ss=b&srt=c&sp=rwdlax&se=2023-07-22T05:47:40Z&st=2023-07-21T21:47:40Z&spr=https&sig=<signature>' \ -n azure-iot-operations ```
-Create the *DataflowEndpoint* resource with the secret reference.
+You can also use the IoT Operations portal to create and manage the secret. To learn more, see [Create and manage secrets in Azure IoT Operations Preview](../deploy-iot-ops/howto-manage-secrets.md).
+
+Finally, create the *DataflowEndpoint* resource with the secret reference.
+
+# [Kubernetes](#tab/kubernetes)
```yaml dataLakeStorageSettings: authentication: method: AccessToken accessTokenSettings:
- secretRef: my-sas
+ secretRef: <SAS_SECRET_NAME>
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+dataLakeStorageSettings: {
+ authentication: {
+ method: 'AccessToken'
+ accessTokenSettings: {
+ secretRef: '<SAS_SECRET_NAME>'
+ }
+ }
+}
``` ++ #### User-assigned managed identity To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+# [Kubernetes](#tab/kubernetes)
+ ```yaml dataLakeStorageSettings: authentication:
dataLakeStorageSettings:
tenantId: <ID> ```
+# [Bicep](#tab/bicep)
+
+```bicep
+dataLakeStorageSettings: {
+ authentication: {
+ method: 'UserAssignedManagedIdentity'
+ userAssignedManagedIdentitySettings: {
+ clientId: '<ID>'
+ tenantId: '<ID>'
+ }
+ }
+}
+```
---

## Advanced settings

You can set advanced settings for the Azure Data Lake Storage Gen2 endpoint, such as the batching latency and message count.
Use the `batching` settings to configure the maximum number of messages and the
For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
-Set the values in the dataflow endpoint custom resource.
+# [Kubernetes](#tab/kubernetes)
```yaml
dataLakeStorageSettings:
  batching:
    latencySeconds: 100
    maxMessages: 1000
```
+# [Bicep](#tab/bicep)
+
+```bicep
+dataLakeStorageSettings: {
+ ...
+ batching: {
+ latencySeconds: 100
+ maxMessages: 1000
+ }
+}
+```
++
iot-operations Howto Configure Adx Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-adx-endpoint.md
Title: Configure dataflow endpoints for Azure Data Explorer
description: Learn how to configure dataflow endpoints for Azure Data Explorer in Azure IoT Operations. + Previously updated : 09/20/2024 Last updated : 10/16/2024 ai-usage: ai-assisted #CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for Azure Data Explorer in Azure IoT Operations so that I can send data to Azure Data Explorer.
To send data to Azure Data Explorer in Azure IoT Operations Preview, you can con
- A [configured dataflow profile](howto-configure-dataflow-profile.md) - An **Azure Data Explorer cluster**. Follow the **Full cluster** steps in the [Quickstart: Create an Azure Data Explorer cluster and database](/azure/data-explorer/create-cluster-and-database). The *free cluster* option doesn't work for this scenario. + ## Create an Azure Data Explorer database 1. In the Azure portal, create a database in your Azure Data Explorer *full* cluster.
To send data to Azure Data Explorer in Azure IoT Operations Preview, you can con
.alter database ['<DATABASE_NAME>'] policy streamingingestion enable ```
- Alternatively, you can enable streaming ingestion on the entire cluster. See [Enable streaming ingestion on an existing cluster](/azure/data-explorer/ingest-data-streaming#enable-streaming-ingestion-on-an-existing-cluster).
+ Alternatively, enable streaming ingestion on the entire cluster. See [Enable streaming ingestion on an existing cluster](/azure/data-explorer/ingest-data-streaming#enable-streaming-ingestion-on-an-existing-cluster).
1. In Azure portal, go to the Arc-connected Kubernetes cluster and select **Settings** > **Extensions**. In the extension list, find the name of your Azure IoT Operations extension. Copy the name of the extension.
-1. In your Azure Data Explorer database, under **Security + networking** select **Permissions** > **Add** > **Ingestor**. Search for the Azure IoT Operations extension name then add it.
+1. In your Azure Data Explorer database (not cluster), under **Overview** select **Permissions** > **Add** > **Ingestor**. Search for the Azure IoT Operations extension name then add it.
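If you prefer to script the database configuration instead of using the portal, the following Kusto management commands are a rough equivalent; the `aadapp=<APP_ID>;<TENANT_ID>` principal format and the placeholder IDs are assumptions you'd replace with the values of the Azure IoT Operations extension identity.

```kusto
// Optional check that streaming ingestion is enabled on the database.
.show database ['<DATABASE_NAME>'] policy streamingingestion

// Grant the extension identity the Ingestor role on the database (placeholder IDs).
.add database ['<DATABASE_NAME>'] ingestors ('aadapp=<APP_ID>;<TENANT_ID>') 'Azure IoT Operations extension'
```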
+ ## Create an Azure Data Explorer dataflow endpoint
-Create the dataflow endpoint resource with your cluster and database information. We suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management.
+Create the dataflow endpoint resource with your cluster and database information. We suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management. Replace the placeholder values like `<ENDPOINT_NAME>` with your own.
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file with the following content.
```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowEndpoint
metadata:
  name: <ENDPOINT_NAME>
  namespace: azure-iot-operations
spec:
  endpointType: DataExplorer
  dataExplorerSettings:
    host: 'https://<CLUSTER>.<region>.kusto.windows.net'
    database: <DATABASE_NAME>
    authentication:
      method: SystemAssignedManagedIdentity
      systemAssignedManagedIdentitySettings: {}
```
-## Configure dataflow destination
+Then apply the manifest file to the Kubernetes cluster.
-Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's destination settings.
+```bash
+kubectl apply -f <FILE>.yaml
+```
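Optionally, confirm that the resource was created. This check assumes the custom resource's plural name is `dataflowendpoints`, which might differ in your deployment:

```bash
kubectl get dataflowendpoints -n azure-iot-operations
```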
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
-metadata:
- name: my-dataflow
- namespace: azure-iot-operations
-spec:
- profileRef: default
- mode: Enabled
- operations:
- - operationType: Source
- sourceSettings:
- endpointRef: mq
- dataSources:
- - thermostats/+/telemetry/temperature/#
- - humidifiers/+/telemetry/humidity/#
- - operationType: Destination
- destinationSettings:
- endpointRef: adx
- dataDestination: database-name
+# [Bicep](#tab/bicep)
+
+Create a Bicep `.bicep` file with the following content.
+
+```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param endpointName string = '<ENDPOINT_NAME>'
+param hostName string = 'https://<CLUSTER>.<region>.kusto.windows.net'
+param databaseName string = '<DATABASE_NAME>'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+resource adxEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: endpointName
+ extendedLocation: {
+ name: customLocationName
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'DataExplorer'
+ dataExplorerSettings: {
+ host: hostName
+ database: databaseName
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+ }
+ }
+}
```
-For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
+Then, deploy via Azure CLI.
-> [!NOTE]
-> Using the Azure Data Explorer endpoint as a source in a dataflow isn't supported. You can use the endpoint as a destination only.
+```azurecli
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
-To customize the endpoint settings, see the following sections for more information.
+ ### Available authentication methods
Before you create the dataflow endpoint, assign a role to the managed identity t
In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. This configuration creates a managed identity with the default audience `https://api.kusto.windows.net`.
+# [Kubernetes](#tab/kubernetes)
```yaml
dataExplorerSettings:
  authentication:
    method: SystemAssignedManagedIdentity
    systemAssignedManagedIdentitySettings: {}
```
+# [Bicep](#tab/bicep)
+
+```bicep
+dataExplorerSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+}
+```
---

If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+# [Kubernetes](#tab/kubernetes)
```yaml
dataExplorerSettings:
  authentication:
    method: SystemAssignedManagedIdentity
    systemAssignedManagedIdentitySettings:
      audience: https://<AUDIENCE_URL>
```
+# [Bicep](#tab/bicep)
+
+```bicep
+dataExplorerSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {
+ audience: 'https://<AUDIENCE_URL>'
+ }
+ }
+}
+```
---

#### User-assigned managed identity

To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+# [Kubernetes](#tab/kubernetes)
```yaml
dataExplorerSettings:
  authentication:
    method: UserAssignedManagedIdentity
    userAssignedManagedIdentitySettings:
      clientId: <ID>
      tenantId: <ID>
```
+# [Bicep](#tab/bicep)
+
+```bicep
+dataExplorerSettings: {
+ authentication: {
+ method: 'UserAssignedManagedIdentity'
+ userAssignedManagedIdentitySettings: {
+ clientId: '<ID>'
+ tenantId: '<ID>'
+ }
+ }
+}
+```
---

## Advanced settings

You can set advanced settings for the Azure Data Explorer endpoint, such as the batching latency and message count.
Use the `batching` settings to configure the maximum number of messages and the
For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
-Set the values in the dataflow endpoint custom resource.
+# [Kubernetes](#tab/kubernetes)
```yaml
dataExplorerSettings:
  batching:
    latencySeconds: 100
    maxMessages: 1000
```
+# [Bicep](#tab/bicep)
+
+```bicep
+dataExplorerSettings: {
+ batching: {
+ latencySeconds: 100
+ maxMessages: 1000
+ }
+}
+```
++
iot-operations Howto Configure Dataflow Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md
Title: Configure dataflow endpoints in Azure IoT Operations
description: Configure dataflow endpoints to create connection points for data sources. + Last updated 09/17/2024
iot-operations Howto Configure Dataflow Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-profile.md
Title: Configure dataflow profile in Azure IoT Operations
description: How to configure a dataflow profile in Azure IoT Operations to change a dataflow behavior. + Last updated 08/29/2024
iot-operations Howto Configure Fabric Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-fabric-endpoint.md
Title: Configure dataflow endpoints for Microsoft Fabric OneLake
description: Learn how to configure dataflow endpoints for Microsoft Fabric OneLake in Azure IoT Operations. + Previously updated : 10/02/2024 Last updated : 10/16/2024 ai-usage: ai-assisted #CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for Microsoft Fabric OneLake in Azure IoT Operations so that I can send data to Microsoft Fabric OneLake.
To send data to Microsoft Fabric OneLake in Azure IoT Operations Preview, you ca
To configure a dataflow endpoint for Microsoft Fabric OneLake, we suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management.
-# [Kubernetes](#tab/kubernetes)
-
-1. Get the managed identity of the Azure IoT Operations Preview Arc extension.
-
-1. In the Microsoft Fabric workspace you created, select **Manage access** > **+ Add people or groups**.
-
-1. Search for the Azure IoT Operations Preview Arc extension by its name, and select the app ID GUID value that you found in the previous step.
+First, in Azure portal, go to the Arc-connected Kubernetes cluster and select **Settings** > **Extensions**. In the extension list, find the name of your Azure IoT Operations extension. Copy the name of the extension.
-1. Select **Contributor** as the role, then select **Add**.
+Then, in the Microsoft Fabric workspace you created, select **Manage access** > **+ Add people or groups**. Search for the Azure IoT Operations Preview Arc extension by its name and select it. Select **Contributor** as the role, then select **Add**.
-1. Create the *DataflowEndpoint* resource and specify the managed identity authentication method.
+Finally, create the *DataflowEndpoint* resource and specify the managed identity authentication method. Replace the placeholder values like `<ENDPOINT_NAME>` with your own.
- ```yaml
- apiVersion: connectivity.iotoperations.azure.com/v1beta1
- kind: DataflowEndpoint
- metadata:
- name: fabric
- spec:
- endpointType: FabricOneLake
- fabricOneLakeSettings:
- # The default Fabric OneLake host URL in most cases
- host: https://onelake.dfs.fabric.microsoft.com
- oneLakePathType: Tables
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
- names:
- workspaceName: <EXAMPLE-WORKSPACE-NAME>
- lakehouseName: <EXAMPLE-LAKEHOUSE-NAME>
- ```
-
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
-This Bicep template file from [Bicep File for Microsoft Fabric OneLake dataflow Tutorial](https://gist.github.com/david-emakenemi/289a167c8fa393d3a7dce274a6eb21eb) deploys the necessary resources for dataflows to Fabric OneLake.
+Create a Kubernetes manifest `.yaml` file with the following content.
-1. Download the file to your local, and make sure to replace the values for `customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, and `persistentVCName`.
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
+spec:
+ endpointType: FabricOneLake
+ fabricOneLakeSettings:
+ # The default Fabric OneLake host URL in most cases
+ host: https://onelake.dfs.fabric.microsoft.com
+ oneLakePathType: Tables
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+ names:
+ workspaceName: <WORKSPACE_NAME>
+ lakehouseName: <LAKEHOUSE_NAME>
+```
-1. Next, deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
+Then apply the manifest file to the Kubernetes cluster.
-```azurecli
-az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/<filename>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
+```bash
+kubectl apply -f <FILE>.yaml
```
-This endpoint is the destination for the dataflow that receives messages to Fabric OneLake.
+# [Bicep](#tab/bicep)
+
+Create a Bicep `.bicep` file with the following content.
```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param endpointName string = '<ENDPOINT_NAME>'
+param lakehouseName string = '<LAKEHOUSE_NAME>'
+param workspaceName string = '<WORKSPACE_NAME>'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
resource oneLakeEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
  parent: aioInstance
- name: 'onelake-ep'
+ name: endpointName
extendedLocation: {
- name: customLocation.id
+ name: customLocationName
    type: 'CustomLocation'
  }
  properties: {
resource oneLakeEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@20
        systemAssignedManagedIdentitySettings: {}
      }
      oneLakePathType: 'Tables'
- host: 'https://msit-onelake.dfs.fabric.microsoft.com'
+ host: 'https://onelake.dfs.fabric.microsoft.com'
names: {
- lakehouseName: '<EXAMPLE-LAKEHOUSE-NAME>'
- workspaceName: '<EXAMPLE-WORKSPACE-NAME>'
- }
- batching: {
- latencySeconds: 5
- maxMessages: 10000
+ lakehouseName: lakehouseName
+ workspaceName: workspaceName
}
+ ...
    }
  }
}
```
-## Configure dataflow destination
-
-Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's destination settings.
-
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
-metadata:
- name: my-dataflow
- namespace: azure-iot-operations
-spec:
- profileRef: default
- mode: Enabled
- operations:
- - operationType: Source
- sourceSettings:
- endpointRef: mq
- dataSources:
- *
- - operationType: Destination
- destinationSettings:
- endpointRef: fabric
-```
-
-To customize the endpoint settings, see the following sections for more information.
-
-### Fabric OneLake host URL
-Use the `host` setting to specify the Fabric OneLake host URL. Usually, it's `https://onelake.dfs.fabric.microsoft.com`.
+Then, deploy via Azure CLI.
-```yaml
-fabricOneLakeSettings:
- host: https://onelake.dfs.fabric.microsoft.com
-```
-
-However, if this host value doesn't work and you're not getting data, try checking for the URL from the Properties of one of the precreated lakehouse folders.
-
-![Screenshot of properties shortcut menu to get lakehouse URL.](media/howto-configure-fabric-endpoint/lakehouse-name.png)
-
-The host value should look like `https://xyz.dfs.fabric.microsoft.com`.
-
-To learn more, see [Connecting to Microsoft OneLake](/fabric/onelake/onelake-access-api).
-
-### OneLake path type
-
-Use the `oneLakePathType` setting to specify the type of path in the Fabric OneLake. The default value is `Tables`, which is used for the Tables folder in the lakehouse typically in Delta Parquet format.
-
-```yaml
-fabricOneLakeSettings:
- oneLakePathType: Tables
-```
-
-Another possible value is `Files`. Use this value for the Files folder in the lakehouse, which is unstructured and can be in any format.
-
-```yaml
-fabricOneLakeSettings:
- oneLakePathType: Files
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource dataflow_onelake 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@2024-08-15-preview' = {
- parent: defaultDataflowProfile
- name: 'dataflow-onelake3'
- extendedLocation: {
- name: customLocation.id
- type: 'CustomLocation'
- }
- properties: {
- mode: 'Enabled'
- operations: [
- {
- operationType: 'Source'
- sourceSettings: {
- endpointRef: defaultDataflowEndpoint.name
- dataSources: array('azure-iot-operations/data/thermostat')
- }
- }
- {
- operationType: 'BuiltInTransformation'
- builtInTransformationSettings: {
- map: [
- {
- inputs: array('*')
- output: '*'
- }
- ]
- schemaRef: 'aio-sr://${opcuaSchemaName}:${opcuaSchemaVer}'
- serializationFormat: 'Delta' // Can also be 'Parquet'
- }
- }
- {
- operationType: 'Destination'
- destinationSettings: {
- endpointRef: oneLakeEndpoint.name
- dataDestination: 'opc'
- }
- }
- ]
- }
-}
+```azurecli
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
```
-The `BuiltInTransformation` in this Bicep file transforms the data flowing through the dataflow pipeline. It applies a pass-through operation, mapping all input fields `(inputs: array('*'))` directly to the output `(output: '*')`, without altering the data.
-
-It also references the defined OPC-UA schema to ensure the data is structured according to the OPC UA protocol. The transformation then serializes the data in Delta format (or Parquet if specified).
-
-This step ensures that the data adheres to the required schema and format before being sent to the destination.
-
-For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
-
-> [!NOTE]
-> Using the Fabric OneLake dataflow endpoint as a source in a dataflow isn't supported. You can use the endpoint as a destination only.
### Available authentication methods

The following authentication methods are available for Microsoft Fabric OneLake dataflow endpoints. For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
To learn more, see [Give access to a workspace](/fabric/get-started/give-access-
Using the system-assigned managed identity is the recommended authentication method for Azure IoT Operations. Azure IoT Operations creates the managed identity automatically and assigns it to the Azure Arc-enabled Kubernetes cluster. It eliminates the need for secret management and allows for seamless authentication with Azure Data Explorer.
+In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. This configuration creates a managed identity with the default audience.
# [Kubernetes](#tab/kubernetes)
-In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. This configuration creates a managed identity with the default audience.
```yaml
fabricOneLakeSettings:
  authentication:
    method: SystemAssignedManagedIdentity
    systemAssignedManagedIdentitySettings: {}
```
+# [Bicep](#tab/bicep)
+
+```bicep
+fabricOneLakeSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+}
+```
---

If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+# [Kubernetes](#tab/kubernetes)
```yaml
fabricOneLakeSettings:
  authentication:
    method: SystemAssignedManagedIdentity
    systemAssignedManagedIdentitySettings:
      audience: https://<ACCOUNT>.onelake.dfs.fabric.microsoft.com
```

# [Bicep](#tab/bicep)

```bicep
fabricOneLakeSettings: {
- authentication: {
- method: 'SystemAssignedManagedIdentity'
- systemAssignedManagedIdentitySettings: {
- audience: 'https://contoso.onelake.dfs.fabric.microsoft.com'
- }
- }
- ...
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {
+ audience: 'https://<ACCOUNT>.onelake.dfs.fabric.microsoft.com'
}
+ }
+}
```
fabricOneLakeSettings:
```bicep
fabricOneLakeSettings: {
- authentication: {
- method: 'UserAssignedManagedIdentity'
- userAssignedManagedIdentitySettings: {
- clientId: '<clientId>'
- tenantId: '<tenantId>'
- }
- }
- ...
+ authentication: {
+ method: 'UserAssignedManagedIdentity'
+ userAssignedManagedIdentitySettings: {
+ clientId: '<clientId>'
+ tenantId: '<tenantId>'
}
+ }
+}
```
fabricOneLakeSettings: {
You can set advanced settings for the Fabric OneLake endpoint, such as the batching latency and message count. You can set these settings in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint custom resource.
+### OneLake path type
+
+The `oneLakePathType` setting determines the type of path to use in the OneLake lakehouse. The default value is `Tables`, which is the recommended path type for the most common use cases; data is written to a table in the lakehouse. Set the value to `Files` to write data to a file in the lakehouse instead. The `Files` path type is useful when you want to store the data in a format that the `Tables` path type doesn't support.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+fabricOneLakeSettings:
+ oneLakePathType: Tables # Or Files
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+fabricOneLakeSettings: {
+ oneLakePathType: 'Tables'
+}
+```
+++
+### Batching
+ Use the `batching` settings to configure the maximum number of messages and the maximum latency before the messages are sent to the destination. This setting is useful when you want to optimize for network bandwidth and reduce the number of requests to the destination. | Field | Description | Required |
For example, to configure the maximum number of messages to 1000 and the maximum
# [Kubernetes](#tab/kubernetes)
-Set the values in the dataflow endpoint custom resource.
```yaml
fabricOneLakeSettings:
  batching:
    latencySeconds: 100
    maxMessages: 1000
```
# [Bicep](#tab/bicep)
-The bicep file has the values in the dataflow endpoint resource.
-
-<!-- TODO Add a way for users to override the file with values using the az stack group command >
- ```bicep
-batching: {
- latencySeconds: 5
- maxMessages: 10000
+fabricOneLakeSettings: {
+ batching: {
+ latencySeconds: 100
+ maxMessages: 1000
+ }
}
```
iot-operations Howto Configure Kafka Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md
Title: Configure Azure Event Hubs and Kafka dataflow endpoints in Azure IoT Oper
description: Learn how to configure dataflow endpoints for Kafka in Azure IoT Operations. + Last updated 10/02/2024
If you're using Azure Event Hubs, create an Azure Event Hubs namespace and a Kaf
To configure a dataflow endpoint for a Kafka endpoint, we suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management.
+First, in Azure portal, go to the Arc-connected Kubernetes cluster and select **Settings** > **Extensions**. In the extension list, find the name of your Azure IoT Operations extension. Copy the name of the extension.
+
+Then, assign the managed identity to the Event Hubs namespace with the `Azure Event Hubs Data Sender` or `Azure Event Hubs Data Receiver` role using the name of the extension.
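For example, a minimal Azure CLI sketch of that role assignment; the principal ID and the Event Hubs namespace scope are placeholders, not values from this article:

```azurecli
az role assignment create \
  --assignee <PRINCIPAL_ID> \
  --role "Azure Event Hubs Data Sender" \
  --scope /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.EventHub/namespaces/<NAMESPACE>
```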
+
+Finally, create the *DataflowEndpoint* resource. Use your own values to replace the placeholder values like `<ENDPOINT_NAME>`.
# [Portal](#tab/portal)

1. In the [operations experience](https://iotoperations.azure.com/), select the **Dataflow endpoints** tab.
To configure a dataflow endpoint for a Kafka endpoint, we suggest using the mana
| Setting | Description |
| -- | - |
| Name | The name of the dataflow endpoint. |
- | Host | The hostname of the Kafka broker in the format `<HOST>.servicebus.windows.net:9093`. Include port number `9093` in the host setting for Event Hubs. |
+ | Host | The hostname of the Kafka broker in the format `<NAMESPACE>.servicebus.windows.net:9093`. Include port number `9093` in the host setting for Event Hubs. |
| Authentication method | The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, or *SASL*. |
| SASL type | The type of SASL authentication. Choose *Plain*, *ScramSha256*, or *ScramSha512*. Required if using *SASL*. |
| Synced secret name | The name of the secret. Required if using *SASL* or *X509*. |
| Username reference of token secret | The reference to the username in the SASL token secret. Required if using *SASL*. |
-# [Kubernetes](#tab/kubernetes)
-
-1. Get the managed identity of the Azure IoT Operations Arc extension.
-1. Assign the managed identity to the Event Hubs namespace with the `Azure Event Hubs Data Sender` or `Azure Event Hubs Data Receiver` role.
-1. Create the *DataflowEndpoint* resource and specify the managed identity authentication method.
-
- ```yaml
- apiVersion: connectivity.iotoperations.azure.com/v1beta1
- kind: DataflowEndpoint
- metadata:
- name: eventhubs
- namespace: azure-iot-operations
- spec:
- endpointType: Kafka
- kafkaSettings:
- host: <HOST>.servicebus.windows.net:9093
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
- tls:
- mode: Enabled
- consumerGroupId: mqConnector
- ```
-
-The Kafka topic, or individual event hub, is configured later when you create the dataflow. The Kafka topic is the destination for the dataflow messages.
+1. Select **Apply** to provision the endpoint.
-#### Use connection string for authentication to Event Hubs
+# [Kubernetes](#tab/kubernetes)
-To use connection string for authentication to Event Hubs, update the `authentication` section of the Kafka settings to use the `Sasl` method and configure the `saslSettings` with the `saslType` as `Plain` and the `secretRef` with the name of the secret that contains the connection string.
+Create a Kubernetes manifest `.yaml` file with the following content.
```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
spec:
+ endpointType: Kafka
kafkaSettings:
+ host: <NAMESPACE>.servicebus.windows.net:9093
authentication:
- method: Sasl
- saslSettings:
- saslType: Plain
- secretRef: <YOUR-TOKEN-SECRET-NAME>
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
    tls:
      mode: Enabled
```
-In the example, the `secretRef` is the name of the secret that contains the connection string. The secret must be in the same namespace as the Kafka dataflow resource. The secret must have both the username and password as key-value pairs. For example:
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
+```
+
+# [Bicep](#tab/bicep)
+
+Create a Bicep `.bicep` file with the following content.
+
+```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param endpointName string = '<ENDPOINT_NAME>'
+param schemaRegistryName string = '<SCHEMA_REGISTRY_NAME>'
+param hostName string = '<HOST>.servicebus.windows.net:9093'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+resource kafkaEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: endpointName
+ extendedLocation: {
+ name: customLocationName
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'Kafka'
+ kafkaSettings: {
+ host: hostName
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+ }
+ }
+}
+```
+
+Then, deploy via Azure CLI.
+
+```azurecli
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
+++
+> [!NOTE]
+> The Kafka topic, or individual event hub, is configured later when you create the dataflow. The Kafka topic is the destination for the dataflow messages.
+
+#### Use connection string for authentication to Event Hubs
+
+To use a connection string for authentication to Event Hubs, use the SASL authentication method with the SASL type set to `Plain`, and configure the name of the secret that contains the connection string.
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **SASL**.
+
+Enter the following settings for the endpoint:
+
+| Setting | Description |
+| | - |
+| SASL type | Choose `Plain`. |
+| Synced secret name | The name of the Kubernetes secret that contains the connection string. |
+| Username reference of token secret | The reference to the username or token secret used for SASL authentication. |
+| Password reference of token secret | The reference to the password or token secret used for SASL authentication. |
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: Sasl
+ saslSettings:
+ saslType: Plain
+ secretRef: <SECRET_NAME>
+ tls:
+ mode: Enabled
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ authentication: {
+ method: 'Sasl'
+ saslSettings: {
+ saslType: 'Plain'
+ secretRef: '<SECRET_NAME>'
+ }
+ }
+ tls: {
+ mode: 'Enabled'
+ }
+ ...
+}
+```
+++
+Here, the secret reference points to the secret that contains the connection string. The secret must be in the same namespace as the Kafka dataflow resource. The secret must have both the username and password as key-value pairs. For example:
```bash kubectl create secret generic cs-secret -n azure-iot-operations \
kubectl create secret generic cs-secret -n azure-iot-operations \
> [!TIP]
> Scoping the connection string to the namespace (as opposed to individual event hubs) allows a dataflow to send and receive messages from multiple different event hubs and Kafka topics.

---

#### Limitations

Azure Event Hubs [doesn't support all the compression types that Kafka supports](../../event-hubs/azure-event-hubs-kafka-overview.md#compression). Only GZIP compression is supported in Azure Event Hubs premium and dedicated tiers currently. Using other compression types might result in errors.
To configure a dataflow endpoint for non-Event-Hub Kafka brokers, set the host,
| -- | - |
| Name | The name of the dataflow endpoint. |
| Host | The hostname of the Kafka broker in the format `<Kafka-broker-host>:xxxx`. Include port number in the host setting. |
- | Authentication method| The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, *SASL*, or *X509 certificate*. |
+ | Authentication method| The method used for authentication. Choose *SASL* or *X509 certificate*. |
| SASL type | The type of SASL authentication. Choose *Plain*, *ScramSha256*, or *ScramSha512*. Required if using *SASL*. |
| Synced secret name | The name of the secret. Required if using *SASL* or *X509*. |
| Username reference of token secret | The reference to the username in the SASL token secret. Required if using *SASL*. |
To configure a dataflow endpoint for non-Event-Hub Kafka brokers, set the host,
1. Select **Apply** to provision the endpoint.

> [!NOTE]
-> Currently, the operations experience doesn't support using a Kafka dataflow endpoint as a source. You can create a dataflow with a source Kafka dataflow endpoint using the Kubernetes or Bicep.
+> Currently, the operations experience doesn't support using a Kafka dataflow endpoint as a source. You can create a dataflow with a source Kafka dataflow endpoint using Kubernetes or Bicep.
# [Kubernetes](#tab/kubernetes)
spec:
      method: Sasl
      saslSettings:
        saslType: ScramSha256
- secretRef: <YOUR-TOKEN-SECRET-NAME>
+ secretRef: <SECRET_NAME>
    tls:
      mode: Enabled
    consumerGroupId: mqConnector
```
-## Use the endpoint in a dataflow source or destination
-
-Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's source or destination settings.
+# [Bicep](#tab/bicep)
-# [Portal](#tab/portal)
-
-1. In the Azure IoT Operations Preview portal, create a new dataflow or edit an existing dataflow by selecting the **Dataflows** tab. If creating a new dataflow, select **Create dataflow** and replace `<new-dataflow>` with a name for the dataflow.
-1. In the editor, select the source endpoint. Kafka endpoints can be used as both source and destination. Currently, you can only use the portal to create a dataflow with a Kafka endpoint as a destination. Use a Kubernetes custom resource or Bicep to create a dataflow with a Kafka endpoint as a source.
-1. Choose the Kafka dataflow endpoint that you created previously.
-1. Specify the Kafka topic where messages are sent.
-
- :::image type="content" source="media/howto-configure-kafka-endpoint/dataflow-mq-kafka.png" alt-text="Screenshot using operations experience to create a dataflow with an MQTT source and Azure Event Hubs destination.":::
-
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
-metadata:
- name: my-dataflow
- namespace: azure-iot-operations
-spec:
- profileRef: default
- mode: Enabled
- operations:
- - operationType: Source
- sourceSettings:
- endpointRef: mq
- dataSources:
- *
- - operationType: Destination
- destinationSettings:
- endpointRef: kafka
+```bicep
+resource kafkaEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: '<ENDPOINT_NAME>'
+ extendedLocation: {
+ name: '<CUSTOM_LOCATION_NAME>'
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'Kafka'
+ kafkaSettings: {
+ host: '<KAFKA-HOST>:<PORT>'
+ authentication: {
+ method: 'Sasl'
+ saslSettings: {
+ saslType: 'Plain'
+ secretRef: '<SECRET_NAME>'
+ }
+ }
+ tls: {
+ mode: 'Enabled'
+ }
+ consumerGroupId: 'mqConnector'
+ }
+ }
+}
```
-For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
To customize the endpoint settings, see the following sections for more information.

### Available authentication methods
-The following authentication methods are available for Kafka broker dataflow endpoints. For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+The following authentication methods are available for Kafka broker dataflow endpoints. For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
#### SASL
+To use SASL for authentication, specify the SASL authentication method, the SASL type, and a reference to the secret that contains the SASL token.
# [Portal](#tab/portal)

In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **SASL**.
Enter the following settings for the endpoint:
# [Kubernetes](#tab/kubernetes)
-To use SASL for authentication, update the `authentication` section of the Kafka settings to use the `Sasl` method and configure the `saslSettings` with the `saslType` and the `secretRef` with the name of the secret that contains the SASL token.
- ```yaml kafkaSettings: authentication: method: Sasl saslSettings:
- saslType: Plain
- secretRef: <YOUR-TOKEN-SECRET-NAME>
+ saslType: Plain # Or ScramSha256, ScramSha512
+ secretRef: <SECRET_NAME>
```
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ authentication: {
+ method: 'Sasl'
+ saslSettings: {
+ saslType: 'Plain' // Or ScramSha256, ScramSha512
+ secretRef: '<SECRET_NAME>'
+ }
+ }
+}
+```
+++ The supported SASL types are: - `Plain`
kubectl create secret generic sasl-secret -n azure-iot-operations \
--from-literal=token='your-sasl-token' ``` -
+<!-- TODO: double check! -->
#### X.509
+To use X.509 for authentication, update the authentication section of the Kafka settings to use the X509Certificate method and specify a reference to the secret that holds the X.509 certificate.
# [Portal](#tab/portal)

In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **X509 certificate**.
Enter the following settings for the endpoint:
# [Kubernetes](#tab/kubernetes)
-To use X.509 for authentication, update the `authentication` section of the Kafka settings to use the `X509Certificate` method and configure the `x509CertificateSettings` with the `secretRef` with the name of the secret that contains the X.509 certificate.
- ```yaml kafkaSettings: authentication: method: X509Certificate x509CertificateSettings:
- secretRef: <YOUR-TOKEN-SECRET-NAME>
+ secretRef: <SECRET_NAME>
```
+# [Bicep](#tab/bicep)
++
+```bicep
+kafkaSettings: {
+ authentication: {
+ method: 'X509Certificate'
+ x509CertificateSettings: {
+ secretRef: '<SECRET_NAME>'
+ }
+ }
+}
+```
+++ The secret must be in the same namespace as the Kafka dataflow resource. Use Kubernetes TLS secret containing the public certificate and private key. For example: ```bash
kubectl create secret tls my-tls-secret -n azure-iot-operations \
--key=path/to/key/file ``` -- #### System-assigned managed identity To use system-assigned managed identity for authentication, first assign a role to the Azure IoT Operation managed identity that grants permission to send and receive messages from Event Hubs, such as Azure Event Hubs Data Owner or Azure Event Hubs Data Sender/Receiver. To learn more, see [Authenticate an application with Microsoft Entra ID to access Event Hubs resources](../../event-hubs/authenticate-application.md#built-in-roles-for-azure-event-hubs).
+Then, specify the managed identity authentication method in the Kafka settings. In most cases, you don't need to specify other settings.
# [Portal](#tab/portal)

In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **System assigned managed identity**.

# [Kubernetes](#tab/kubernetes)
-Update the `authentication` section of the DataflowEndpoint Kafka settings to use the `SystemAssignedManagedIdentity` method. In most cases, you can set the `systemAssignedManagedIdentitySettings` with an empty object.
```yaml
kafkaSettings:
  authentication:
    method: SystemAssignedManagedIdentity
    systemAssignedManagedIdentitySettings: {}
```
-This sets the audience to the default value, which is the same as the Event Hubs namespace host value in the form of `https://<NAMESPACE>.servicebus.windows.net`. However, if you need to override the default audience, you can set the `audience` field to the desired value. The audience is the resource that the managed identity is requesting access to. For example:
+# [Bicep](#tab/bicep)
+
+```bicep
+resource kafkaEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ ...
+ properties: {
+ ...
+ kafkaSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+ ...
+ }
+ }
+}
+```
+++
+This configuration creates a managed identity with the default audience, which is the same as the Event Hubs namespace host value in the form of `https://<NAMESPACE>.servicebus.windows.net`. However, if you need to override the default audience, you can set the `audience` field to the desired value.
+
+# [Portal](#tab/portal)
+
+Not supported in the operations experience.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
kafkaSettings:
  authentication:
    method: SystemAssignedManagedIdentity
    systemAssignedManagedIdentitySettings:
      audience: <YOUR_AUDIENCE_OVERRIDE_VALUE>
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {
+ audience: '<YOUR_AUDIENCE_OVERRIDE_VALUE>'
+ }
+ }
+}
```

---

#### User-assigned managed identity
+To use a user-assigned managed identity for authentication, you must first deploy Azure IoT Operations with secure settings enabled. To learn more, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+
+Then, specify the user-assigned managed identity authentication method in the Kafka settings along with the client ID, tenant ID, and scope of the managed identity.
# [Portal](#tab/portal)

In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **User assigned managed identity**.
-Enter the user assigned managed identity client ID and tenant ID in the appropriate fields.
+Enter the user assigned managed identity client ID, tenant ID, and scope in the appropriate fields.
# [Kubernetes](#tab/kubernetes)
-To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method.
```yaml
kafkaSettings:
  authentication:
    method: UserAssignedManagedIdentity
    userAssignedManagedIdentitySettings:
      clientId: <CLIENT_ID>
      tenantId: <TENANT_ID>
      scope: <SCOPE>
```
-<!-- TODO: Add link to WLIF docs -->
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ authentication: {
+ method: 'UserAssignedManagedIdentity'
+ userAssignedManagedIdentitySettings: {
+ clientId: '<CLIENT_ID>'
+ tenantId: '<TENANT_ID>'
+ scope: '<SCOPE>'
+ }
+ }
+ ...
+}
+```
#### Anonymous
-To use anonymous authentication, update the `authentication` section of the Kafka settings to use the `Anonymous` method.
+To use anonymous authentication, update the authentication section of the Kafka settings to use the Anonymous method.
+
+# [Portal](#tab/portal)
+
+Not yet supported in the operations experience. See [known issues](../troubleshoot/known-issues.md).
+
+# [Kubernetes](#tab/kubernetes)
```yaml kafkaSettings:
kafkaSettings:
{} ```
+# [Bicep](#tab/bicep)
+
+Not yet supported with Bicep. See [known issues](../troubleshoot/known-issues.md).
---

## Advanced settings
-You can set advanced settings for the Kafka dataflow endpoint such as TLS, trusted CA certificate, Kafka messaging settings, batching, and CloudEvents. You can set these settings in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint custom resource.
+You can set advanced settings for the Kafka dataflow endpoint such as TLS, trusted CA certificate, Kafka messaging settings, batching, and CloudEvents. You can set these settings in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint resource.
# [Portal](#tab/portal)
In the operations experience, select the **Advanced** tab for the dataflow endpo
:::image type="content" source="media/howto-configure-kafka-endpoint/kafka-advanced.png" alt-text="Screenshot using operations experience to set Kafka dataflow endpoint advanced settings.":::
-| Setting | Description |
-| | - |
-| Consumer group ID | The ID of the consumer group for the Kafka endpoint. The consumer group ID is used to identify the consumer group that the dataflow uses to read messages from the Kafka topic. The consumer group ID must be unique within the Kafka broker. |
-| Compression | The compression type used for messages sent to Kafka topics. Supported types are `None`, `Gzip`, `Snappy`, and `Lz4`. Compression helps to reduce the network bandwidth and storage space required for data transfer. However, compression also adds some overhead and latency to the process. This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer. |
-| Copy MQTT properties | Whether to copy MQTT message properties to Kafka message headers. For more information, see [Copy MQTT properties](#copy-mqtt-properties). |
-| Kafka acknowledgement | The level of acknowledgement requested from the Kafka broker. Supported values are `None`, `All`, `One`, and `Zero`. For more information, see [Kafka acknowledgements](#kafka-acknowledgements). |
-| Partition handling strategy | The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics. Supported values are `Default`, `Static`, `Topic`, and `Property`. For more information, see [Partition handling strategy](#partition-handling-strategy). |
-| TLS mode enabled | Enables TLS for the Kafka endpoint. |
-| Trusted CA certificate config map | The ConfigMap containing the trusted CA certificate for the Kafka endpoint. This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka dataflow resource. For more information, see [Trusted CA certificate](#trusted-ca-certificate). |
-| Batching enabled | Enables batching. Batching allows you to group multiple messages together and compress them as a single unit, which can improve the compression efficiency and reduce the network overhead. This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer. |
-| Batching latency | The maximum time interval in milliseconds that messages can be buffered before being sent. If this interval is reached, then all buffered messages are sent as a batch, regardless of how many or how large they are. |
-| Maximum bytes | The maximum size in bytes that can be buffered before being sent. If this size is reached, then all buffered messages are sent as a batch, regardless of how many they are or how long they are buffered. |
-| Message count | The maximum number of messages that can be buffered before being sent. If this number is reached, then all buffered messages are sent as a batch, regardless of how large they are or how long they are buffered. |
-| Cloud event attributes | The CloudEvents attributes to include in the Kafka messages. |
- # [Kubernetes](#tab/kubernetes)
-### TLS settings
+Under `kafkaSettings`, you can configure additional settings for the Kafka endpoint.
+
+```yaml
+kafkaSettings:
+ consumerGroupId: <ID>
+ compression: Gzip
+ copyMqttProperties: true
+ kafkaAcknowledgement: All
+ partitionHandlingStrategy: Default
+ tls:
+ mode: Enabled
+ trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
+ batching:
+ enabled: true
+ latencyMs: 1000
+ maxMessages: 100
+ maxBytes: 1024
+```
+
+# [Bicep](#tab/bicep)
+
+Under `kafkaSettings`, you can configure additional settings for the Kafka endpoint.
+
+```bicep
+kafkaSettings: {
+ consumerGroupId: '<ID>'
+ compression: 'Gzip'
+ copyMqttProperties: true
+ kafkaAcknowledgement: 'All'
+ partitionHandlingStrategy: 'Default'
+ tls: {
+ mode: 'Enabled'
+ trustedCaCertificateConfigMapRef: '<YOUR_CA_CERTIFICATE>'
+ }
+ batching: {
+ enabled: true
+ latencyMs: 1000
+ maxMessages: 100
+ maxBytes: 1024
+ }
+}
+```
-Under `kafkaSettings.tls`, you can configure additional settings for the TLS connection to the Kafka broker.
++
+### TLS settings
#### TLS mode
-To enable or disable TLS for the Kafka endpoint, update the `mode` setting in the TLS settings. For example:
+To enable or disable TLS for the Kafka endpoint, update the `mode` setting in the TLS settings.
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the checkbox next to **TLS mode enabled**.
+
+# [Kubernetes](#tab/kubernetes)
```yaml kafkaSettings: tls:
- mode: Enabled
+ mode: Enabled # Or Disabled
```
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ tls: {
+ mode: 'Enabled' // Or Disabled
+ }
+}
+```
---

The TLS mode can be set to `Enabled` or `Disabled`. If the mode is set to `Enabled`, the dataflow uses a secure connection to the Kafka broker. If the mode is set to `Disabled`, the dataflow uses an insecure connection to the Kafka broker.

#### Trusted CA certificate
-To configure the trusted CA certificate for the Kafka endpoint, update the `trustedCaCertificateConfigMapRef` setting in the TLS settings. For example:
+Configure the trusted CA certificate for the Kafka endpoint to establish a secure connection to the Kafka broker. This setting is important if the Kafka broker uses a self-signed certificate or a certificate signed by a custom CA that isn't trusted by default.
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Trusted CA certificate config map** field to specify the ConfigMap containing the trusted CA certificate.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
kafkaSettings:
  tls:
    trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
```
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ tls: {
+ trustedCaCertificateConfigMapRef: '<YOUR_CA_CERTIFICATE>'
+ }
+}
+```
---

This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka dataflow resource. For example:

```bash
kubectl create configmap client-ca-configmap --from-file root_ca.crt -n azure-iot-operations
```
-This setting is important if the Kafka broker uses a self-signed certificate or a certificate signed by a custom CA that isn't trusted by default.
+> [!TIP]
+> When connecting to Azure Event Hubs, the CA certificate isn't required because the Event Hubs service uses a certificate signed by a public CA that is trusted by default.
-However in the case of Azure Event Hubs, the CA certificate isn't required because the Event Hubs service uses a certificate signed by a public CA that is trusted by default.
+### Consumer group ID
-### Kafka messaging settings
+The consumer group ID is used to identify the consumer group that the dataflow uses to read messages from the Kafka topic. The consumer group ID must be unique within the Kafka broker.
-Under `kafkaSettings`, you can configure additional settings for the Kafka endpoint.
+# [Portal](#tab/portal)
-#### Consumer group ID
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Consumer group ID** field to specify the consumer group ID.
-To configure the consumer group ID for the Kafka endpoint, update the `consumerGroupId` setting in the Kafka settings. For example:
+# [Kubernetes](#tab/kubernetes)
```yaml
spec:
  kafkaSettings:
    consumerGroupId: <ID>
```
-The consumer group ID is used to identify the consumer group that the dataflow uses to read messages from the Kafka topic. The consumer group ID must be unique within the Kafka broker.
+# [Bicep](#tab/bicep)
-<!-- TODO: check for accuracy -->
+```bicep
+kafkaSettings: {
+ consumerGroupId: '<ID>'
+}
+```
-This setting takes effect only if the endpoint is used as a source (that is, the dataflow is a consumer).
+
-#### Compression
+<!-- TODO: check for accuracy -->
-To configure the compression type for the Kafka endpoint, update the `compression` setting in the Kafka settings. For example:
+This setting takes effect only if the endpoint is used as a source (that is, the dataflow is a consumer).
-```yaml
-kafkaSettings:
- compression: Gzip
-```
+### Compression
The compression field enables compression for the messages sent to Kafka topics. Compression helps to reduce the network bandwidth and storage space required for data transfer. However, compression also adds some overhead and latency to the process. The supported compression types are listed in the following table.
The compression field enables compression for the messages sent to Kafka topics.
| `Snappy` | Snappy compression and batching are applied. Snappy is a fast compression algorithm that offers moderate compression ratio and speed. | | `Lz4` | LZ4 compression and batching are applied. LZ4 is a fast compression algorithm that offers low compression ratio and high speed. |
+To configure compression:
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Compression** field to specify the compression type.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ compression: Gzip # Or Snappy, Lz4
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ compression: 'Gzip' // Or Snappy, Lz4
+}
+```
+
+
+ This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer.
-#### Batching
+### Batching
Aside from compression, you can also configure batching for messages before sending them to Kafka topics. Batching allows you to group multiple messages together and compress them as a single unit, which can improve the compression efficiency and reduce the network overhead.
Aside from compression, you can also configure batching for messages before send
| `maxMessages` | The maximum number of messages that can be buffered before being sent. If this number is reached, then all buffered messages are sent as a batch, regardless of how large they are or how long they are buffered. If not set, the default value is 100000. | No | | `maxBytes` | The maximum size in bytes that can be buffered before being sent. If this size is reached, then all buffered messages are sent as a batch, regardless of how many they are or how long they are buffered. The default value is 1000000 (1 MB). | No |
-An example of using batching is:
+For example, if you set latencyMs to 1000, maxMessages to 100, and maxBytes to 1024, messages are sent either when there are 100 messages in the buffer, or when there are 1,024 bytes in the buffer, or when 1,000 milliseconds elapse since the last send, whichever comes first.
+
+To configure batching:
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Batching enabled** field to enable batching. Use the **Batching latency**, **Maximum bytes**, and **Message count** fields to specify the batching settings.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
kafkaSettings:
  batching:
    enabled: true
    latencyMs: 1000
    maxMessages: 100
    maxBytes: 1024
```
-In the example, messages are sent either when there are 100 messages in the buffer, or when there are 1,024 bytes in the buffer, or when 1,000 milliseconds elapse since the last send, whichever comes first.
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ batching: {
+ enabled: true
+ latencyMs: 1000
+ maxMessages: 100
+ maxBytes: 1024
+ }
+}
+```
---

This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer.
-#### Partition handling strategy
+### Partition handling strategy
The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics. Kafka partitions are logical segments of a Kafka topic that enable parallel processing and fault tolerance. Each message in a Kafka topic has a partition and an offset, which are used to identify and order the messages.
By default, a dataflow assigns messages to random partitions, using a round-robi
| `Topic` | Uses the MQTT topic name from the dataflow source as the key for partitioning. This means that messages with the same MQTT topic name are sent to the same partition. This can help to achieve better message ordering and data locality. | | `Property` | Uses an MQTT message property from the dataflow source as the key for partitioning. Specify the name of the property in the `partitionKeyProperty` field. This means that messages with the same property value are sent to the same partition. This can help to achieve better message ordering and data locality based on a custom criterion. |
-An example of using partition handling strategy is:
+For example, if you set the partition handling strategy to `Property` and the partition key property to `device-id`, messages with the same `device-id` property are sent to the same partition.
+
+To configure the partition handling strategy:
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Partition handling strategy** field to specify the partition handling strategy. Use the **Partition key property** field to specify the property used for partitioning if the strategy is set to `Property`.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
kafkaSettings:
- partitionStrategy: Property
- partitionKeyProperty: device-id
+ partitionHandlingStrategy: Default # Or Static, Topic, Property
+ partitionKeyProperty: <PROPERTY_NAME>
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ partitionHandlingStrategy: 'Default' // Or Static, Topic, Property
+ partitionKeyProperty: '<PROPERTY_NAME>'
+}
```
-This means that messages with the same "device-id" property are sent to the same partition.
+
-#### Kafka acknowledgements
+### Kafka acknowledgments
-Kafka acknowledgements (acks) are used to control the durability and consistency of messages sent to Kafka topics. When a producer sends a message to a Kafka topic, it can request different levels of acknowledgements from the Kafka broker to ensure that the message is successfully written to the topic and replicated across the Kafka cluster.
+Kafka acknowledgments (acks) are used to control the durability and consistency of messages sent to Kafka topics. When a producer sends a message to a Kafka topic, it can request different levels of acknowledgments from the Kafka broker to ensure that the message is successfully written to the topic and replicated across the Kafka cluster.
This setting takes effect only if the endpoint is used as a destination (that is, the dataflow is a producer).

| Value | Description |
| -- | -- |
-| `None` | The dataflow doesn't wait for any acknowledgements from the Kafka broker. This is the fastest but least durable option. |
+| `None` | The dataflow doesn't wait for any acknowledgments from the Kafka broker. This is the fastest but least durable option. |
| `All` | The dataflow waits for the message to be written to the leader partition and all follower partitions. This is the slowest but most durable option. This is also the default option. |
| `One` | The dataflow waits for the message to be written to the leader partition and at least one follower partition. |
-| `Zero` | The dataflow waits for the message to be written to the leader partition but doesn't wait for any acknowledgements from the followers. This is faster than `One` but less durable. |
+| `Zero` | The dataflow waits for the message to be written to the leader partition but doesn't wait for any acknowledgments from the followers. This is faster than `One` but less durable. |
<!-- TODO: double check for accuracy -->
-An example of using Kafka acknowledgements is:
+For example, if you set the Kafka acknowledgment level to `All`, the dataflow waits for the message to be written to the leader partition and all follower partitions before sending the next message.
+
+To configure the Kafka acknowledgments:
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Kafka acknowledgement** field to specify the Kafka acknowledgement level.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
kafkaSettings:
- kafkaAcks: All
+ kafkaAcknowledgement: All # Or None, One, Zero
```
-This means that the dataflow waits for the message to be written to the leader partition and all follower partitions.
+# [Bicep](#tab/bicep)
-#### Copy MQTT properties
+```bicep
+kafkaSettings: {
+ kafkaAcknowledgement: 'All' // Or None, One, Zero
+}
+```
+++
+### Copy MQTT properties
By default, the copy MQTT properties setting is enabled. These user properties include values such as `subject`, which stores the name of the asset sending the message.
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the checkbox next to the **Copy MQTT properties** field to enable or disable copying MQTT properties.
+
+# [Kubernetes](#tab/kubernetes)
+
```yaml
kafkaSettings:
- copyMqttProperties: Enabled
+ copyMqttProperties: Enabled # Or Disabled
```
-To disable copying MQTT properties, set the value to `Disabled`.
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ copyMqttProperties: 'Enabled' // Or Disabled
+}
+```
++
The following sections describe how MQTT properties are translated to Kafka user headers and vice versa when the setting is enabled.
-##### Kafka endpoint is a destination
+#### Kafka endpoint is a destination
When a Kafka endpoint is a dataflow destination, all MQTT v5 specification-defined properties are translated to Kafka user headers. For example, an MQTT v5 message with "Content Type" being forwarded to Kafka translates into the Kafka **user header** `"Content Type":{specifiedValue}`. Similar rules apply to other built-in MQTT properties, defined in the following table.
Dataflows never receive these properties from an MQTT Broker. Thus, a dataflow n
* Topic Alias
* Subscription Identifiers
-###### The Message Expiry Interval property
+##### The Message Expiry Interval property
The [Message Expiry Interval](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901112) specifies how long a message can remain in an MQTT broker before being discarded.
Examples:
* A dataflow receives an MQTT message with Message Expiry Interval = 3600 seconds. The corresponding destination is temporarily disconnected but is able to reconnect. 1,000 seconds pass before this MQTT message is sent to the target. In this case, the destination's message has its Message Expiry Interval set as 2600 (3600 - 1000) seconds.
* The dataflow receives an MQTT message with Message Expiry Interval = 3600 seconds. The corresponding destination is temporarily disconnected but is able to reconnect. In this case, however, it takes 4,000 seconds to reconnect. The message expired and the dataflow doesn't forward this message to the destination.
-##### Kafka endpoint is a dataflow source
+#### Kafka endpoint is a dataflow source
> [!NOTE]
> There's a known issue when using an Event Hubs endpoint as a dataflow source where the Kafka header gets corrupted as it's translated to MQTT. This only happens if using Event Hubs through the Event Hubs client, which uses AMQP under the covers. For instance, for "foo"="bar", the "foo" key is translated, but the value becomes `"\xa1\x03bar"`.
When a Kafka endpoint is a dataflow source, Kafka user headers are translated to
Kafka user header key/value pairs - provided they're all encoded in UTF-8 - are directly translated into MQTT user key/value properties.
-###### UTF-8 / Binary Mismatches
+##### UTF-8 / Binary Mismatches
MQTT v5 can only support UTF-8 based properties. If dataflow receives a Kafka message that contains one or more non-UTF-8 headers, dataflow will:
MQTT v5 can only support UTF-8 based properties. If dataflow receives a Kafka me
Applications that require binary transfer in Kafka Source headers => MQTT Target properties must first UTF-8 encode them - for example, via Base64.
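As a sketch of that workaround (the header name `my-binary-header`, the sample bytes, and the use of the `base64` and `xxd` utilities are illustrative assumptions, not part of the dataflow feature), a producer could encode the binary value before publishing and the consumer could decode it on the MQTT side:

```bash
# Base64-encode a binary value so it survives the Kafka header -> MQTT v5 property translation
BINARY_VALUE=$'\x01\x02\x03'
ENCODED_VALUE=$(printf '%s' "$BINARY_VALUE" | base64)   # AQID
echo "my-binary-header=$ENCODED_VALUE"

# The consumer on the MQTT side decodes the property back to bytes
printf '%s' "$ENCODED_VALUE" | base64 --decode | xxd
```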
-###### >=64KB property mismatches
+##### >=64KB property mismatches
MQTT v5 properties must be smaller than 64 KB. If dataflow receives a Kafka message that contains one or more headers that are >= 64 KB, dataflow will:

* Remove the offending property or properties.
* Forward the rest of the message on, following the previous rules.
-###### Property translation when using Event Hubs and producers that use AMQP
+##### Property translation when using Event Hubs and producers that use AMQP
If you have a client forwarding messages to a Kafka dataflow source endpoint doing any of the following actions:
Not all event data properties including propertyEventData.correlationId are not
[CloudEvents](https://cloudevents.io/) are a way to describe event data in a common way. The CloudEvents settings are used to send or receive messages in the CloudEvents format. You can use CloudEvents for event-driven architectures where different services need to communicate with each other in the same or different cloud providers.
-The `CloudEventAttributes` options are `Propagate` or`CreateOrRemap`. For example:
+The `CloudEventAttributes` options are `Propagate` or `CreateOrRemap`.
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Cloud event attributes** field to specify the CloudEvents setting.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
-mqttSettings:
- CloudEventAttributes: Propagate # or CreateOrRemap
+kafkaSettings:
+ cloudEventAttributes: Propagate # Or CreateOrRemap
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+kafkaSettings: {
+ cloudEventAttributes: 'Propagate' // Or CreateOrRemap
+}
```
++
+The following sections describe how CloudEvent properties are propagated or created and remapped.
+
#### Propagate setting

CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the message is passed through as is. If the required properties are present, a `ce_` prefix is added to the CloudEvent property name.
CloudEvent properties are passed through for messages that contain the required
| `time` | No | `ce-time` | Generated as RFC 3339 in the target client |
| `datacontenttype` | No | `ce-datacontenttype` | Changed to the output data content type after the optional transform stage |
| `dataschema` | No | `ce-dataschema` | Schema defined in the schema registry |
--
iot-operations Howto Configure Local Storage Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-local-storage-endpoint.md
Title: Configure local storage dataflow endpoint in Azure IoT Operations
description: Learn how to configure a local storage dataflow endpoint in Azure IoT Operations. + Last updated 10/02/2024
Use the local storage option to send data to a locally available persistent volu
# [Kubernetes](#tab/kubernetes)
+Create a Kubernetes manifest `.yaml` file with the following content.
+
```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowEndpoint
metadata:
- name: esa
+ name: <ENDPOINT_NAME>
  namespace: azure-iot-operations
spec:
  endpointType: localStorage
  localStorageSettings:
- persistentVolumeClaimRef: <PVC-NAME>
+ persistentVolumeClaimRef: <PVC_NAME>
```
-The PersistentVolumeClaim (PVC) must be in the same namespace as the *DataflowEndpoint*.
-
-# [Bicep](#tab/bicep)
+Then apply the manifest file to the Kubernetes cluster.
-This Bicep template file from [Bicep File for local storage dataflow Tutorial](https://gist.github.com/david-emakenemi/52377e32af1abd0efe41a5da27190a10) deploys the necessary resources for dataflows to local storage.
+```bash
+kubectl apply -f <FILE>.yaml
+```
-Download the file to your local, and make sure to replace the values for `customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, and `persistentVCName`.
+# [Bicep](#tab/bicep)
-Next, deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
-
-```azurecli
-az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/<filename>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
-```
-This endpoint is the destination for the dataflow that receives messages to Local storage.
+Create a Bicep `.bicep` file with the following content.
```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param endpointName string = '<ENDPOINT_NAME>'
+param persistentVCName string = '<PERSISTENT_VC_NAME>'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
resource localStorageDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
  parent: aioInstance
- name: 'local-storage-ep'
+ name: endpointName
extendedLocation: {
- name: customLocation.id
+ name: customLocationName
    type: 'CustomLocation'
  }
  properties: {
resource localStorageDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflo
  }
}
```
--
-## Configure dataflow destination
-
-Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's destination settings.
-# [Kubernetes](#tab/kubernetes)
+Then, deploy via Azure CLI.
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
-metadata:
- name: my-dataflow
- namespace: azure-iot-operations
-spec:
- profileRef: default
- mode: Enabled
- operations:
- - operationType: Source
- sourceSettings:
- endpointRef: mq
- dataSources:
- *
- - operationType: Destination
- destinationSettings:
- endpointRef: esa
+```azurecli
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
```
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- operationType: 'Destination'
- destinationSettings: {
- endpointRef: localStorageDataflowEndpoint.name
- dataDestination: 'sensorData'
- }
-}
-```
-For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
-
-> [!NOTE]
-> Using the local storage endpoint as a source in a dataflow isn't supported. You can use the endpoint as a destination only.
+The PersistentVolumeClaim (PVC) must be in the same namespace as the *DataflowEndpoint*.
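For example, a minimal PersistentVolumeClaim sketch in the same namespace (the claim name, size, and storage class are placeholders you'd replace with your own):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <PVC_NAME>
  namespace: azure-iot-operations
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <STORAGE_CLASS_NAME>
```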
## Supported serialization formats
iot-operations Howto Configure Mqtt Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint.md
Title: Configure MQTT dataflow endpoints in Azure IoT Operations
description: Learn how to configure dataflow endpoints for MQTT sources and destinations. + Last updated 10/02/2024
MQTT dataflow endpoints are used for MQTT sources and destinations. You can conf
## Azure IoT Operations Local MQTT broker
-Azure IoT Operations provides a built-in MQTT broker that you can use with dataflows. When you deploy Azure IoT Operations, a *default* MQTT broker dataflow endpoint is created with default settings. You can use this endpoint as a source or destination for dataflows.
+### Default endpoint
+
+Azure IoT Operations provides a built-in MQTT broker that you can use with dataflows. When you deploy Azure IoT Operations, an MQTT broker dataflow endpoint named "default" is created with default settings. You can use this endpoint as a source or destination for dataflows. The default endpoint uses the following settings:
+
+- Host: `aio-broker:18883` through the [default MQTT broker listener](../manage-mqtt-broker/howto-configure-brokerlistener.md#default-brokerlistener)
+- Authentication: service account token (SAT) through the [default BrokerAuthentication resource](../manage-mqtt-broker/howto-configure-authentication.md#default-brokerauthentication-resource)
+- TLS: Enabled
+- Trusted CA certificate: The default CA certificate `azure-iot-operations-aio-ca-trust-bundle` from the [default root CA](../deploy-iot-ops/concept-default-root-ca.md)
+
+> [!IMPORTANT]
+> If any of these default MQTT broker settings change, the dataflow endpoint must be updated to reflect the new settings. For example, if the default MQTT broker listener changes to use a different service name `my-mqtt-broker` and port 8885, you must update the endpoint to use the new host `host: my-mqtt-broker:8885`. Same applies to other settings like authentication and TLS.
+
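For example, a minimal sketch of the updated Kubernetes setting, assuming a listener service named `my-mqtt-broker` on port 8885 (both values are illustrative):

```yaml
mqttSettings:
  # Point the endpoint at the renamed listener service and port
  host: my-mqtt-broker:8885
```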
+To view or edit the default MQTT broker endpoint settings:
+
+# [Portal](#tab/portal)
+
+1. In the [operations experience](https://iotoperations.azure.com/), select **Dataflow endpoints**.
+1. Select the **default** endpoint to view or edit the settings.
+
+ :::image type="content" source="media/howto-configure-mqtt-endpoint/default-mqtt-endpoint.png" alt-text="Screenshot using operations experience to view the default MQTT dataflow endpoint.":::
+
+# [Kubernetes](#tab/kubernetes)
+
+You can view the default MQTT broker endpoint settings in the Kubernetes cluster. To view the settings, use the following command:
+
+```bash
+kubectl get dataflowendpoint default -n azure-iot-operations -o yaml
+```
+
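To change the settings in place, one option (a sketch, not a required step) is to open the resource in your default editor:

```bash
kubectl edit dataflowendpoint default -n azure-iot-operations
```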
+# [Bicep](#tab/bicep)
+
+To edit the default endpoint, create a Bicep `.bicep` file with the following content. Update the settings as needed, and replace the placeholder values like `<AIO_INSTANCE_NAME>` with your own.
+
+```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+resource defaultMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' existing = {
+ parent: aioInstance
+ name: 'default'
+ extendedLocation: {
+ name: customLocationName
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'Mqtt'
+ mqttSettings: {
+ authentication: {
+ method: 'ServiceAccountToken'
+ serviceAccountTokenSettings: {
+ audience: 'aio-internal'
+ }
+ }
+ host: 'aio-broker:18883'
+ tls: {
+ mode: 'Enabled'
+ trustedCaCertificateConfigMapRef: 'azure-iot-operations-aio-ca-trust-bundle'
+ }
+ }
+ }
+}
+```
+
+Then, deploy via Azure CLI.
+
+```azurecli
+az stack group create --name MyDeploymentStack --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
+++
+### Create new endpoint
You can also create new local MQTT broker endpoints with custom settings. For example, you can create a new MQTT broker endpoint using a different port, authentication, or other settings.
You can also create new local MQTT broker endpoints with custom settings. For ex
| -- | - |
| Name | The name of the dataflow endpoint. |
| Host | The hostname and port of the MQTT broker. Use the format `<hostname>:<port>` |
- | Authentication method | The method used for authentication. Choose *System assigned managed identity*, or *X509 certificate* |
+ | Authentication method | The method used for authentication. Choose *Service account token*, or *X509 certificate* |
+ | Service audience | The audience for the service account token. Required if using *Service account token*. |
| X509 client certificate | The X.509 client certificate used for authentication. Required if using *X509 certificate*. |
| X509 client key | The private key corresponding to the X.509 client certificate. Required if using *X509 certificate*. |
| X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. Required if using *X509 certificate*. |

# [Kubernetes](#tab/kubernetes)
-To configure an MQTT broker endpoint with default settings, you can omit the host field along with other optional fields.
-
```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowEndpoint
metadata:
- name: mq
+ name: <ENDPOINT_NAME>
  namespace: azure-iot-operations
spec:
  endpointType: Mqtt
  mqttSettings:
+ host: "<HOSTNAME>:<PORT>"
+ tls:
+ mode: Enabled
+ trustedCaCertificateConfigMapRef: <TRUST_BUNDLE>
authentication: method: ServiceAccountToken serviceAccountTokenSettings:
- audience: aio-internal
+ audience: <SA_AUDIENCE>
```
-This configuration creates a connection to the default MQTT broker with the following settings:
--- Host: `aio-broker:18883` through the [default MQTT broker listener](../manage-mqtt-broker/howto-configure-brokerlistener.md#default-brokerlistener)-- Authentication: service account token (SAT) through the [default BrokerAuthentication resource](../manage-mqtt-broker/howto-configure-authentication.md#default-brokerauthentication-resource)-- TLS: Enabled-- Trusted CA certificate: The default CA certificate `aio-ca-key-pair-test-only` from the [Default root CA](../manage-mqtt-broker/howto-configure-tls-auto.md#default-root-ca-and-issuer)-
-> [!IMPORTANT]
-> If any of these default MQTT broker settings change, the dataflow endpoint must be updated to reflect the new settings. For example, if the default MQTT broker listener changes to use a different service name `my-mqtt-broker` and port 8885, you must update the endpoint to use the new host `host: my-mqtt-broker:8885`. Same applies to other settings like authentication and TLS.
- # [Bicep](#tab/bicep)
-This Bicep template file from [Bicep File for MQTT-bridge dataflow Tutorial](https://gist.github.com/david-emakenemi/7a72df52c2e7a51d2424f36143b7da85) deploys the necessary dataflow and dataflow endpoints for MQTT broker and Azure Event Grid.
-
-Download the file to your local, and make sure to replace the values for `customLocationName`, `aioInstanceName`, `eventGridHostName`.
-
-Next, deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
-
-```azurecli
-az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/mqtt-bridge.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
-```
-This endpoint is the source for the dataflow that sends messages to Azure Event Grid.
- ```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param endpointName string = '<ENDPOINT_NAME>'
+param aioBrokerHostName string = '<HOSTNAME>:<PORT>'
+param trustedCA string = '<TRUST_BUNDLE>'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
resource MqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
- parent: aioInstance
- name: 'aiomq'
+ parent: aioInstance
+ name: endpointName
extendedLocation: {
- name: customLocation.id
+ name: customLocationName
    type: 'CustomLocation'
  }
  properties: {
resource MqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowE
      authentication: {
        method: 'ServiceAccountToken'
        serviceAccountTokenSettings: {
- audience: 'aio-internal'
+ audience: '<SA_AUDIENCE>'
} }
- host: 'aio-broker:18883'
+ host: aioBrokerHostName
      tls: {
        mode: 'Enabled'
- trustedCaCertificateConfigMapRef: 'azure-iot-operations-aio-ca-trust-bundle'
+ trustedCaCertificateConfigMapRef: trustedCA
      }
    }
  }
}
```

-- Host: `aio-broker:18883` through the [default MQTT broker listener](../manage-mqtt-broker/howto-configure-brokerlistener.md#default-brokerlistener)
-- Authentication: service account token (SAT) through the [default BrokerAuthentication resource](../manage-mqtt-broker/howto-configure-authentication.md#default-brokerauthentication-resource)
-- TLS: Enabled
-- Trusted CA certificate: The default CA certificate `azure-iot-operations-aio-ca-trust-bundle` from the [Default root CA](../manage-mqtt-broker/howto-configure-tls-auto.md#default-root-ca-and-issuer)
-
-## How to configure a dataflow endpoint for MQTT brokers
+## Azure Event Grid
+
+[Azure Event Grid provides a fully managed MQTT broker](../../event-grid/mqtt-overview.md) that works with Azure IoT Operations dataflows. To configure an Azure Event Grid MQTT broker endpoint, we recommend that you use managed identity for authentication.
+
+### Configure Event Grid namespace
+
+If you haven't done so already, [create Event Grid namespace](../../event-grid/create-view-manage-namespaces.md) first.
+
+#### Enable MQTT
-You can use an MQTT broker dataflow endpoint for dataflow sources and destinations.
+Once you have an Event Grid namespace, go to **Configuration** and check:
-### Azure Event Grid
+- **Enable MQTT**: Select the checkbox.
+- **Maximum client sessions per authentication name**: Set to **3** or more.
+
+The max client sessions option is important so that dataflows can [scale up](howto-configure-dataflow-profile.md) and still be able to connect. To learn more, see [Event Grid MQTT multi-session support](../../event-grid/mqtt-establishing-multiple-sessions-per-client.md).
+
+#### Create a topic space
+
+In order for dataflows to send or receive messages to Event Grid MQTT broker, you need to create at least one topic space in the Event Grid namespace. You can create a topic space in the Event Grid namespace by selecting **Topic spaces** > **New topic space**.
+
+To quickly get started and for testing, you can create a topic space with the wildcard topic `#` as the topic template.
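If you prefer the CLI, a sketch using `az eventgrid namespace topic-space create` follows. The resource group, namespace, and topic space names are placeholders, the broad `#` template is for testing only, and you should confirm the exact parameter names for your CLI version with `--help`.

```azurecli
az eventgrid namespace topic-space create \
  --resource-group <RESOURCE_GROUP> \
  --namespace-name <EVENTGRID_NAMESPACE> \
  --name <TOPIC_SPACE_NAME> \
  --topic-templates "#"
```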
-[Azure Event Grid provides a fully managed MQTT broker](../../event-grid/mqtt-overview.md) that works with Azure IoT Operations dataflows.
+#### Assign permission to managed identity
-To configure an Azure Event Grid MQTT broker endpoint, we recommend that you use managed identity for authentication.
+Now that the topic space is created, you need to assign the managed identity of the Azure IoT Operations Arc extension to the Event Grid namespace or topic space.
+
+In Azure portal, go to the Arc-connected Kubernetes cluster and select **Settings** > **Extensions**. In the extension list, find the name of your Azure IoT Operations extension. Copy the name of the extension.
+
+Then, go to the Event Grid namespace > **Access control (IAM)** > **Add role assignment**. Assign the managed identity of the Azure IoT Operations Arc extension with an appropriate role like `EventGrid TopicSpaces Publisher` or `EventGrid TopicSpaces Subscriber`. This gives the managed identity the necessary permissions to send or receive messages for all topic spaces in the namespace.
+
+Alternatively, you can assign the role at the topic space level. Go to the topic space > **Access control (IAM)** > **Add role assignment**. Assign the managed identity of the Azure IoT Operations Arc extension with an appropriate role like `EventGrid TopicSpaces Publisher` or `EventGrid TopicSpaces Subscriber`. This gives the managed identity the necessary permissions to send or receive messages for the specific topic space.
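As a CLI sketch of the same role assignment (the extension identity's principal ID and the namespace or topic space resource ID are placeholders you look up first):

```azurecli
# Grant the Azure IoT Operations extension's managed identity permission to publish to the topic spaces
az role assignment create \
  --assignee <EXTENSION_PRINCIPAL_ID> \
  --role "EventGrid TopicSpaces Publisher" \
  --scope <EVENTGRID_NAMESPACE_OR_TOPIC_SPACE_RESOURCE_ID>
```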
+
+#### Create dataflow endpoint
+
+Once the Event Grid namespace is configured, you can create a dataflow endpoint for the Event Grid MQTT broker.
# [Portal](#tab/portal)
To configure an Azure Event Grid MQTT broker endpoint, we recommend that you use
| Setting | Description |
| -- | - |
| Name | The name of the dataflow endpoint. |
- | Host | The hostname and port of the MQTT broker. Use the format `<hostname>:<port>` |
- | Authentication method | The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, or *X509 certificate* |
- | Client ID | The client ID of the user-assigned managed identity. Required if using *User assigned managed identity*. |
- | Tenant ID | The tenant ID of the user-assigned managed identity. Required if using *User assigned managed identity*. |
- | X509 client certificate | The X.509 client certificate used for authentication. Required if using *X509 certificate*. |
- | X509 client key | The private key corresponding to the X.509 client certificate. Required if using *X509 certificate*. |
- | X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. Required if using *X509 certificate*. |
+ | Host | The hostname and port of the Event Grid MQTT broker. Use the format `<NAMESPACE>.<REGION>-1.ts.eventgrid.azure.net:8883` |
+ | Authentication method | The method used for authentication. Choose *System assigned managed identity* |
1. Select **Apply** to provision the endpoint.

# [Kubernetes](#tab/kubernetes)
-1. Create an Event Grid namespace and enable MQTT.
-
-1. Get the managed identity of the Azure IoT Operations Arc extension.
-
-1. Assign the managed identity to the Event Grid namespace or topic space with an appropriate role like `EventGrid TopicSpaces Publisher` or `EventGrid TopicSpaces Subscriber`.
-
-1. This endpoint is the destination for the dataflow that receives messages from the default MQTT Broker.
+Create a Kubernetes manifest `.yaml` file with the following content.
```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowEndpoint
metadata:
- name: eventgrid
+ name: <ENDPOINT_NAME>
  namespace: azure-iot-operations
spec:
  endpointType: Mqtt
spec:
      mode: Enabled
```
-# [Bicep](#tab/bicep)
-
-1. Create an Event Grid namespace and enable MQTT.
+Then apply the manifest file to the Kubernetes cluster.
-1. Get the managed identity of the Azure IoT Operations Arc extension.
+```bash
+kubectl apply -f <FILE>.yaml
+```
-1. Assign the managed identity to the Event Grid namespace or topic space with an appropriate role like `EventGrid TopicSpaces Publisher` or `EventGrid TopicSpaces Subscriber`.
+# [Bicep](#tab/bicep)
-1. This endpoint is the destination for the dataflow that receives messages from the default MQTT broker.
+Create a Bicep `.bicep` file with the following content.
```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param endpointName string = '<ENDPOINT_NAME>'
+param eventGridHostName string = '<EVENTGRID_HOSTNAME>'
+
resource remoteMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
  parent: aioInstance
- name: 'eventgrid'
+ name: endpointName
extendedLocation: {
- name: customLocation.id
+ name: customLocationName
    type: 'CustomLocation'
  }
  properties: {
resource remoteMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dat
}
```
+Then, deploy via Azure CLI.
+
+```azurecli
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
+ Once the endpoint is created, you can use it in a dataflow to connect to the Event Grid MQTT broker as a source or destination. The MQTT topics are configured in the dataflow.
When you use X.509 authentication with an Event Grid MQTT broker, go to the Even
The alternative client authentication and maximum client sessions options allow dataflows to use client certificate subject name for authentication instead of `MQTT CONNECT Username`. This capability is important so that dataflows can spawn multiple instances and still be able to connect. To learn more, see [Event Grid MQTT client certificate authentication](../../event-grid/mqtt-client-certificate-authentication.md) and [Multi-session support](../../event-grid/mqtt-establishing-multiple-sessions-per-client.md).
+Then, follow the steps in [X.509 certificate](#x509-certificate) to configure the endpoint with the X.509 certificate settings.
+ #### Event Grid shared subscription limitation
-Azure Event Grid MQTT broker doesn't support shared subscriptions, which means that you can't set the `instanceCount` to more than `1` in the dataflow profile. If you set `instanceCount` greater than `1`, the dataflow fails to start.
+Azure Event Grid MQTT broker doesn't support shared subscriptions, which means that you can't set the `instanceCount` to more than `1` in the dataflow profile if Event Grid is used as a source (where the dataflow subscribes to messages) for a dataflow. In this case, if you set `instanceCount` greater than `1`, the dataflow fails to start.
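As a sketch of the constraint in Kubernetes YAML (assuming the `DataflowProfile` resource kind and `instanceCount` field shown here match your API version), keep the profile that such a dataflow references at a single instance:

```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowProfile
metadata:
  name: default
  namespace: azure-iot-operations
spec:
  # Event Grid MQTT broker doesn't support shared subscriptions, so a dataflow
  # that uses it as a source must run under a profile with a single instance.
  instanceCount: 1
```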
### Other MQTT brokers
spec:
  mqttSettings:
    host: <HOST>:<PORT>
    authentication:
- ...
+ # See available authentication methods below
tls:
- mode: Enabled
+ mode: Enabled # or Disabled
      trustedCaCertificateConfigMapRef: <YOUR-CA-CERTIFICATE-CONFIG-MAP>
```

# [Bicep](#tab/bicep)

```bicep
-properties: {
- endpointType: 'Mqtt'
- mqttSettings: {
- authentication: {
- ...
- }
- host: 'MQTT-BROKER-HOST>:8883'
- tls: {
- mode: 'Enabled'
- trustedCaCertificateConfigMapRef: '<YOUR CA CERTIFICATE CONFIG MAP>'
- }
- }
- }
-```
---
-## Use the endpoint in a dataflow source or destination
-
-Once you've configured the endpoint, you can use it in a dataflow as both a source or a destination. The MQTT topics are configured in the dataflow source or destination settings, which allows you to reuse the same *DataflowEndpoint* resource with multiple dataflows and different MQTT topics.
-
-# [Portal](#tab/portal)
-
-1. In the Azure IoT Operations Preview portal, create a new dataflow or edit an existing dataflow by selecting the **Dataflows** tab. If creating a new dataflow, select **Create dataflow** and replace `<new-dataflow>` with a name for the dataflow.
-1. In the editor, select **MQTT** as the source dataflow endpoint.
-
- Enter the following settings for the source endpoint:
-
- | Setting | Description |
- | - | - |
- | MQTT topic | The topic to which the dataflow subscribes (if source) or publishes (if destination). |
- | Message schema| The schema that defines the structure of the messages being received (if source) or sent (if destination). You can select an existing schema or upload a new schema to the schema registry. |
-
-1. Select the dataflow endpoint for the destination. Choose an existing MQTT dataflow endpoint. For example, the default MQTT Broker endpoint or a custom MQTT broker endpoint.
-1. Select **Proceed** to configure the destination settings.
-1. Enter the MQTT topic to which the dataflow publishes messages.
-1. Select **Apply** to provision the dataflow.
-
- :::image type="content" source="media/howto-configure-mqtt-endpoint/create-dataflow-mq-mq.png" alt-text="Screenshot using operations experience to create a dataflow with an MQTT source and destination.":::
-
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
-metadata:
- name: my-dataflow
- namespace: azure-iot-operations
-spec:
- profileRef: default
- mode: Enabled
- operations:
- - operationType: Source
- sourceSettings:
- endpointRef: mqsource
- dataSources:
- - thermostats/+/telemetry/temperature/#
- - operationType: Destination
- destinationSettings:
- endpointRef: mqdestination
- dataDestination:
- - sensors/thermostats/temperature
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource dataflow 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@2024-08-15-preview' = {
- parent: defaultDataflowProfile
- name: 'my-dataflow'
- extendedLocation: {
- name: customLocation.id
- type: 'CustomLocation'
+mqttSettings: {
+ authentication: {
+ // See available authentication methods below
}
- properties: {
- mode: 'Enabled'
- operations: [
- {
- operationType: 'Source'
- sourceSettings: {
- endpointRef: 'mqsource'
- dataSources: array('thermostats/+/telemetry/temperature/#')
- }
- }
- {
- operationType: 'Destination'
- destinationSettings: {
- endpointRef: 'mqdestination'
- dataDestination: 'sensors/thermostats/temperature'
- }
- }
- ]
+ host: '<MQTT-BROKER-HOST>:8883'
+ tls: {
+ mode: 'Enabled' // or 'Disabled'
+ trustedCaCertificateConfigMapRef: '<YOUR CA CERTIFICATE CONFIG MAP>'
}
-}
+}
```
-For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
-
To customize the MQTT endpoint settings, see the following sections for more information.

### Available authentication methods
mqttSettings:
      {}
```
+# [Bicep](#tab/bicep)
++
+```bicep
+mqttSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+}
+```
+++
If you need to set a different audience, you can specify it in the settings.
+# [Portal](#tab/portal)
+
+Not supported.
+
+# [Kubernetes](#tab/kubernetes)
+
```yaml
mqttSettings:
  authentication:
mqttSettings:
# [Bicep](#tab/bicep) - ```bicep mqttSettings: { authentication: { method: 'SystemAssignedManagedIdentity'
- systemAssignedManagedIdentitySettings: {}
+ systemAssignedManagedIdentitySettings: {
+ audience: 'https://<AUDIENCE>'
+ }
  }
}
```
mqttSettings: {
#### User-assigned managed identity
+To use user-managed identity for authentication, you must first deploy Azure IoT Operations with secure settings enabled. To learn more, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+
+Then, specify the user-assigned managed identity authentication method along with the client ID, tenant ID, and scope of the managed identity.
+ # [Portal](#tab/portal) In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **User assigned managed identity**.
-Enter the user assigned managed identity client ID and tenant ID in the appropriate fields.
+Enter the user assigned managed identity client ID, tenant ID, and scope in the appropriate fields.
# [Kubernetes](#tab/kubernetes)
-To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity..
-
```yaml
mqttSettings:
  authentication:
mqttSettings:
    userAssignedManagedIdentitySettings:
      clientId: <ID>
      tenantId: <ID>
+ scope: <SCOPE>
```

# [Bicep](#tab/bicep)
mqttSettings: {
    userAssignedManagedIdentitySettings: {
      clientId: '<ID>'
      tenantId: '<ID>'
+ scope: '<SCOPE>'
} } }
mqttSettings:
authentication: method: ServiceAccountToken serviceAccountTokenSettings:
- audience: <YOUR-SERVICE-ACCOUNT-AUDIENCE>
+ audience: <YOUR_SERVICE_ACCOUNT_AUDIENCE>
```

# [Bicep](#tab/bicep)
mqttSettings: {
authentication: { method: 'ServiceAccountToken' serviceAccountTokenSettings: {
- audience: '<YOUR-SERVICE-ACCOUNT-AUDIENCE>'
+ audience: '<YOUR_SERVICE_ACCOUNT_AUDIENCE>'
} } }
If the audience isn't specified, the default audience for the Azure IoT Operatio
To use anonymous authentication, set the authentication method to `Anonymous`.
+# [Portal](#tab/portal)
+
+Not yet supported in the operations experience. See [known issues](../troubleshoot/known-issues.md).
+
+# [Kubernetes](#tab/kubernetes)
+ ```yaml mqttSettings: authentication:
mqttSettings:
      {}
```
+# [Bicep](#tab/bicep)
+
+Not yet supported with Bicep. See [known issues](../troubleshoot/known-issues.md).
+ ## Advanced settings
-You can set advanced settings for the MQTT broker dataflow endpoint such as TLS, trusted CA certificate, MQTT messaging settings, and CloudEvents. You can set these settings in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint custom resource.
+You can set advanced settings for the MQTT broker dataflow endpoint such as TLS, trusted CA certificate, MQTT messaging settings, and CloudEvents. You can set these settings in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint custom resource.
# [Portal](#tab/portal)
In the operations experience, select the **Advanced** tab for the dataflow endpo
# [Kubernetes](#tab/kubernetes)
-### TLS
+You can set these settings in the dataflow endpoint manifest file.
-Under `mqttSettings.tls`, you can configure the TLS settings for the MQTT broker. To enable or disable TLS, set the `mode` field to `Enabled` or `Disabled`.
+```yaml
+mqttSettings:
+ qos: 1
+ retain: Keep
+ sessionExpirySeconds: 3600
+ keepAliveSeconds: 60
+ maxInflightMessages: 100
+ protocol: WebSockets
+ clientIdPrefix: dataflow
+ CloudEventAttributes: Propagate # or CreateOrRemap
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ qos: 1
+ retain: 'Keep'
+ sessionExpirySeconds: 3600
+ keepAliveSeconds: 60
+ maxInflightMessages: 100
+ protocol: 'WebSockets'
+ clientIdPrefix: 'dataflow'
+ CloudEventAttributes: 'Propagate' // or 'CreateOrRemap'
+}
+```
++++
+### TLS settings
+
+#### TLS mode
+
+To enable or disable TLS for the MQTT endpoint, update the `mode` setting in the TLS settings.
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the checkbox next to **TLS mode enabled**.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
mqttSettings:
  tls:
- mode: Enabled
+ mode: Enabled # or Disabled
```
-### Trusted CA certificate
+# [Bicep](#tab/bicep)
-To use a trusted CA certificate for the MQTT broker, you can create a Kubernetes ConfigMap with the CA certificate and reference it in the DataflowEndpoint resource.
+```bicep
+mqttSettings: {
+ tls: {
+ mode: 'Enabled' // or 'Disabled'
+ }
+}
+```
+++
+The TLS mode can be set to `Enabled` or `Disabled`. If the mode is set to `Enabled`, the dataflow uses a secure connection to the MQTT broker. If the mode is set to `Disabled`, the dataflow uses an insecure connection to the MQTT broker.
+
+#### Trusted CA certificate
+
+Configure the trusted CA certificate for the MQTT endpoint to establish a secure connection to the MQTT broker. This setting is important if the MQTT broker uses a self-signed certificate or a certificate signed by a custom CA that isn't trusted by default.
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Trusted CA certificate config map** field to specify the ConfigMap containing the trusted CA certificate.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
mqttSettings:
  tls:
- mode: Enabled
- trustedCaCertificateConfigMapRef: <your CA certificate config map>
+ trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
```
-This is useful when the MQTT broker uses a self-signed certificate or a certificate that's not trusted by default. The CA certificate is used to verify the MQTT broker's certificate. In the case of Event Grid, its CA certificate is already widely trusted and so you can omit this setting.
+# [Bicep](#tab/bicep)
-### MQTT messaging settings
+```bicep
+mqttSettings: {
+ tls: {
+ trustedCaCertificateConfigMapRef: '<YOUR_CA_CERTIFICATE>'
+ }
+}
+```
-Under `mqttSettings`, you can configure the MQTT messaging settings for the dataflow MQTT client used with the endpoint.
++
+This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the MQTT dataflow resource. For example:
+
+```bash
+kubectl create configmap client-ca-configmap --from-file root_ca.crt -n azure-iot-operations
+```
-#### Client ID prefix
++
+In the case of Event Grid, its CA certificate is already widely trusted and so you can omit this setting.
+
+### Client ID prefix
You can set a client ID prefix for the MQTT client. The client ID is generated by appending the dataflow instance name to the prefix.
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Client ID prefix** field to specify the prefix.
+
+# [Kubernetes](#tab/kubernetes)
+ ```yaml mqttSettings:
- clientIdPrefix: dataflow
+ clientIdPrefix: <YOUR_PREFIX>
```
-#### QoS
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ clientIdPrefix: '<YOUR_PREFIX>'
+}
+```
+++
+### QoS
You can set the Quality of Service (QoS) level for the MQTT messages to either 1 or 0. The default is 1.
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Quality of service (QoS)** field to specify the QoS level.
+
+# [Kubernetes](#tab/kubernetes)
+ ```yaml mqttSettings:
- qos: 1
+ qos: 1 # Or 0
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ qos: 1 // Or 0
+}
```
-#### Retain
++
+### Retain
Use the `retain` setting to specify whether the dataflow should keep the retain flag on MQTT messages. The default is `Keep`.
+This setting is useful to ensure that the remote broker has the same message as the local broker, which can be important for Unified Namespace scenarios.
+
+If set to `Never`, the retain flag is removed from the MQTT messages. This can be useful when you don't want the remote broker to retain any messages or if the remote broker doesn't support retain.
+
+To configure retain settings:
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Retain** field to specify the retain setting.
+
+# [Kubernetes](#tab/kubernetes)
+ ```yaml mqttSettings:
- retain: Keep
+ retain: Keep # or Never
```
-This setting is useful to ensure that the remote broker has the same message as the local broker, which can be important for Unified Namespace scenarios.
+# [Bicep](#tab/bicep)
-If set to `Never`, the retain flag is removed from the MQTT messages. This can be useful when you don't want the remote broker to retain any messages or if the remote broker doesn't support retain.
+```bicep
+mqttSettings: {
+ retain: 'Keep' // or 'Never'
+}
+```
+++
+The *retain* setting only takes effect if the dataflow uses MQTT endpoint as both source and destination. For example, in an [MQTT bridge](tutorial-mqtt-bridge.md) scenario.
+
+### Session expiry
+
+You can set the session expiry interval for the dataflow MQTT client. The session expiry interval is the maximum time that an MQTT session is maintained if the dataflow client disconnects. The default is 3600 seconds. To configure the session expiry interval:
-The *retain* setting only takes effect if the dataflow uses MQTT endpoint as both source and destination. For example, in an MQTT bridge scenario.
+# [Portal](#tab/portal)
-#### Session expiry
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Session expiry** field to specify the session expiry interval.
-You can set the session expiry interval for the dataflow MQTT client. The session expiry interval is the maximum time that an MQTT session is maintained if the dataflow client disconnects. The default is 3600 seconds.
+# [Kubernetes](#tab/kubernetes)
```yaml mqttSettings: sessionExpirySeconds: 3600 ```
-#### MQTT or WebSockets protocol
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ sessionExpirySeconds: 3600
+}
+```
+++
+### MQTT or WebSockets protocol
By default, WebSockets isn't enabled. To use MQTT over WebSockets, set the `protocol` field to `WebSockets`.
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Protocol** field to specify the protocol.
+
+# [Kubernetes](#tab/kubernetes)
+ ```yaml mqttSettings: protocol: WebSockets ```
-#### Max inflight messages
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ protocol: 'WebSockets'
+}
+```
+++
+### Max inflight messages
You can set the maximum number of inflight messages that the dataflow MQTT client can have. The default is 100.
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Maximum in-flight messages** field to specify the maximum number of inflight messages.
+
+# [Kubernetes](#tab/kubernetes)
+ ```yaml mqttSettings: maxInflightMessages: 100 ```
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ maxInflightMessages: 100
+}
+```
+++ For subscribe when the MQTT endpoint is used as a source, this is the receive maximum. For publish when the MQTT endpoint is used as a destination, this is the maximum number of messages to send before waiting for an acknowledgment.
-#### Keep alive
+### Keep alive
You can set the keep alive interval for the dataflow MQTT client. The keep alive interval is the maximum time that the dataflow client can be idle before sending a PINGREQ message to the broker. The default is 60 seconds.
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Keep alive** field to specify the keep alive interval.
+
+# [Kubernetes](#tab/kubernetes)
+ ```yaml mqttSettings: keepAliveSeconds: 60 ```
-#### CloudEvents
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ keepAliveSeconds: 60
+}
+```
+++
+### CloudEvents
[CloudEvents](https://cloudevents.io/) are a way to describe event data in a common way. The CloudEvents settings are used to send or receive messages in the CloudEvents format. You can use CloudEvents for event-driven architectures where different services need to communicate with each other in the same or different cloud providers.
-The `CloudEventAttributes` options are `Propagate` or`CreateOrRemap`.
+The `CloudEventAttributes` options are `Propagate` or `CreateOrRemap`. To configure CloudEvents settings:
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Cloud event attributes** field to specify the CloudEvents setting.
+
+# [Kubernetes](#tab/kubernetes)
```yaml mqttSettings: CloudEventAttributes: Propagate # or CreateOrRemap ```
-##### Propagate setting
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ CloudEventAttributes: 'Propagate' // or 'CreateOrRemap'
+}
+```
+++
+The following sections provide more information about the CloudEvents settings.
+
+#### Propagate setting
CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the message is passed through as is.
CloudEvent properties are passed through for messages that contain the required
| `datacontenttype` | No | `application/json` | Changed to the output data content type after the optional transform stage. | | `dataschema` | No | `sr://fabrikam-schemas/123123123234234234234234#1.0.0` | If an output data transformation schema is given in the transformation configuration, `dataschema` is changed to the output schema. |
-##### CreateOrRemap setting
+#### CreateOrRemap setting
CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the properties are generated.
CloudEvent properties are passed through for messages that contain the required
| `time` | No | Generated as RFC 3339 in the target client | | `datacontenttype` | No | Changed to the output data content type after the optional transform stage | | `dataschema` | No | Schema defined in the schema registry |--
-# [Bicep](#tab/bicep)
-
-TODO
-
-```bicep
-bicep here
-```
--
iot-operations Howto Create Dataflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-create-dataflow.md
Title: Create a dataflow using Azure IoT Operations
description: Create a dataflow to connect data sources and destinations using Azure IoT Operations. + Last updated 10/08/2024
flowchart LR
To define the source and destination, you need to configure the dataflow endpoints. The transformation is optional and can include operations like enriching the data, filtering the data, and mapping the data to another field.
-This article shows you how to create a dataflow with an example, including the source, transformation, and destination.
+You can use the operations experience in Azure IoT Operations to create a dataflow. The operations experience provides a visual interface to configure the dataflow. You can also use Bicep to create a dataflow using a Bicep template file, or use Kubernetes to create a dataflow using a YAML file.
+
+Continue reading to learn how to configure the source, transformation, and destination.
## Prerequisites -- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md)-- A [configured dataflow profile](howto-configure-dataflow-profile.md)-- [Dataflow endpoints](howto-configure-dataflow-endpoint.md). For example, create a dataflow endpoint for the [local MQTT broker](./howto-configure-mqtt-endpoint.md#azure-iot-operations-local-mqtt-broker). You can use this endpoint for both the source and destination. Or, you can try other endpoints like Kafka, Event Hubs, or Azure Data Lake Storage. To learn how to configure each type of dataflow endpoint, see [Configure dataflow endpoints](howto-configure-dataflow-endpoint.md).
+You can deploy dataflows as soon as you have an instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md) using the default dataflow profile and endpoint. However, you might want to configure dataflow profiles and endpoints to customize the dataflow.
+
+### Dataflow profile
+
+The dataflow profile specifies the number of instances for the dataflows under it to use. If you don't need multiple groups of dataflows with different scaling settings, you can use the default dataflow profile. To learn how to configure a dataflow profile, see [Configure dataflow profiles](howto-configure-dataflow-profile.md).
+
+### Dataflow endpoints
-## Create dataflow
+Dataflow endpoints are required to configure the source and destination for the dataflow. To get started quickly, you can use the [default dataflow endpoint for the local MQTT broker](./howto-configure-mqtt-endpoint.md#default-endpoint). You can also create other types of dataflow endpoints like Kafka, Event Hubs, or Azure Data Lake Storage. To learn how to configure each type of dataflow endpoint, see [Configure dataflow endpoints](howto-configure-dataflow-endpoint.md).
-Once you have dataflow endpoints, you can use them to create a dataflow. Recall that a dataflow is made up of three parts: the source, the transformation, and the destination.
+## Get started
+
+Once you have the prerequisites, you can start to create a dataflow.
# [Portal](#tab/portal)
-To create a dataflow in [operations experience](https://iotoperations.azure.com/), select **Dataflow** > **Create dataflow**.
+To create a dataflow in [operations experience](https://iotoperations.azure.com/), select **Dataflow** > **Create dataflow**. Then, you see the page where you can configure the source, transformation, and destination for the dataflow.
:::image type="content" source="media/howto-create-dataflow/create-dataflow.png" alt-text="Screenshot using operations experience to create a dataflow.":::
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
-The [Bicep File to create Dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) deploys the necessary resources for dataflows.
+Create a Kubernetes manifest `.yaml` file to start creating a dataflow. This example shows the structure of the dataflow containing the source, transformation, and destination configurations.
-1. Download the template file and replace the values for `customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, and `persistentVCName`.
-1. Deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: <DATAFLOW_NAME>
+ namespace: azure-iot-operations
+spec:
+ # Reference to the default dataflow profile
+ # This field is required when configuring via Kubernetes YAML
+ # The syntax is different when using Bicep
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ # See source configuration section
+ - operationType: BuiltInTransformation
+ builtInTransformationSettings:
+ # See transformation configuration section
+ - operationType: Destination
+ destinationSettings:
+ # See destination configuration section
+```
- ```azurecli
- az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/<filename>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
- ```
+# [Bicep](#tab/bicep)
- The overall structure of a dataflow configuration for Bicep is as follows:
+Create a Bicep `.bicep` file to start creating a dataflow. This example shows the structure of the dataflow containing the source, transformation, and destination configurations.
```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param dataflowName string = '<DATAFLOW_NAME>'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+
+resource defaultDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' existing = {
+ parent: aioInstance
+ name: 'default'
+}
+
+// Pointer to the default dataflow profile
+resource defaultDataflowProfile 'Microsoft.IoTOperations/instances/dataflowProfiles@2024-08-15-preview' existing = {
+ parent: aioInstance
+ name: 'default'
+}
+ resource dataflow 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@2024-08-15-preview' = {
+ // Reference to the parent dataflow profile, the default profile in this case
+ // Same usage as profileRef in Kubernetes YAML
parent: defaultDataflowProfile
- name: 'my-dataflow'
+ name: dataflowName
+  extendedLocation: {
+    name: customLocation.id
+    type: 'CustomLocation'
resource dataflow 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@
} ```
-# [Kubernetes](#tab/kubernetes)
-
-The overall structure of a dataflow configuration is as follows:
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
-metadata:
- name: my-dataflow
- namespace: azure-iot-operations
-spec:
- profileRef: default
- mode: Enabled
- operations:
- - operationType: Source
- sourceSettings:
- # See source configuration section
- - operationType: BuiltInTransformation
- builtInTransformationSettings:
- # See transformation configuration section
- - operationType: Destination
- destinationSettings:
- # See destination configuration section
-```
-
-<!-- TODO: link to API reference -->
- Review the following sections to learn how to configure the operation types of the dataflow.
-## Configure a source with a dataflow endpoint to get data
+## Source
-To configure a source for the dataflow, specify the endpoint reference and data source. You can specify a list of data sources for the endpoint.
+To configure a source for the dataflow, specify the endpoint reference and a list of data sources for the endpoint.
-# [Portal](#tab/portal)
+### Use Asset as source
-### Use Asset as a source
+# [Portal](#tab/portal)
You can use an [asset](../discover-manage-assets/overview-manage-assets.md) as the source for the dataflow. Using an asset as a source is only available in the operations experience.
You can use an [asset](../discover-manage-assets/overview-manage-assets.md) as t
1. Select **Apply** to use the asset as the source endpoint.
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
Configuring an asset as a source is only available in the operations experience.
-# [Kubernetes](#tab/kubernetes)
+# [Bicep](#tab/bicep)
Configuring an asset as a source is only available in the operations experience.
-### Use MQTT as a source
+### Use default MQTT endpoint as source
# [Portal](#tab/portal)

1. Under **Source details**, select **MQTT**.
-1. Enter the **MQTT Topic** that you want to listen to for incoming messages.
-1. Choose a **Message schema** from the dropdown list or upload a new schema. If the source data has optional fields or fields with different types, specify a deserialization schema to ensure consistency. For example, the data might have fields that aren't present in all messages. Without the schema, the transformation can't handle these fields as they would have empty values. With the schema, you can specify default values or ignore the fields.
:::image type="content" source="media/howto-create-dataflow/dataflow-source-mqtt.png" alt-text="Screenshot using operations experience to select MQTT as the source endpoint.":::
+1. Enter the following settings for the MQTT source:
+
+ | Setting | Description |
+ | -- | - |
+ | MQTT topic | The MQTT topic filter to subscribe to for incoming messages. See [Configure MQTT or Kafka topics](#configure-data-sources-mqtt-or-kafka-topics). |
+ | Message schema | The schema to use to deserialize the incoming messages. See [Specify schema to deserialize data](#specify-schema-to-deserialize-data). |
+ 1. Select **Apply**.
+# [Kubernetes](#tab/kubernetes)
+
+For example, to configure a source using an MQTT endpoint and two MQTT topic filters, use the following configuration:
+
+```yaml
+sourceSettings:
+ endpointRef: default
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - humidifiers/+/telemetry/humidity/#
+```
+
+Because `dataSources` allows you to specify MQTT or Kafka topics without modifying the endpoint configuration, you can reuse the endpoint for multiple dataflows even if the topics are different. To learn more, see [Configure data sources](#configure-data-sources-mqtt-or-kafka-topics).
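For instance, a second dataflow could reuse the same default endpoint and change only the topic filters. This is a minimal sketch; the topic names are illustrative and not taken from the product docs:

```yaml
# Source of another dataflow that reuses the same 'default' endpoint.
# Only the dataSources list differs (topic names are illustrative).
sourceSettings:
  endpointRef: default
  dataSources:
    - valves/+/telemetry/pressure/#
```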
+# [Bicep](#tab/bicep)
+
+The MQTT endpoint is configured in the Bicep template file. For example, the following endpoint is a source for the dataflow.
+
+```bicep
-{
- operationType: 'Source'
- sourceSettings: {
- endpointRef: MqttBrokerDataflowEndpoint.name // Reference to the 'mq' endpoint
- dataSources: [
- 'azure-iot-operations/data/thermostat', // MQTT topic for thermostat temperature data
- 'humidifiers/+/telemetry/humidity/#' // MQTT topic for humidifier humidity data
-
- ]
- }
+sourceSettings: {
+  endpointRef: defaultDataflowEndpoint.name
+ dataSources: [
+ 'thermostats/+/telemetry/temperature/#',
+ 'humidifiers/+/telemetry/humidity/#'
+ ]
} ```
-The `dataSources` setting is an array of MQTT topics that define the data source. In this example, `azure-iot-operations/data/thermostat` refers to one of the topics in the dataSources array where thermostat data is published.
+Here, `dataSources` allow you to specify multiple MQTT or Kafka topics without needing to modify the endpoint configuration. This means the same endpoint can be reused across multiple dataflows, even if the topics vary. To learn more, see [Configure data sources](#configure-data-sources-mqtt-or-kafka-topics).
-Datasources allow you to specify multiple MQTT or Kafka topics without needing to modify the endpoint configuration. This means the same endpoint can be reused across multiple dataflows, even if the topics vary. For more information, see [Reuse dataflow endpoints](./howto-configure-dataflow-endpoint.md#reuse-endpoints).
+
-<!-- TODO: Put the right article link here -->
-For more information about creating an MQTT endpoint as a dataflow source, see [MQTT Endpoint](howto-configure-mqtt-endpoint.md).
+For more information about the default MQTT endpoint and creating an MQTT endpoint as a dataflow source, see [MQTT Endpoint](howto-configure-mqtt-endpoint.md).
-# [Kubernetes](#tab/kubernetes)
+### Use custom MQTT or Kafka dataflow endpoint as source
-For example, to configure a source using an MQTT endpoint and two MQTT topic filters, use the following configuration:
+If you created a custom MQTT or Kafka dataflow endpoint (for example, to use with Event Grid or Event Hubs), you can use it as the source for the dataflow. Remember that storage type endpoints, like Azure Data Lake Storage or Fabric OneLake, can't be used as a source.
+
+To configure, use Kubernetes YAML or Bicep. Replace placeholder values with your custom endpoint name and topics.
+
+# [Portal](#tab/portal)
+
+Using a custom MQTT or Kafka endpoint as a source is currently not supported in the operations experience.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
sourceSettings:
- endpointRef: mq
+ endpointRef: <CUSTOM_ENDPOINT_NAME>
dataSources:
- - thermostats/+/telemetry/temperature/#
- - humidifiers/+/telemetry/humidity/#
+ - <TOPIC_1>
+ - <TOPIC_2>
+ # See section on configuring MQTT or Kafka topics for more information
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+sourceSettings: {
+  endpointRef: '<CUSTOM_ENDPOINT_NAME>'
+ dataSources: [
+ '<TOPIC_1>',
+ '<TOPIC_2>'
+ // See section on configuring MQTT or Kafka topics for more information
+ ]
+}
```
-Because `dataSources` allows you to specify MQTT or Kafka topics without modifying the endpoint configuration, you can reuse the endpoint for multiple dataflows even if the topics are different. To learn more, see [Reuse dataflow endpoints](./howto-configure-dataflow-endpoint.md#reuse-endpoints).
+
-<!-- TODO: link to API reference -->
+### Configure data sources (MQTT or Kafka topics)
-#### Specify schema to deserialize data
+You can specify multiple MQTT or Kafka topics in a source without needing to modify the dataflow endpoint configuration. This means the same endpoint can be reused across multiple dataflows, even if the topics vary. For more information, see [Reuse dataflow endpoints](./howto-configure-dataflow-endpoint.md#reuse-endpoints).
-If the source data has optional fields or fields with different types, specify a deserialization schema to ensure consistency. For example, the data might have fields that aren't present in all messages. Without the schema, the transformation can't handle these fields as they would have empty values. With the schema, you can specify default values or ignore the fields.
+#### MQTT topics
+
+When the source is an MQTT (Event Grid included) endpoint, you can use the MQTT topic filter to subscribe to incoming messages. The topic filter can include wildcards to subscribe to multiple topics. For example, `thermostats/+/telemetry/temperature/#` subscribes to all temperature telemetry messages from thermostats. To configure the MQTT topic filters:
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow **Source details**, select **MQTT**, then use the **MQTT topic** field to specify the MQTT topic filter to subscribe to for incoming messages.
+
+> [!NOTE]
+> Only one MQTT topic filter can be specified in the operations experience. To use multiple MQTT topic filters, use Bicep or Kubernetes.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+sourceSettings:
+ endpointRef: <MQTT_ENDPOINT_NAME>
+ dataSources:
+ - <MQTT_TOPIC_FILTER_1>
+ - <MQTT_TOPIC_FILTER_2>
+ # Add more MQTT topic filters as needed
+```
+
+Example with multiple MQTT topic filters with wildcards:
```yaml
-spec:
- operations:
- - operationType: Source
- sourceSettings:
- serializationFormat: Json
- schemaRef: aio-sr://exampleNamespace/exampleAvroSchema:1.0.0
+sourceSettings:
+ endpointRef: default
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - humidifiers/+/telemetry/humidity/#
```
-To specify the schema, create the file and store it in the schema registry.
+Here, the wildcard `+` is used to select all devices under the `thermostats` and `humidifiers` topics. The `#` wildcard is used to select all telemetry messages under all subtopics of the `temperature` and `humidity` topics.
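To make the wildcard behavior concrete, here are some hypothetical topic names and whether the first filter matches them (illustrative only):

```yaml
# thermostats/+/telemetry/temperature/# matches, for example:
#   thermostats/thermostat1/telemetry/temperature          ('+' matches one level, '#' matches zero or more)
#   thermostats/thermostat2/telemetry/temperature/zone1    ('#' also matches deeper levels)
# It doesn't match:
#   thermostats/floor1/thermostat1/telemetry/temperature   ('+' matches exactly one level)
```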
-```json
-{
- "$schema": "http://json-schema.org/draft-07/schema#",
- "name": "Temperature",
- "description": "Schema for representing an asset's key attributes",
- "type": "object",
- "required": [ "deviceId", "asset_name"],
- "properties": {
- "deviceId": {
- "type": "string"
- },
- "temperature": {
- "type": "double"
- },
- "serial_number": {
- "type": "string"
- },
- "production_date": {
- "type": "string",
- "description": "Event duration"
- },
- "asset_name": {
- "type": "string",
- "description": "Name of asset"
- },
- "location": {
- "type": "string",
- },
- "manufacturer": {
- "type": "string",
- "description": "Name of manufacturer"
- }
- }
+# [Bicep](#tab/bicep)
+
+```bicep
+sourceSettings: {
+  endpointRef: '<MQTT_ENDPOINT_NAME>'
+ dataSources: [
+ '<MQTT_TOPIC_FILTER_1>',
+ '<MQTT_TOPIC_FILTER_2>'
+ // Add more MQTT topic filters as needed
+ ]
+}
+```
+
+Example with multiple MQTT topic filters with wildcards:
+
+```bicep
+sourceSettings: {
+  endpointRef: 'default'
+ dataSources: [
+ 'thermostats/+/telemetry/temperature/#',
+ 'humidifiers/+/telemetry/humidity/#'
+ ]
} ```
+Here, the wildcard `+` is used to select all devices under the `thermostats` and `humidifiers` topics. The `#` wildcard is used to select all telemetry messages under all subtopics of the `temperature` and `humidity` topics.
+++
+#### Kafka topics
+
+When the source is a Kafka (Event Hubs included) endpoint, specify the individual Kafka topics to subscribe to for incoming messages. Wildcards aren't supported, so specify each topic statically.
+ > [!NOTE]
-> The only supported serialization format is JSON. The schema is optional.
+> When using Event Hubs via the Kafka endpoint, each individual event hub within the namespace is the Kafka topic. For example, if you have an Event Hubs namespace with two event hubs, `thermostats` and `humidifiers`, you can specify each event hub as a Kafka topic.
-For more information about schema registry, see [Understand message schemas](concept-schema-registry.md).
+To configure the Kafka topics:
-#### Shared subscriptions
+# [Portal](#tab/portal)
-<!-- TODO: may not be final -->
+Using a Kafka endpoint as a source is currently not supported in the operations experience.
-To use shared subscriptions with MQTT sources, you can specify the shared subscription topic in the form of `$shared/<subscription-group>/<topic>`.
+# [Kubernetes](#tab/kubernetes)
```yaml
sourceSettings:
+ endpointRef: <KAFKA_ENDPOINT_NAME>
dataSources:
- - $shared/myGroup/thermostats/+/telemetry/temperature/#
+ - <KAFKA_TOPIC_1>
+ - <KAFKA_TOPIC_2>
+ # Add more Kafka topics as needed
```
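For example, assuming a Kafka dataflow endpoint named `my-eh-endpoint` for an Event Hubs namespace that contains the two event hubs mentioned in the preceding note (the endpoint name is hypothetical), the source might look like:

```yaml
sourceSettings:
  endpointRef: my-eh-endpoint   # hypothetical Kafka (Event Hubs) dataflow endpoint
  dataSources:
    # Each event hub in the namespace is addressed as a Kafka topic
    - thermostats
    - humidifiers
```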
-> [!NOTE]
-> If the instance count in the [dataflow profile](howto-configure-dataflow-profile.md) is greater than 1, then the shared subscription topic must be used.
+# [Bicep](#tab/bicep)
-<!-- TODO: Details -->
+```bicep
+sourceSettings: {
+  endpointRef: '<KAFKA_ENDPOINT_NAME>'
+ dataSources: [
+ '<KAFKA_TOPIC_1>',
+ '<KAFKA_TOPIC_2>'
+ // Add more Kafka topics as needed
+ ]
+}
+```
-## Configure transformation to process data
+### Specify schema to deserialize data
-The transformation operation is where you can transform the data from the source before you send it to the destination. Transformations are optional. If you don't need to make changes to the data, don't include the transformation operation in the dataflow configuration. Multiple transformations are chained together in stages regardless of the order in which they're specified in the configuration. The order of the stages is always
+If the source data has optional fields or fields with different types, specify a deserialization schema to ensure consistency. For example, the data might have fields that aren't present in all messages. Without the schema, the transformation can't handle these fields as they would have empty values. With the schema, you can specify default values or ignore the fields.
-1. **Enrich**: Add additional data to the source data given a dataset and condition to match.
-1. **Filter**: Filter the data based on a condition.
-1. **Map**: Move data from one field to another with an optional conversion.
+Specifying the schema is only relevant when using the MQTT or Kafka source. If the source is an asset, the schema is automatically inferred from the asset definition.
+
+To configure the schema used to deserialize the incoming messages from a source:
# [Portal](#tab/portal)
-In the operations experience, select **Dataflow** > **Add transform (optional)**.
+In the operations experience dataflow **Source details**, select **MQTT**, and use the **Message schema** field to specify the schema. You can use the **Upload** button to upload a schema file first. To learn more, see [Understand message schemas](concept-schema-registry.md).
+# [Kubernetes](#tab/kubernetes)
-# [Bicep](#tab/bicep)
+Once you have used the [schema registry to store the schema](concept-schema-registry.md), you can reference it in the dataflow configuration.
-```bicep
-{
- operationType: 'BuiltInTransformation'
- builtInTransformationSettings: {
- map: [
- // ...
- ]
- filter: [
- // ...
- ]
- }
-}
+```yaml
+sourceSettings:
+ serializationFormat: Json
+ schemaRef: aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA_NAME>:<VERSION>
```
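For example, assuming you registered a JSON schema named `Temperature` under a schema registry namespace called `exampleNamespace` as version `1.0.0` (the names and version are illustrative), the reference might look like:

```yaml
sourceSettings:
  serializationFormat: Json
  schemaRef: aio-sr://exampleNamespace/Temperature:1.0.0   # illustrative namespace, schema name, and version
```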
-### Specify output schema to transform data
+# [Bicep](#tab/bicep)
-The following configuration demonstrates how to define an output schema in your Bicep file. In this example, the schema defines fields such as `asset_id`, `asset_name`, `location`, `temperature`, `manufacturer`, `production_date`, and `serial_number`. Each field is assigned a data type and marked as non-nullable. The assignment ensures all incoming messages contain these fields with valid data.
+Once you have used the [schema registry to store the schema](concept-schema-registry.md), you can reference it in the dataflow configuration.
```bicep
-var assetDeltaSchema = '''
-{
- "$schema": "Delta/1.0",
- "type": "object",
- "properties": {
- "type": "struct",
- "fields": [
- { "name": "asset_id", "type": "string", "nullable": false, "metadata": {} },
- { "name": "asset_name", "type": "string", "nullable": false, "metadata": {} },
- { "name": "location", "type": "string", "nullable": false, "metadata": {} },
- { "name": "manufacturer", "type": "string", "nullable": false, "metadata": {} },
- { "name": "production_date", "type": "string", "nullable": false, "metadata": {} },
- { "name": "serial_number", "type": "string", "nullable": false, "metadata": {} },
- { "name": "temperature", "type": "double", "nullable": false, "metadata": {} }
- ]
- }
+sourceSettings: {
+  serializationFormat: 'Json'
+  schemaRef: 'aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA_NAME>:<VERSION>'
}
-'''
```
-The following Bicep configuration registers the schema with the Azure Schema Registry. This configuration creates a schema definition and assigns it a version within the schema registry, allowing it to be referenced later in your data transformations.
+
-```bicep
-param opcuaSchemaName string = 'opcua-output-delta'
-param opcuaSchemaVer string = '1'
-resource opcSchema 'Microsoft.DeviceRegistry/schemaRegistries/schemas@2024-09-01-preview' = {
- parent: schemaRegistry
- name: opcuaSchemaName
- properties: {
- displayName: 'OPC UA Delta Schema'
- description: 'This is a OPC UA delta Schema'
- format: 'Delta/1.0'
- schemaType: 'MessageSchema'
- }
-}
+#### Shared subscriptions
-resource opcuaSchemaInstance 'Microsoft.DeviceRegistry/schemaRegistries/schemas/schemaVersions@2024-09-01-preview' = {
- parent: opcSchema
- name: opcuaSchemaVer
- properties: {
- description: 'Schema version'
- schemaContent: opcuaSchemaContent
- }
-}
-```
+<!-- TODO: may not be final -->
+
+To use shared subscriptions with MQTT sources, you can specify the shared subscription topic in the form of `$shared/<GROUP_NAME>/<TOPIC_FILTER>`.
+
+# [Portal](#tab/portal)
+
+In the operations experience dataflow **Source details**, select **MQTT**, and use the **MQTT topic** field to specify the shared subscription group and topic.
# [Kubernetes](#tab/kubernetes)

```yaml
-builtInTransformationSettings:
- datasets:
- # ...
- filter:
- # ...
- map:
- # ...
+sourceSettings:
+ dataSources:
+ - $shared/<GROUP_NAME>/<TOPIC_FILTER>
```
-<!-- TODO: link to API reference -->
+# [Bicep](#tab/bicep)
+```bicep
+sourceSettings: {
+ dataSources: [
+ '$shared/<GROUP_NAME>/<TOPIC_FILTER>'
+ ]
+}
+```
-### Enrich: Add reference data
+> [!NOTE]
+> If the instance count in the [dataflow profile](howto-configure-dataflow-profile.md) is greater than 1, then the shared subscription topic prefix is automatically added to the topic filter.
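For example, with the temperature topic filter used earlier and a hypothetical subscription group named `myGroup`:

```yaml
sourceSettings:
  dataSources:
    # The shared subscription group name 'myGroup' is illustrative
    - $shared/myGroup/thermostats/+/telemetry/temperature/#
```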
-To enrich the data, you can use the reference dataset in the Azure IoT Operations [distributed state store (DSS)](../create-edge-apps/concept-about-state-store-protocol.md). The dataset is used to add extra data to the source data based on a condition. The condition is specified as a field in the source data that matches a field in the dataset.
+<!-- TODO: Details -->
+
+## Transformation
-Key names in the distributed state store correspond to a dataset in the dataflow configuration.
+The transformation operation is where you can transform the data from the source before you send it to the destination. Transformations are optional. If you don't need to make changes to the data, don't include the transformation operation in the dataflow configuration. Multiple transformations are chained together in stages regardless of the order in which they're specified in the configuration. The order of the stages is always:
+
+1. **Enrich**: Add additional data to the source data given a dataset and condition to match.
+1. **Filter**: Filter the data based on a condition.
+1. **Map**: Move data from one field to another with an optional conversion.
# [Portal](#tab/portal)
-Currently, the enrich operation isn't available in the operations experience.
+In the operations experience, select **Dataflow** > **Add transform (optional)**.
-# [Bicep](#tab/bicep)
-This example shows how you could use the `deviceId` field in the source data to match the `asset` field in the dataset:
+# [Kubernetes](#tab/kubernetes)
-```bicep
-builtInTransformationSettings: {
- datasets: [
- {
- key: 'assetDataset'
- inputs: [
- '$source.deviceId', // Reference to the device ID from the source
- '$context(assetDataset).asset' // Reference to the asset from the dataset context
- ]
- expression: '$1 == $2' // Expression to evaluate the inputs
- }
- ]
-}
+```yaml
+builtInTransformationSettings:
+ datasets:
+ # See section on enriching data
+ filter:
+ # See section on filtering data
+ map:
+ # See section on mapping data
```
-### Passthrough operation
-
-For example, you could apply a passthrough operation that takes all the input fields and maps them to the output field, essentially passing through all fields.
+# [Bicep](#tab/bicep)
```bicep
builtInTransformationSettings: {
+ datasets: [
+ // See section on enriching data
+ ]
+ filter: [
+ // See section on filtering data
+ ]
map: [
- {
- inputs: array('*')
- output: '*'
- }
+ // See section on mapping data
] } ``` ++
+### Enrich: Add reference data
+
+To enrich the data, you can use the reference dataset in the Azure IoT Operations [distributed state store (DSS)](../create-edge-apps/concept-about-state-store-protocol.md). The dataset is used to add extra data to the source data based on a condition. The condition is specified as a field in the source data that matches a field in the dataset.
+
+You can load sample data into the DSS by using the [DSS set tool sample](https://github.com/Azure-Samples/explore-iot-operations/tree/main/samples/dss_set). Key names in the distributed state store correspond to a dataset in the dataflow configuration.
+
+# [Portal](#tab/portal)
+
+Currently, the enrich operation isn't available in the operations experience.
# [Kubernetes](#tab/kubernetes)

For example, you could use the `deviceId` field in the source data to match the `asset` field in the dataset:
If the dataset has a record with the `asset` field, similar to:
} ```
-The data from the source with the `deviceId` field matching `thermostat1` has the `location` and `manufacturer` fields available in `filter` and `map` stages.
+The data from the source with the `deviceId` field matching `thermostat1` has the `location` and `manufacturer` fields available in filter and map stages.
-<!-- TODO: link to API reference -->
+# [Bicep](#tab/bicep)
+
+This example shows how you could use the `deviceId` field in the source data to match the `asset` field in the dataset:
+
+```bicep
+builtInTransformationSettings: {
+ datasets: [
+ {
+ key: 'assetDataset'
+ inputs: [
+ '$source.deviceId', // $1
+ '$context(assetDataset).asset' // - $2
+ ]
+ expression: '$1 == $2'
+ }
+ ]
+}
+```
+
+If the dataset has a record with the `asset` field, similar to:
+
+```json
+{
+ "asset": "thermostat1",
+ "location": "room1",
+ "manufacturer": "Contoso"
+}
+```
+
+The data from the source with the `deviceId` field matching `thermostat1` has the `location` and `manufacturer` fields available in filter and map stages.
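To make this concrete, here's a sketch with an illustrative incoming message (the message values are assumed, not from the product docs):

```yaml
# Illustrative source message:
#   deviceId: thermostat1
#   temperature: 75
#
# Because deviceId matches the dataset's asset value, later filter and map stages
# can also reference the enrichment fields, for example:
#   $context(assetDataset).location      -> room1
#   $context(assetDataset).manufacturer  -> Contoso
```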
-You can load sample data into the DSS by using the [DSS set tool sample](https://github.com/Azure-Samples/explore-iot-operations/tree/main/samples/dss_set).
+<!-- Why would a passthrough operation be needed? Just omit the transform section? -->
+
+<!-- ### Passthrough operation
+
+For example, you could apply a passthrough operation that takes all the input fields and maps them to the output field, essentially passing through all fields.
+
+```bicep
+builtInTransformationSettings: {
+ map: [
+ {
+ inputs: array('*')
+ output: '*'
+ }
+ ]
+}
+``` -->
+ For more information about condition syntax, see [Enrich data by using dataflows](concept-dataflow-enrich.md) and [Convert data using dataflows](concept-dataflow-conversions.md).
To filter the data on a condition, you can use the `filter` stage. The condition
1. Select **Apply**.
+For example, you could use a filter condition like `temperature > 20` to filter out data where the temperature value is less than or equal to 20.
+
+# [Kubernetes](#tab/kubernetes)
+
+For example, you could use the `temperature` field in the source data to filter the data:
+
+```yaml
+builtInTransformationSettings:
+ filter:
+ - inputs:
+ - temperature ? $last # - $1
+ expression: "$1 > 20"
+```
+
+If the `temperature` field is greater than 20, the data is passed to the next stage. If the `temperature` field is less than or equal to 20, the data is filtered.
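As a quick illustration with assumed sample values:

```yaml
# Filter expression "$1 > 20" applied to sample messages (values are illustrative):
#   temperature: 25   -> passed to the next stage
#   temperature: 18   -> filtered out
```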
+ # [Bicep](#tab/bicep) For example, you could use the `temperature` field in the source data to filter the data:
builtInTransformationSettings: {
filter: [ { inputs: [
- 'temperature ? $last' // Reference to the last temperature value, if available
+ 'temperature ? $last'
]
- expression: '$1 > 20' // Expression to filter based on the temperature value
+ expression: '$1 > 20'
} ] } ```
-# [Kubernetes](#tab/kubernetes)
-
-For example, you could use the `temperature` field in the source data to filter the data:
-
-```yaml
-builtInTransformationSettings:
- filter:
- - inputs:
- - temperature ? $last # - $1
- expression: "$1 > 20"
-```
- If the `temperature` field is greater than 20, the data is passed to the next stage. If the `temperature` field is less than or equal to 20, the data is filtered.
-<!-- TODO: link to API reference -->
- ### Map: Move data from one field to another
In the operations experience, mapping is currently supported using **Compute** t
1. Select **Apply**.
+# [Kubernetes](#tab/kubernetes)
+
+For example, you could use the `temperature` field in the source data to convert the temperature to Celsius and store it in the `temperatureCelsius` field. You could also enrich the source data with the `location` field from the contextualization dataset:
+
+```yaml
+builtInTransformationSettings:
+ map:
+ - inputs:
+ - temperature # - $1
+ expression: "($1 - 32) * 5/9"
+ output: temperatureCelsius
+ - inputs:
+ - $context(assetDataset).location
+ output: location
+```
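For instance, with an assumed input temperature of 212 degrees Fahrenheit and the dataset record shown earlier, the mapped output fields would be:

```yaml
# Illustrative input:
#   temperature: 212
# Output fields after the map stage:
#   temperatureCelsius: 100   # (212 - 32) * 5/9
#   location: room1           # from $context(assetDataset).location
```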
+ # [Bicep](#tab/bicep) For example, you could use the `temperature` field in the source data to convert the temperature to Celsius and store it in the `temperatureCelsius` field. You could also enrich the source data with the `location` field from the contextualization dataset:
builtInTransformationSettings: {
map: [ { inputs: [
- 'temperature' // Reference to the temperature input
+ 'temperature'
]
- output: 'temperatureCelsius' // Output variable for the converted temperature
- expression: '($1 - 32) * 5/9' // Expression to convert Fahrenheit to Celsius
+ output: 'temperatureCelsius'
+ expression: '($1 - 32) * 5/9'
} { inputs: [
- '$context(assetDataset).location' // Reference to the location from the dataset context
+ '$context(assetDataset).location'
]
- output: 'location' // Output variable for the location
+ output: 'location'
} ] } ```
-# [Kubernetes](#tab/kubernetes)
-
-For example, you could use the `temperature` field in the source data to convert the temperature to Celsius and store it in the `temperatureCelsius` field. You could also enrich the source data with the `location` field from the contextualization dataset:
-
-```yaml
-builtInTransformationSettings:
- map:
- - inputs:
- - temperature # - $1
- output: temperatureCelsius
- expression: "($1 - 32) * 5/9"
- - inputs:
- - $context(assetDataset).location
- output: location
-```
-
-<!-- TODO: link to API reference -->
- To learn more, see [Map data by using dataflows](concept-dataflow-mapping.md) and [Convert data by using dataflows](concept-dataflow-conversions.md). ### Serialize data according to a schema
-If you want to serialize the data before sending it to the destination, you need to specify a schema and serialization format. Otherwise, the data is serialized in JSON with the types inferred. Remember that storage endpoints like Microsoft Fabric or Azure Data Lake require a schema to ensure data consistency.
+If you want to serialize the data before sending it to the destination, you need to specify a schema and serialization format. Otherwise, the data is serialized in JSON with the types inferred. Storage endpoints like Microsoft Fabric or Azure Data Lake require a schema to ensure data consistency. Supported serialization formats are Parquet and Delta.
# [Portal](#tab/portal)
-Specify the **Output** schema when you add the destination dataflow endpoint.
+Currently, specifying the output schema and serialization isn't supported in the operations experience.
+
+# [Kubernetes](#tab/kubernetes)
+
+Once you [upload a schema to the schema registry](concept-schema-registry.md#upload-schema), you can reference it in the dataflow configuration.
+
+```yaml
+builtInTransformationSettings:
+ serializationFormat: Delta
+ schemaRef: aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA>:<VERSION>
+```
# [Bicep](#tab/bicep)
-When the dataflow resource is created, it includes a `schemaRef` value that points to the generated schema stored in the schema registry. It can be referenced in transformations which creates a new schema in Delta format.
-This [Bicep File to create Dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) provides a streamlined approach to provisioning the dataflow with schema integration.
+Once you [upload a schema to the schema registry](concept-schema-registry.md#upload-schema), you can reference it in the dataflow configuration.
```bicep
-{
- operationType: 'BuiltInTransformation'
- builtInTransformationSettings: {
- // ..
- schemaRef: 'aio-sr://${opcuaSchemaName}:${opcuaSchemaVer}'
- serializationFormat: 'Parquet'
- }
+builtInTransformationSettings: {
+  serializationFormat: 'Delta'
+  schemaRef: 'aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA>:<VERSION>'
} ``` ++ For more information about schema registry, see [Understand message schemas](concept-schema-registry.md).
-# [Kubernetes](#tab/kubernetes)
+## Destination
+To configure a destination for the dataflow, specify the endpoint reference and data destination. You can specify a list of data destinations for the endpoint.
+
+To send data to a destination other than the local MQTT broker, create a dataflow endpoint. To learn how, see [Configure dataflow endpoints](howto-configure-dataflow-endpoint.md).
+
+> [!IMPORTANT]
+> Storage endpoints require a schema reference. If you created storage destination endpoints for Microsoft Fabric OneLake, ADLS Gen 2, Azure Data Explorer, or Local Storage, you must specify a schema reference.
+
+# [Portal](#tab/portal)
+
+1. Select the dataflow endpoint to use as the destination.
+
+ :::image type="content" source="media/howto-create-dataflow/dataflow-destination.png" alt-text="Screenshot using operations experience to select Event Hubs destination endpoint.":::
+
+1. Select **Proceed** to configure the destination.
+1. Enter the required settings for the destination, including the topic or table to send the data to. See [Configure data destination (topic, container, or table)](#configure-data-destination-topic-container-or-table) for more information.
+
+# [Kubernetes](#tab/kubernetes)
```yaml
-builtInTransformationSettings:
- serializationFormat: Parquet
- schemaRef: aio-sr://<NAMESPACE>/<SCHEMA>:<VERSION>
+destinationSettings:
+ endpointRef: <CUSTOM_ENDPOINT_NAME>
+ dataDestination: <TOPIC_OR_TABLE> # See section on configuring data destination
```
-To specify the schema, you can create a Schema custom resource with the schema definition.
-
-For more information about schema registry, see [Understand message schemas](concept-schema-registry.md).
+# [Bicep](#tab/bicep)
-```json
-{
- "$schema": "http://json-schema.org/draft-07/schema#",
- "name": "Temperature",
- "description": "Schema for representing an asset's key attributes",
- "type": "object",
- "required": [ "deviceId", "asset_name"],
- "properties": {
- "deviceId": {
- "type": "string"
- },
- "temperature": {
- "type": "double"
- },
- "serial_number": {
- "type": "string"
- },
- "production_date": {
- "type": "string",
- "description": "Event duration"
- },
- "asset_name": {
- "type": "string",
- "description": "Name of asset"
- },
- "location": {
- "type": "string",
- },
- "manufacturer": {
- "type": "string",
- "description": "Name of manufacturer"
- }
- }
+```bicep
+destinationSettings: {
+  endpointRef: '<CUSTOM_ENDPOINT_NAME>'
+  dataDestination: '<TOPIC_OR_TABLE>' // See section on configuring data destination
} ```
-Supported serialization formats are JSON, Parquet, and Delta.
+### Configure data destination (topic, container, or table)
-## Configure destination with a dataflow endpoint to send data
+Similar to data sources, the data destination is a concept that keeps dataflow endpoints reusable across multiple dataflows. Essentially, it represents the subpath within the dataflow endpoint configuration, such as the topic, container, or table that the data is sent to. For example, if the dataflow endpoint is a storage endpoint, the data destination is the table in the storage account. If the dataflow endpoint is a Kafka endpoint, the data destination is the Kafka topic.
-To configure a destination for the dataflow, specify the endpoint reference and data destination. You can specify a list of data destinations for the endpoint.
+| Endpoint type | Data destination meaning | Description |
+| - | - | - |
+| MQTT (or Event Grid) | Topic | The MQTT topic where the data is sent. Only static topics are supported, no wildcards. |
+| Kafka (or Event Hubs) | Topic | The Kafka topic where the data is sent. Only static topics are supported, no wildcards. If the endpoint is an Event Hubs namespace, the data destination is the individual event hub within the namespace. |
+| Azure Data Lake Storage | Container | The container in the storage account. Not the table. |
+| Microsoft Fabric OneLake | Table or Folder | Corresponds to the configured [path type for the endpoint](howto-configure-fabric-endpoint.md#onelake-path-type). |
+| Azure Data Explorer | Table | The table in the Azure Data Explorer database. |
+| Local Storage | Folder | The folder or directory name in the local storage persistent volume mount. |
-> [!IMPORTANT]
-> Storage endpoints require a schema reference. If you've created storage destination endpoints for Microsoft Fabric OneLake, ADLS Gen 2, Azure Data Explorer and Local Storage, use bicep to specify the schema reference.
+To configure the data destination:
# [Portal](#tab/portal)
-1. Select the dataflow endpoint to use as the destination.
+When using the operations experience, the data destination field is automatically interpreted based on the endpoint type. For example, if the dataflow endpoint is a storage endpoint, the destination details page prompts you to enter the container name. If the dataflow endpoint is an MQTT endpoint, the destination details page prompts you to enter the topic, and so on.
- :::image type="content" source="media/howto-create-dataflow/dataflow-destination.png" alt-text="Screenshot using operations experience to select Event Hubs destination endpoint.":::
-1. Select **Proceed** to configure the destination.
-1. Add the mapping details based on the type of destination.
+# [Kubernetes](#tab/kubernetes)
+
+The syntax is the same for all dataflow endpoints:
+
+```yaml
+destinationSettings:
+ endpointRef: <CUSTOM_ENDPOINT_NAME>
+ dataDestination: <TOPIC_OR_TABLE>
+```
+
+For example, to send data back to the local MQTT broker on a static MQTT topic, use the following configuration:
+
+```yaml
+destinationSettings:
+ endpointRef: default
+ dataDestination: example-topic
+```
+
+Or, if you have a custom Event Hubs endpoint, the configuration would look like:
+
+```yaml
+destinationSettings:
+ endpointRef: my-eh-endpoint
+ dataDestination: individual-event-hub
+```
+
+Another example using a storage endpoint as the destination:
+
+```yaml
+destinationSettings:
+ endpointRef: my-adls-endpoint
+ dataDestination: my-container
+```
# [Bicep](#tab/bicep)
-The following example configures Fabric OneLake as a destination with a static MQTT topic.
+The syntax is the same for all dataflow endpoints:
```bicep
-resource oneLakeEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
- parent: aioInstance
- name: 'onelake-ep'
- extendedLocation: {
- name: customLocation.id
- type: 'CustomLocation'
- }
- properties: {
- endpointType: 'FabricOneLake'
- fabricOneLakeSettings: {
- authentication: {
- method: 'SystemAssignedManagedIdentity'
- systemAssignedManagedIdentitySettings: {}
- }
- oneLakePathType: 'Tables'
- host: 'https://msit-onelake.dfs.fabric.microsoft.com'
- names: {
- lakehouseName: '<EXAMPLE-LAKEHOUSE-NAME>'
- workspaceName: '<EXAMPLE-WORKSPACE-NAME>'
- }
- batching: {
- latencySeconds: 5
- maxMessages: 10000
- }
- }
- }
+destinationSettings: {
+  endpointRef: '<CUSTOM_ENDPOINT_NAME>'
+  dataDestination: '<TOPIC_OR_TABLE>'
} ```
+For example, to send data back to the local MQTT broker on a static MQTT topic, use the following configuration:
+ ```bicep
-{
- operationType: 'Destination'
- destinationSettings: {
- endpointRef: oneLakeEndpoint.name // oneLake endpoint
- dataDestination: 'sensorData' // static MQTT topic
- }
+destinationSettings: {
+  endpointRef: 'default'
+  dataDestination: 'example-topic'
} ```
-# [Kubernetes](#tab/kubernetes)
+Or, if you have a custom Event Hubs endpoint, the configuration would look like:
-For example, to configure a destination using the MQTT endpoint created earlier and a static MQTT topic, use the following configuration:
+```bicep
+destinationSettings: {
+  endpointRef: 'my-eh-endpoint'
+  dataDestination: 'individual-event-hub'
+}
+```
-```yaml
-destinationSettings:
- endpointRef: mq
- dataDestination: factory
+Another example using a storage endpoint as the destination:
+
+```bicep
+destinationSettings: {
+  endpointRef: 'my-adls-endpoint'
+  dataDestination: 'my-container'
+}
```

## Example

The following example is a dataflow configuration that uses the MQTT endpoint for the source and destination. The source filters the data from the MQTT topics `thermostats/+/telemetry/temperature/#` and `humidifiers/+/telemetry/humidity/#`. The transformation converts the temperature to Fahrenheit and filters the data where the temperature is less than 100000. The destination sends the data to the MQTT topic `factory`.
spec:
operations: - operationType: Source sourceSettings:
- endpointRef: mq
+ endpointRef: default
dataSources: - thermostats/+/telemetry/temperature/# - humidifiers/+/telemetry/humidity/#
spec:
output: 'Tag 10' - operationType: Destination destinationSettings:
- endpointRef: mq
+ endpointRef: default
dataDestination: factory ```
Select the dataflow you want to export and select **Export** from the toolbar.
:::image type="content" source="media/howto-create-dataflow/dataflow-export.png" alt-text="Screenshot using operations experience to export a dataflow.":::
-# [Bicep](#tab/bicep)
-
-Bicep is infrastructure as code and no export is required. Use the [Bicep template file to create a dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) to quickly set up and configure dataflows.
# [Kubernetes](#tab/kubernetes)

```bash
kubectl get dataflow my-dataflow -o yaml > my-dataflow.yaml
```
+# [Bicep](#tab/bicep)
+
+Bicep is infrastructure as code and no export is required. Use the [Bicep template file to create a dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) to quickly set up and configure dataflows.
++
iot-operations Overview Dataflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/overview-dataflow.md
Title: Process and route data with dataflows
description: Learn about dataflows and how to process and route data in Azure IoT Operations. + Last updated 08/03/2024
iot-operations Tutorial Mqtt Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/tutorial-mqtt-bridge.md
Title: Bi-directional MQTT bridge to Azure Event Grid
description: Learn how to create a bi-directional MQTT bridge to Azure Event Grid using Azure IoT Operations dataflows. + Last updated 10/01/2024
example.region-1.ts.eventgrid.azure.net
# [Bicep](#tab/bicep)
-The dataflow and dataflow endpoints for MQTT broker and Azure Event Grid can be deployed as standard Azure resources since they have Azure Resource Provider (RPs) implementations. This Bicep template file from [Bicep File for MQTT-bridge dataflow Tutorial](https://gist.github.com/david-emakenemi/7a72df52c2e7a51d2424f36143b7da85) deploys the necessary dataflow and dataflow endpoints.
+The dataflow and dataflow endpoints for MQTT broker and Azure Event Grid can be deployed as standard Azure resources since they have Azure Resource Provider (RPs) implementations. This Bicep template file from [Bicep File for MQTT-bridge dataflow Tutorial](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) deploys the necessary dataflow and dataflow endpoints.
Download the file to your local machine, and make sure to replace the values for `customLocationName`, `aioInstanceName`, and `eventGridHostName` with your own.
metadata:
name: local-to-remote namespace: azure-iot-operations spec:
+ profileRef: default
operations: - operationType: Source sourceSettings:
metadata:
name: remote-to-local namespace: azure-iot-operations spec:
+ profileRef: default
operations: - operationType: Source sourceSettings:
iot-operations Concept About State Store Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/concept-about-state-store-protocol.md
Last updated 07/02/2024
# CustomerIntent: As a developer, I want understand what the MQTT broker state store protocol is, so # that I can implement a client app to interact with the MQ state store.+ # MQTT broker state store protocol
iot-operations Edge Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/edge-apps-overview.md
Last updated 07/02/2024 #CustomerIntent: As an developer, I want understand how to develop highly available distributed applications for my IoT Operations solution.+ # Develop highly available applications with MQTT broker
iot-operations Howto Deploy Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-deploy-dapr.md
Last updated 07/02/2024+ # Deploy Dapr pluggable components
iot-operations Howto Develop Dapr Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-develop-dapr-apps.md
Last updated 07/02/2024 # CustomerIntent: As a developer, I want to understand how to use Dapr to develop distributed apps that talk with MQTT broker.+ # Use Dapr to develop distributed application workloads that talk with MQTT broker
iot-operations Howto Develop Mqttnet Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-develop-mqttnet-apps.md
Last updated 07/02/2024 #CustomerIntent: As an developer, I want to understand how to use MQTTnet to develop distributed apps that talk with MQTT broker.+ # Use MQTTnet to develop distributed application workloads that connect to MQTT broker
iot-operations Tutorial Event Driven With Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/tutorial-event-driven-with-dapr.md
Last updated 07/02/2024 #CustomerIntent: As an operator, I want to configure MQTT broker to bridge to Azure Event Grid MQTT broker PaaS so that I can process my IoT data at the edge and in the cloud.+ # Tutorial: Build an event-driven app with Dapr and MQTT broker
iot-operations Concept Default Root Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/concept-default-root-ca.md
Last updated 10/01/2024 #CustomerIntent: As an operator, I want to configure Azure IoT Operations components to use TLS so that I have secure communication between all components.+ # Certificate management for Azure IoT Operations Preview internal communication
iot-operations Howto Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-authentication.md
Last updated 08/29/2024 #CustomerIntent: As an operator, I want to configure authentication so that I have secure MQTT broker communications.+ # Configure MQTT broker authentication
To link a BrokerListener to a *BrokerAuthentication* resource, specify the `auth
## Default BrokerAuthentication resource
-Azure IoT Operations Preview deploys a default *BrokerAuthentication* resource named `authn` linked with the default listener named `listener` in the `azure-iot-operations` namespace. It's configured to only use Kubernetes Service Account Tokens (SATs) for authentication. To inspect it, run:
+Azure IoT Operations Preview deploys a default *BrokerAuthentication* resource named `default` linked with the *default* listener in the `azure-iot-operations` namespace. It's configured to only use Kubernetes Service Account Tokens (SATs) for authentication. To inspect it, run:
```bash
-kubectl get brokerauthentication authn -n azure-iot-operations -o yaml
+kubectl get brokerauthentication default -n azure-iot-operations -o yaml
``` The output shows the default *BrokerAuthentication* resource, with metadata removed for brevity:
The output shows the default *BrokerAuthentication* resource, with metadata remo
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1 kind: BrokerAuthentication metadata:
- name: authn
+ name: default
namespace: azure-iot-operations spec: authenticationMethods: - method: ServiceAccountToken serviceAccountTokenSettings: audiences:
- - "aio-internal"
+ - aio-internal
``` > [!IMPORTANT]
With multiple authentication methods, MQTT broker has a fallback mechanism. For
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1 kind: BrokerAuthentication metadata:
- name: authn
+ name: default
namespace: azure-iot-operations spec: authenticationMethods:
iot-operations Howto Configure Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-authorization.md
Last updated 09/09/2024 #CustomerIntent: As an operator, I want to configure authorization so that I have secure MQTT broker communications.+ # Configure MQTT broker authorization
iot-operations Howto Configure Availability Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md
- ignite-2023 Previously updated : 09/09/2024 Last updated : 10/18/2024 #CustomerIntent: As an operator, I want to understand the settings for the MQTT broker so that I can configure it for high availability and scale.+ # Configure core MQTT broker settings
Medium is the default profile.
## Default broker
-By default, Azure IoT Operations Preview deploys a default Broker resource named `broker`. It's deployed in the `azure-iot-operations` namespace with cardinality and memory profile settings as configured during the initial deployment with Azure portal or Azure CLI. To see the settings, run the following command:
+By default, Azure IoT Operations Preview deploys a default Broker resource named `default`. It's deployed in the `azure-iot-operations` namespace with cardinality and memory profile settings as configured during the initial deployment with Azure portal or Azure CLI. To see the settings, run the following command:
```bash
-kubectl get broker broker -n azure-iot-operations -o yaml
+kubectl get broker default -n azure-iot-operations -o yaml
``` ### Modify default broker by redeploying
Only [cardinality](#configure-scaling-settings) and [memory profile](#configure-
To delete the default broker, run the following command: ```bash
-kubectl delete broker broker -n azure-iot-operations
+kubectl delete broker default -n azure-iot-operations
```
-Then, create a YAML file with desired settings. For example, the following YAML file configures the broker with name `broker` in namespace `azure-iot-operations` with `medium` memory profile and `distributed` mode with two frontend replicas and two backend chains with two partitions and two workers each. Also, the [encryption of internal traffic option](#configure-encryption-of-internal-traffic) is disabled.
+Then, create a YAML file with desired settings. For example, the following YAML file configures the broker with name `default` in namespace `azure-iot-operations` with `medium` memory profile and `distributed` mode with two frontend replicas and two backend chains with two partitions and two workers each. Also, the [encryption of internal traffic option](#configure-encryption-of-internal-traffic) is disabled.
```yaml apiVersion: mqttbroker.iotoperations.azure.com/v1beta1 kind: Broker metadata:
- name: broker
+ name: default
namespace: azure-iot-operations spec: memoryProfile: medium
kubectl apply -f <path-to-yaml-file>
## Configure MQTT broker advanced settings
-The broker advanced settings include client configurations, encryption of internal traffic, and certificate rotations. For more information on the advanced settings, see the [Broker]() API reference.
+The broker advanced settings include client configurations, encryption of internal traffic, and certificate rotations. For more information on the advanced settings, see the [Broker](/rest/api/iotoperations/broker/create-or-update) API reference.
Here's an example of a *Broker* with advanced settings:
Here's an example of a *Broker* with advanced settings:
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1 kind: Broker metadata:
- name: broker
+ name: default
namespace: azure-iot-operations spec: advanced:
iot-operations Howto Configure Brokerlistener https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-brokerlistener.md
- ignite-2023 Previously updated : 10/08/2024 Last updated : 10/18/2024 #CustomerIntent: As an operator, I want understand options to secure MQTT communications for my IoT Operations solution.+ # Secure MQTT broker communication using BrokerListener
Each listener port can have its own authentication and authorization rules that
Listeners have the following characteristics: -- You can have up to three listeners. One listener per service type of `loadBalancer`, `clusterIp`, or `nodePort`. The default *BrokerListener* named *listener* is service type `clusterIp`.
+- You can have up to three listeners. One listener per service type of `loadBalancer`, `clusterIp`, or `nodePort`. The default *BrokerListener* named *default* is service type `clusterIp`.
- Each listener supports multiple ports - BrokerAuthentication and BrokerAuthorization references are per port - TLS configuration is per port
For a list of the available settings, see the [Broker Listener](/rest/api/iotope
## Default BrokerListener
-When you deploy Azure IoT Operations Preview, the deployment also creates a *BrokerListener* resource named `listener` in the `azure-iot-operations` namespace. This listener is linked to the default Broker resource named `broker` that's also created during deployment. The default listener exposes the broker on port 18883 with TLS and SAT authentication enabled. The TLS certificate is [automatically managed](howto-configure-tls-auto.md) by cert-manager. Authorization is disabled by default.
+When you deploy Azure IoT Operations Preview, the deployment also creates a *BrokerListener* resource named `default` in the `azure-iot-operations` namespace. This listener is linked to the default *Broker* resource named `default` that's also created during deployment. The default listener exposes the broker on port 18883 with TLS and SAT authentication enabled. The TLS certificate is [automatically managed](howto-configure-tls-auto.md) by cert-manager. Authorization is disabled by default.
To view or edit the listener:
To view or edit the listener:
To view the default *BrokerListener* resource, use the following command: ```bash
-kubectl get brokerlistener listener -n azure-iot-operations -o yaml
+kubectl get brokerlistener default -n azure-iot-operations -o yaml
``` The output should look similar to this, with most metadata removed for brevity:
The output should look similar to this, with most metadata removed for brevity:
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1 kind: BrokerListener metadata:
- name: listener
+ name: default
namespace: azure-iot-operations spec:
- brokerRef: broker
+ brokerRef: default
serviceName: aio-broker serviceType: ClusterIp ports:
- - port: 18883
- authenticationRef: authn
+ - authenticationRef: default
+ port: 18883
protocol: Mqtt tls: certManagerCertificateSpec:
To learn more about the default BrokerAuthentication resource linked to this lis
The default *BrokerListener* uses the service type *ClusterIp*. You can have only one listener per service type. If you want to add more ports to service type *ClusterIp*, you can update the default listener to add more ports. For example, you could add a new port 1883 with no TLS and authentication off with the following kubectl patch command: ```bash
-kubectl patch brokerlistener listener -n azure-iot-operations --type='json' -p='[{"op": "add", "path": "/spec/ports/", "value": {"port": 1883, "protocol": "Mqtt"}}]'
+kubectl patch brokerlistener default -n azure-iot-operations --type='json' -p='[{"op": "add", "path": "/spec/ports/", "value": {"port": 1883, "protocol": "Mqtt"}}]'
```
metadata:
name: loadbalancer-listener namespace: azure-iot-operations spec:
- brokerRef: broker
+ brokerRef: default
serviceType: LoadBalancer serviceName: aio-broker-loadbalancer ports: - port: 1883 protocol: Mqtt - port: 18883
- authenticationRef: authn
+ authenticationRef: default
protocol: Mqtt tls: mode: Automatic
iot-operations Howto Configure Tls Auto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-tls-auto.md
- ignite-2023 Previously updated : 08/22/2024 Last updated : 10/18/2024 #CustomerIntent: As an operator, I want to configure MQTT broker to use TLS so that I have secure communication between the MQTT broker and client.+ # Configure TLS with automatic certificate management to secure MQTT communication in MQTT broker
metadata:
name: my-new-tls-listener namespace: azure-iot-operations spec:
- brokerRef: broker
+ brokerRef: default
serviceType: loadBalancer serviceName: my-new-tls-listener # Avoid conflicts with default service name 'aio-broker' ports:
iot-operations Howto Configure Tls Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-tls-manual.md
Last updated 08/03/2024 #CustomerIntent: As an operator, I want to configure MQTT broker to use TLS so that I have secure communication between the MQTT broker and client.+ # Configure TLS with manual certificate management to secure MQTT communication in MQTT broker
metadata:
name: manual-tls-listener namespace: azure-iot-operations spec:
- brokerRef: broker
+ brokerRef: default
serviceType: loadBalancer # Optional, defaults to clusterIP serviceName: mqtts-endpoint # Match the SAN in the server certificate ports:
iot-operations Howto Test Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-test-connection.md
Last updated 07/08/2024 #CustomerIntent: As an operator or developer, I want to test MQTT connectivity with tools that I'm already familiar with to know that I set up my MQTT broker correctly.+ # Test connectivity to MQTT broker with MQTT clients
If you understand the risks and need to use an insecure port in a well-controlle
name: non-tls-listener namespace: azure-iot-operations spec:
- brokerRef: broker
+ brokerRef: default
serviceType: loadBalancer serviceName: my-unique-service-name authenticationEnabled: false
iot-operations Overview Iot Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/overview-iot-mq.md
Last updated 07/02/2024 #CustomerIntent: As an operator, I want to understand how to I can use MQTT broker to publish and subscribe MQTT topics.+ # Publish and subscribe MQTT messages using MQTT broker
iot-operations Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/reference/mqtt-support.md
Last updated 07/02/2024 # CustomerIntent: As an operator, I want to understand what MQTT specifications are supported by MQTT broker so that I can configure my MQTT client to connect to MQTT broker.+ # MQTT feature support in MQTT broker
iot-operations Tutorial Real Time Dashboard Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/view-analyze-telemetry/tutorial-real-time-dashboard-fabric.md
Last updated 11/15/2023 #CustomerIntent: As an operator, I want to learn how to build a real-time dashboard in Microsoft Fabric using MQTT data from the MQTT broker.+ # Build a real-time dashboard in Microsoft Fabric using MQTT data from the MQTT broker
This walkthrough uses a virtual Kubernetes environment hosted in a GitHub Codesp
## Deploy edge and cloud Azure resources
-The MQTT broker and north-bound cloud connector components can be deployed as regular Azure resources as they have Azure Resource Provider (RPs) implementations. A single Bicep template file from the *explore-iot-operations* repository deploys all the required edge and cloud resources and Azure role-based access assignments. Run this command in your Codespace terminal:
+<!-- TODO Add deployment for edge and cloud resources using a single bicep file -->
+
+1. [Create a Microsoft Fabric Workspace](/fabric/get-started/create-workspaces).
+
+1. [Create a Microsoft Fabric Lakehouse](/fabric/onelake/create-lakehouse-onelake).
+
+1. A single Bicep template file from the *explore-iot-operations* repository deploys all the required dataflow and dataflow endpoint resources: [Bicep file to create a dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep). Download the template file and replace the values for `customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, `eventGridHostName`, and `persistentVCName`.
+
+1. Deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
```azurecli
-CLUSTER_NAME=<arc-connected-cluster-name>
-TEMPLATE_FILE_NAME='tutorials/mq-realtime-fabric-dashboard/deployEdgeAndCloudResources.bicep'
-
- az deployment group create \
- --name az-resources \
- --resource-group $RESOURCE_GROUP \
- --template-file $TEMPLATE_FILE_NAME \
- --parameters clusterName=$CLUSTER_NAME
+az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/<filename>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
``` > [!IMPORTANT] > The deployment configuration is for demonstration or development purposes only. It's not suitable for production environments.
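After the deployment completes, a quick way to confirm what the stack deployed is to query it. This is a sketch that reuses the stack name from the command above:
```azurecli
# List the resource IDs managed by the deployment stack created above
az stack group show --name MyDeploymentStack --resource-group $RESOURCE_GROUP --query "resources[].id" --output table
```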
-The resources deployed by the template include:
-* [Event Hubs related resources](https://github.com/Azure-Samples/explore-iot-operations/blob/88ff2f4759acdcb4f752aa23e89b30286ab0cc99/tutorials/mq-realtime-fabric-dashboard/deployEdgeAndCloudResources.bicep#L349)
-* [IoT Operations MQTT broker Arc extension](https://github.com/Azure-Samples/explore-iot-operations/blob/88ff2f4759acdcb4f752aa23e89b30286ab0cc99/tutorials/mq-realtime-fabric-dashboard/deployEdgeAndCloudResources.bicep#L118)
-* [MQTT broker Broker](https://github.com/Azure-Samples/explore-iot-operations/blob/88ff2f4759acdcb4f752aa23e89b30286ab0cc99/tutorials/mq-realtime-fabric-dashboard/deployEdgeAndCloudResources.bicep#L202)
-* [Kafka north-bound connector and topicmap](https://github.com/Azure-Samples/explore-iot-operations/blob/88ff2f4759acdcb4f752aa23e89b30286ab0cc99/tutorials/mq-realtime-fabric-dashboard/deployEdgeAndCloudResources.bicep#L282)
-* [Azure role-based access assignments](https://github.com/Azure-Samples/explore-iot-operations/blob/88ff2f4759acdcb4f752aa23e89b30286ab0cc99/tutorials/mq-realtime-fabric-dashboard/deployEdgeAndCloudResources.bicep#L379)
## Send test MQTT data and confirm cloud delivery
The resources deployed by the template include:
kubectl apply -f tutorials/mq-realtime-fabric-dashboard/simulate-data.yaml ```
-1. The Kafka north-bound connector is [preconfigured in the deployment](https://github.com/Azure-Samples/explore-iot-operations/blob/e4bf8375e933c29c49bfd905090b37caef644135/tutorials/mq-realtime-fabric-dashboard/deployEdgeAndCloudResources.bicep#L331) to pick up messages from the MQTT topic where messages are being published to Event Hubs in the cloud.
+2. The Kafka north-bound connector is [preconfigured in the deployment](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) to pick up messages from the MQTT topic where messages are being published to Event Hubs in the cloud.
-1. After about a minute, confirm the message delivery in Event Hubs metrics.
+3. After about a minute, confirm the message delivery in Event Hubs metrics.
:::image type="content" source="media/tutorial-real-time-dashboard-fabric/event-hub-messages.png" alt-text="Screenshot of confirming Event Hubs messages." lightbox="media/tutorial-real-time-dashboard-fabric/event-hub-messages.png":::
In a few seconds, you should see the data being ingested into KQL Database.
:::image type="content" source="media/tutorial-real-time-dashboard-fabric/powerbi-dash-show.png" alt-text="Screenshot of a Power BI report." lightbox="media/tutorial-real-time-dashboard-fabric/powerbi-dash-show.png"::: In this walkthrough, you learned how to build a real-time dashboard in Microsoft Fabric using simulated MQTT data that is published to the MQTT broker.-
-## Next steps
-
-[Upload MQTT data to Microsoft Fabric lakehouse](tutorial-upload-mqtt-lakehouse.md)
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
Title: Migrate machines as physical servers to Azure with Azure Migrate and Modernize description: This article describes how to migrate physical machines to Azure with Azure Migrate and Modernize.-+ ms.
Now, select machines for migration.
:::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/select-vms-inline.png" alt-text="Screenshot that shows selecting VMs." lightbox="./media/tutorial-migrate-physical-virtual-machines/select-vms-expanded.png":::
-1. In **Target settings**, select the subscription and target region to which you'll migrate. Specify the resource group in which the Azure VMs will reside after migration.
+1. In **Target settings**, select the subscription to which you'll migrate. (The region is set to your selection in the previous step and can't be modified.) Specify the resource group in which the Azure VMs will reside after migration.
1. In **Virtual Network**, select the Azure virtual network/subnet to which the Azure VMs will be joined after migration. 1. In **Cache storage account**, keep the default option to use the cache storage account that's automatically created for the project. Use the dropdown list if you want to specify a different storage account to use as the cache storage account for replication. <br/>
network-watcher Supported Region Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/supported-region-traffic-analytics.md
The Log Analytics workspace that you use for traffic analytics must exist in one
> [!NOTE] > If a network security group is supported for flow logging in a region, but Log Analytics workspace isn't supported in that region for traffic analytics, you can use a Log Analytics workspace from any other supported region as a workaround.
-## Next steps
+## Related content
- Learn more about [Traffic analytics](traffic-analytics.md).-- Learn about [Usage scenarios of traffic analytics](usage-scenarios-traffic-analytics.md).
+- Learn about [Usage scenarios of traffic analytics](traffic-analytics-usage-scenarios.md).
network-watcher Traffic Analytics Usage Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-usage-scenarios.md
+
+ Title: Traffic analytics usage scenarios
+
+description: Learn about Azure Network Watcher traffic analytics and the insights it can provide in different usage scenarios.
++++ Last updated : 10/18/2024+++
+# Traffic analytics usage scenarios
+
+In this article, you learn how to get insights about your traffic after configuring traffic analytics in different scenarios.
+
+## Find traffic hotspots
+
+**Look for**
+
+- Which hosts, subnets, virtual networks, and virtual machine scale sets are sending or receiving the most traffic, carrying the most malicious traffic, and blocking significant flows?
+ - Check the comparative chart for hosts, subnets, virtual networks, and virtual machine scale sets. Understanding which of them send or receive the most traffic can help you identify the hosts that process the most traffic and whether traffic is distributed properly.
+ - You can evaluate if the volume of traffic is appropriate for a host. Is the volume of traffic normal behavior, or does it merit further investigation?
+- How much inbound/outbound traffic is there?
+ - Is the host expected to receive more inbound traffic than outbound, or vice-versa?
+- Statistics of blocked traffic.
+ - Why is a host blocking a significant volume of benign traffic? This behavior requires further investigation and probably optimization of configuration.
+- Statistics of malicious allowed/blocked traffic
+ - Why is a host receiving malicious traffic and why are flows from malicious sources allowed? This behavior requires further investigation and probably optimization of configuration.
+
+ Select **See all** under **IP** as shown in the following image to see time trending for the top five talking hosts and the flow-related details (allowed – inbound/outbound and denied – inbound/outbound flows) for a host:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/dashboard-host-most-traffic-details.png" alt-text="Screenshot of dashboard showcasing host with most traffic details." lightbox="./media/traffic-analytics-usage-scenarios/dashboard-host-most-traffic-details.png":::
+
+**Look for**
+
+- Which are the most conversing host pairs?
+ - Expected behavior like front-end or back-end communication or irregular behavior, like back-end internet traffic.
+- Statistics of allowed/blocked traffic
+ - Why is a host allowing or blocking a significant traffic volume?
+- Most frequently used application protocol among most conversing host pairs:
+ - Are these applications allowed on this network?
+ - Are the applications configured properly? Are they using the appropriate protocol for communication? Select **See all** under **Frequent conversation**, as shown in the following image:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/dashboard-most-frequent-conversations.png" alt-text="Screenshot of dashboard showcasing most frequent conversations." lightbox="./media/traffic-analytics-usage-scenarios/dashboard-most-frequent-conversations.png":::
+
+- The following image shows time trending for the top five conversations and the flow-related details such as allowed and denied inbound and outbound flows for a conversation pair:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/top-five-chatty-conversation-details-and-trend.png" alt-text="Screenshot of top five chatty conversation details and trends." lightbox="./media/traffic-analytics-usage-scenarios/top-five-chatty-conversation-details-and-trend.png":::
+
+**Look for**
+
+- Which application protocol is most used in your environment, and which conversing host pairs are using the application protocol the most?
+ - Are these applications allowed on this network?
+ - Are the applications configured properly? Are they using the appropriate protocol for communication? Expected behavior is common ports such as 80 and 443. For standard communication, if any unusual ports are displayed, they might require a configuration change. Select **See all** under **Application port**, as shown in the following image:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/dashboard-top-application-protocols.png" alt-text="Screenshot of dashboard showcasing top application protocols." lightbox="./media/traffic-analytics-usage-scenarios/dashboard-top-application-protocols.png":::
+
+- The following images show time trending for the top five L7 protocols and the flow-related details (for example, allowed and denied flows) for an L7 protocol:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/top-five-layer-seven-protocols-details-and-trend.png" alt-text="Screenshot of top five layer 7 protocols details and trends." lightbox="./media/traffic-analytics-usage-scenarios/top-five-layer-seven-protocols-details-and-trend.png":::
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/log-search-flow-details-for-application-protocol.png" alt-text="Screenshot of the flow details for application protocol in log search." lightbox="./media/traffic-analytics-usage-scenarios/log-search-flow-details-for-application-protocol.png":::
+
+**Look for**
+
+- Capacity utilization trends of a VPN gateway in your environment.
+ - Each VPN SKU allows a certain amount of bandwidth. Are the VPN gateways underutilized?
+ - Are your gateways reaching capacity? Should you upgrade to the next higher SKU?
+- Which are the most conversing hosts, via which VPN gateway, over which port?
+ - Is this pattern normal? Select **See all** under **VPN gateway**, as shown in the following image:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/dashboard-top-active-vpn-connections.png" alt-text="Screenshot of dashboard showcasing top active VPN connections." lightbox="./media/traffic-analytics-usage-scenarios/dashboard-top-active-vpn-connections.png":::
+
+- The following image shows time trending for capacity utilization of an Azure VPN Gateway and the flow-related details (such as allowed flows and ports):
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/vpn-gateway-utilization-trend-and-flow-details.png" alt-text="Screenshot of VPN gateway utilization trend and flow details." lightbox="./media/traffic-analytics-usage-scenarios/vpn-gateway-utilization-trend-and-flow-details.png":::
+
+## Visualize traffic distribution by geography
+
+**Look for**
+
+- Traffic distribution per data center such as top sources of traffic to a datacenter, top rogue networks conversing with the data center, and top conversing application protocols.
+ - If you observe more load on a data center, you can plan for efficient traffic distribution.
+ - If rogue networks are conversing in the data center, then you can set network security group rules to block them.
+
+ Select **View map** under **Your environment**, as shown in the following image:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/dashboard-traffic-distribution.png" alt-text="Screenshot of dashboard showcasing traffic distribution." lightbox="./media/traffic-analytics-usage-scenarios/dashboard-traffic-distribution.png":::
+
+- The geo-map shows the top ribbon for selection of parameters such as data centers (Deployed/No-deployment/Active/Inactive/Traffic Analytics Enabled/Traffic Analytics Not Enabled) and countries/regions contributing Benign/Malicious traffic to the active deployment:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/geo-map-view-active-deployment.png" alt-text="Screenshot of geo map view showcasing active deployment." lightbox="./media/traffic-analytics-usage-scenarios/geo-map-view-active-deployment.png":::
+
+- The geo-map shows the traffic distribution to a data center from countries/regions and continents communicating to it in blue (Benign traffic) and red (malicious traffic) colored lines:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/geo-map-view-traffic-distribution-to-countries-and-continents.png" alt-text="Screenshot of geo map view showcasing traffic distribution to countries/regions and continents." lightbox="./media/traffic-analytics-usage-scenarios/geo-map-view-traffic-distribution-to-countries-and-continents.png":::
+
+- The **More Insight** blade of an Azure region also shows the total traffic remaining inside that region (that is, source and destination in same region). It further gives insights about traffic exchanged between availability zones of a datacenter.
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/inter-zone-and-intra-region-traffic.png" alt-text="Screenshot of Inter Zone and Intra region traffic." lightbox="./media/traffic-analytics-usage-scenarios/inter-zone-and-intra-region-traffic.png":::
+
+## Visualize traffic distribution by virtual networks
+
+**Look for**
+
+- Traffic distribution per virtual network, topology, top sources of traffic to the virtual network, top rogue networks conversing to the virtual network, and top conversing application protocols.
+ - Knowing which virtual network is conversing with which virtual network. If the conversation isn't expected, it can be corrected.
+ - If rogue networks are conversing with a virtual network, you can correct network security group rules to block the rogue networks.
+
+ Select **View VNets** under **Your environment** as shown in the following image:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/dashboard-virtual-network-distribution.png" alt-text="Screenshot of dashboard showcasing virtual network distribution." lightbox="./media/traffic-analytics-usage-scenarios/dashboard-virtual-network-distribution.png":::
+
+- The Virtual Network Topology shows the top ribbon for selection of parameters like a virtual network's (Inter virtual network Connections/Active/Inactive), External Connections, Active Flows, and Malicious flows of the virtual network.
+- You can filter the Virtual Network Topology based on subscriptions, workspaces, resource groups and time interval. Extra filters that help you understand the flow are:
+ Flow Type (InterVNet, IntraVNET, and so on), Flow Direction (Inbound, Outbound), Flow Status (Allowed, Blocked), VNETs (Targeted and Connected), Connection Type (Peering or Gateway - P2S and S2S), and NSG. Use these filters to focus on VNets that you want to examine in detail.
+- You can zoom in and out while viewing the Virtual Network Topology by using the mouse scroll wheel. Holding the left mouse button and moving the mouse lets you drag the topology in the desired direction. You can also use keyboard shortcuts for these actions: A (to drag left), D (to drag right), W (to drag up), S (to drag down), + (to zoom in), - (to zoom out), R (to reset zoom).
+- The Virtual Network Topology shows the traffic distribution to a virtual network to flows (Allowed/Blocked/Inbound/Outbound/Benign/Malicious), application protocol, and network security groups, for example:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/virtual-network-topology-traffic-distribution-and-flow-details.png" alt-text="Screenshot of virtual network topology showcasing traffic distribution and flow details." lightbox="./media/traffic-analytics-usage-scenarios/virtual-network-topology-traffic-distribution-and-flow-details.png":::
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/virtual-network-filters.png" alt-text="Screenshot of virtual network topology showcasing top level and more filters." lightbox="./media/traffic-analytics-usage-scenarios/virtual-network-filters.png":::
+
+**Look for**
+
+- Traffic distribution per subnet, topology, top sources of traffic to the subnet, top rogue networks conversing to the subnet, and top conversing application protocols.
+ - Knowing which subnet is conversing with which subnet. If you see unexpected conversations, you can correct your configuration.
+ - If rogue networks are conversing with a subnet, you can correct it by configuring NSG rules to block the rogue networks.
+- The Subnets Topology shows the top ribbon for selection of parameters such as Active/Inactive subnet, External Connections, Active Flows, and Malicious flows of the subnet.
+- You can zoom in and out while viewing the Virtual Network Topology by using the mouse scroll wheel. Holding the left mouse button and moving the mouse lets you drag the topology in the desired direction. You can also use keyboard shortcuts for these actions: A (to drag left), D (to drag right), W (to drag up), S (to drag down), + (to zoom in), - (to zoom out), R (to reset zoom).
+- The Subnet Topology shows the traffic distribution to a virtual network regarding flows (Allowed/Blocked/Inbound/Outbound/Benign/Malicious), application protocol, and NSGs, for example:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/topology-subnet-to-subnet-traffic-distribution.png" alt-text="Screenshot of subnet topology showcasing traffic distribution to a virtual network subnet with regards to flows." lightbox="./media/traffic-analytics-usage-scenarios/topology-subnet-to-subnet-traffic-distribution.png":::
+
+**Look for**
+
+Traffic distribution per Application gateway and Load Balancer, topology, top sources of traffic, top rogue networks conversing with the Application gateway and Load Balancer, and top conversing application protocols.
+
+ - Knowing which subnet is conversing with which Application gateway or Load Balancer. If you observe unexpected conversations, you can correct your configuration.
+ - If rogue networks are conversing with an Application gateway or Load Balancer, you can correct it by configuring NSG rules to block the rogue networks.
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/topology-subnet-traffic-distribution-to-application-gateway-subnet.png" alt-text="Screenshot shows a subnet topology with traffic distribution to an application gateway subnet regarding flows." lightbox="./media/traffic-analytics-usage-scenarios/topology-subnet-traffic-distribution-to-application-gateway-subnet.png":::
+
+## View ports and virtual machines receiving traffic from the internet
+
+**Look for**
+
+- Which open ports are conversing over the internet?
+ - If unexpected ports are found open, you can correct your configuration:
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/dashboard-ports-receiving-and-sending-traffic-to-internet.png" alt-text="Screenshot of dashboard showcasing ports receiving and sending traffic to the internet." lightbox="./media/traffic-analytics-usage-scenarios/dashboard-ports-receiving-and-sending-traffic-to-internet.png":::
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/azure-destination-ports-and-hosts-details.png" alt-text="Screenshot of Azure destination ports and hosts details." lightbox="./media/traffic-analytics-usage-scenarios/azure-destination-ports-and-hosts-details.png":::
+
+## View information about public IPs interacting with your deployment
+
+**Look for**
+
+- Which public IPs are communicating with my network? What is the WHOIS data and geographic location of all public IPs?
+- Which malicious IPs are sending traffic to my deployments? What is the threat type and threat description for malicious IPs?
+
+The Public IP Information section gives a summary of all types of public IPs present in your network traffic. Select the public IP type of interest to view details. On the traffic analytics dashboard, select any IP to view its information. For more information about the data fields presented, see [Public IP details schema](traffic-analytics-schema.md#public-ip-details-schema).
+
+
+## Visualize the trends in network security group (NSG)/NSG rules hits
+
+**Look for**
+
+- Which NSGs/NSG rules have the most hits in the comparative chart with flow distribution?
+- What are the top source and destination conversation pairs per NSG/NSG rules?
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/dashboard-nsg-hits-statistics.png" alt-text="Screenshot that shows NSG hits statistics in the dashboard." lightbox="./media/traffic-analytics-usage-scenarios/dashboard-nsg-hits-statistics.png":::
+
+- The following images show time trending for hits of NSG rules and source-destination flow details for a network security group:
+
+ - Quickly detect which NSGs and NSG rules are traversing malicious flows and which are the top malicious IP addresses accessing your cloud environment
+ - Identify which NSG/NSG rules are allowing/blocking significant network traffic
+ - Select top filters for granular inspection of an NSG or NSG rules
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/time-trending-for-nsg-rule-hits-and-top-nsg-rules.png" alt-text="Screenshot showcasing time trending for NSG rule hits and top NSG rules." lightbox="./media/traffic-analytics-usage-scenarios/time-trending-for-nsg-rule-hits-and-top-nsg-rules.png":::
+
+ :::image type="content" source="./media/traffic-analytics-usage-scenarios/top-nsg-rules-statistics-details-in-log-search.png" alt-text="Screenshot of top NSG rules statistics details in log search." lightbox="./media/traffic-analytics-usage-scenarios/top-nsg-rules-statistics-details-in-log-search.png":::
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Traffic analytics schema and data aggregation](traffic-analytics-schema.md)
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
To get answers to the most frequently asked questions about traffic analytics, s
## Related content -- To learn how to use traffic analytics, see [Usage scenarios](usage-scenarios-traffic-analytics.md).
+- To learn how to use traffic analytics, see [Usage scenarios](traffic-analytics-usage-scenarios.md).
- To understand the schema and processing details of traffic analytics, see [Schema and data aggregation in Traffic Analytics](traffic-analytics-schema.md).
networking Network Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/network-monitoring-overview.md
Traffic Analytics is a cloud-based solution that provides visibility into user
Traffic Analytics equips you with information that helps you audit your organization's network activity, secure applications and data, optimize workload performance, and stay compliant. Related links:
operator-nexus Howto Baremetal Run Data Extract https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-data-extract.md
Title: Troubleshoot bare metal machine issues using the `az networkcloud baremetalmachine run-data-extract` command for Azure Operator Nexus description: Step by step guide on using the `az networkcloud baremetalmachine run-data-extract` to extract data from a bare metal machine for troubleshooting and diagnostic purposes.--++ Previously updated : 10/11/2024 Last updated : 10/16/2024
There might be situations where a user needs to investigate and resolve issues with an on-premises bare metal machine. Azure Operator Nexus provides a prescribed set of data extract commands via `az networkcloud baremetalmachine run-data-extract`. These commands enable users to get diagnostic data from a bare metal machine.
-The command produces an output file containing the results of the data extract. Users should configure the Cluster resource with a storage account and identity that has access to the storage account to receive the output. There's a deprecated method of sending data to the Cluster Manager storage account if a storage account hasn't been provided on the Cluster. The Cluster Manager's storage account will be disabled in a future release as using a separate storage account is more secure.
+The command produces an output file containing the results of the data extract. By default, the data is sent to the Cluster Manager storage account. There's also a preview method where users can configure the Cluster resource with a storage account and identity that has access to the storage account to receive the output.
## Prerequisites
The command produces an output file containing the results of the data extract.
- The syntax for these commands is based on the 0.3.0+ version of the `az networkcloud` CLI. - Get the Cluster Managed Resource group name (cluster_MRG) that you created for Cluster resource.
-## Create and configure storage resources (customer-managed storage)
+## Verify access to the Cluster Manager storage account
+
+> [!NOTE]
+> The Cluster Manager storage account output method will be deprecated in the future once Cluster on-boarding to Trusted Services is complete and the user managed storage option is fully supported.
+
+If using the Cluster Manager storage method, verify you have access to the Cluster Manager's storage account:
+
+1. From Azure portal, navigate to Cluster Manager's Storage account.
+1. In the Storage account details, select **Storage browser** from the navigation menu on the left side.
+1. In the Storage browser details, select **Blob containers**.
+1. If you encounter a `403 This request is not authorized to perform this operation.` error while accessing the storage account, the storage account's firewall settings need to be updated to include the public IP address.
+1. Request access by creating a support ticket via Portal on the Cluster Manager resource. Provide the public IP address that requires access.
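To confirm access from the command line instead of the portal, a minimal sketch follows; the storage account and container names are placeholders, not values from this article:
```azurecli
# List blobs in the Cluster Manager storage container using your Microsoft Entra identity
az storage blob list \
  --account-name <cluster-manager-storage-account> \
  --container-name <container-name> \
  --auth-mode login \
  --output table
```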
+
+## **PREVIEW:** Send command output to a user specified storage account
+
+> [!IMPORTANT]
+> This method of specifying a user storage account for command output is in preview. **Use it only with user storage accounts that don't have a firewall enabled.** If your environment requires that the storage account firewall be enabled, use the existing Cluster Manager output method.
+
+### Create and configure storage resources
1. Create a storage account, or identify an existing storage account that you want to use. See [Create an Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal).
-2. In the storage account, create a blob storage container. See [Create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
-3. Assign the "Storage Blob Data Contributor" role to users and managed identities which need access to the run-data-extract output. See [Assign an Azure role for access to blob data](/azure/storage/blobs/assign-azure-role-data-access?tabs=portal). The role must also be assigned to either a user-assigned managed identity or the cluster's own system-assigned managed identity. For more information on managed identities, see [Managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview).
+1. Create a blob storage container in the storage account. See [Create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
+1. Assign the "Storage Blob Data Contributor" role to users and managed identities which need access to the run-data-extract output.
+ 1. See [Assign an Azure role for access to blob data](/azure/storage/blobs/assign-azure-role-data-access?tabs=portal). The role must also be assigned to either a user-assigned managed identity or the cluster's own system-assigned managed identity.
+ 1. For more information on managed identities, see [Managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview).
+ 1. If using the Cluster's system assigned identity, the system assigned identity needs to be added to the cluster before it can be granted access.
+ 1. When assigning a role to the cluster's system-assigned identity, make sure you select the resource with the type "Cluster (Operator Nexus)."
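As a sketch of the same setup with the Azure CLI (the names and principal ID below are placeholders, not values from this article):
```azurecli
# Create the storage account and blob container, then grant the managed identity access
az storage account create --name <storageaccountname> --resource-group <resource-group> --location <location>
az storage container create --name <container-name> --account-name <storageaccountname> --auth-mode login
az role assignment create \
  --assignee <managed-identity-principal-id> \
  --role "Storage Blob Data Contributor" \
  --scope $(az storage account show --name <storageaccountname> --resource-group <resource-group> --query id -o tsv)
```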
-When assigning a role to the cluster's system-assigned identity, make sure you select the resource with the type "Cluster (Operator Nexus)."
+### Configure the cluster to use a user-assigned managed identity for storage access
-## Configure the cluster to use a user-assigned managed identity for storage access
+Use this command to create a cluster with a user managed storage account and user-assigned identity. Note this example is an abbreviated command that just highlights the fields pertinent for adding the user managed storage. It isn't the full cluster create command.
-Use this command to configure the cluster for a user-assigned identity:
+```azurecli-interactive
+az networkcloud cluster create --name "<cluster-name>" \
+ --resource-group "<cluster-resource-group>" \
+ ...
+ --mi-user-assigned "<user-assigned-identity-resource-id>" \
+ --command-output-settings identity-type="UserAssignedIdentity" \
+ identity-resource-id="<user-assigned-identity-resource-id>" \
+ container-url="<container-url>" \
+ ...
+ --subscription "<subscription>"
+```
+
+Use this command to configure an existing cluster for a user provided storage account and user-assigned identity. The update command can also be used to change the storage account location and identity if needed.
```azurecli-interactive az networkcloud cluster update --name "<cluster-name>" \
az networkcloud cluster update --name "<cluster-name>" \
--subscription "<subscription>" ```
-The identity resource ID can be found by clicking "JSON view" on the identity resource; the ID is at the top of the panel that appears. The container URL can be found on the Settings -> Properties tab of the container resource.
+### Configure the cluster to use a system-assigned managed identity for storage access
-## Configure the cluster to use a system-assigned managed identity for storage access
+Use this command to create a cluster with a user managed storage account and system assigned identity. Note this example is an abbreviated command that just highlights the fields pertinent for adding the user managed storage. It isn't the full cluster create command.
-Use this command to configure the cluster to use its own system-assigned identity:
+```azurecli-interactive
+az networkcloud cluster create --name "<cluster-name>" \
+ --resource-group "<cluster-resource-group>" \
+ ...
+ --mi-system-assigned true \
+ --command-output-settings identity-type="SystemAssignedIdentity" \
+ container-url="<container-url>" \
+ ...
+ --subscription "<subscription>"
+```
+
+Use this command to configure an existing cluster for a user provided storage account and to use its own system-assigned identity. The update command can also be used to change the storage account location.
```azurecli-interactive az networkcloud cluster update --name "<cluster-name>" \
az networkcloud cluster update --name "<cluster-name>" \
To change the cluster from a user-assigned identity to a system-assigned identity, the CommandOutputSettings must first be cleared using the command in the next section, then set using this command.
-## Clear the cluster's CommandOutputSettings
+### Clear the cluster's CommandOutputSettings
The CommandOutputSettings can be cleared, directing run-data-extract output back to the cluster manager's storage. However, it isn't recommended since it's less secure, and the option will be removed in a future release.
az rest --method patch \
--body '{"properties": {"commandOutputSettings":null}}' ```
-## Verify Storage Account access (cluster manager storage)
+### View the principal ID for the managed identity
-If using the deprecated Cluster Manager storage method, verify you have access to the Cluster Manager's storage account
+The identity resource ID can be found by selecting "JSON view" on the identity resource; the ID is at the top of the panel that appears. The container URL can be found on the Settings -> Properties tab of the container resource.
-1. From Azure portal, navigate to Cluster Manager's Storage account.
-1. In the Storage account details, select **Storage browser** from the navigation menu on the left side.
-1. In the Storage browser details, select on **Blob containers**.
-1. If you encounter a `403 This request is not authorized to perform this operation.` while accessing the storage account, storage accountΓÇÖs firewall settings need to be updated to include the public IP address.
-1. Request access by creating a support ticket via Portal on the Cluster Manager resource. Provide the public IP address that requires access.
+The CLI can also be used to view the identity and the associated principal ID data within the cluster.
+
+Example:
+
+```console
+az networkcloud cluster show --ids /subscriptions/<Subscription ID>/resourceGroups/<Cluster Resource Group Name>/providers/Microsoft.NetworkCloud/clusters/<Cluster Name>
+```
+
+System-assigned identity example:
+
+```
+ "identity": {
+ "principalId": "2cb564c1-b4e5-4c71-bbc1-6ae259aa5f87",
+ "tenantId": "72f988bf-86f1-41af-91ab-2d7cd011db47",
+ "type": "SystemAssigned"
+ },
+```
+
+User-assigned identity example:
+
+```
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/<subscriptionID>/resourcegroups/<resourceGroupName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<userAssignedIdentityName>": {
+ "clientId": "e67dd610-99cf-4853-9fa0-d236b214e984",
+ "principalId": "8e6d23d6-bb6b-4cf3-a00f-4cd640ab1a24"
+ }
+ }
+ },
+```
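For a user-assigned identity, the resource ID and principal ID can also be read directly with the CLI. A minimal sketch with placeholder names:
```azurecli
# Show the resource ID and principal ID of a user-assigned managed identity
az identity show \
  --name <userAssignedIdentityName> \
  --resource-group <resourceGroupName> \
  --query "{id:id, principalId:principalId}" \
  --output json
```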
## Execute a run command
In the response, the operation performs asynchronously and returns an HTTP statu
### Hardware Support Data Collection
-This example executes the `hardware-support-data-collection` command and get `SysInfo` and `TTYLog` logs from the Dell Server. The script executes a `racadm supportassist collect` command on the designated baremetal machine. The resulting tar.gz file contains the zipped extract command file outputs in `hardware-support-data-<timestamp>.zip`.
+This example executes the `hardware-support-data-collection` command and gets `SysInfo` and `TTYLog` logs from the Dell Server. The script executes a `racadm supportassist collect` command on the designated bare metal machine. The resulting tar.gz file contains the zipped extract command file outputs in `hardware-support-data-<timestamp>.zip`.
```azurecli az networkcloud baremetalmachine run-data-extract --name "bareMetalMachineName" \
Archive: TSR20240227164024_FM56PK3.pl.zip
Data is collected with the `mde-agent-information` command and formatted as JSON to `/hostfs/tmp/runcommand/mde-agent-information.json`. The JSON file is found in the data extract zip file located in the storage account. The script executes a
-sequence of `mdatp` commands on the designated baremetal machine.
+sequence of `mdatp` commands on the designated bare metal machine.
This example executes the `mde-agent-information` command without arguments.
https://cmkfjft8twwpst.blob.core.windows.net/bmm-run-command-output/20b217b5-ea3
The CVE data is refreshed per container image every 24 hours or when there's a change to the Kubernetes resource referencing the image.
-## Viewing the Output
+## Viewing the output
-The command provides another command (if using customer provided storage) or a link (if using cluster manager storage) to download the full output. The tar.gz file also contains the zipped extract command file outputs. Download the output file from the storage blob to a local directory by specifying the directory path in the optional argument `--output-directory`.
+The command provides a link (if using cluster manager storage) or another command (if using user provided storage) to download the full output. The tar.gz file also contains the zipped extract command file outputs. Download the output file from the storage blob to a local directory by specifying the directory path in the optional argument `--output-directory`.
> [!WARNING] > Using the `--output-directory` argument will overwrite any files in the local directory that have the same name as the new files being created. > [!NOTE]
-> Storage Account could be locked resulting in `403 This request is not authorized to perform this operation.` due to networking or firewall restrictions. Refer to the [customer-managed storage](#create-and-configure-storage-resources-customer-managed-storage) or [cluster manager storage](#verify-storage-account-access-cluster-manager-storage) sections for procedures to verify access.
+> The storage account could be locked due to networking or firewall restrictions, resulting in `403 This request is not authorized to perform this operation.` Refer to the [cluster manager storage](#verify-access-to-the-cluster-manager-storage-account) or the [user managed storage](#create-and-configure-storage-resources) sections for procedures to verify access.
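For reference, a hypothetical invocation that downloads the output locally might look like the following. The command list and exact argument format here are assumptions, so check `az networkcloud baremetalmachine run-data-extract --help` for the current syntax:
```azurecli
# Run a data extract and download the resulting tar.gz into ./extract-output
az networkcloud baremetalmachine run-data-extract \
  --name "<bareMetalMachineName>" \
  --resource-group "<cluster_MRG>" \
  --subscription "<subscription>" \
  --commands '[{"command":"mde-agent-information"}]' \
  --limit-time-seconds 600 \
  --output-directory ./extract-output
```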
operator-nexus Howto Baremetal Run Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-read.md
Title: Troubleshoot baremetal machine issues using the `az networkcloud baremetalmachine run-read-command` for Operator Nexus
+ Title: Troubleshoot bare metal machine issues using the `az networkcloud baremetalmachine run-read-command` for Operator Nexus
description: Step by step guide on using the `az networkcloud baremetalmachine run-read-command` to run diagnostic commands on a BMM.--++ Previously updated : 10/11/2024 Last updated : 10/15/2024 # Troubleshoot BMM issues using the `az networkcloud baremetalmachine run-read-command`
-There might be situations where a user needs to investigate & resolve issues with an on-premises BMM. Operator Nexus provides the `az networkcloud baremetalmachine run-read-command` so users can run a curated list of read only commands to get information from a BMM.
+There might be situations where a user needs to investigate and resolve issues with an on-premises bare metal machine (BMM). Operator Nexus provides the `az networkcloud baremetalmachine run-read-command` so users can run a curated list of read only commands to get information from a BMM.
-The command produces an output file containing its results. Users should configure the Cluster resource with a storage account and identity that has access to the storage account to receive the output. There's a deprecated method of sending data to the Cluster Manager storage account if a storage account hasn't been provided on the Cluster. The Cluster Manager's storage account will be disabled in a future release as using a separate storage account is more secure.
+The command produces an output file containing the results of the run-read command execution. By default, the data is sent to the Cluster Manager storage account. There's also a preview method where users can configure the Cluster resource with a storage account and identity that has access to the storage account to receive the output.
## Prerequisites
The command produces an output file containing its results. Users should configu
1. Ensure that the target BMM must have its `poweredState` set to `On` and have its `readyState` set to `True` 1. Get the Managed Resource group name (cluster_MRG) that you created for `Cluster` resource
-## Create and configure storage resources (customer-managed storage)
+## Verify access to the Cluster Manager storage account
+
+> [!NOTE]
+> The Cluster Manager storage account output method will be deprecated in the future once Cluster on-boarding to Trusted Services is complete and the user managed storage option is fully supported.
+
+If using the Cluster Manager storage method, verify you have access to the Cluster Manager's storage account:
+
+1. From Azure portal, navigate to Cluster Manager's Storage account.
+1. In the Storage account details, select **Storage browser** from the navigation menu on the left side.
+1. In the Storage browser details, select **Blob containers**.
+1. If you encounter a `403 This request is not authorized to perform this operation.` error while accessing the storage account, the storage account's firewall settings need to be updated to include the public IP address.
+1. Request access by creating a support ticket via Portal on the Cluster Manager resource. Provide the public IP address that requires access.
+
+## **PREVIEW:** Send command output to a user specified storage account
+
+> [!IMPORTANT]
+> This method of specifying a user storage account for command output is in preview. **Use it only with user storage accounts that don't have a firewall enabled.** If your environment requires that the storage account firewall be enabled, use the existing Cluster Manager output method.
+
+### Create and configure storage resources
1. Create a storage account, or identify an existing storage account that you want to use. See [Create an Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal).
-2. In the storage account, create a blob storage container. See [Create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
-3. Assign the "Storage Blob Data Contributor" role to users and managed identities which need access to the run-read-command output. See [Assign an Azure role for access to blob data](/azure/storage/blobs/assign-azure-role-data-access?tabs=portal). The role must also be assigned to either a user-assigned managed identity or the cluster's own system-assigned managed identity. For more information on managed identities, see [Managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview).
+1. Create a blob storage container in the storage account. See [Create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
+1. Assign the "Storage Blob Data Contributor" role to users and managed identities which need access to the run-data-extract output.
+ 1. See [Assign an Azure role for access to blob data](/azure/storage/blobs/assign-azure-role-data-access?tabs=portal). The role must also be assigned to either a user-assigned managed identity or the cluster's own system-assigned managed identity.
+ 1. For more information on managed identities, see [Managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview).
+ 1. If using the Cluster's system assigned identity, the system assigned identity needs to be added to the cluster before it can be granted access.
+ 1. When assigning a role to the cluster's system-assigned identity, make sure you select the resource with the type "Cluster (Operator Nexus)."
+
+### Configure the cluster to use a user-assigned managed identity for storage access
-When assigning a role to the cluster's system-assigned identity, make sure you select the resource with the type "Cluster (Operator Nexus)."
+Use this command to create a cluster with a user managed storage account and user-assigned identity. Note this example is an abbreviated command that just highlights the fields pertinent for adding the user managed storage. It isn't the full cluster create command.
-## Configure the cluster to use a user-assigned managed identity for storage access
+```azurecli-interactive
+az networkcloud cluster create --name "<cluster-name>" \
+ --resource-group "<cluster-resource-group>" \
+ ...
+ --mi-user-assigned "<user-assigned-identity-resource-id>" \
+ --command-output-settings identity-type="UserAssignedIdentity" \
+ identity-resource-id="<user-assigned-identity-resource-id>" \
+ container-url="<container-url>" \
+ ...
+ --subscription "<subscription>"
+```
-Use this command to configure the cluster for a user-assigned identity:
+Use this command to configure an existing cluster for a user provided storage account and user-assigned identity. The update command can also be used to change the storage account location and identity if needed.
```azurecli-interactive az networkcloud cluster update --name "<cluster-name>" \
az networkcloud cluster update --name "<cluster-name>" \
--subscription "<subscription>" ```
-The identity resource ID can be found by clicking "JSON view" on the identity resource; the ID is at the top of the panel that appears. The container URL can be found on the Settings -> Properties tab of the container resource.
+### Configure the cluster to use a system-assigned managed identity for storage access
-## Configure the cluster to use a system-assigned managed identity for storage access
+Use this command to create a cluster with a user managed storage account and system assigned identity. Note this example is an abbreviated command that just highlights the fields pertinent for adding the user managed storage. It isn't the full cluster create command.
-Use this command to configure the cluster to use its own system-assigned identity:
+```azurecli-interactive
+az networkcloud cluster create --name "<cluster-name>" \
+ --resource-group "<cluster-resource-group>" \
+ ...
+ --mi-system-assigned true \
+ --command-output-settings identity-type="SystemAssignedIdentity" \
+ container-url="<container-url>" \
+ ...
+ --subscription "<subscription>"
+```
+
+Use this command to configure an existing cluster for a user provided storage account and to use its own system-assigned identity. The update command can also be used to change the storage account location.
```azurecli-interactive az networkcloud cluster update --name "<cluster-name>" \
az networkcloud cluster update --name "<cluster-name>" \
To change the cluster from a user-assigned identity to a system-assigned identity, the CommandOutputSettings must first be cleared using the command in the next section, then set using this command.
-## Clear the cluster's CommandOutputSettings
+### Clear the cluster's CommandOutputSettings
-The CommandOutputSettings can be cleared, directing run-read-command output back to the cluster manager's storage. However, it isn't recommended since it's less secure, and the option will be removed in a future release.
+The CommandOutputSettings can be cleared, directing run-read-command output back to the cluster manager's storage. However, it isn't recommended since it's less secure, and the option will be removed in a future release.
However, the CommandOutputSettings do need to be cleared if switching from a user-assigned identity to a system-assigned identity.
az rest --method patch \
--body '{"properties": {"commandOutputSettings":null}}' ```
-## Verify Storage Account access (cluster manager storage)
+### View the principal ID for the managed identity
-If using the deprecated Cluster Manager storage method, verify you have access to the Cluster Manager's storage account
+The identity resource ID can be found by selecting "JSON view" on the identity resource; the ID is at the top of the panel that appears. The container URL can be found on the Settings -> Properties tab of the container resource.
-1. From Azure portal, navigate to Cluster Manager's Storage account.
-1. In the Storage account details, select **Storage browser** from the navigation menu on the left side.
-1. In the Storage browser details, select on **Blob containers**.
-1. If you encounter a `403 This request is not authorized to perform this operation.` while accessing the storage account, storage accountΓÇÖs firewall settings need to be updated to include the public IP address.
-1. Request access by creating a support ticket via Portal on the Cluster Manager resource. Provide the public IP address that requires access.
+The CLI can also be used to view the identity and the associated principal ID data within the cluster.
+
+Example:
+
+```console
+az networkcloud cluster show --ids /subscriptions/<Subscription ID>/resourceGroups/<Cluster Resource Group Name>/providers/Microsoft.NetworkCloud/clusters/<Cluster Name>
+```
+
+System-assigned identity example:
+
+```
+ "identity": {
+ "principalId": "2cb564c1-b4e5-4c71-bbc1-6ae259aa5f87",
+ "tenantId": "72f988bf-86f1-41af-91ab-2d7cd011db47",
+ "type": "SystemAssigned"
+ },
+```
+
+User-assigned identity example:
+
+```
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/<subscriptionID>/resourcegroups/<resourceGroupName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<userAssignedIdentityName>": {
+ "clientId": "e67dd610-99cf-4853-9fa0-d236b214e984",
+ "principalId": "8e6d23d6-bb6b-4cf3-a00f-4cd640ab1a24"
+ }
+ }
+ },
+```
## Execute a run-read command
is wrong.
Also note that some commands begin with `nc-toolbox nc-toolbox-runread` and must be entered as shown. `nc-toolbox-runread` is a special container image that includes more tools that aren't installed on the
-baremetal host, such as `ipmitool` and `racadm`.
+bare metal host, such as `ipmitool` and `racadm`.
Some of the run-read commands require specific arguments be supplied to enforce read-only capabilities of the commands. An example of run-read commands that require specific arguments is the allowed Mellanox command `mstconfig`,
az networkcloud baremetalmachine run-read-command --name "<bareMetalMachineName>
--subscription "<subscription>" ```
-## Checking command status and viewing output
+## How to view the output of an `az networkcloud baremetalmachine run-read-command` in the Cluster Manager Storage account
+
+This guide walks you through accessing the output file that is created in the Cluster Manager Storage account when an `az networkcloud baremetalmachine run-read-command` is executed on a server. The name of the file is identified in the `az rest` status output.
+
+1. Open the Cluster Manager Managed Resource Group for the Cluster where the server is housed and then select the **Storage account**.
+
+1. In the Storage account details, select **Storage browser** from the navigation menu on the left side.
+
+1. In the Storage browser details, select **Blob containers**.
+
+1. Select the baremetal-run-command-output blob container.
+
+1. The storage account could be locked due to networking or firewall restrictions, resulting in `403 This request is not authorized to perform this operation.` Refer to the [cluster manager storage](#verify-access-to-the-cluster-manager-storage-account) or the [customer-managed storage](#create-and-configure-storage-resources) sections for procedures to verify access.
+
+1. Select the output file from the run-read command. The file name can be identified from the `az rest --method get` command. Additionally, the **Last modified** timestamp aligns with when the command was executed.
+
+1. You can manage & download the output file from the **Overview** pop-out.
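The same file can also be downloaded without the portal. A minimal sketch, with the storage account name and blob name as placeholders:
```azurecli
# Download the run-read command output blob to the current directory
az storage blob download \
  --account-name <cluster-manager-storage-account> \
  --container-name baremetal-run-command-output \
  --name <output-file-name>.tar.gz \
  --file ./<output-file-name>.tar.gz \
  --auth-mode login
```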
+
+## **PREVIEW**: Check the command status and view the output in a user specified storage account
Sample output is shown. It prints the top 4,000 characters of the result to the screen for convenience and provides a short-lived link to the storage blob containing the command execution result. You can use the link to download the zipped output file (tar.gz).
Sample output is shown. It prints the top 4,000 characters of the result to the
Script execution result can be found in storage account: https://<storage_account_name>.blob.core.windows.net/bmm-run-command-output/a8e0a5fe-3279-46a8-b995-51f2f98a18dd-action-bmmrunreadcmd.tar.gz?se=2023-04-14T06%3A37%3A00Z&sig=XXX&sp=r&spr=https&sr=b&st=2023-04-14T02%3A37%3A00Z&sv=2019-12-12 ```-
-## How to view the output of an `az networkcloud baremetalmachine run-read-command` in the Cluster Manager Storage account
-
-This guide walks you through accessing the output file that is created in the Cluster Manager Storage account when an `az networkcloud baremetalmachine run-read-command` is executed on a server. The name of the file is identified in the `az rest` status output.
-
-1. Open the Cluster Manager Managed Resource Group for the Cluster where the server is housed and then select the **Storage account**.
-
-1. In the Storage account details, select **Storage browser** from the navigation menu on the left side.
-
-1. In the Storage browser details, select on **Blob containers**.
-
-1. Select the baremetal-run-command-output blob container.
-
-1. Storage Account could be locked resulting in `403 This request is not authorized to perform this operation.` due to networking or firewall restrictions. Refer to the [customer-managed storage](#create-and-configure-storage-resources-customer-managed-storage) or [cluster manager storage](#verify-storage-account-access-cluster-manager-storage) sections for procedures to verify access.
-
-1. Select the output file from the run-read command. The file name can be identified from the `az rest --method get` command. Additionally, the **Last modified** timestamp aligns with when the command was executed.
-
-1. You can manage & download the output file from the **Overview** pop-out.
operator-nexus Howto Create Access Control List For Network To Network Interconnects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-create-access-control-list-for-network-to-network-interconnects.md
The table below provides guidance on the usage of parameters when creating ACLs:
> - Ingress ACLs do not support the following options: etherType.<br> > - Ports inputs can be `port-number` or `range-of-ports`.<br> > - Fragments inputs can be `port-number` or `range-of-ports`.<br>
+> - ACL with dynamic match configuration on external networks is not supported.<br>
### Example payload for ACL creation
oracle Oracle Database Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/oracle-database-regions.md
# Available regions for Oracle Database@Azure- Learn what Azure regions offer Oracle Database@Azure. ## Asia Pacific (APAC)
Learn what Azure regions offer Oracle Database@Azure.
| Azure region | OCI region | Oracle Exadata Database@Azure | Oracle Autonomous Database@Azure | |-|--|-|-| | Australia East | Australia East (Sydney) | ✓ | ✓ |
+| Southeast Asia | Singapore (Singapore) | ✓ | ✓ |
+| Korea Central | South Korea Central (Seoul) | ✓ | ✓ |
## Europe, Middle East, Africa (EMEA)
Learn what Azure regions offer Oracle Database@Azure.
| East US | US East (Ashburn) | ✓ | ✓ | | Canada Central | Canada Southeast (Toronto) | ✓ | ✓ |
+## Available DR regions for Oracle Database@Azure
+
+The following Azure regions offer a single-zone DR solution for Oracle Database@Azure.
+
+| Azure region | OCI region | Oracle Exadata Database@Azure | Oracle Autonomous Database@Azure |
+|-|--|-|-|
+| West US | US West (Phoenix) | ✓ | ✓ |
+ >[!Note] > To provision Oracle Database@Azure resources in a supported region, your tenancy must be subscribed to the target region. For more information, see [Managing regions](https://docs.oracle.com/en-us/iaas/Content/Identity/regions/managingregions.htm#Managing_Regions) and [Subscribing to an infrastructure region](https://docs.oracle.com/en-us/iaas/Content/Identity/regions/To_subscribe_to_an_infrastructure_region.htm#subscribe).
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/create.md
Title: Create Datadog description: This article describes how to use the Azure portal to create an instance of Datadog.-+ Last updated 01/06/2023-+
partner-solutions Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/get-support.md
Title: Get support for Datadog resource description: This article describes how to contact support for a Datadog resource.--++ Last updated 01/06/2023
partner-solutions Link To Existing Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/link-to-existing-organization.md
Title: Link to existing Datadog
description: This article describes how to use the Azure portal to link to an existing instance of Datadog. Last updated 06/01/2023--++
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/manage.md
Title: Manage a Datadog resource description: This article describes management of a Datadog resource in the Azure portal. How to set up single sign-on, delete a Confluent organization, and get support.- -++ Last updated 06/01/2023
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/overview.md
Title: Datadog overview description: Learn about using Datadog in the Azure Marketplace.- -++ Last updated 01/06/2023
partner-solutions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/prerequisites.md
Title: Prerequisites for Datadog on Azure description: This article describes how to configure your Azure environment to create an instance of Datadog.- -++ Last updated 01/06/2023
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/create.md
Title: Create Elastic application
description: This article describes how to use the Azure portal to create an instance of Elastic. Last updated 06/01/2023--++
partner-solutions Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/get-support.md
Title: Get support for Elastic resource description: This article describes how to contact support for an Elastic resource.--++ Last updated 06/20/2024
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/manage.md
Title: Manage Elastic Cloud (Elasticsearch) - An Azure Native ISV Service
description: This article describes management of Elastic Cloud (Elasticsearch) on the Azure portal. How to configure diagnostic settings and delete the resource. Last updated 10/06/2023--++ # Manage Elastic Cloud (Elasticsearch) - An Azure Native ISV Service
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/overview.md
Title: Elastic integration overview
description: Learn about using the Elastic Cloud-Native Observability Platform in the Azure Marketplace. Last updated 05/15/2023--++ # What is Elastic Cloud (Elasticsearch) - An Azure Native ISV Service?
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md
Title: Troubleshooting Elastic Cloud (Elasticsearch) - An Azure Native ISV Servi
description: This article provides information about troubleshooting Elastic integration with Azure Last updated 10/06/2023--++ # Troubleshooting Elastic Cloud (Elasticsearch) - An Azure Native ISV Service
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/create.md
Title: Create a Logz.io resource
description: Quickstart article that describes how to create a Logz.io resource in Azure. Last updated 10/25/2021--++
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/manage.md
Title: Manage the Azure integration with Logz.io
description: Learn how to manage the Azure integration with Logz.io. Last updated 10/25/2021--++ # Manage the Logz.io integration in Azure
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/overview.md
Title: Logz.io overview
description: Learn about Azure integration using Logz.io in Azure Marketplace. Last updated 10/25/2021--++ # What is Logz.io integration with Azure?
partner-solutions Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/setup-sso.md
Title: Single sign-on for Azure integration with Logz.io
description: Learn about how to set up single sign-on for Azure integration with Logz.io. Last updated 10/25/2021--++ # Set up Logz.io single sign-on
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/troubleshoot.md
Title: Troubleshooting Logz.io description: This article describes how to troubleshoot Logz.io integration with Azure.- -++ Last updated 01/06/2023
partner-solutions Nginx Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-create.md
Title: Create an NGINXaaS deployment description: This article describes how to use the Azure portal to create an instance of NGINXaaS.-+ -+ Last updated 01/18/2023
partner-solutions Nginx Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-manage.md
Title: Manage an NGINXaaS resource through the Azure portal description: This article describes management functions for NGINXaaS on the Azure portal. - -++ Last updated 01/18/2023
partner-solutions Nginx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-overview.md
Title: What is NGINXaaS description: Learn about using the NGINXaaS Cloud-Native Observability Platform in the Azure Marketplace.-+ -+ Last updated 01/18/2023
partner-solutions Nginx Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-troubleshoot.md
Title: Troubleshooting your NGINXaaS deployment description: This article provides information about getting support and troubleshooting an NGINXaaS integration.- -++ Last updated 01/18/2023
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Title: Overview of Azure Native ISV Services description: Introduction to the Azure Native ISV Services.-+ Last updated 04/08/2024-+ # Azure Native ISV Services overview
partner-solutions Palo Alto Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-troubleshoot.md
Title: Troubleshooting your Cloud NGFW by Palo Alto Networks
description: This article provides information about getting support and troubleshooting a Cloud NGFW (Next-Generation Firewall) by Palo Alto Networks. Previously updated : 07/10/2023 Last updated : 10/18/2024
You can get support for your Palo Alto deployment through a **New Support reques
## Troubleshooting
+### Connection errors
+
+For connection errors, see [known issues for Azure Virtual WAN](../../virtual-wan/whats-new.md#known-issues).
+
+#### See also
+
+- [Azure Virtual Network FAQ](../../virtual-network/virtual-networks-faq.md)
+- [Virtual WAN FAQ](../../virtual-wan/virtual-wan-faq.md)
+ ### Marketplace purchase errors [!INCLUDE [marketplace-purchase-errors](../includes/marketplace-purchase-errors.md)]
reliability Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md
description: Learn how to migrate your Azure API Management instances to availab
Previously updated : 07/07/2022 Last updated : 10/16/2024
# Migrate Azure API Management to availability zone support
-The Azure API Management service supports [zone redundancy](../reliability/availability-zones-overview.md), which provides resiliency and high availability to a service instance in a specific Azure region. With zone redundancy, the gateway and the control plane of your API Management instance (management API, developer portal, Git configuration) are replicated across datacenters in physically separated zones, so they're resilient to a zone failure.
+The Azure API Management service supports [availability zones](../reliability/availability-zones-overview.md) in both zonal and zone-redundant configurations:
-This article describes four options for migrating an API Management instance to availability zones. For background about configuring API Management for high availability, see [Ensure API Management availability and reliability](../api-management/high-availability.md).
+* **Zonal** - the API Management gateway and the control plane of your API Management instance (management API, developer portal, Git configuration) are deployed in a single zone you select within an Azure region.
+
+* **Zone-redundant** - the gateway and the control plane of your API Management instance (management API, developer portal, Git configuration) are replicated across two or more physically separated zones within an Azure region. Zone redundancy provides resiliency and high availability to a service instance.
+
+This article describes four scenarios for migrating an API Management instance to availability zones. For more information about configuring API Management for high availability, see [Ensure API Management availability and reliability](../api-management/high-availability.md).
## Prerequisites
-* To configure API Management for zone redundancy, your instance must be in one of the [Azure regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+* To configure availability zones for API Management, your instance must be in one of the [Azure regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
* If you don't have an API Management instance, create one by following the [Create a new Azure API Management instance by using the Azure portal](../api-management/get-started-create-service-instance.md) quickstart. Select the Premium service tier.
There are no downtime requirements for any of the migration options.
* Changes can take 15 to 45 minutes to apply. The API Management gateway can continue to handle API requests during this time.
-* When you're migrating an API Management instance that's deployed in an external or internal virtual network to availability zones, you must specify a new public IP address resource. In an internal virtual network, the public IP address is used only for management operations, not for API requests. [Learn more about IP addresses of API Management](../api-management/api-management-howto-ip-addresses.md).
+* When you're migrating an API Management instance that's deployed in an external or internal virtual network to availability zones, you can optionally specify a new public IP address resource. In an internal virtual network, the public IP address is used only for management operations, not for API requests. [Learn more about IP addresses of API Management](../api-management/api-management-howto-ip-addresses.md).
* Migrating to availability zones or changing the configuration of availability zones triggers a public and private [IP address change](../api-management/api-management-howto-ip-addresses.md#changes-to-the-ip-addresses).
-* When you're enabling availability zones in a region, you configure API Management scale [units](../api-management/upgrade-and-scale.md) that you can distribute evenly across the zones. For example, if you configure two zones, you can configure two units, four units, or another multiple of two units.
+* When you're enabling availability zones in a region, you configure API Management scale [units](../api-management/upgrade-and-scale.md) that you can distribute evenly across the zones. For example, if you configure two zones, you can configure two units, four units, or another multiple of two units.
Adding units incurs additional costs. For details, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
-* If you configured autoscaling for your API Management instance in the primary location, you might need to adjust your autoscale settings after enabling zone redundancy. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones.
+* If you configured autoscaling for your API Management instance in the primary location, you might need to adjust your autoscale settings after configuring availability zones. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones.
## Existing gateway location not injected in a virtual network
To migrate an existing location of your API Management instance to availability
To migrate an existing location of your API Management instance to availability zones when the instance is currently injected in a virtual network and is currently hosted on the `stv1` platform, use the following steps. Migrating to availability zones also migrates the instance to the `stv2` platform.
-1. Create a new subnet and public IP address in the location to migrate to availability zones. Detailed requirements are in the [virtual networking guidance](../api-management/api-management-using-with-vnet.md?tabs=stv2#prerequisites).
+1. Create a new subnet and optional public IP address in the location to migrate to availability zones. Detailed requirements are in the [virtual networking guidance](../api-management/api-management-using-with-vnet.md?tabs=stv2#prerequisites).
1. In the Azure portal, go to your API Management instance.
To migrate an existing location of your API Management instance to availability
1. In the **Availability zones** box, select one or more zones. The number of units that you selected must be distributed evenly across the availability zones. For example, if you selected three units, select three zones so that each zone hosts one unit.
-1. In the respective boxes under **Network**, select the new subnet and new public IP address in the location.
+1. In the respective boxes under **Network**, select the new subnet and optional public IP address in the location.
1. Select **Apply**, and then select **Save**.
To migrate an existing location of your API Management instance to availability
To migrate an existing location of your API Management instance to availability zones when the instance is currently injected in a virtual network and is already hosted on the `stv2` platform:
-1. Create a new subnet and public IP address in the location to migrate to availability zones. Detailed requirements are in the [virtual networking guidance](../api-management/api-management-using-with-vnet.md?tabs=stv2#prerequisites).
+1. Create a new subnet and optional public IP address in the location to migrate to availability zones. Detailed requirements are in the [virtual networking guidance](../api-management/api-management-using-with-vnet.md?tabs=stv2#prerequisites).
1. In the Azure portal, go to your API Management instance.
To migrate an existing location of your API Management instance to availability
1. In the **Availability zones** box, select one or more zones. The number of units that you selected must be distributed evenly across the availability zones. For example, if you selected three units, select three zones so that each zone hosts one unit.
-1. In the **Public IP Address** box, select the new public IP address in the location.
+1. In the **Public IP Address** box, optionally select the new public IP address in the location.
1. Select **Apply**, and then select **Save**. ## New gateway location To add a new location to your API Management instance and enable availability zones in that location:
-1. If your API Management instance is deployed in a virtual network in the primary location, set up a [virtual network](../api-management/api-management-using-with-vnet.md?tabs=stv2), subnet, and public IP address in any new location where you plan to enable zone redundancy.
+1. If your API Management instance is deployed in a virtual network in the primary location, set up a [virtual network](../api-management/api-management-using-with-vnet.md?tabs=stv2), subnet, and optional public IP address in any new location where you plan to enable availability zones.
1. In the Azure portal, go to your API Management instance.
To add a new location to your API Management instance and enable availability zo
1. In the **Availability zones** box, select one or more zones. The number of units that you selected must be distributed evenly across the availability zones. For example, if you selected three units, select three zones so that each zone hosts one unit.
-1. If your API Management instance is deployed in a virtual network, use the boxes under **Network** to select the virtual network, subnet, and public IP address that are available in the location.
+1. If your API Management instance is deployed in a virtual network, use the boxes under **Network** to select the virtual network, subnet, and optional public IP address that are available in the location.
1. Select **Add**, and then select **Save**.
sentinel Extend Sentinel Across Workspaces Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md
Title: Extend Microsoft Sentinel across workspaces and tenants
description: How to use Microsoft Sentinel to query and analyze data across workspaces and tenants. Previously updated : 06/28/2023 Last updated : 10/17/2024 -
+appliesto: Microsoft Sentinel in the Azure portal
#Customer intent: As a security analyst, I want to query data across multiple workspaces and tenants so that I can centralize incident management and enhance threat detection capabilities.
When you onboard Microsoft Sentinel, your first step is to select your Log Analytics workspace. While you can get the full benefit of the Microsoft Sentinel experience with a single workspace, in some cases, you might want to extend your workspace to query and analyze your data across workspaces and tenants. For more information, see [Design a Log Analytics workspace architecture](/azure/azure-monitor/logs/workspace-design) and [Prepare for multiple workspaces and tenants in Microsoft Sentinel](prepare-multiple-workspaces.md).
+If you onboard Microsoft Sentinel to the Microsoft Defender portal, see [Microsoft Defender multitenant management](/defender-xdr/mto-overview).
+ ## Manage incidents on multiple workspaces Microsoft Sentinel supports a [multiple workspace incident view](./multiple-workspace-view.md) where you can centrally manage and monitor incidents across multiple workspaces. The centralized incident view lets you manage incidents directly or drill down transparently to the incident details in the context of the originating workspace.
sentinel Multiple Workspace View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/multiple-workspace-view.md
Title: Work with Microsoft Sentinel incidents in many workspaces at once | Micro
description: How to view incidents in multiple workspaces concurrently in Microsoft Sentinel. Previously updated : 01/11/2022 Last updated : 10/17/2024 -
+appliesto: Microsoft Sentinel in the Azure portal
#Customer intent: As a security analyst, I want to manage and investigate incidents across multiple workspaces and tenants so that I can maintain comprehensive visibility and control over my organization's security posture.
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+If you onboard Microsoft Sentinel to the Microsoft Defender portal, see [Microsoft Defender multitenant management](/defender-xdr/mto-overview).
+ ## Entering multiple workspace view
-When you open Microsoft Sentinel, you are presented with a list of all the workspaces to which you have access rights, across all selected tenants and subscriptions. To the left of each workspace name is a checkbox. Selecting the name of a single workspace will bring you into that workspace. To choose multiple workspaces, select all the corresponding checkboxes, and then select the **View incidents** button at the top of the page.
+When you open Microsoft Sentinel, you're presented with a list of all the workspaces to which you have access rights, across all selected tenants and subscriptions. Selecting the name of a single workspace brings you into that workspace. To choose multiple workspaces, select all the corresponding checkboxes, and then select the **View incidents** button at the top of the page.
> [!IMPORTANT] > Multiple Workspace View now supports a maximum of 100 concurrently displayed workspaces. >
-Note that in the list of workspaces, you can see the directory, subscription, location, and resource group associated with each workspace. The directory corresponds to the tenant.
+In the list of workspaces, you can see the directory, subscription, location, and resource group associated with each workspace. The directory corresponds to the tenant.
:::image type="content" source="./media/multiple-workspace-view/workspaces.png" alt-text="Screenshot of selecting multiple workspaces.":::
Multiple workspace view is currently available only for incidents. This page loo
- The counters at the top of the page - *Open incidents*, *New incidents*, *Active incidents*, etc. - show the numbers for all of the selected workspaces collectively. -- You'll see incidents from all of the selected workspaces and directories (tenants) in a single unified list. You can filter the list by workspace and directory, in addition to the filters from the regular **Incidents** screen.
+- You see incidents from all of the selected workspaces and directories (tenants) in a single unified list. You can filter the list by workspace and directory, in addition to the filters from the regular **Incidents** screen.
-- You'll need to have read and write permissions on all the workspaces from which you've selected incidents. If you have only read permissions on some workspaces, you'll see warning messages if you select incidents in those workspaces. You won't be able to modify those incidents or any others you've selected together with those (even if you do have permissions for the others).
+- You need to have read and write permissions on all the workspaces from which you've selected incidents. If you have only read permissions on some workspaces, you see warning messages if you select incidents in those workspaces. You aren't able to modify those incidents or any others you've selected together with those (even if you do have permissions for the others).
-- If you choose a single incident and click **View full details** or **Actions** > **Investigate**, you will from then on be in the data context of that incident's workspace and no others.
+- If you choose a single incident and select **View full details** or **Actions** > **Investigate**, you'll from then on be in the data context of that incident's workspace and no others.
## Next steps
sentinel Use Multiple Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-multiple-workspaces.md
Title: Set up multiple workspaces and tenants in Microsoft Sentinel
description: If you've defined that your environment needs multiple workspaces, you now set up your multiple workspace architecture in Microsoft Sentinel. Previously updated : 07/05/2023 Last updated : 10/17/2024 -
+appliesto: Microsoft Sentinel in the Azure portal and the Microsoft Defender portal
#Customer intent: As a security architect, I want to use Microsoft Sentinel across multiple workspaces so that I can efficiently monitor and analyze security data across my entire organization.
In this article, you learn how to set up Microsoft Sentinel to extend across mul
## Options for using multiple workspaces
-If you've determined and set up your environment to extend across workspaces, you can:
+After you set up your environment to extend across workspaces, you can:
+
+- **Manage and monitor your cross-workspace architecture**: Query and analyze your data across workspaces and tenants.
+ - To work in the Azure portal, see [Extend Microsoft Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md).
+ - If your organization onboards Microsoft Sentinel to the Microsoft Defender portal, see [Microsoft Defender multitenant management](/defender-xdr/mto-overview).
+
+For Microsoft Sentinel in the Azure portal, you can:
+
+- **Manage multiple workspaces with workspace manager**: Centrally manage multiple workspaces within one or more Azure tenants. For more information, see [Centrally manage multiple Microsoft Sentinel workspaces with workspace manager](workspace-manager.md).
-- [Manage and monitor cross-workspace architecture](extend-sentinel-across-workspaces-tenants.md): Query and analyze your data across workspaces and tenants.-- [Manage multiple workspaces with workspace manager](workspace-manager.md): Centrally manage multiple workspaces within one or more Azure tenants.
+Only one Microsoft Sentinel workspace per tenant is currently supported in the unified security operations platform. For more information, see [Microsoft Defender multitenant management](/defender-xdr/mto-overview).
## Next steps
sentinel Workspace Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/workspace-manager.md
description: Learn how to centrally manage multiple Microsoft Sentinel workspace
Previously updated : 04/24/2023 Last updated : 10/17/2024
+appliesto: Microsoft Sentinel in the Azure portal
#Customer intent: As a Managed Security Services Provider (MSSP) or global enterprise, I want to centrally manage multiple security workspaces so that I can efficiently operate at scale across one or more Azure tenants.
Here are the active content types supported with workspace
> Support for workspace manager is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+If you onboard Microsoft Sentinel to the Microsoft Defender portal, see [Microsoft Defender multitenant management](/defender-xdr/mto-overview).
## Prerequisites
service-bus-messaging Service Bus Async Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-async-messaging.md
As a mitigation, the code must read the error and halt any retries of the messag
For more information on how application code should handle throttling concerns, see the [documentation on the Throttling Pattern](/azure/architecture/patterns/throttling). ### Issue for an Azure dependency
-Other components within Azure can occasionally have service issues. For example, when a system that Service Bus uses is being upgraded, that system can temporarily experience reduced capabilities. To work around these types of issues, Service Bus regularly investigates and implements mitigations. Side effects of these mitigations do appear. For example, to handle transient issues with storage, Service Bus implements a system that allows message send operations to work consistently. Due to the nature of the mitigation, a sent message can take up to 15 minutes to appear in the affected queue or subscription and be ready for a receive operation. Generally speaking, most entities will not experience this issue. However, given the number of entities in Service Bus within Azure, this mitigation is sometimes needed for a small subset of Service Bus customers.
+Other components within Azure can occasionally have service issues. For example, when a system that Service Bus uses is being upgraded, that system can temporarily experience reduced capabilities. To work around these types of issues, Service Bus regularly investigates and implements mitigations. Side effects of these mitigations do appear. For example, to handle transient issues with storage, Service Bus implements a system that allows message send operations to work consistently. Generally speaking, most entities will not experience this issue. However, given the number of entities in Service Bus within Azure, this mitigation is sometimes needed for a small subset of Service Bus customers.
### Service Bus failure on a single subsystem With any application, circumstances can cause an internal component of Service Bus to become inconsistent. When Service Bus detects this, it collects data from the application to aid in diagnosing what happened. Once the data is collected, the application is restarted in an attempt to return it to a consistent state. This process happens fairly quickly, and results in an entity appearing to be unavailable for up to a few minutes, though typical down times are much shorter.
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
After configuring endpoints, you can configure network rules to further control
You can enable or disable public Internet access to your Elastic SAN endpoints at the SAN level. Enabling public network access for an Elastic SAN allows you to configure public access to individual volume groups in that SAN over storage service endpoints. By default, public access to individual volume groups is denied even if you allow it at the SAN level. If you disable public access at the SAN level, access to the volume groups within that SAN is only available over private endpoints.
+## Data Integrity
+
+Data integrity is important for preventing data corruption in cloud storage. TCP provides a foundational level of data integrity through its checksum mechanism, which can be enhanced over iSCSI with more robust error detection using a cyclic redundancy check (CRC), specifically CRC-32C. CRC-32C adds checksum verification for iSCSI headers and data payloads.
+
+Elastic SAN supports CRC-32C checksum verification when enabled on the client side for connections to Elastic SAN volumes. Elastic SAN also offers the ability to enforce this error detection through a property that can be set at the volume group level, which is inherited by any volume within that volume group. When you enable this property on a volume group, Elastic SAN rejects all client connections to any volumes in the volume group if CRC-32C isn't set for header or data digests on those connections. When you disable this property, Elastic SAN volume checksum verification depends on whether CRC-32C is set for header or data digests on the client, but your Elastic SAN won't reject any connections. To learn how to enable CRC protection, see [Configure networking](elastic-san-networking.md#enable-iscsi-error-detection).
+
+> [!NOTE]
+> Some operating systems might not support iSCSI header or data digests. Fedora and its downstream Linux distributions, such as Red Hat Enterprise Linux, CentOS, and Rocky Linux, don't support data digests. Don't enable CRC protection on your volume groups if your clients run operating systems that don't support iSCSI header or data digests, because connections to the volumes will fail.
+ ## Storage service endpoints [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services using an optimized route over the Azure backbone network. Service endpoints allow you to secure your critical Azure service resources so only specific virtual networks can access them.
iSCSI sessions can periodically disconnect and reconnect over the course of the
## Next steps
-[Configure Elastic SAN networking](elastic-san-networking.md)
+[Configure Elastic SAN networking](elastic-san-networking.md)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
az elastic-san update \
+## Configure iSCSI error detection
+
+### Enable iSCSI error detection
+
+To enable CRC-32C checksum verification for iSCSI headers or data payloads, set CRC-32C on header or data digests for all connections on your clients that connect to Elastic SAN volumes. To do this, connect your clients to Elastic SAN volumes using the multi-session scripts generated in the Azure portal or provided in the [Windows](elastic-san-connect-windows.md) or [Linux](elastic-san-connect-Linux.md) Elastic SAN connection articles.
+
+If you need to, you can do this without the multi-session connection scripts. On Windows, set header or data digests to 1 when logging in to the Elastic SAN volumes (`LoginTarget` and `PersistentLoginTarget`). On Linux, update the global iSCSI configuration file (iscsid.conf, generally found in the /etc/iscsi directory). When a volume is connected, a node is created along with a node-specific configuration file (for example, on Ubuntu it can be found in the /etc/iscsi/nodes/$volume_iqn/portal_hostname,$port directory) that inherits the settings from the global configuration file. If you already connected volumes to your client before updating the global configuration file, update the node-specific configuration file for each volume directly, or use the following command:
+
+```
+sudo iscsiadm -m node -T $volume_iqn -p $portal_hostname:$port -o update -n $iscsi_setting_name -v $setting_value
+```
+
+Where:
+- $volume_iqn: Elastic SAN volume IQN
+- $portal_hostname: Elastic SAN volume portal hostname
+- $port: 3260
+- $iscsi_setting_name: node.conn[0].iscsi.HeaderDigest (or) node.conn[0].iscsi.DataDigest
+- $setting_value: CRC32C
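+
+For reference, the following is a minimal sketch of the corresponding global entries in iscsid.conf. The setting names and the CRC32C value match the parameters listed above; the rest of your iscsid.conf stays unchanged.
+
+```
+# Illustrative iscsid.conf entries that request CRC-32C digests for new iSCSI sessions
+node.conn[0].iscsi.HeaderDigest = CRC32C
+node.conn[0].iscsi.DataDigest = CRC32C
+```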
+
+### Enforce iSCSI error detection
+
+To enforce iSCSI error detection, set CRC-32C for both header and data digests on your clients, and enable the CRC protection property on the volume group that contains the volumes your clients are connected to (or will connect to). If your Elastic SAN volumes are already connected but don't have CRC-32C set for both digests, disconnect the volumes and reconnect them using the multi-session scripts generated in the Azure portal when connecting to an Elastic SAN volume, or from the [Windows](elastic-san-connect-windows.md) or [Linux](elastic-san-connect-Linux.md) Elastic SAN connection articles.
+
+> [!NOTE]
+> The CRC protection feature isn't currently available in North Europe or South Central US.
+
+To enable CRC protection on the volume group:
+
+# [Portal](#tab/azure-portal)
+
+Enable CRC protection on a new volume group:
++
+Enable CRC protection on an existing volume group:
++
+# [PowerShell](#tab/azure-powershell)
+
+Use this script to enable CRC protection on a new volume group using the Azure PowerShell module. Replace the values of `$RgName`, `$EsanName`, `$EsanVgName` before running the script.
+
+```powershell
+# Set the variable values.
+# The name of the resource group where the Elastic San is deployed.
+$RgName = "<ResourceGroupName>"
+# The name of the Elastic SAN.
+$EsanName = "<ElasticSanName>"
+# The name of volume group within the Elastic SAN.
+$EsanVgName = "<VolumeGroupName>"
+
+# Create a volume group with CRC protection enabled
+New-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSANName $EsanName -Name $EsanVgName -EnforceDataIntegrityCheckForIscsi $true
+
+```
+
+Use this script to enable CRC protection on an existing volume group using the Azure PowerShell module. Replace the values of `$RgName`, `$EsanName`, `$EsanVgName` before running the script.
+
+```powershell
+# Set the variable values.
+$RgName = "<ResourceGroupName>"
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<VolumeGroupName>"
+
+# Edit a volume group to enable CRC protection
+Update-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSANName $EsanName -Name $EsanVgName -EnforceDataIntegrityCheckForIscsi $true
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+The following code sample enables CRC protection on a new volume group using Azure CLI. Replace the values of `RgName`, `EsanName`, and `EsanVgName` before running the sample.
+
+```azurecli
+# Set the variable values.
+# The name of the resource group where the Elastic San is deployed.
+RgName="<ResourceGroupName>"
+# The name of the Elastic SAN.
+EsanName="<ElasticSanName>"
+# The name of volume group within the Elastic SAN.
+EsanVgName="<VolumeGroupName>"
+
+# Create the volume group with CRC protection enabled.
+az elastic-san volume-group create \
+ --elastic-san-name $EsanName \
+ --resource-group $RgName \
+ --volume-group-name $EsanVgName \
+ --data-integrity-check true
+```
+
+The following code sample enables CRC protection on an existing volume group using Azure CLI. Replace the values of `RgName`, `EsanName`, and `EsanVgName` before running the sample.
+
+```azurecli
+# Set the variable values.
+RgName="<ResourceGroupName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<VolumeGroupName>"
+
+# Update the volume group to enable CRC protection.
+az elastic-san volume-group update \
+ --elastic-san-name $EsanName \
+ --resource-group $RgName \
+ --volume-group-name $EsanVgName \
+ --data-integrity-check true
+```
++++ ## Configure a virtual network endpoint You can configure your Elastic SAN volume groups to allow access only from endpoints on specific virtual network subnets. The allowed subnets can belong to virtual networks in the same subscription, or those in a different subscription, including a subscription belonging to a different Microsoft Entra tenant.
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
Title: Assign share-level permissions for Azure Files
-description: Learn how to control access to Azure Files by assigning share-level permissions to a Microsoft Entra identity that represents a hybrid user to control user access to Azure file shares with identity-based authentication.
+description: Learn how to control access to Azure Files by assigning share-level permissions to control user access to Azure file shares with identity-based authentication.
Previously updated : 05/09/2024 Last updated : 10/18/2024 ms.devlang: azurecli
Once you've enabled an Active Directory (AD) source for your storage account, yo
## Choose how to assign share-level permissions
-Share-level permissions on Azure file shares are configured for Microsoft Entra users, groups, or service principals, while directory and file-level permissions are enforced using Windows access control lists (ACLs). You must assign share-level permissions to the Microsoft Entra identity representing the same user, group, or service principal in your AD DS in order to support AD DS authentication to your Azure file share. Authentication and authorization against identities that only exist in Microsoft Entra ID, such as Azure Managed Identities (MSIs), aren't supported.
+Share-level permissions on Azure file shares are configured for Microsoft Entra users, groups, or service principals, while directory and file-level permissions are enforced using Windows access control lists (ACLs). You must assign share-level permissions to the Microsoft Entra identity representing the user, group, or service principal that should have access. Authentication and authorization against identities that only exist in Microsoft Entra ID, such as Azure Managed Identities (MSIs), aren't supported.
Most users should assign share-level permissions to specific Microsoft Entra users or groups, and then use Windows ACLs for granular access control at the directory and file level. This is the most stringent and secure configuration.
-There are three scenarios where we instead recommend using a [default share-level permission](#share-level-permissions-for-all-authenticated-identities) to allow contributor, elevated contributor, or reader access to all authenticated identities:
+There are three scenarios where we instead recommend using a [default share-level permission](#share-level-permissions-for-all-authenticated-identities) to allow reader, contributor, elevated contributor, privileged contributor, or privileged reader access to all authenticated identities:
- If you're unable to sync your on-premises AD DS to Microsoft Entra ID, you can use a default share-level permission. Assigning a default share-level permission allows you to work around the sync requirement because you don't need to specify the permission to identities in Microsoft Entra ID. Then you can use Windows ACLs for granular permission enforcement on your files and directories. - Identities that are tied to an AD but aren't synching to Microsoft Entra ID can also leverage the default share-level permission. This could include standalone Managed Service Accounts (sMSA), group Managed Service Accounts (gMSA), and computer accounts. - The on-premises AD DS you're using is synched to a different Microsoft Entra ID than the Microsoft Entra ID the file share is deployed in.
- - This is typical when you're managing multi-tenant environments. Using a default share-level permission allows you to bypass the requirement for a Microsoft Entra ID [hybrid identity](../../active-directory/hybrid/whatis-hybrid-identity.md). You can still use Windows ACLs on your files and directories for granular permission enforcement.
+ - This is typical when you're managing multitenant environments. Using a default share-level permission allows you to bypass the requirement for a Microsoft Entra ID [hybrid identity](../../active-directory/hybrid/whatis-hybrid-identity.md). You can still use Windows ACLs on your files and directories for granular permission enforcement.
- You prefer to enforce authentication only using Windows ACLs at the file and directory level.
-> [!NOTE]
-> Because computer accounts don't have an identity in Microsoft Entra ID, you can't configure Azure role-based access control (RBAC) for them. However, computer accounts can access a file share by using a [default share-level permission](#share-level-permissions-for-all-authenticated-identities).
+## Azure RBAC roles for Azure Files
-## Share-level permissions and Azure RBAC roles
+There are five built-in Azure role-based access control (RBAC) roles for Azure Files, some of which allow granting share-level permissions to users and groups. If you're using Azure Storage Explorer, you'll also need the [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access) role in order to read/access the Azure file share.
-The following table lists the share-level permissions and how they align with the built-in Azure RBAC roles:
+> [!NOTE]
+> Because computer accounts don't have an identity in Microsoft Entra ID, you can't configure Azure RBAC for them. However, computer accounts can access a file share by using a [default share-level permission](#share-level-permissions-for-all-authenticated-identities).
-|Supported built-in roles |Description |
+|**Built-in Azure RBAC role** |**Description** |
||| |[Storage File Data SMB Share Reader](../../role-based-access-control/built-in-roles.md#storage-file-data-smb-share-reader) |Allows for read access to files and directories in Azure file shares. This role is analogous to a file share ACL of read on Windows File servers. [Learn more](storage-files-identity-auth-active-directory-enable.md). | |[Storage File Data SMB Share Contributor](../../role-based-access-control/built-in-roles.md#storage-file-data-smb-share-contributor) |Allows for read, write, and delete access on files and directories in Azure file shares. [Learn more](storage-files-identity-auth-active-directory-enable.md). | |[Storage File Data SMB Share Elevated Contributor](../../role-based-access-control/built-in-roles.md#storage-file-data-smb-share-elevated-contributor) |Allows for read, write, delete, and modify ACLs on files and directories in Azure file shares. This role is analogous to a file share ACL of change on Windows file servers. [Learn more](storage-files-identity-auth-active-directory-enable.md). |
+|[Storage File Data Privileged Contributor](../../role-based-access-control/built-in-roles/storage.md#storage-file-data-privileged-contributor) |Allows read, write, delete, and modify ACLs in Azure file shares by overriding existing ACLs. |
+|[Storage File Data Privileged Reader](../../role-based-access-control/built-in-roles/storage.md#storage-file-data-privileged-reader) |Allows read access in Azure file shares by overriding existing ACLs. |
<a name='share-level-permissions-for-specific-azure-ad-users-or-groups'></a>
If you intend to use a specific Microsoft Entra user or group to access Azure fi
In order for share-level permissions to work, you must: -- Sync the users **and** the groups from your local AD to Microsoft Entra ID using either the on-premises [Microsoft Entra Connect Sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application or [Microsoft Entra Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md), a lightweight agent that can be installed from the Microsoft Entra Admin Center.
+- If your AD source is AD DS or Microsoft Entra Kerberos, you must sync the users **and** the groups from your local AD to Microsoft Entra ID using either the on-premises [Microsoft Entra Connect Sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application or [Microsoft Entra Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md), a lightweight agent that can be installed from the Microsoft Entra Admin Center.
- Add AD synced groups to RBAC role so they can access your storage account. > [!TIP]
In order for share-level permissions to work, you must:
You can use the Azure portal, Azure PowerShell, or Azure CLI to assign the built-in roles to the Microsoft Entra identity of a user for granting share-level permissions. > [!IMPORTANT]
-> The share-level permissions will take up to three hours to take effect once completed. Please wait for the permissions to sync before connecting to your file share using your credentials.
+> The share-level permissions will take up to three hours to take effect once completed. Be sure to wait for the permissions to sync before connecting to your file share using your credentials.
# [Portal](#tab/azure-portal) To assign an Azure role to a Microsoft Entra identity, using the [Azure portal](https://portal.azure.com), follow these steps:
-1. In the Azure portal, go to your file share, or [create a file share](storage-how-to-create-file-share.md).
+1. In the Azure portal, go to your file share, or [create an SMB file share](storage-how-to-create-file-share.md).
1. Select **Access Control (IAM)**. 1. Select **Add a role assignment**
-1. In the **Add role assignment** blade, select the [appropriate built-in role](#share-level-permissions-and-azure-rbac-roles) from the **Role** list.
- 1. Storage File Data SMB Share Reader
- 1. Storage File Data SMB Share Contributor
- 1. Storage File Data SMB Share Elevated Contributor
+1. In the **Add role assignment** blade, select the [appropriate built-in role](#azure-rbac-roles-for-azure-files) from the **Role** list.
1. Leave **Assign access to** at the default setting: **Microsoft Entra user, group, or service principal**. Select the target Microsoft Entra identity by name or email address. **The selected Microsoft Entra identity must be a hybrid identity and cannot be a cloud only identity.** This means that the same identity is also represented in AD DS. 1. Select **Save** to complete the role assignment operation.
New-AzRoleAssignment -SignInName <user-principal-name> -RoleDefinitionName $File
# [Azure CLI](#tab/azure-cli)
-The following CLI 2.0 command assigns an Azure role to a Microsoft Entra identity, based on sign-in name. For more information about assigning Azure roles with Azure CLI, see [Add or remove Azure role assignments using the Azure CLI](../../role-based-access-control/role-assignments-cli.md).
+The following CLI command assigns an Azure role to a Microsoft Entra identity, based on sign-in name. For more information about assigning Azure roles with Azure CLI, see [Add or remove Azure role assignments using the Azure CLI](../../role-based-access-control/role-assignments-cli.md).
Before you run the following sample script, remember to replace placeholder values, including brackets, with your own values. ```azurecli-interactive
-#Assign the built-in role to the target identity: Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor, Storage File Data SMB Share Elevated Contributor
+#Assign the built-in role to the target identity: Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor, Storage File Data SMB Share Elevated Contributor, Storage File Data Privileged Contributor, Storage File Data Privileged Reader
az role assignment create --role "<role-name>" --assignee <user-principal-name> --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>" ```
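For instance, here's a sketch with hypothetical values; the role name comes from the built-in roles listed above, and the UPN and resource identifiers are placeholders:

```azurecli
# Illustrative only: grant the SMB Share Reader role on a specific file share (all values are placeholders)
az role assignment create \
    --role "Storage File Data SMB Share Reader" \
    --assignee "user1@contoso.com" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
```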
To configure default share-level permissions on your storage account using the [
:::image type="content" source="media/storage-files-identity-ad-ds-assign-permissions/set-default-share-level-permission.png" alt-text="Screenshot showing how to set a default share-level permission using the Azure portal." lightbox="media/storage-files-identity-ad-ds-assign-permissions/set-default-share-level-permission.png" border="true":::
-1. Select the appropriate role to be enabled as the default [share permission](#share-level-permissions-and-azure-rbac-roles) from the dropdown list.
+1. Select the appropriate role to be enabled as the default [share permission](#azure-rbac-roles-for-azure-files) from the dropdown list.
1. Select **Save**. # [Azure PowerShell](#tab/azure-powershell)
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
description: Learn how to configure Windows ACLs for directory and file level pe
Previously updated : 05/09/2024 Last updated : 10/18/2024 recommendations: false
Both share-level and file/directory-level permissions are enforced when a user a
| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-## Azure RBAC permissions
-
-The following table contains the Azure RBAC permissions related to this configuration. If you're using Azure Storage Explorer, you'll also need the [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access) role in order to read/access the file share.
-
-| Share-level permission (built-in role) | NTFS permission | Resulting access |
-||||
-|Storage File Data SMB Share Reader | Full control, Modify, Read, Write, Execute | Read & execute |
-| | Read | Read |
-|Storage File Data SMB Share Contributor | Full control | Modify, Read, Write, Execute |
-| | Modify | Modify |
-| | Read & execute | Read & execute |
-| | Read | Read |
-| | Write | Write |
-|Storage File Data SMB Share Elevated Contributor | Full control | Modify, Read, Write, Edit (Change permissions), Execute |
-| | Modify | Modify |
-| | Read & execute | Read & execute |
-| | Read | Read |
-| | Write | Write |
- ## Supported Windows ACLs Azure Files supports the full set of basic and advanced Windows ACLs.
There are two approaches you can take to configuring and editing Windows ACLs:
- **One-time username/storage account key setup:** > [!NOTE]
-> This setup works for newly created file shares because any new file/directory will inherit the configured root permission. For file shares migrated along with existing ACLs, or if you migrate any on premises file/directory with existing permissions in a new fileshare, this approach might not work because the migrated files don't inherit the configured root ACL.
+> This setup works for newly created file shares because any new file/directory will inherit the configured root permission. For file shares migrated along with existing ACLs, or if you migrate any on-premises file/directory with existing permissions into a new file share, this approach might not work because the migrated files don't inherit the configured root ACL.
1. Log in with a username and storage account key on a machine that has unimpeded network connectivity to the domain controller, and give some users (or groups) permission to edit permissions on the root of the file share. 2. Assign those users the **Storage File Data SMB Share Elevated Contributor** Azure RBAC role.
There are two approaches you can take to configuring and editing Windows ACLs:
## Mount the file share using your storage account key
-Before you configure Windows ACLs, you must first mount the file share by using your storage account key. To do this, log into a domain-joined device, open a Windows command prompt, and run the following command. Remember to replace `<YourStorageAccountName>`, `<FileShareName>`, and `<YourStorageAccountKey>` with your own values. If Z: is already in use, replace it with an available drive letter. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**, or you can use the `Get-AzStorageAccountKey` PowerShell cmdlet.
+Before you configure Windows ACLs, you must first mount the file share by using your storage account key. To do this, log into a domain-joined device (as a Microsoft Entra user if your AD source is Microsoft Entra Domain Services), open a Windows command prompt, and run the following command. Remember to replace `<YourStorageAccountName>`, `<FileShareName>`, and `<YourStorageAccountKey>` with your own values. If Z: is already in use, replace it with an available drive letter. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**, or you can use the `Get-AzStorageAccountKey` PowerShell cmdlet.
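+
+For example, a minimal PowerShell sketch for retrieving the key; the resource group and storage account names are placeholders. (Retrieving the key with PowerShell is fine; mounting should still use `net use`, as noted next.)
+
+```powershell
+# Illustrative only: list the storage account keys and take the first key's value
+$keys = Get-AzStorageAccountKey -ResourceGroupName "<ResourceGroupName>" -Name "<YourStorageAccountName>"
+$keys[0].Value
+```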
It's important that you use the `net use` Windows command to mount the share at this stage and not PowerShell. If you use PowerShell to mount the share, then the share won't be visible to Windows File Explorer or cmd.exe, and you'll have difficulty configuring Windows ACLs.
net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /use
You can configure the Windows ACLs using either [icacls](#configure-windows-acls-with-icacls) or [Windows File Explorer](#configure-windows-acls-with-windows-file-explorer). You can also use the [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) PowerShell command.
-> [!IMPORTANT]
-> If your environment has multiple AD DS forests, don't use Windows Explorer to configure ACLs. Use icacls instead.
- If you have directories or files in on-premises file servers with Windows ACLs configured against the AD DS identities, you can copy them over to Azure Files persisting the ACLs with traditional file copy tools like Robocopy or [Azure AzCopy v 10.4+](https://github.com/Azure/azure-storage-azcopy/releases). If your directories and files are tiered to Azure Files through Azure File Sync, your ACLs are carried over and persisted in their native format.
+> [!IMPORTANT]
+> **If you're using Microsoft Entra Kerberos as your AD source, identities must be synced to Microsoft Entra ID in order for ACLs to be enforced.** You can set file/directory level ACLs for identities that aren't synced to Microsoft Entra ID. However, these ACLs won't be enforced because the Kerberos ticket used for authentication/authorization won't contain the not-synced identities. If you're using on-premises AD DS as your AD source, you can have not-synced identities in the ACLs. AD DS will put those SIDs in the Kerberos ticket, and ACLs will be enforced.
+ ### Configure Windows ACLs with icacls
-To grant full permissions to all directories and files under the file share, including the root directory, run the following Windows command from a machine that has line-of-sight to the AD domain controller. Remember to replace the placeholder values in the example with your own values.
+To grant full permissions to all directories and files under the file share, including the root directory, run the following Windows command from a machine that has unimpeded network connectivity to the AD domain controller. Remember to replace the placeholder values in the example with your own values. If your AD source is Microsoft Entra Domain Services, then `<user-upn>` will be `<user-email>`.
```
icacls <mapped-drive-letter>: /grant <user-upn>:(f)
```
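As an illustrative sketch only, the grant might look like the following once placeholders are filled in. The identities shown (`user1@contoso.com`, `CONTOSO\FileShareUsers`) are hypothetical, and with Microsoft Entra Domain Services you'd use the email-style sign-in name instead of an on-premises UPN.

```powershell
# Sketch only: user1@contoso.com and CONTOSO\FileShareUsers are hypothetical placeholders.
# Grant a user full control (F); rights applied at the root are inherited by new files and directories.
icacls Z: /grant "user1@contoso.com:(F)"

# Grant a group modify rights that explicitly inherit to subfolders (CI) and files (OI).
icacls Z: /grant "CONTOSO\FileShareUsers:(OI)(CI)M"
```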
For more information on how to use icacls to set Windows ACLs and on the different types of supported permissions, see [the command-line reference for icacls](/windows-server/administration/windows-commands/icacls).
### Configure Windows ACLs with Windows File Explorer
-If you're logged on to a domain-joined Windows client, you can use Windows File Explorer to grant full permission to all directories and files under the file share, including the root directory. If your client isn't domain-joined, [use icacls](#configure-windows-acls-with-icacls) for configuring Windows ACLs.
+If you're logged on to a domain-joined Windows client, you can use Windows File Explorer to grant full permission to all directories and files under the file share, including the root directory.
+
+> [!IMPORTANT]
+> If your client isn't domain joined, or if your environment has multiple AD forests, don't use Windows Explorer to configure ACLs. [Use icacls](#configure-windows-acls-with-icacls) instead. This is because Windows File Explorer ACL configuration requires the client to be domain joined to the AD domain that the storage account is joined to.
+
+Follow these steps to configure ACLs using Windows File Explorer.
-1. Open Windows File Explorer and right click on the file/directory and select **Properties**.
+1. Open Windows File Explorer, right click on the file/directory, and select **Properties**.
1. Select the **Security** tab.
1. Select **Edit...** to change permissions.
1. You can change the permissions of existing users or select **Add...** to grant permissions to new users.
-1. In the prompt window for adding new users, enter the target username you want to grant permissions to in the **Enter the object names to select** box, and select **Check Names** to find the full UPN name of the target user.
+1. In the prompt window for adding new users, enter the target username you want to grant permissions to in the **Enter the object names to select** box, and select **Check Names** to find the full UPN name of the target user. You might need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client.
1. Select **OK**.
1. In the **Security** tab, select all permissions you want to grant your new user.
1. Select **Apply**.

## Next step
-Now that you've enabled and configured identity-based authentication with AD DS, you can [mount a file share](storage-files-identity-ad-ds-mount-file-share.md).
+Now that you've configured directory and file-level permissions, you can [mount the file share](storage-files-identity-ad-ds-mount-file-share.md).
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Title: Mount SMB Azure file share using AD DS credentials
-description: Learn how to mount an SMB Azure file share using your on-premises Active Directory Domain Services (AD DS) credentials.
+ Title: Mount SMB Azure file share with identity-based access
+description: Learn how to mount an SMB Azure file share on Windows using Active Directory (AD) or Microsoft Entra credentials.
Previously updated : 05/09/2024 Last updated : 10/18/2024 recommendations: false
-# Mount an Azure file share
+# Mount an SMB Azure file share
-Before you begin this article, make sure you've read [configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
-
-The process described in this article verifies that your SMB file share and access permissions are set up correctly and that you can mount your SMB Azure file share. Remember that share-level role assignment can take some time to take effect.
-
-Sign in to the client using the credentials of the identity that you granted permissions to.
+The process described in this article verifies that your SMB file share and access permissions are set up correctly and that you can mount your SMB Azure file share.
## Applies to
Sign in to the client using the credentials of the identity that you granted permissions to.
Before you can mount the Azure file share, make sure you've gone through the following prerequisites:

-- If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on how to remove cached credentials with storage account key and delete existing SMB connections before initializing a new connection with AD DS or Microsoft Entra credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#identity-based-authentication).
-- Your client must have unimpeded network connectivity to your AD DS. If your machine or VM is outside of the network managed by your AD DS, you'll need to enable VPN to reach AD DS for authentication.
+- Make sure you've [assigned share-level permissions](storage-files-identity-ad-ds-assign-permissions.md) and [configured directory and file-level permissions](storage-files-identity-ad-ds-configure-permissions.md). Remember that share-level role assignment can take some time to take effect.
+- If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share and removed the persistent credentials of the storage account key. For instructions on how to remove cached credentials and delete existing SMB connections before initializing a new connection with Active Directory Domain Services (AD DS) or Microsoft Entra credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#identity-based-authentication).
+- If your AD source is AD DS or Microsoft Entra Kerberos, your client must have unimpeded network connectivity to your AD DS. If your machine or VM is outside of the network managed by your AD DS, you'll need to enable VPN to reach AD DS for authentication.
+- Sign in to the client using the credentials of the AD DS or Microsoft Entra identity that you granted permissions to.
## Mount the file share from a domain-joined VM
-Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
+Run the following PowerShell script or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. Because you've been authenticated, you won't need to provide the storage account key. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
Unless you're using [custom domain names](#mount-file-shares-using-custom-domain-names), you should mount Azure file shares using the suffix `file.core.windows.net`, even if you set up a private endpoint for your share.
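As a rough sketch of what such a mount script can look like (placeholder names only, and assuming you're signed in as the identity that was granted share-level permissions):

```powershell
# Sketch with placeholder values: test SMB connectivity, then persistently map the share to Z:.
# No storage account key is needed because the signed-in identity is used for Kerberos authentication.
$connectTestResult = Test-NetConnection -ComputerName "<storage-account-name>.file.core.windows.net" -Port 445
if ($connectTestResult.TcpTestSucceeded) {
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\<file-share-name>" -Persist -Scope Global
} else {
    Write-Error -Message "Unable to reach the storage account via port 445. Check that your organization or ISP isn't blocking port 445, or tunnel SMB traffic over a VPN or ExpressRoute."
}
```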
If you run into issues, see [Unable to mount Azure file shares with AD credentia
## Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain
-Non-domain-joined VMs or VMs that are joined to a different AD domain than the storage account can access Azure file shares if they have unimpeded network connectivity to the domain controllers and provide explicit credentials (username and password). The user accessing the file share must have an identity and credentials in the AD domain that the storage account is joined to.
+If your AD source is on-premises AD DS, then non-domain-joined VMs or VMs that are joined to a different AD domain than the storage account can access Azure file shares if they have unimpeded network connectivity to the AD domain controllers and provide explicit credentials (username and password). The user accessing the file share must have an identity and credentials in the AD domain that the storage account is joined to.
+
+If your AD source is Microsoft Entra Domain Services, the VM must have unimpeded network connectivity to the domain controllers for Microsoft Entra Domain Services, which are located in Azure. This usually requires setting up a site-to-site or point-to-site VPN. The user accessing the file share must have an identity (a Microsoft Entra identity synced from Microsoft Entra ID to Microsoft Entra Domain Services) in the Microsoft Entra Domain Services managed domain.
To mount a file share from a non-domain-joined VM, use the notation **username@domainFQDN**, where **domainFQDN** is the fully qualified domain name. This will allow the client to contact the domain controller to request and receive Kerberos tickets. You can get the value of **domainFQDN** by running `(Get-ADDomain).Dnsroot` in Active Directory PowerShell.
For example:
```
net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<username@domainFQDN>
```
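As a small sketch (hypothetical user name, and assuming the ActiveDirectory PowerShell module is available on the client), you could derive the domain FQDN first and let `net use` prompt for the password:

```powershell
# Sketch: derive the domain FQDN with the ActiveDirectory module, then mount using explicit credentials.
# user1 and the angle-bracket values are placeholders; the trailing * makes net use prompt for the password.
$domainFQDN = (Get-ADDomain).DnsRoot
net use Z: "\\<YourStorageAccountName>.file.core.windows.net\<FileShareName>" /user:"user1@$domainFQDN" *
```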
+If your AD source is Microsoft Entra Domain Services, you can also provide credentials such as **DOMAINNAME\username**, where **DOMAINNAME** is the Microsoft Entra Domain Services domain and **username** is the identity's user name in Microsoft Entra Domain Services. For example:
+
+```
+net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<DOMAINNAME\username>
+```
++ > [!NOTE]
-> Azure Files doesn't support SID to UPN translation for users and groups from a non-domain joined VM or a VM joined to a different domain via Windows File Explorer. If you want to view file/directory owners or view/modify NTFS permissions via Windows File Explorer, you can do so only from domain joined VMs.
+> Azure Files doesn't support SID to UPN translation for users and groups from a non-domain joined VM or a VM joined to a different domain via Windows File Explorer. If you want to view file/directory owners or view/modify NTFS permissions via Windows File Explorer, you can do so only from domain-joined VMs.
## Mount file shares using custom domain names
This will allow clients to mount the share with `net use \\mystorageaccount.onpr
To use this method, complete the following steps:
-1. Make sure you've set up identity-based authentication and synced your AD user account(s) to Microsoft Entra ID.
+1. Make sure you've set up identity-based authentication. If your AD source is AD DS or Microsoft Entra Kerberos, make sure you've synced your AD user account(s) to Microsoft Entra ID.
2. Modify the SPN of the storage account using the `setspn` tool. You can find `<DomainDnsRoot>` by running the following Active Directory PowerShell command: `(Get-AdDomain).DnsRoot`
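To make that step concrete, here's a hedged sketch only. The SPN value and the AD account it's registered to (assumed here to be an account named after the storage account) are assumptions, so confirm the exact format required for your environment before running it.

```powershell
# Sketch only: mystorageaccount and onprem.contoso.com are hypothetical placeholder values.
# -S checks for duplicate SPNs before adding the custom-domain SPN to the storage account's AD account.
setspn -S "cifs/mystorageaccount.onprem.contoso.com" "mystorageaccount"
```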
storage Storage Files Identity Auth Domain Services Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-domain-services-enable.md
Title: Use Microsoft Entra Domain Services with Azure Files
-description: Learn how to enable identity-based authentication over Server Message Block (SMB) for Azure Files through Microsoft Entra Domain Services. Your domain-joined Windows VMs can then access Azure file shares by using Microsoft Entra credentials.
+description: Learn how to enable identity-based authentication over Server Message Block (SMB) for Azure Files through Microsoft Entra Domain Services. Your Windows VMs can then access Azure file shares by using Microsoft Entra credentials.
Previously updated : 05/10/2024 Last updated : 10/18/2024 recommendations: false
recommendations: false
[!INCLUDE [storage-files-aad-auth-include](../../../includes/storage-files-aad-auth-include.md)]
-This article focuses on enabling and configuring Microsoft Entra Domain Services (formerly Azure Active Directory Domain Services) for identity-based authentication with Azure file shares. In this authentication scenario, Microsoft Entra credentials and Microsoft Entra Domain Services credentials are the same and can be used interchangeably.
+This article focuses on enabling Microsoft Entra Domain Services (formerly Azure Active Directory Domain Services) for identity-based authentication with Azure file shares. In this authentication scenario, Microsoft Entra credentials and Microsoft Entra Domain Services credentials are the same and can be used interchangeably.
-We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the AD source you choose.
+We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for your storage account. The setup is different depending on the AD source you choose.
If you're new to Azure Files, we recommend reading our [planning guide](storage-files-planning.md) before reading this article.
Azure Files authentication with Microsoft Entra Domain Services is available in
## Overview of the workflow
-Before you enable Microsoft Entra Domain Services authentication over SMB for Azure file shares, verify that your Microsoft Entra ID and Azure Storage environments are properly configured. We recommend that you walk through the [prerequisites](#prerequisites) to make sure you've completed all the required steps.
-
-Follow these steps to grant access to Azure Files resources with Microsoft Entra credentials:
-
-1. Enable Microsoft Entra Domain Services authentication over SMB for your storage account to register the storage account with the associated Microsoft Entra Domain Services deployment.
-1. Assign share-level permissions to a Microsoft Entra identity (a user, group, or service principal).
-1. Connect to your Azure file share using a storage account key and configure Windows access control lists (ACLs) for directories and files.
-1. Mount an Azure file share from a domain-joined VM.
- The following diagram illustrates the end-to-end workflow for enabling Microsoft Entra Domain Services authentication over SMB for Azure Files. :::image type="content" source="media/storage-files-identity-auth-domain-services-enable/files-entra-domain-services-workflow.png" alt-text="Diagram showing Microsoft Entra ID over SMB for Azure Files workflow." lightbox="media/storage-files-identity-auth-domain-services-enable/files-entra-domain-services-workflow.png" border="false":::
Get-ADUser $userObject -properties KerberosEncryptionType
> [!IMPORTANT]
> If you were previously using RC4 encryption and update the storage account to use AES-256, you should run `klist purge` on the client and then remount the file share to get new Kerberos tickets with AES-256.
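For instance, a minimal sketch of that client-side refresh (drive letter and placeholder names assumed):

```powershell
# Sketch: purge cached Kerberos tickets, drop the existing mapping, then remount to negotiate AES-256.
klist purge
net use Z: /delete
net use Z: "\\<YourStorageAccountName>.file.core.windows.net\<FileShareName>"
```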
-## Assign share-level permissions
-
-To access Azure Files resources with identity-based authentication, an identity (a user, group, or service principal) must have the necessary permissions at the share level. This process is similar to specifying Windows share permissions, where you specify the type of access that a particular user has to a file share. The guidance in this section demonstrates how to assign read, write, or delete permissions for a file share to an identity. **We highly recommend assigning permissions by declaring actions and data actions explicitly as opposed to using the wildcard (\*) character.**
-
-Most users should assign share-level permissions to specific Microsoft Entra users or groups, and then [configure Windows ACLs](#configure-windows-acls) for granular access control at the directory and file level. However, alternatively you can set a [default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities) to allow contributor, elevated contributor, or reader access to **all authenticated identities**.
-
-There are five Azure built-in roles for Azure Files, some of which allow granting share-level permissions to users and groups:
-- **Storage File Data Privileged Contributor** allows read, write, delete, and modify Windows ACLs in Azure file shares over SMB by overriding existing Windows ACLs.
-- **Storage File Data Privileged Reader** allows read access in Azure file shares over SMB by overriding existing Windows ACLs.
-- **Storage File Data SMB Share Contributor** allows read, write, and delete access in Azure file shares over SMB.
-- **Storage File Data SMB Share Elevated Contributor** allows read, write, delete, and modify Windows ACLs in Azure file shares over SMB.
-- **Storage File Data SMB Share Reader** allows read access in Azure file shares over SMB.
-
-> [!IMPORTANT]
-> Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage account key. Administrative control isn't supported with Microsoft Entra credentials.
-
-You can use the Azure portal, PowerShell, or Azure CLI to assign the built-in roles to the Microsoft Entra identity of a user for granting share-level permissions. Be aware that the share-level Azure role assignment can take some time to take effect. We recommend using share-level permission for high-level access management to an AD group representing a group of users and identities, then leverage Windows ACLs for granular access control at the directory/file level.
-
-<a name='assign-an-azure-role-to-an-azure-ad-identity'></a>
-
-### Assign an Azure role to a Microsoft Entra identity
-
-> [!IMPORTANT]
-> **Assign permissions by explicitly declaring actions and data actions as opposed to using a wildcard (\*) character.** If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are granted access for all possible data actions. This means that all such identities will also be granted any new data action added to the platform. The additional access and permissions granted through new actions or data actions may be unwanted behavior for customers using wildcard.
-
-# [Portal](#tab/azure-portal)
-To assign an Azure role to a Microsoft Entra identity using the [Azure portal](https://portal.azure.com), follow these steps:
-
-1. In the Azure portal, go to your file share, or [Create a file share](storage-how-to-create-file-share.md).
-2. Select **Access Control (IAM)**.
-3. Select **Add a role assignment**
-4. In the **Add role assignment** blade, select the appropriate built-in role (for example, Storage File Data SMB Share Reader or Storage File Data SMB Share Contributor) from the **Role** list. Leave **Assign access to** at the default setting: **Microsoft Entra user, group, or service principal**. Select the target Microsoft Entra identity by name or email address.
-5. Select **Review + assign** to complete the role assignment.
-
-# [PowerShell](#tab/azure-powershell)
-
-The following PowerShell sample shows how to assign an Azure role to a Microsoft Entra identity, based on sign-in name. For more information about assigning Azure roles with PowerShell, see [Manage access using RBAC and Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md).
-
-Before you run the following sample script, remember to replace placeholder values, including brackets, with your own values.
-
-```powershell
-#Get the name of the custom role
-$FileShareContributorRole = Get-AzRoleDefinition "<role-name>" #Use one of the built-in roles: Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor, Storage File Data SMB Share Elevated Contributor
-#Constrain the scope to the target file share
-$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
-#Assign the custom role to the target identity with the specified scope.
-New-AzRoleAssignment -SignInName <user-principal-name> -RoleDefinitionName $FileShareContributorRole.Name -Scope $scope
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-The following command shows how to assign an Azure role to a Microsoft Entra identity based on sign-in name. For more information about assigning Azure roles with Azure CLI, see [Manage access by using RBAC and Azure CLI](../../role-based-access-control/role-assignments-cli.md).
-
-Before you run the following sample script, remember to replace placeholder values, including brackets, with your own values.
-
-```azurecli-interactive
-#Assign the built-in role to the target identity: Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor, Storage File Data SMB Share Elevated Contributor
-az role assignment create --role "<role-name>" --assignee <user-principal-name> --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
-```
--
-## Configure Windows ACLs
-
-After you assign share-level permissions with RBAC, you can assign Windows ACLs at the root, directory, or file level. Think of share-level permissions as the high-level gatekeeper that determines whether a user can access the share, whereas Windows ACLs act at a more granular level to determine what operations the user can do at the directory or file level.
-
-Azure Files supports the full set of basic and advanced permissions. You can view and configure Windows ACLs on directories and files in an Azure file share by mounting the share and then using Windows File Explorer or running the Windows [icacls](/windows-server/administration/windows-commands/icacls) or [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) command.
-
-The following sets of permissions are supported on the root directory of a file share:
-- BUILTIN\Administrators:(OI)(CI)(F)
-- NT AUTHORITY\SYSTEM:(OI)(CI)(F)
-- BUILTIN\Users:(RX)
-- BUILTIN\Users:(OI)(CI)(IO)(GR,GE)
-- NT AUTHORITY\Authenticated Users:(OI)(CI)(M)
-- NT AUTHORITY\SYSTEM:(F)
-- CREATOR OWNER:(OI)(CI)(IO)(F)
-
-For more information, see [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
-
-### Mount the file share using your storage account key
-
-Before you configure Windows ACLs, you must first mount the file share to your domain-joined VM by using your storage account key. To do this, log into the domain-joined VM as a Microsoft Entra user, open a Windows command prompt, and run the following command. Remember to replace `<YourStorageAccountName>`, `<FileShareName>`, and `<YourStorageAccountKey>` with your own values. If Z: is already in use, replace it with an available drive letter. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**, or you can use the `Get-AzStorageAccountKey` PowerShell cmdlet.
-
-It's important that you use the `net use` Windows command to mount the share at this stage and not PowerShell. If you use PowerShell to mount the share, then the share won't be visible to Windows File Explorer or cmd.exe, and you won't be able to configure Windows ACLs.
-
-> [!NOTE]
-> You might see the **Full Control** ACL applied to a role already. This typically already offers the ability to assign permissions. However, because there are access checks at two levels (the share level and the file/directory level), this is restricted. Only users who have the **SMB Elevated Contributor** role and create a new file or directory can assign permissions on those new files or directories without using the storage account key. All other file/directory permission assignment requires connecting to the share using the storage account key first.
-
-```
-net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:localhost\<YourStorageAccountName> <YourStorageAccountKey>
-```
-
-### Configure Windows ACLs with Windows File Explorer
-
-After you've mounted your Azure file share, you must configure the Windows ACLs. You can do this using either Windows File Explorer or icacls.
-
-Follow these steps to use Windows File Explorer to grant full permission to all directories and files under the file share, including the root directory.
-
-1. Open Windows File Explorer and right click on the file/directory and select **Properties**.
-1. Select the **Security** tab.
-1. Select **Edit** to change permissions.
-1. You can change the permissions of existing users or select **Add** to grant permissions to new users.
-1. In the prompt window for adding new users, enter the target user name you want to grant permission to in the **Enter the object names to select** box, and select **Check Names** to find the full UPN name of the target user.
-1. Select **OK**.
-1. In the **Security** tab, select all permissions you want to grant your new user.
-1. Select **Apply**.
-
-### Configure Windows ACLs with icacls
-
-Use the following Windows command to grant full permissions to all directories and files under the file share, including the root directory. Remember to replace the placeholder values in the example with your own values.
-
-```
-icacls <mounted-drive-letter>: /grant <user-email>:(f)
-```
-
-For more information on how to use icacls to set Windows ACLs and the different types of supported permissions, see [the command-line reference for icacls](/windows-server/administration/windows-commands/icacls).
-
-## Mount the file share from a domain-joined VM
-
-The following process verifies that your file share and access permissions were set up correctly and that you can access an Azure file share from a domain-joined VM. Be aware that the share-level Azure role assignment can take some time to take effect.
-
-Sign in to the domain-joined VM using the Microsoft Entra identity to which you granted permissions. Be sure to sign in with Microsoft Entra credentials. If the drive is already mounted with the storage account key, you'll need to disconnect the drive or sign in again.
-
-Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. Because you've been authenticated, you won't need to provide the storage account key. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace `<storage-account-name>` and `<file-share-name>` with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
-
-Unless you're using [custom domain names](storage-files-identity-ad-ds-mount-file-share.md#mount-file-shares-using-custom-domain-names), you should mount Azure file shares using the suffix `file.core.windows.net`, even if you set up a private endpoint for your share.
-
-```powershell
-$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
-if ($connectTestResult.TcpTestSucceeded) {
- cmd.exe /C "cmdkey /add:`"<storage-account-name>.file.core.windows.net`" /user:`"localhost\<storage-account-name>`""
- New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\<file-share-name>" -Persist -Scope global
-} else {
- Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
-}
-```
-
-You can also use the `net-use` command from a Windows prompt to mount the file share. Remember to replace `<YourStorageAccountName>` and `<FileShareName>` with your own values.
-
-```
-net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName>
-```
-
-## Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain
-
-Non-domain-joined VMs or VMs that are joined to a different domain than the storage account can access Azure file shares using Microsoft Entra Domain Services authentication only if the VM has unimpeded network connectivity to the domain controllers for Microsoft Entra Domain Services, which are located in Azure. This usually requires setting up a site-to-site or point-to-site VPN. The user accessing the file share must have an identity (a Microsoft Entra identity synced from Microsoft Entra ID to Microsoft Entra Domain Services) in the Microsoft Entra Domain Services managed domain, and must provide explicit credentials (username and password).
-
-To mount a file share from a non-domain-joined VM, the user must either:
-- Provide credentials such as **DOMAINNAME\username** where **DOMAINNAME** is the Microsoft Entra Domain Services domain and **username** is the identity's user name in Microsoft Entra Domain Services, or
-- Use the notation **username@domainFQDN**, where **domainFQDN** is the fully qualified domain name.
-
-Using one of these approaches will allow the client to contact the domain controller in the Microsoft Entra Domain Services domain to request and receive Kerberos tickets.
-
-For example:
-
-```
-net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<DOMAINNAME\username>
-```
-
-or
-
-```
-net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<username@domainFQDN>
-```
-
-## Next steps
-
-To grant additional users access to your file share, follow the instructions in [Assign share-level permissions](#assign-share-level-permissions) and [Configure Windows ACLs](#configure-windows-acls).
-
-For more information about identity-based authentication for Azure Files, see these resources:
+## Next step
-- [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md)
-- [FAQ](storage-files-faq.md)
+- To grant users access to your file share, follow the instructions in [Assign share-level permissions](storage-files-identity-ad-ds-assign-permissions.md).
storage Storage Files Identity Auth Hybrid Cloud Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-hybrid-cloud-trust.md
description: Learn how to enable Microsoft Entra Kerberos authentication for hyb
Previously updated : 10/03/2024 Last updated : 10/18/2024 recommendations: false
In such scenarios, customers can enable Microsoft Entra Kerberos authentication
This article focuses on authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD DS identities that are synced to Microsoft Entra ID using either [Microsoft Entra Connect](../../active-directory/hybrid/whatis-azure-ad-connect.md) or [Microsoft Entra Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md). **Cloud-only identities aren't currently supported for Azure Files**.
+## Applies to
+
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+
## Scenarios

The following are examples of scenarios in which you might want to configure a cloud trust:
For guidance on disabling MFA, see the following:
### Assign share-level permissions
-When you enable identity-based access, you can set for each share which users and groups have access to that particular share. Once a user is allowed into a share, Windows ACLs (also called NTFS permissions) on individual files and directories take over. This allows for fine-grained control over permissions, similar to an SMB share on a Windows server.
+When you enable identity-based access, you must specify for each share which users and groups have access to that particular share. Once a user or group is allowed access to a share, Windows ACLs (also called NTFS permissions) on individual files and directories take over. This allows for fine-grained control over permissions, similar to an SMB share on a Windows server.
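A minimal sketch of one such share-level assignment with Azure PowerShell follows; the scope segments are placeholders, and the built-in role named here is only one example of the roles you can assign.

```powershell
# Sketch with placeholder values: scope a built-in Azure Files role to a single file share.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
New-AzRoleAssignment -SignInName "<user-principal-name>" -RoleDefinitionName "Storage File Data SMB Share Contributor" -Scope $scope
```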
To set share-level permissions, follow the instructions in [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md).

### Configure directory and file-level permissions
-Once share-level permissions are in place, you can assign directory/file-level permissions to the user or group. **This requires using a device with unimpeded network connectivity to an on-premises AD**. To use Windows File Explorer, the device also needs to be domain-joined.
-
-There are two options for configuring directory and file-level permissions with Microsoft Entra Kerberos authentication:
-- **Windows File Explorer:** If you choose this option, then the client must be domain-joined to the on-premises AD.
-- **icacls utility:** If you choose this option, then the client doesn't need to be domain-joined, but needs unimpeded network connectivity to the on-premises AD.
-
-To configure directory and file-level permissions through Windows File Explorer, you also need to specify domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step isn't required.
-
-> [!IMPORTANT]
-> You can set file/directory level ACLs for identities which aren't synced to Microsoft Entra ID. However, these ACLs won't be enforced because the Kerberos ticket used for authentication/authorization won't contain these not-synced identities. In order to enforce set ACLs, identities must be synced to Microsoft Entra ID.
-
-> [!TIP]
-> If Microsoft Entra hybrid joined users from two different forests will be accessing the share, it's best to use icacls to configure directory and file-level permissions. This is because Windows File Explorer ACL configuration requires the client to be domain joined to the Active Directory domain that the storage account is joined to.
+Once share-level permissions are in place, you can assign directory/file-level permissions to the user or group. **This requires using a device with unimpeded network connectivity to an on-premises AD**.
To configure directory and file-level permissions, follow the instructions in [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
storage Storage Files Identity Auth Hybrid Identities Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-hybrid-identities-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 10/02/2024 Last updated : 10/18/2024 recommendations: false
For more information on supported options and considerations, see [Overview of A
> [!IMPORTANT]
> You can only use one AD method for identity-based authentication with Azure Files. If Microsoft Entra Kerberos authentication for hybrid identities doesn't fit your requirements, you might be able to use [on-premises Active Directory Domain Services (AD DS)](storage-files-identity-auth-active-directory-enable.md) or [Microsoft Entra Domain Services](storage-files-identity-auth-domain-services-enable.md) instead. The configuration steps and supported scenarios are different for each method.
+## Applies to
+
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+
## Prerequisites

Before you enable Microsoft Entra Kerberos authentication over SMB for Azure file shares, make sure you've completed the following prerequisites.
For guidance on disabling MFA, see the following:
## Assign share-level permissions
-When you enable identity-based access, you can set for each share which users and groups have access to that particular share. Once a user is allowed into a share, Windows ACLs (also called NTFS permissions) on individual files and directories take over. This allows for fine-grained control over permissions, similar to an SMB share on a Windows server.
+When you enable identity-based access, you must specify for each share which users and groups have access to that particular share. Once a user or group is allowed access to a share, Windows ACLs (also called NTFS permissions) on individual files and directories take over. This allows for fine-grained control over permissions, similar to an SMB share on a Windows server.
To set share-level permissions, follow the instructions in [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md).

## Configure directory and file-level permissions
-Once share-level permissions are in place, you can assign directory/file-level permissions to the user or group. **This requires using a device with unimpeded network connectivity to an on-premises AD**. To use Windows File Explorer, the device also needs to be domain-joined.
-
-There are two options for configuring directory and file-level permissions with Microsoft Entra Kerberos authentication:
-- **Windows File Explorer:** If you choose this option, then the client must be domain-joined to the on-premises AD.
-- **icacls utility:** If you choose this option, then the client doesn't need to be domain-joined, but needs unimpeded network connectivity to the on-premises AD.
-
-To configure directory and file-level permissions through Windows File Explorer, you also need to specify domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step isn't required.
-
-> [!IMPORTANT]
-> You can set file/directory level ACLs for identities which aren't synced to Microsoft Entra ID. However, these ACLs won't be enforced because the Kerberos ticket used for authentication/authorization won't contain these not-synced identities. In order to enforce set ACLs, identities must be synced to Microsoft Entra ID.
-
-> [!TIP]
-> If Microsoft Entra hybrid joined users from two different forests will be accessing the share, it's best to use icacls to configure directory and file-level permissions. This is because Windows File Explorer ACL configuration requires the client to be domain joined to the Active Directory domain that the storage account is joined to.
+Once share-level permissions are in place, you can assign directory/file-level permissions to the user or group. **This requires using a device with unimpeded network connectivity to an on-premises AD**.
To configure directory and file-level permissions, follow the instructions in [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
az storage account update --name <storageaccountname> --resource-group <resource
## Next steps
-For more information, see these resources:
-
+- [Mount an Azure file share](storage-files-identity-ad-ds-mount-file-share.md)
- [Potential errors when enabling Microsoft Entra Kerberos authentication for hybrid users](files-troubleshoot-smb-authentication.md#potential-errors-when-enabling-azure-ad-kerberos-authentication-for-hybrid-users)
-- [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md)
- [Create a profile container with Azure Files and Microsoft Entra ID](../../virtual-desktop/create-profile-container-azure-ad.yml)
-- [FAQ](storage-files-faq.md)
+
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 07/15/2024 Last updated : 10/18/2024
Here's information about the Azure Virtual Desktop Agent.
| Release | Latest version |
|--|--|
-| Production | 1.0.9103.3700 |
+| Production | 1.0.9742.2500 |
| Validation | 1.0.9103.2900 |

> [!TIP]
> The Azure Virtual Desktop Agent is automatically installed when adding session hosts in most scenarios. If you need to install the agent manually, you can download it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it.
+## Version 1.0.9742.2500
+
+*Published: October 2024*
+
+In this update, we've made the following changes:
+
+- Fixed an issue relating to app attach expansion from the portal.
+- General improvements and bug fixes.
+
## Version 1.0.9103.3800

*Published: June 2024*