Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
api-management | Api Management Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md | Title: Capacity of an Azure API Management instance | Microsoft Docs -description: This article explains what the capacity metric is and how to make informed decisions whether to scale an Azure API Management instance. + Title: Capacity metrics - Azure API Management | Microsoft Docs +description: This article explains the capacity metrics in Azure API Management and how to make informed decisions about whether to scale an instance. Previously updated : 07/06/2022 Last updated : 08/26/2024 # Capacity of an Azure API Management instance -**Capacity** is the most important [Azure Monitor metric](api-management-howto-use-azure-monitor.md#view-metrics-of-your-apis) for making informed decisions whether to [scale or upgrade](upgrade-and-scale.md) an API Management instance to accommodate more load. Its construction is complex and imposes certain behavior. +API Management provides [Azure Monitor metrics](api-management-howto-use-azure-monitor.md#view-metrics-of-your-apis) to detect use of system capacity, helping you troubleshoot gateway problems and make informed decisions whether to [scale or upgrade](upgrade-and-scale.md) an API Management instance to accommodate more load. -This article explains what the **capacity** is and how it behaves. It shows how to access **capacity** metrics in the Azure portal and suggests when to consider scaling or upgrading your API Management instance. +This article explains the capacity metrics and how they behave, shows how to access capacity metrics in the Azure portal, and suggests when to consider scaling or upgrading your API Management instance. [!INCLUDE [api-management-workspace-availability](../../includes/api-management-workspace-availability.md)] > [!IMPORTANT]-> This article discusses how you can monitor and scale your Azure API Management instance based upon its capacity metric. However, it is equally important to understand what happens when an individual API Management instance has actually *reached* its capacity. Azure API Management will not apply service-level throttling to prevent a physical overload of the instances. When an instance reaches its physical capacity, it will behave similar to any overloaded web server that is unable to process incoming requests: latency will increase, connections will get dropped, timeout errors will occur, and so on. This means that API clients should be prepared to deal with this possibility as they do with any other external service (for example, by applying retry policies). +> This article introduces how to monitor and scale your Azure API Management instance based on capacity metrics. However, when an instance *reaches* its capacity, it won't throttle to prevent overload. Instead, it will act like an overloaded web server: increased latency, dropped connections, and timeout errors. API clients should be ready to handle these issues as they do with other external services, for example by using retry policies. ## Prerequisites -To follow the steps in this article, you must have: +To follow the steps in this article, you must have an API Management instance in one of the tiers that supports capacity metrics. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). -+ An active Azure subscription. 
+## Available capacity metrics - [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] +Different capacity metrics are available in the [v2 service tiers](v2-service-tiers-overview.md) and classic tiers. -+ An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). +#### [v2 tiers](#tab/v2-tiers) ++In the v2 tiers, the following metrics are available: ++* **CPU Percentage of Gateway** - The percentage of CPU capacity used by the gateway units. ++* **Memory Percentage of Gateway** - The percentage of memory capacity used by the gateway units. ++Available aggregations for these metrics are as follows. ++* **Avg** - Average percentage of capacity used across gateway processes in every [unit](upgrade-and-scale.md) of an API Management instance. +* **Max** - Percentage of capacity in gateway process with the greatest consumption. +++#### [Classic tiers](#tab/classic) ++In the Developer, Basic, Standard, and Premium tiers, the **Capacity** metric is available for making decisions about scaling or upgrading an API Management instance. Its construction is complex and imposes certain behavior. ++Available aggregations for this metric are as follows. ++* **Avg** - Average percentage of capacity used across gateway processes in every [unit](upgrade-and-scale.md) of an API Management instance. +* **Max** - Percentage of capacity in gateway process with the greatest consumption. [!INCLUDE [availability-capacity.md](../../includes/api-management-availability-capacity.md)] -## What is capacity +### What the Capacity metric indicates ![Diagram that explains the Capacity metric.](./media/api-management-capacity/capacity-ingredients.png) -**Capacity** is an indicator of load on an API Management instance. It reflects usage of resources (CPU, memory) and network queue lengths. CPU and memory usage reveals consumption of resources by: +**Capacity** is an indicator of load on an API Management instance. It reflects usage of resources (CPU, memory) and network queue lengths. -+ API Management data plane services, such as request processing, which can include forwarding requests or running a policy. -+ API Management management plane services, such as management actions applied via the Azure portal or Azure Resource Manager, or load coming from the [developer portal](api-management-howto-developer-portal.md). -+ Selected operating system processes, including processes that involve cost of TLS handshakes on new connections. -+ Platform updates, such as OS updates on the underlying compute resources for the instance. -+ Number of APIs deployed, regardless of activity, which can consume additional capacity. -Total **capacity** is an average of its own values from every [unit](upgrade-and-scale.md) of an API Management instance. --Although the **capacity metric** is designed to surface problems with your API Management instance, there are cases when problems won't be reflected in changes in the **capacity metric**. + ## Capacity metric behavior -Because of its construction, in real life **capacity** can be impacted by many variables, for example: +In real life capacity metrics can be impacted by many variables, for example: + connection patterns (new connection on a request versus reusing the existing connection) + size of a request and response + policies configured on each API or number of clients sending requests. 
-The more complex operations on the requests are, the higher the **capacity** consumption will be. For example, complex transformation policies consume much more CPU than a simple request forwarding. Slow backend service responses will increase it, too. +The more complex operations on the requests are, the higher the capacity consumption is. For example, complex transformation policies consume much more CPU than a simple request forwarding. Slow backend service responses increase it, too. > [!IMPORTANT]-> **Capacity** is not a direct measure of the number of requests being processed. +> Capacity metrics are not direct measures of the number of requests being processed. ![Capacity metric spikes](./media/api-management-capacity/capacity-spikes.png) -**Capacity** can also spike intermittently or be greater than zero even if no requests are being processed. It happens because of system- or platform-specific actions and should not be taken into consideration when deciding whether to scale an instance. +Capacity metrics can also spike intermittently or be greater than zero even if no requests are being processed. It happens because of system- or platform-specific actions and should not be taken into consideration when deciding whether to scale an instance. ++Although capacity metrics are designed to surface problems with your API Management instance, there are cases when problems won't be reflected in changes in these metrics. Additionally, low capacity metrics don't necessarily mean that your API Management instance isn't experiencing any problems. -Low **capacity metric** doesn't necessarily mean that your API Management instance isn't experiencing any problems. -## Use the Azure portal to examine capacity +## Use the Azure portal to examine capacity metrics ++Access metrics in the portal to understand how much capacity is used over time. ++#### [v2 tiers](#tab/v2-tiers) ++1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/). +1. In the left menu, under **Monitoring**, select **Metrics**. +1. Select the **CPU Percentage of Gateway** or **Memory Percentage of Gateway** metric from the available metrics. Choose the default **Avg** aggregation or select the **Max** aggregation to see the peak usage. +1. Pick a desired timeframe from the top bar of the section. ++> [!IMPORTANT] +> Currently, the **Capacity** metric also appears in the portal for instances in v2 tiers. However, it's not supported for use in the v2 tiers and shows a value of 0. ++> [!NOTE] +> You can set a [metric alert](api-management-howto-use-azure-monitor.md#set-up-an-alert-rule) to let you know when something unexpected is happening. For example, get notifications when your API Management instance has exceeded its expected peak CPU or Memory usage for more than 20 minutes. + ++#### [Classic tiers](#tab/classic) ![Capacity metric](./media/api-management-capacity/capacity-metric.png) 1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).-2. In the left menu, under **Monitoring**, select **Metrics**. -3. Select the **Capacity** metric from the available metrics and leave the default **Avg** aggregation. +1. In the left menu, under **Monitoring**, select **Metrics**. +1. Select the **Capacity** metric from the available metrics and leave the default **Avg** aggregation. > [!TIP] > If you've deployed your instance to multiple locations, you should always look at a **capacity** metric breakdown per location to avoid wrong interpretations. -4. 
To split the metric by location, from the section at the top, select **Apply splitting** and then select **Location**. -5. Pick a desired timeframe from the top bar of the section. +1. To split the metric by location, from the section at the top, select **Apply splitting** and then select **Location**. +1. Pick a desired timeframe from the top bar of the section. ++> [!IMPORTANT] +> Currently, the **CPU Percentage of Gateway** and **Memory Consumption of Gateway** metrics also appear in the portal for instances in classic tiers. However, they're not supported for use in classic tiers and show a value of 0. ++ - You can set a [metric alert](api-management-howto-use-azure-monitor.md#set-up-an-alert-rule) to let you know when something unexpected is happening. For example, get notifications when your API Management instance has exceeded its expected peak capacity for more than 20 minutes. - >[!TIP] - > You can configure alerts to let you know when your service is running low on capacity or use Azure Monitor [autoscaling](api-management-howto-autoscale.md) to automatically add an Azure API Management unit. Scaling operation can take around 30 minutes, so you should plan your rules accordingly. - > Only scaling the master location is allowed. ++> [!NOTE] +> * You can set a [metric alert](api-management-howto-use-azure-monitor.md#set-up-an-alert-rule) to let you know when something unexpected is happening. For example, get notifications when your API Management instance has exceeded its expected peak capacity for more than 20 minutes. +> * You can use Azure Monitor [autoscaling](api-management-howto-autoscale.md) to automatically add an Azure API Management unit. Scaling operation can take around 30 minutes, so you should plan your rules accordingly. +> * In multi-region deployments, only scaling the primary location is allowed. ++ ## Use capacity for scaling decisions -**Capacity** is the metric for making decisions whether to scale an API Management instance to accommodate more load. The following are general considerations: +Use capacity metrics for making decisions whether to scale an API Management instance to accommodate more load. The following are general considerations: + Look at a long-term trend and average. + Ignore sudden spikes that are most likely not related to an increase in load (see [Capacity metric behavior](#capacity-metric-behavior) section for explanation).-+ As a general rule, upgrade or scale your instance when the **capacity** value exceeds **60% - 70%** for a long period of time (for example, 30 minutes). Different values may work better for your service or scenario. -+ If your instance is configured with only 1 unit, upgrade or scale your instance when the **capacity** value exceeds **40%** for a long period. This recommendation is based on the need to reserve capacity for guest OS updates in the underlying service platform. ++ As a general rule, upgrade or scale your instance when a capacity metric value exceeds **60% - 70%** for a long period of time (for example, 30 minutes). Different values may work better for your service or scenario.++ If your instance is configured with only 1 unit, upgrade or scale your instance when a capacity metric value exceeds **40%** for a long period. This recommendation is based on the need to reserve capacity for guest OS updates in the underlying service platform. >[!TIP] > If you are able to estimate your traffic beforehand, test your API Management instance on workloads you expect. 
You can increase the request load on your tenant gradually and monitor the value of the capacity metric that corresponds to your peak load. Follow the steps from the previous section to use the Azure portal to understand how much capacity is used at any given time. -## Next steps +## Related content - [Upgrade and scale an Azure API Management service instance](upgrade-and-scale.md) - [Automatically scale an Azure API Management instance](api-management-howto-autoscale.md)-- [Plan and manage costs for API Management](plan-manage-costs.md)+- [Plan and manage costs for API Management](plan-manage-costs.md) |
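The capacity article above walks through examining the metric in the portal and wiring up alerts and autoscale. For readers who work from the command line, the following is a minimal sketch of equivalent Azure CLI calls; the subscription, resource group, service name, and the 70% threshold are placeholders based on the article's general guidance, not values taken from the source.

```azurecli
# Query the average Capacity metric for an API Management instance (classic tiers).
az monitor metrics list \
    --resource /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name> \
    --metric Capacity \
    --aggregation Average \
    --interval PT1M

# Alert when average capacity stays high for a sustained period (threshold is illustrative).
az monitor metrics alert create \
    --name apim-capacity-alert \
    --resource-group <resource-group> \
    --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name> \
    --condition "avg Capacity > 70" \
    --window-size 30m \
    --evaluation-frequency 5m
```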
api-management | Api Management Howto Use Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-azure-monitor.md | In this tutorial, you learn how to: ## View metrics of your APIs -API Management emits [metrics](../azure-monitor/essentials/data-platform-metrics.md) every minute, giving you near real-time visibility into the state and health of your APIs. The following are the two most frequently used metrics. For a list of all available metrics, see [supported metrics](../azure-monitor/essentials/metrics-supported.md#microsoftapimanagementservice). +API Management emits [metrics](../azure-monitor/essentials/data-platform-metrics.md) every minute, giving you near real-time visibility into the state and health of your APIs. The following are the most frequently used metrics. For a list of all available metrics, see [supported metrics](../azure-monitor/essentials/metrics-supported.md#microsoftapimanagementservice). ++* **Capacity** - helps you make decisions about upgrading/downgrading your API Management services. The metric is emitted per minute and reflects the estimated gateway capacity at the time of reporting. The metric ranges from 0-100 calculated based on gateway resources such as CPU and memory utilization and other factors. ++ > [!TIP] + > In the [v2 service tiers](v2-service-tiers-overview.md), API Management replaced the capacity metric with separate CPU and memory utilization metrics. These metrics can also be used for scaling decisions and troubleshooting. [Learn more](api-management-capacity.md) -* **Capacity** - helps you make decisions about upgrading/downgrading your API Management services. The metric is emitted per minute and reflects the estimated gateway capacity at the time of reporting. The metric ranges from 0-100 calculated based on gateway resources such as CPU and memory utilization. * **Requests** - helps you analyze API traffic going through your API Management services. The metric is emitted per minute and reports the number of gateway requests with dimensions. Filter requests by response codes, location, hostname, and errors. > [!IMPORTANT]-> The following metrics have been deprecated as of May 2019 and will be retired in August 2023: Total Gateway Requests, Successful Gateway Requests, Unauthorized Gateway Requests, Failed Gateway Requests, Other Gateway Requests. Please migrate to the Requests metric which provides equivalent functionality. +> The following metrics have been retired: Total Gateway Requests, Successful Gateway Requests, Unauthorized Gateway Requests, Failed Gateway Requests, Other Gateway Requests. Please migrate to the Requests metric which provides equivalent functionality. :::image type="content" source="media/api-management-howto-use-azure-monitor/apim-monitor-metrics-1.png" alt-text="Screenshot of Metrics in API Management Overview"::: |
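The Requests metric described in this row can be pulled the same way. A sketch, again with placeholder identifiers:

```azurecli
# Total gateway requests over the past day, in one-hour buckets.
az monitor metrics list \
    --resource /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name> \
    --metric Requests \
    --aggregation Total \
    --interval PT1H \
    --offset 1d
```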
api-management | V2 Service Tiers Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md | The following API Management capabilities are currently unavailable in the v2 ti * Zone redundancy * Multi-region deployment * Multiple custom domain names -* Capacity metric +* Capacity metric - replaced by CPU Percentage of Gateway and Memory Percentage of Gateway metrics * Autoscaling * Inbound connection using a private endpoint * Injection in a VNet in external mode or internal mode |
api-management | Validate Content Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md | The policy validates the following content in the request or response against th <content-type-map any-content-type-value="content type string" missing-content-type-value="content type string"> <type from | when="content type string" to="content type string" /> </content-type-map>- <content type="content type string" validate-as="json | xml | soap" schema-id="schema id" schema-ref="#/local/reference/path" action="ignore | prevent | detect" allow-additional-properties="true | false" /> + <content type="content type string" validate-as="json | xml | soap" schema-id="schema id" schema-ref="#/local/reference/path" action="ignore | prevent | detect" allow-additional-properties="true | false" case-insensitive-property-names="true | false"/> </validate-content> ``` The policy validates the following content in the request or response against th | schema-id | Name of an existing schema that was [added](#schemas-for-content-validation) to the API Management instance for content validation. If not specified, the default schema from the API definition is used. | No | N/A | | schema-ref| For a JSON schema specified in `schema-id`, optional reference to a valid local reference path in the JSON document. Example: `#/components/schemas/address`. The attribute should return a JSON object that API Management handles as a valid JSON schema.<br/><br/> For an XML schema, `schema-ref` isn't supported, and any top-level schema element can be used as the root of the XML request or response payload. The validation checks that all elements starting from the XML request or response payload root adhere to the provided XML schema. | No | N/A | | allow-additional-properties | Boolean. For a JSON schema, specifies whether to implement a runtime override of the `additionalProperties` value configured in the schema: <br> - `true`: allow additional properties in the request or response body, even if the JSON schema's `additionalProperties` field is configured to not allow additional properties. <br> - `false`: do not allow additional properties in the request or response body, even if the JSON schema's `additionalProperties` field is configured to allow additional properties.<br/><br/>If the attribute isn't specified, the policy validates additional properties according to configuration of the `additionalProperties` field in the schema. | No | N/A |+| case-insensitive-property-names | Boolean. For a JSON schema, specifies whether to compare property names of JSON objects without regard to case. <br> - `true`: compare property names case insensitively. <br> - `false`: compare property names case sensitively. | No | false | [!INCLUDE [api-management-validation-policy-actions](../../includes/api-management-validation-policy-actions.md)] |
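Putting the syntax in this row together with the new `case-insensitive-property-names` attribute, an illustrative policy fragment might look like the following; the schema ID, size limit, and actions are placeholders, not values from the source.

```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
    <content type="application/json" validate-as="json" schema-id="my-request-schema" action="prevent" allow-additional-properties="false" case-insensitive-property-names="true" />
</validate-content>
```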
api-management | Validate Jwt Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md | The `validate-jwt` policy enforces existence and validity of a supported JSON web token (JWT) * The policy supports both symmetric and asymmetric signing algorithms: * **Symmetric** - The following encryption algorithms are supported: A128CBC-HS256, A192CBC-HS384, A256CBC-HS512. * If used in the policy, the key must be provided inline within the policy in the Base64-encoded form.- * **Asymmetric** - The following encryption algorithms are supported: PS256, RS256, RS512. + * **Asymmetric** - The following encryption algorithms are supported: PS256, RS256, RS512, ES256. * If used in the policy, the key may be provided either via an OpenID configuration endpoint, or by providing the ID of an uploaded certificate (in PFX format) that contains the public key, or the modulus-exponent pair of the public key. * To configure the policy with one or more OpenID configuration endpoints for use with a self-hosted gateway, the OpenID configuration endpoint URLs must also be reachable by the cloud gateway. * You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Microsoft Entra authentication by applying the `validate-jwt` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control. |
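As a minimal sketch of the scenario this row describes (securing an API with Microsoft Entra by applying `validate-jwt` at the API scope and using claims), the fragment below validates a bearer token against an OpenID configuration endpoint and a required claim; the tenant ID and audience are placeholders.

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
    <required-claims>
        <claim name="aud">
            <value>{api-audience}</value>
        </claim>
    </required-claims>
</validate-jwt>
```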
app-service | Tutorial Java Spring Cosmosdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md | +#customer intent: As a developer, I want to learn how to deploy a Spring Boot web app to Azure App Service and connect to a Cosmos DB instance with the MongoDB API. Title: 'Tutorial: Linux Java app with MongoDB' description: Learn how to get a data-driven Linux Java app working in Azure App Service, with connection to a MongoDB running in Azure Cosmos DB. ms.devlang: java Previously updated : 12/10/2018 Last updated : 08/31/2024 +zone_pivot_groups: app-service-portal-azd + # Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB -> [!NOTE] -> For Spring applications, we recommend using Azure Spring Apps. However, you can still use Azure App Service as a destination. See [Java Workload Destination Guidance](https://aka.ms/javadestinations) for advice. +In this tutorial, you learn how to build, configure, and deploy a secure Spring Boot application in Azure App Service that connects to a MongoDB database in Azure (actually, a Cosmos DB database with MongoDB API). When you're finished, you'll have a Java SE application running on Azure App Service on Linux. -This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on Azure. -When you are finished, you will have a [Spring Boot](https://spring.io/projects/spring-boot) application storing data in [Azure Cosmos DB](/azure/cosmos-db/) running on [Azure App Service on Linux](overview.md). --![Spring Boot application storing data in Azure Cosmos DB](./media/tutorial-java-spring-cosmosdb/spring-todo-app-running-locally.jpg) In this tutorial, you learn how to: > * Stream diagnostic logs from App Service > * Add additional instances to scale out the sample app +**To complete this tutorial, you'll need:** +++* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java/). +* A GitHub account. You can also [get one for free](https://github.com/join). +* Knowledge of Java with Spring Framework development. +* **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available. -## Prerequisites -* [Azure CLI](/cli/azure/overview), installed on your own computer. -* [Git](https://git-scm.com/) -* [Java JDK](/azure/developer/java/fundamentals/java-support-on-azure) -* [Maven](https://maven.apache.org) -## Clone the sample TODO app and prepare the repo +* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java). +* [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed. You can follow the steps with the [Azure Cloud Shell](https://shell.azure.com) because it already has Azure Developer CLI installed. +* Knowledge of Java with Spring Framework development. +* **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available.
-This tutorial uses a sample TODO list app with a web UI that calls a Spring REST API backed by [Spring Data for Azure Cosmos DB](https://github.com/Microsoft/spring-data-cosmosdb). The code for the app is available [on GitHub](https://github.com/Microsoft/spring-todo-app). To learn more about writing Java apps using Spring and Azure Cosmos DB, see the [Spring Boot Starter with the Azure Cosmos DB for NoSQL tutorial](/java/azure/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db) and the [Spring Data for Azure Cosmos DB quick start](https://github.com/Microsoft/spring-data-cosmosdb#quick-start). -Run the following commands in your terminal to clone the sample repo and set up the sample app environment. +## Skip to the end ++You can quickly deploy the sample app in this tutorial and see it running in Azure. Just run the following commands in the [Azure Cloud Shell](https://shell.azure.com), and follow the prompt: ```bash-git clone --recurse-submodules https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2.git -cd e2e-java-experience-in-app-service-linux-part-2 -yes | cp -rf .prep/* . +mkdir msdocs-spring-boot-mongodb-sample-app +cd msdocs-spring-boot-mongodb-sample-app +azd init --template msdocs-spring-boot-mongodb-sample-app +azd up ``` -## Create an Azure Cosmos DB +## 1. Run the sample ++First, you set up a sample data-driven app as a starting point. For your convenience, the [sample repository](https://github.com/Azure-Samples/msdocs-spring-boot-mongodb-sample-app) includes a [dev container](https://docs.github.com/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers) configuration. The dev container has everything you need to develop an application, including the MongoDB database, cache, and all environment variables needed by the sample application. The dev container can run in a [GitHub codespace](https://docs.github.com/en/codespaces/overview), which means you can run the sample on any computer with a web browser. ++ :::column span="2"::: + **Step 1:** In a new browser window: + 1. Sign in to your GitHub account. + 1. Navigate to [https://github.com/Azure-Samples/msdocs-spring-boot-mongodb-sample-app/fork](https://github.com/Azure-Samples/msdocs-spring-boot-mongodb-sample-app/fork). + 1. Unselect **Copy the main branch only**. You want all the branches. + 1. Select **Create fork**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-run-sample-application-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-run-sample-application-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the GitHub fork: + 1. Select **main** > **starter-no-infra** for the starter branch. This branch contains just the sample project and no Azure-related files or configuration. + 1. Select **Code** > **Create codespace on starter-no-infra**. + The codespace takes a few minutes to set up. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-run-sample-application-2.png" alt-text="A screenshot showing how to create a codespace in GitHub." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-run-sample-application-2.png"::: + :::column-end::: + :::column span="2"::: + **Step 3:** In the codespace terminal: + 1. Run `mvn package spring-boot:run`.
+ 1. When you see the notification `Your application running on port 8080 is available.`, select **Open in Browser**. + You should see the sample application in a new browser tab. + To stop the Jetty server, type `Ctrl`+`C`. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-run-sample-application-3.png" alt-text="A screenshot showing how to run the sample application inside the GitHub codespace." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-run-sample-application-3.png"::: + :::column-end::: ++> [!TIP] +> You can ask [GitHub Copilot](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) about this repository. For example: +> +> * *@workspace What does this project do?* +> * *@workspace How does the app connect to the database?* +> * *@workspace What does the .devcontainer folder do?* ++Having issues? Check the [Troubleshooting section](#troubleshooting). +++## 2. Create App Service and Cosmos DB ++First, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Cosmos DB. For the creation process, you specify: ++* The **Name** for the web app. It's used as part of the DNS name for your app in the form of `https://<app-name>-<hash>.<region>.azurewebsites.net`. +* The **Region** to run the app physically in the world. It's also used as part of the DNS name for your app. +* The **Runtime stack** for the app. It's where you select the version of Java to use for your app. +* The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app. +* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application. ++Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources. ++ :::column span="2"::: + **Step 1:** In the Azure portal: + 1. Enter "web app database" in the search bar at the top of the Azure portal. + 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. + You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-create-app-cosmosdb-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-create-app-cosmosdb-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the **Create Web App + Database** page, fill out the form as follows. + 1. *Resource Group*: Select **Create new** and use a name of **msdocs-spring-cosmosdb-tutorial**. + 1. *Region*: Any Azure region near you. + 1. *Name*: **msdocs-spring-cosmosdb-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. + 1. *Runtime stack*: **Java 21**. + 1. *Java web server stack*: **Java SE (Embedded Web Server)**. + 1. *Engine*: **Cosmos DB API for MongoDB**. Cosmos DB is a fully managed NoSQL, relational, and vector database as a service on Azure. + 1. *Hosting plan*: **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later. + 1. 
Select **Review + create**. + 1. After validation completes, select **Create**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-create-app-cosmosdb-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-create-app-cosmosdb-2.png"::: + :::column-end::: + :::column span="2"::: + **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: + - **Resource group**: The container for all the created resources. + - **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. + - **App Service**: Represents your app and runs in the App Service plan. + - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic. + - **Azure Cosmos DB**: Accessible only from behind its private endpoint. A database is created for you on the database account. + - **Private DNS zones**: Enable DNS resolution of the database server and the Redis cache in the virtual network. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-create-app-cosmosdb-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-create-app-cosmosdb-3.png"::: + :::column-end::: ++Having issues? Check the [Troubleshooting section](#troubleshooting). ++## 3. Secure connection secrets ++The creation wizard generated the connectivity string for you already as [app settings](configure-common.md#configure-app-settings). In this step, you learn where to find the app settings, and how you can create your own. ++App settings are one way to keep connection secrets out of your code repository. When you're ready to move your secrets to a more secure location, you can use [Key Vault references](app-service-key-vault-references.md) instead. ++ :::column span="2"::: + **Step 1:** In the App Service page: + 1. In the left menu, select **Settings > Environment variables**. + 1. Next to **AZURE_COSMOS_CONNECTIONSTRING**, select **Show value**. + This connection string lets you connect to the Cosmos DB database secured behind a private endpoint. However, the secret is saved directly in the App Service app, which isn't the best practice. You'll change this. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-1.png" alt-text="A screenshot showing how to see the value of an app setting." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** Create a key vault for secure management of secrets. + 1. In the top search bar, type "*key vault*", then select **Marketplace** > **Key Vault**. + 1. In **Resource Group**, select **msdocs-spring-cosmosdb-tutorial**. + 1. In **Key vault name**, type a name that consists of only letters and numbers. + 1. In **Region**, set it to the same location as the resource group.
+ :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-2.png" alt-text="A screenshot showing how to create a key vault." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-2.png"::: + :::column-end::: + :::column span="2"::: + **Step 3:** + 1. Select the **Networking** tab. + 1. Unselect **Enable public access**. + 1. Select **Create a private endpoint**. + 1. In the dialog, in **Location**, select the same location as your App Service app. + 1. In **Resource Group**, select **msdocs-spring-cosmosdb-tutorial**. + 1. In **Name**, type **msdocs-spring-cosmosdb-XYZVvaultEndpoint**. + 1. In **Virtual network**, select **msdocs-spring-cosmosdb-XYZVnet**. + 1. In **Subnet**, select **msdocs-spring-cosmosdb-XYZSubnet**. + 1. Select **OK**. + 1. Select **Review + create**, then select **Create**. Wait for the key vault deployment to finish. You should see "Your deployment is complete." + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-3.png" alt-text="A screenshot showing how to secure a key vault with a private endpoint." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-3.png"::: + :::column-end::: + :::column span="2"::: + **Step 4:** + 1. In the top search bar, type *msdocs-spring-cosmosdb*, then select the App Service resource called **msdocs-spring-cosmosdb-XYZ**. + 1. In the App Service page, in the left menu, select **Settings > Service Connector**. There's already a connector, which the app creation wizard created for you. + 1. Select the checkbox next to the connector, then select **Edit**. + 1. In the **Basics** tab, set **Client type** to **SpringBoot**. This option creates the Spring Boot specific environment variables for you. + 1. Select the **Authentication** tab. + 1. Select **Store Secret in Key Vault**. + 1. Under **Key Vault Connection**, select **Create new**. + A **Create connection** dialog is opened on top of the edit dialog. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-4.png" alt-text="A screenshot showing how to edit a service connector with a key vault connection." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-4.png"::: + :::column-end::: + :::column span="2"::: + **Step 5:** In the **Create connection** dialog for the Key Vault connection: + 1. In **Key Vault**, select the key vault you created earlier. + 1. Select **Review + Create**. You should see that **System assigned managed identity** is set to **Selected**. + 1. When validation completes, select **Create**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-5.png" alt-text="A screenshot showing how to configure a key vault service connector." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-5.png"::: + :::column-end::: + :::column span="2"::: + **Step 6:** You're back in the edit dialog for **defaultConnector**. + 1. In the **Authentication** tab, wait for the key vault connector to be created.
When it's finished, the Key Vault Connection dropdown automatically selects it. + 1. Select **Next: Networking**. + 1. Select **Configure firewall rules to enable access to target service**. If you see the message, "No Private Endpoint on the target service," ignore it. The app creation wizard already secured the Cosmos DB database with a private endpoint. + 1. Select **Save**. Wait until the **Update succeeded** notification appears. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-6.png" alt-text="A screenshot showing the key vault connection selected in the defaultConnector." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-6.png"::: + :::column-end::: + :::column span="2"::: + **Step 7:** To verify that you secured the secrets: + 1. From the left menu, select **Environment variables** again. + 1. Make sure that the app setting **spring.data.mongodb.uri** exists. The default connector generated it for you, and your Spring Boot application already uses the variable. + 1. Next to the app setting, select **Show value**. The value should be `@Microsoft.KeyVault(...)`, which means that it's a [key vault reference](app-service-key-vault-references.md) because the secret is now managed in the key vault. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-7.png" alt-text="A screenshot showing how to see the value of the Spring Boot environment variable in Azure." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-secure-connection-secrets-7.png"::: + :::column-end::: ++Having issues? Check the [Troubleshooting section](#troubleshooting). ++## 5. Deploy sample code ++In this step, you configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository kicks off the build and deploy action. ++Following the Tomcat convention, if you want to deploy to the root context of Tomcat, name your built artifact *ROOT.war*. ++ :::column span="2"::: + **Step 1:** In the left menu, select **Deployment** > **Deployment Center**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-1.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the Deployment Center page: + 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. + 1. Sign in to your GitHub account and follow the prompt to authorize Azure. + 1. In **Organization**, select your account. + 1. In **Repository**, select **msdocs-spring-boot-mongodb-sample-app**. + 1. In **Branch**, select **starter-no-infra**. This is the same branch that you worked in with your sample app, without any Azure-related files or configuration. + 1. For **Authentication type**, select **User-assigned identity**. + 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ By default, the deployment center [creates a user-assigned identity](#i-dont-have-permissions-to-create-a-user-assigned-identity) for the workflow to authenticate using Microsoft Entra (OIDC authentication). For alternative authentication options, see [Deploy to App Service using GitHub Actions](deploy-github-actions.md). + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-2.png"::: + :::column-end::: + :::column span="3"::: + **Step 3:** + 1. Select the **Logs** tab. See that a new deployment already ran, but the status is **Failed**. + 1. Select **Build/Deploy Logs**. + A browser tab opens to the **Actions** tab of your forked repository in GitHub. In **Annotations**, you see the error `The string 'java21' is not valid SemVer notation for a Java version`. If you want, select the failed **build** step in the page to get more information. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-3.png" alt-text="A screenshot showing an error in the deployment center's Logs page." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-3.png"::: + :::column-end::: + :::column span="2"::: + **Step 4:** The error shows that something went wrong during the GitHub workflow. To fix it, pull the latest changes into your codespace first. Back in the GitHub codespace of your sample fork, run `git pull origin starter-no-infra`. + This pulls the newly committed workflow file into your codespace. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing git pull inside a GitHub codespace." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-4.png"::: + :::column-end::: + :::column span="2"::: + **Step 5 (Option 1: with GitHub Copilot):** + 1. Start a new chat session by selecting the **Chat** view, then selecting **+**. + 1. Ask, "*@workspace why do i get the error in GitHub actions: The string 'java21' is not valid SemVer notation for a Java version.*" Copilot might give you an explanation and even give you the link to the workflow file that you need to fix. + 1. Open *.github/workflows/starter-no-infra_msdocs-spring-cosmosdb-123.yaml* in the explorer and make the suggested fix. + GitHub Copilot doesn't give you the same response every time, so you might need to ask more questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace). + :::column-end::: + :::column::: + :::image type="content" source="media/tutorial-java-spring-cosmosdb/github-copilot-1.png" alt-text="A screenshot showing how to ask a question in a new GitHub Copilot chat session." lightbox="media/tutorial-java-spring-cosmosdb/github-copilot-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 5 (Option 2: without GitHub Copilot):** + 1. Open *.github/workflows/starter-no-infra_msdocs-spring-cosmosdb-123.yaml* in the explorer and find the `setup-java@v4` action. + 1. Change the value of `java-version` to `'21'`.
+ :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-5.png" alt-text="A screenshot showing a GitHub codespace and the autogenerated workflow file opened." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-5.png"::: + :::column-end::: + :::column span="2"::: + **Step 6:** + 1. Select the **Source Control** extension. + 1. In the textbox, type a commit message like `Fix error in java-version`. Or, select :::image type="icon" source="media/quickstart-dotnetcore/github-copilot-in-editor.png" border="false"::: and let GitHub Copilot generate a commit message for you. + 1. Select **Commit**, then confirm with **Yes**. + 1. Select **Sync changes 1**, then confirm with **OK**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing the changes being committed and pushed to GitHub." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-6.png"::: + :::column-end::: + :::column span="2"::: + **Step 7:** + Back in the Deployment Center page in the Azure portal: + 1. Under the **Logs** tab, select **Refresh**. A new deployment run is already started from your committed changes. + 1. In the log item for the deployment run, select the **Build/Deploy Logs** entry with the latest timestamp. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing a successful deployment in the deployment center's Logs page." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-7.png"::: + :::column-end::: + :::column span="2"::: + **Step 8:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-8.png" alt-text="A screenshot showing a successful GitHub run." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-deploy-sample-code-8.png"::: + :::column-end::: ++Having issues? Check the [Troubleshooting section](#troubleshooting). ++## 6. Browse to the app ++ :::column span="2"::: + **Step 1:** In the App Service page: + 1. From the left menu, select **Overview**. + 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-browse-app-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** Add a few tasks to the list. + Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Cosmos DB. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-browse-app-2.png" alt-text="A screenshot of the Spring Boot web app with Cosmos DB running in Azure." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-browse-app-2.png"::: + :::column-end::: ++Having issues? 
Check the [Troubleshooting section](#troubleshooting). ++## 7. Stream diagnostic logs ++Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample application includes standard Log4j logging statements to demonstrate this capability, as shown in the following snippet: +++ :::column span="2"::: + **Step 1:** In the App Service page: + 1. From the left menu, select **App Service logs**. + 1. Under **Application logging**, select **File System**. + 1. In the top menu, select **Save**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-stream-diagnostic-logs-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-stream-diagnostic-logs-2.png"::: + :::column-end::: ++Learn more about logging in Java apps in the series on [Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](../azure-monitor/app/opentelemetry-enable.md?tabs=java). ++Having issues? Check the [Troubleshooting section](#troubleshooting). ++## 8. Clean up resources ++When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group. ++ :::column span="2"::: + **Step 1:** In the search bar at the top of the Azure portal: + 1. Enter the resource group name. + 1. Select the resource group. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-clean-up-resources-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the resource group page, select **Delete resource group**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the **Delete Resource Group** button in the Azure portal." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-clean-up-resources-2.png"::: + :::column-end::: + :::column span="2"::: + **Step 3:** + 1. Confirm your deletion by typing the resource group name. + 1. Select **Delete**. + 1. Confirm with **Delete** again. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-clean-up-resources-3.png"::: + :::column-end::: ++++## 2. Create Azure resources and deploy a sample app ++In this step, you create the Azure resources and deploy a sample app to App Service on Linux. 
The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Cosmos DB. ++The dev container already has the [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) (AZD). ++1. From the repository root, run `azd init`. ++ ```bash + azd init --template javase-app-service-cosmos-redis-infra + ``` -Follow these steps to create an Azure Cosmos DB database in your subscription. The TODO list app will connect to this database and store its data when running, persisting the application state no matter where you run the application. +1. When prompted, give the following answers: + + |Question |Answer | + ||| + |The current directory is not empty. Would you like to initialize a project here in '\<your-directory>'? | **Y** | + |What would you like to do with these files? | **Keep my existing files unchanged** | + |Enter a new environment name | Type a unique name. The AZD template uses this name as part of the DNS name of your web app in Azure (`<app-name>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. | -1. Sign in to your Azure CLI, and optionally set your subscription if you have more than one connected to your sign-in credentials. +1. Sign in to Azure by running the `azd auth login` command and following the prompt: - ```azurecli - az login - az account set -s <your-subscription-id> - ``` + ```bash + azd auth login + ``` -2. Create an Azure Resource Group, noting the resource group name. +1. Create the necessary Azure resources and deploy the app code with the `azd up` command. Follow the prompt to select the desired subscription and location for the Azure resources. - ```azurecli - az group create -n <your-azure-group-name> \ - -l <your-resource-group-region> - ``` + ```bash + azd up + ``` -3. Create Azure Cosmos DB with the `GlobalDocumentDB` kind. -The name of the Azure Cosmos DB instance must use only lower case letters. Note down the `documentEndpoint` field in the response from the command. + The `azd up` command takes about 15 minutes to complete (the Redis cache takes the most time). It also compiles and deploys your application code, but you'll modify your code later to work with App Service. While it's running, the command provides messages about the provisioning and deployment process, including a link to the deployment in Azure. When it finishes, the command also displays a link to the deployed application. - ```azurecli - az cosmosdb create --kind GlobalDocumentDB \ - -g <your-azure-group-name> \ - -n <your-azure-COSMOS-DB-name-in-lower-case-letters> - ``` + This AZD template contains files (*azure.yaml* and the *infra* directory) that generate a secure-by-default architecture with the following Azure resources: -4. Get your Azure Cosmos DB key to connect to the app. Keep the `primaryMasterKey`, `documentEndpoint` nearby as you'll need them in the next step. + - **Resource group**: The container for all the created resources. + - **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *B1* tier is created. + - **App Service**: Represents your app and runs in the App Service plan. + - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic. + - **Azure Cosmos DB account with MongoDB API**: Accessible only from behind its private endpoint. A database is created for you on the server. + - **Azure Cache for Redis**: Accessible only from within the virtual network. + - **Key vault**: Accessible only from behind its private endpoint.
Used to manage secrets for the App Service app. + - **Private DNS zones**: Enable DNS resolution of the Cosmos DB database, the Redis cache, and the key vault in the virtual network. + - **Log Analytics workspace**: Acts as the target container for your app to ship its logs, where you can also query the logs. - ```azurecli - az cosmosdb keys list -g <your-azure-group-name> -n <your-azure-COSMOSDB-name> - ``` +Having issues? Check the [Troubleshooting section](#troubleshooting). -## Configure the TODO app properties +## 3. Verify connection strings -Open a terminal on your computer. Copy the sample script file in the cloned repo so you can customize it for the Azure Cosmos DB database you just created. +The AZD template you use generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings) and outputs the them to the terminal for your convenience. App settings are one way to keep connection secrets out of your code repository. -```bash -cd initial/spring-todo-app -cp set-env-variables-template.sh .scripts/set-env-variables.sh -``` - -Edit `.scripts/set-env-variables.sh` in your favorite editor and supply Azure Cosmos DB connection info. For the App Service Linux configuration, use the same region as before (`your-resource-group-region`) and resource group (`your-azure-group-name`) used when creating the Azure Cosmos DB database. Choose a WEBAPP_NAME that is unique since it cannot duplicate any web app name in any Azure deployment. +1. In the AZD output, find the app setting `spring.data.mongodb.uri`. Only the setting names are displayed. They look like this in the AZD output: -```bash -export COSMOSDB_URI=<put-your-COSMOS-DB-documentEndpoint-URI-here> -export COSMOSDB_KEY=<put-your-COSMOS-DB-primaryMasterKey-here> -export COSMOSDB_DBNAME=<put-your-COSMOS-DB-name-here> --# App Service Linux Configuration -export RESOURCEGROUP_NAME=<put-your-resource-group-name-here> -export WEBAPP_NAME=<put-your-Webapp-name-here> -export REGION=<put-your-REGION-here> -``` + <pre> + App Service app has the following app settings: + - spring.data.mongodb.uri + - spring.data.mongodb.database + - spring.redis.host + - spring.redis.port + - spring.redis.password + - spring.redis.database + - spring.redis.ssl + - spring.cloud.azure.keyvault.secret.credential.managed_identity_enabled + - spring.cloud.azure.keyvault.secret.endpoint + - azure.keyvault.uri + - azure.keyvault.scope + </pre> -Then run the script: + `spring.data.mongodb.uri` contains the connection URI to the Cosmos DB database in Azure. It's a standard Spring Data variable, which your application is already using in the *src/main/resources/application.properties* file. -```bash -source .scripts/set-env-variables.sh -``` - -These environment variables are used in `application.properties` in the TODO list app. The fields in the properties file define a default repository configuration for Spring Data: +1. In the explorer, navigate to *src/main/resources/application.properties* and see that your Spring Boot app is already using the `spring.data.mongodb.uri` variable to access data. -```properties -azure.cosmosdb.uri=${COSMOSDB_URI} -azure.cosmosdb.key=${COSMOSDB_KEY} -azure.cosmosdb.database=${COSMOSDB_DBNAME} -``` +1. For your convenience, the AZD template output shows you the direct link to the app's app settings page. Find the link and open it in a new browser tab. 
-```java -@Repository -public interface TodoItemRepository extends DocumentDbRepository<TodoItem, String> { -} -``` + If you look at the value of `spring.data.mongodb.uri`, it should be `@Microsoft.KeyValut(...)`, which means that it's a [key vault reference](app-service-key-vault-references.md) because the secret is managed in the key vault. -Then the sample app uses the `@Document` annotation imported from `com.microsoft.azure.spring.data.cosmosdb.core.mapping.Document` to set up an entity type to be stored and managed by Azure Cosmos DB: +Having issues? Check the [Troubleshooting section](#troubleshooting). -```java -@Document -public class TodoItem { - private String id; - private String description; - private String owner; - private boolean finished; -``` +## 5. Browse to the app -## Run the sample app +1. In the AZD output, find the URL of your app and navigate to it in the browser. The URL looks like this in the AZD output: -Use Maven to run the sample. + <pre> + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service web + - Endpoint: https://<app-name>.azurewebsites.net/ + </pre> -```bash -mvn package spring-boot:run -``` +2. Add a few tasks to the list. -The output should look like the following. --```output -bash-3.2$ mvn package spring-boot:run -[INFO] Scanning for projects... -[INFO] -[INFO] -[INFO] Building spring-todo-app 2.0-SNAPSHOT -[INFO] -[INFO] ---[INFO] SimpleUrlHandlerMapping - Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] -[INFO] SimpleUrlHandlerMapping - Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] -[INFO] WelcomePageHandlerMapping - Adding welcome page: class path resource [static/https://docsupdatetracker.net/index.html] -2018-10-28 15:04:32.101 INFO 7673 [ main] c.m.azure.documentdb.DocumentClient : Initializing DocumentClient with serviceEndpoint [https://sample-cosmos-db-westus.documents.azure.com:443/], ConnectionPolicy [ConnectionPolicy [requestTimeout=60, mediaRequestTimeout=300, connectionMode=Gateway, mediaReadMode=Buffered, maxPoolSize=800, idleConnectionTimeout=60, userAgentSuffix=;spring-data/2.0.6;098063be661ab767976bd5a2ec350e978faba99348207e8627375e8033277cb2, retryOptions=com.microsoft.azure.documentdb.RetryOptions@6b9fb84d, enableEndpointDiscovery=true, preferredLocations=null]], ConsistencyLevel [null] -[INFO] AnnotationMBeanExporter - Registering beans for JMX exposure on startup -[INFO] TomcatWebServer - Tomcat started on port(s): 8080 (http) with context path '' -[INFO] TodoApplication - Started TodoApplication in 45.573 seconds (JVM running for 76.534) -``` + :::image type="content" source="./media/tutorial-java-spring-cosmosdb/azure-portal-browse-app-2.png" alt-text="A screenshot of the Tomcat web app with MySQL running in Azure showing tasks." lightbox="./media/tutorial-java-spring-cosmosdb/azure-portal-browse-app-2.png"::: -You can access Spring TODO App locally using this link once the app is started: `http://localhost:8080/`. -- ![Access Spring TODO app locally](./media/tutorial-java-spring-cosmosdb/spring-todo-app-running-locally.jpg) --If you see exceptions instead of the "Started TodoApplication" message, check that the `bash` script in the previous step exported the environment variables properly and that the values are correct for the Azure Cosmos DB database you created. 
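If you prefer the command line to the portal for this check, a minimal sketch using the Azure CLI (the app and resource group names are placeholders that you substitute from the AZD output) is:

```bash
# List the generated app settings and confirm that the MongoDB URI is stored as a Key Vault reference.
az webapp config appsettings list \
    --name <app-name> \
    --resource-group <resource-group> \
    --query "[?name=='spring.data.mongodb.uri']"
```

The returned value should be the `@Microsoft.KeyVault(...)` reference rather than a plain connection string.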
--## Configure Azure deployment --Open the `pom.xml` file in the `initial/spring-boot-todo` directory and add the following [Azure Web App Plugin for Maven](https://github.com/Microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md) configuration. --```xml -<plugins> -- <!--*************************************************--> - <!-- Deploy to Java SE in App Service Linux --> - <!--*************************************************--> - - <plugin> - <groupId>com.microsoft.azure</groupId> - <artifactId>azure-webapp-maven-plugin</artifactId> - <version>2.5.0</version> - <configuration> - <schemaVersion>v2</schemaVersion> -- <!-- Web App information --> - <resourceGroup>${RESOURCEGROUP_NAME}</resourceGroup> - <appName>${WEBAPP_NAME}</appName> - <region>${REGION}</region> - <pricingTier>P1v2</pricingTier> - <!-- Java Runtime Stack for Web App on Linux--> - <runtime> - <os>linux</os> - <javaVersion>Java 8</javaVersion> - <webContainer>Java SE</webContainer> - </runtime> - <deployment> - <resources> - <resource> - <directory>${project.basedir}/target</directory> - <includes> - <include>*.jar</include> - </includes> - </resource> - </resources> - </deployment> -- <appSettings> - <property> - <name>COSMOSDB_URI</name> - <value>${COSMOSDB_URI}</value> - </property> - <property> - <name>COSMOSDB_KEY</name> - <value>${COSMOSDB_KEY}</value> - </property> - <property> - <name>COSMOSDB_DBNAME</name> - <value>${COSMOSDB_DBNAME}</value> - </property> - <property> - <name>JAVA_OPTS</name> - <value>-Dserver.port=80</value> - </property> - </appSettings> -- </configuration> - </plugin> - ... -</plugins> -``` + Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Cosmos DB. -## Deploy to App Service on Linux +Having issues? Check the [Troubleshooting section](#troubleshooting). -Use the `mvn azure-webapp:deploy` Maven goal to deploy the TODO app to Azure App Service on Linux. +## 6. Stream diagnostic logs -```bash +Azure App Service can capture console logs to help you diagnose issues with your application. For convenience, the AZD template already [enabled logging to the local file system](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) and is [shipping the logs to a Log Analytics workspace](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor). -# Deploy -bash-3.2$ mvn azure-webapp:deploy -[INFO] Scanning for projects... -[INFO] -[INFO] -[INFO] Building spring-todo-app 2.0-SNAPSHOT -[INFO] -[INFO] -[INFO] azure-webapp-maven-plugin:2.5.0:deploy (default-cli) @ spring-todo-app -Auth Type: AZURE_CLI -Default subscription: xxxxxxxxx -Username: xxxxxxxxx -[INFO] Subscription: xxxxxxxxx -[INFO] Creating App Service Plan 'ServicePlanb6ba8178-5bbb-49e7'... -[INFO] Successfully created App Service Plan. -[INFO] Creating web App spring-todo-app... -[INFO] Successfully created Web App spring-todo-app. -[INFO] Trying to deploy artifact to spring-todo-app... -[INFO] Successfully deployed the artifact to https://spring-todo-app.azurewebsites.net -[INFO] -[INFO] BUILD SUCCESS -[INFO] -[INFO] Total time: 02:19 min -[INFO] Finished at: 2019-11-06T15:32:03-07:00 -[INFO] Final Memory: 50M/574M -[INFO] -``` +The sample application includes standard Log4j logging statements to demonstrate this capability, as shown in the following snippet: -The output contains the URL to your deployed application (in this example, `https://spring-todo-app.azurewebsites.net`). 
You can copy this URL into your web browser or run the following command in your Terminal window to load your app. ++In the AZD output, find the link to stream App Service logs and navigate to it in the browser. The link looks like this in the AZD output: ++<pre> +Stream App Service logs at: https://portal.azure.com/#@/resource/subscriptions/<subscription-guid>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name>/logStream +</pre> ++Learn more about logging in Java apps in the series on [Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](../azure-monitor/app/opentelemetry-enable.md?tabs=java). ++Having issues? Check the [Troubleshooting section](#troubleshooting). ++## 7. Clean up resources ++To delete all Azure resources in the current deployment environment, run `azd down` and follow the prompts. ```bash-explorer https://spring-todo-app.azurewebsites.net +azd down ``` -You should see the app running with the remote URL in the address bar: - ![Spring Boot application running with a remote URL](./media/tutorial-java-spring-cosmosdb/spring-todo-app-running-in-app-service.jpg) +## Troubleshooting -## Stream diagnostic logs +- [The portal deployment view for Azure Cosmos DB shows a Conflict status](#the-portal-deployment-view-for-azure-cosmos-db-shows-a-conflict-status) +- [The deployed sample app doesn't show the tasks list app](#the-deployed-sample-app-doesnt-show-the-tasks-list-app) +#### The portal deployment view for Azure Cosmos DB shows a Conflict status +Depending on your subscription and the region you select, you might see the deployment status for Azure Cosmos DB to be `Conflict`, with the following message in Operation details: -## Scale out the TODO App +`Sorry, we are currently experiencing high demand in <region> region, and cannot fulfill your request at this time.` -Scale out the application by adding another worker: +The error is most likely caused by a limit on your subscription for the region you select. Try choosing a different region for your deployment. -```azurecli -az appservice plan update --number-of-workers 2 \ - --name ${WEBAPP_PLAN_NAME} \ - --resource-group <your-azure-group-name> -``` +#### The deployed sample app doesn't show the tasks list app ++If you see a `Hey, Java developers!` page instead of the tasks list app, App Service is most likely still loading the updated container from your most recent code deployment. Wait a few minutes and refresh the page. ++## Frequently asked questions ++- [How much does this setup cost?](#how-much-does-this-setup-cost) +- [How do I run database migration with the Cosmos DB database behind the virtual network?](#how-do-i-run-database-migration-with-the-cosmos-db-database-behind-the-virtual-network) +- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions) +- [I don't have permissions to create a user-assigned identity](#i-dont-have-permissions-to-create-a-user-assigned-identity) ++#### How much does this setup cost? -## Clean up resources +Pricing for the created resources is as follows: -If you don't need these resources for another tutorial (see [Next steps](#next)), you can delete them by running the following command in the Cloud Shell: -```azurecli -az group delete --name <your-azure-group-name> --yes +- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/). 
+- The Azure Cosmos DB account is created in **Serverless** tier and there's a small cost associated with this tier. See [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/serverless/). +- The Azure Cache for Redis is created in **Basic** tier with the minimum cache size. There's a small cost associated with this tier. You can scale it up to higher performance tiers for higher availability, clustering, and other features. See [Azure Cache for Redis pricing](https://azure.microsoft.com/pricing/details/cache/). +- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). +- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/). ++#### How do I run database migration with the Cosmos DB database behind the virtual network? ++The Java SE container in App Service already has network connectivity to Cosmos DB, but doesn't contain any migration tools or other MongoDB tools. You have a few options: ++- Run database migrations automatically at app start, such as with Hibernate and or Flyway. +- In the app's [SSH session](configure-language-java-deploy-run.md#linux-troubleshooting-tools), install a migration tool like [Flyway CLI](https://documentation.red-gate.com/fd/command-line-184127404.html), then run the migration script. Remember that the installed tool won't persist after an app restart unless it's in the */home* directory. +- [Integrate the Azure cloud shell](../cloud-shell/private-vnet.md) with the virtual network and run database migrations from there. ++#### How does local app development work with GitHub Actions? ++Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push to GitHub. For example: ++```terminal +git add . +git commit -m "<some-message>" +git push origin main ``` -<a name="next"></a> +#### I don't have permissions to create a user-assigned identity ++See [Set up GitHub Actions deployment from the Deployment Center](deploy-github-actions.md#set-up-github-actions-deployment-from-the-deployment-center). ++#### What can I do with GitHub Copilot in my codespace? ++You might notice that the GitHub Copilot chat view was already there for you when you created the codespace. For your convenience, we include the GitHub Copilot chat extension in the container definition (see *.devcontainer/devcontainer.json*). However, you need a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) (30-day free trial available). ++A few tips for you when you talk to GitHub Copilot: ++- In a single chat session, the questions and answers build on each other and you can adjust your questions to fine-tune the answer you get. +- By default, GitHub Copilot doesn't have access to any file in your repository. To ask questions about a file, open the file in the editor first. +- To let GitHub Copilot have access to all of the files in the repository when preparing its answers, begin your question with `@workspace`. For more information, see [Use the @workspace agent](https://github.blog/2024-03-25-how-to-use-github-copilot-in-your-ide-tips-tricks-and-best-practices/#10-use-the-workspace-agent). 
+- In the chat session, GitHub Copilot can suggest changes and (with `@workspace`) even where to make the changes, but it's not allowed to make the changes for you. It's up to you to add the suggested changes and test them.

## Next steps

-[Azure for Java Developers](/java/azure/)
-[Spring Boot](https://spring.io/projects/spring-boot),
-[Spring Data for Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db),
-[Azure Cosmos DB](/azure/cosmos-db/introduction) and
-[App Service Linux](overview.md).
+- [Azure for Java Developers](/java/azure/)
+- [Spring Boot](https://spring.io/projects/spring-boot)
+- [Spring Data for Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db)
+- [Azure Cosmos DB](/azure/cosmos-db/introduction)
+- [App Service Linux](overview.md)

-Learn more about running Java apps on App Service on Linux in the developer guide.
+Learn more about running Java apps on App Service in the developer guide.

> [!div class="nextstepaction"]
-> [Java in App Service Linux dev guide](configure-language-java-deploy-run.md?pivots=platform-linux)
+> [Configure a Java app in Azure App Service](configure-language-java-deploy-run.md?pivots=platform-linux)

Learn how to secure your app with a custom domain and certificate. |
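If you keep working on the sample after finishing the tutorial, one possible redeploy loop with the AZD template is sketched below (the commit message is a placeholder; `azd deploy` pushes only your application code and doesn't reprovision the Azure resources):

```bash
# Commit local changes, then redeploy just the application code to the existing resources.
git add .
git commit -m "<some-message>"
azd deploy
```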
app-service | Tutorial Java Tomcat Mysql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-mysql-app.md | First, you set up a sample data-driven app as a starting point. For your conveni :::column span="2"::: **Step 2:** In the GitHub fork: 1. Select **main** > **starter-no-infra** for the starter branch. This branch contains just the sample project and no Azure-related files or configuration.- 1. Select **Code** > **Create codespace on main**. + 1. Select **Code** > **Create codespace on starter-no-infra**. The codespace takes a few minutes to set up. :::column-end::: :::column::: Having issues? Check the [Troubleshooting section](#troubleshooting). First, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for MySQL. For the creation process, you specify: -* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`. -* The **Region** to run the app physically in the world. +* The **Name** for the web app. It's the name used as part of the DNS name for your app in the form of `https://<app-name>.azurewebsites.net`. +* The **Region** to run the app physically in the world. It's also used as part of the DNS name for your app. * The **Runtime stack** for the app. It's where you select the version of Java to use for your app. * The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app. * The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application. Having issues? Check the [Troubleshooting section](#troubleshooting). 1. Open *src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java* in the explorer and add the code suggestion in the `contextInitialized` method. - GitHub Copilot doesn't give you the same response every time, you might need to add additional questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace) + GitHub Copilot doesn't give you the same response every time, you might need to ask other questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace) 1. Back in the codespace terminal, run `azd deploy`. Pricing for the created resources is as follows: #### How do I connect to the MySQL server behind the virtual network with other tools? -- The Tomcat container currently doesn't have the `mysql-client` terminal too. If you want, you must manually install it. Note that anything you install doesn't persist across app restarts.+- The Tomcat container currently doesn't have the `mysql-client` terminal too. If you want, you must manually install it. Remember that anything you install doesn't persist across app restarts. - To connect from a desktop tool like MySQL Workbench, your machine must be within the virtual network. For example, it could be an Azure VM in one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network. - You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network. 
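For the manual installation mentioned in the first bullet, a minimal sketch from the app's SSH session might look like the following (the package name depends on the container's base image, and the server, user, and database values are placeholders):

```bash
# Inside the App Service SSH session; anything installed here doesn't persist across restarts.
apt-get update
apt-get install -y default-mysql-client   # or mysql-client / mariadb-client, depending on the image

# Connect to the server over the integrated virtual network.
mysql --host=<server-name>.mysql.database.azure.com --user=<admin-user> --password <database-name>
```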
See [Set up GitHub Actions deployment from the Deployment Center](deploy-github- #### What can I do with GitHub Copilot in my codespace? -You might have noticed that the GitHub Copilot chat view was already there for you when you created the codespace. For your convenience, we include the GitHub Copilot chat extension in the container definition (see *.devcontainer/devcontainer.json*). However, you need a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) (30-day free trial available). +You might notice that the GitHub Copilot chat view was already there for you when you created the codespace. For your convenience, we include the GitHub Copilot chat extension in the container definition (see *.devcontainer/devcontainer.json*). However, you need a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) (30-day free trial available). A few tips for you when you talk to GitHub Copilot: A few tips for you when you talk to GitHub Copilot: Here are some other things you can say to fine-tune the answer you get. -* Please change this code to use the data source jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS. +* Change this code to use the data source jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS. * Some imports in your code are using javax but I have a Jakarta app. * I want this code to run only if the environment variable AZURE_MYSQL_CONNECTIONSTRING is set. * I want this code to run only in Azure App Service and not locally. |
automation | Configure Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/configure-alerts.md | Title: How to create alerts for Azure Automation Change Tracking and Inventory description: This article tells how to configure Azure alerts to notify about the status of changes detected by Change Tracking and Inventory. Previously updated : 07/22/2024 Last updated : 08/30/2024 # How to create alerts for Change Tracking and Inventory Alerts in Azure proactively notify you of results from runbook jobs, service health issues, or other scenarios related to your Automation account. Azure Automation does not include pre-configured alert rules, but you can create your own based on data that it generates. This article provides guidance on creating alert rules based on changes identified by Change Tracking and Inventory. If you're not familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../../azure-monitor/alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log alerts in Azure Monitor](../../azure-monitor/alerts/alerts-unified-log.md). |
automation | Enable From Automation Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-automation-account.md | Title: Enable Azure Automation Change Tracking and Inventory from Automation acc
description: This article tells how to enable Change Tracking and Inventory from an Automation account.
Previously updated : 07/22/2024
Last updated : 08/30/2024

# Enable Change Tracking and Inventory from an Automation account

+> [!Important]
+> Change Tracking and Inventory using Log Analytics agent was retired on **31 August 2024**, and we recommend that you use Azure Monitoring Agent as the new supporting agent. Follow the guidelines for [migration from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](guidance-migration-log-analytics-monitoring-agent.md).
+
This article describes how you can use your Automation account to enable [Change Tracking and Inventory](overview.md) for VMs in your environment. To enable Azure VMs at scale, you must enable an existing VM using Change Tracking and Inventory.

> [!NOTE] |
automation | Enable From Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-portal.md | Title: Enable Azure Automation Change Tracking and Inventory from the Azure port
description: This article tells how to enable the Change Tracking and Inventory feature from the Azure portal.
Previously updated : 07/22/2024
Last updated : 08/30/2024

# Enable Change Tracking and Inventory from Azure portal

+> [!Important]
+> Change Tracking and Inventory using Log Analytics agent was retired on **31 August 2024**, and we recommend that you use Azure Monitoring Agent as the new supporting agent. Follow the guidelines for [migration from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](guidance-migration-log-analytics-monitoring-agent.md).
+
This article describes how you can enable [Change Tracking and Inventory](overview.md) for one or more Azure VMs in the Azure portal. To enable Azure VMs at scale, you must enable an existing VM using Change Tracking and Inventory. The number of resource groups that you can use for managing your VMs is limited by the [Resource Manager deployment limits](../../azure-resource-manager/templates/deploy-to-resource-group.md). Resource Manager deployments are limited to five resource groups per deployment. Two of these resource groups are reserved to configure the Log Analytics workspace, Automation account, and related resources. This leaves you with three resource groups to select for management by Change Tracking and Inventory. This limit only applies to simultaneous setup, not the number of resource groups that can be managed by an Automation feature. |
automation | Enable From Runbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-runbook.md | description: This article tells how to enable Change Tracking and Inventory from
Previously updated : 07/22/2024
Last updated : 08/30/2024

# Enable Change Tracking and Inventory from a runbook

+> [!Important]
+> Change Tracking and Inventory using Log Analytics agent was retired on **31 August 2024**, and we recommend that you use Azure Monitoring Agent as the new supporting agent. Follow the guidelines for [migration from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](guidance-migration-log-analytics-monitoring-agent.md).
+
This article describes how you can use a runbook to enable [Change Tracking and Inventory](overview.md) for VMs in your environment. To enable Azure VMs at scale, you must enable an existing VM using Change Tracking and Inventory.

> [!NOTE] |
automation | Guidance Migration Log Analytics Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md | |
automation | Manage Change Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-change-tracking.md | |
automation | Manage Scope Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-scope-configurations.md | Title: Limit Azure Automation Change Tracking and Inventory deployment scope description: This article tells how to work with scope configurations to limit the scope of a Change Tracking and Inventory deployment. Previously updated : 07/22/2024 Last updated : 08/31/2024 # Limit Change Tracking and Inventory deployment scope + This article describes how to work with scope configurations when using the [Change Tracking and Inventory](overview.md) feature to deploy changes to your VMs. For more information, see [Targeting monitoring solutions in Azure Monitor (Preview)](/previous-versions/azure/azure-monitor/insights/solution-targeting). ## About scope configurations |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md | Title: Azure Automation Change Tracking and Inventory overview
description: This article describes the Change Tracking and Inventory feature, which helps you identify software and Microsoft service changes in your environment.
Previously updated : 08/02/2024
Last updated : 08/30/2024

-> - Change Tracking and Inventory using Log Analytics agent will retire on **31 August 2024** and we recommend that you use Azure Monitoring Agent as the new supporting agent. Follow the guidelines for [migration from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](guidance-migration-log-analytics-monitoring-agent.md).
+> Change Tracking and Inventory using Log Analytics agent was retired on **31 August 2024**, and we recommend that you use Azure Monitoring Agent as the new supporting agent. Follow the guidelines for [migration from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](guidance-migration-log-analytics-monitoring-agent.md).
+
+> [!Important]
+> You can expect the following if you continue to use Change Tracking & Inventory with the Log Analytics agent:
+> - **Functionality**: The functional capabilities of Change Tracking with the Log Analytics agent will continue to work until February 2025. However, support will be limited, and you might see issues over time.
+> - **Installation**: The ability to configure Change Tracking & Inventory using MMA/OMS agents will be removed from the Azure portal soon.
+> - **Customer Support**: You will not be able to get support through existing channels for Change Tracking & Inventory with MMA/OMS. Microsoft will provide support on a best effort basis.
+> - **New capabilities and support matrix**: No new capabilities will be added, including functionality for additional Windows or Linux versions.
+> - **File Integrity Monitoring**: Microsoft Defender for Servers Plan 2 will offer a new File Integrity Monitoring (FIM) solution powered by Microsoft Defender for Endpoint (MDE) integration. Microsoft Defender for Cloud recommends disabling FIM over MMA by November 2024 and onboarding your environment to the new FIM version based on Defender for Endpoint. File Integrity Monitoring based on Log Analytics Agent (MMA) is supported until November 2024. [Learn more](https://learn.microsoft.com/azure/defender-for-cloud/prepare-deprecation-log-analytics-mma-agent#migration-from-fim-over-log-analytics-agent-mma).
+
This article introduces you to Change Tracking and Inventory in Azure Automation. This feature tracks changes in virtual machines hosted in Azure, on-premises, and other cloud environments to help you pinpoint operational and environmental issues with software managed by the Distribution Package Manager. Items that are tracked by Change Tracking and Inventory include: |
automation | Remove Feature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/remove-feature.md | Title: Remove Azure Automation Change Tracking and Inventory feature description: This article tells how to stop using Change Tracking and Inventory, and unlink an Automation account from the Log Analytics workspace. Previously updated : 07/22/2024 Last updated : 08/30/2024 # Remove Change Tracking and Inventory from Automation account + After you enable management of your virtual machines using Azure Automation Change Tracking and Inventory, you may decide to stop using it and remove the configuration from the account and linked Log Analytics workspace. This article tells you how to completely remove Change Tracking and Inventory from the managed VMs, your Automation account, and Log Analytics workspace. ## Sign into the Azure portal |
automation | Remove Vms From Change Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/remove-vms-from-change-tracking.md | description: This article tells how to remove Azure and non-Azure machines from Previously updated : 07/22/2024 Last updated : 08/30/2024 # Remove machines from Change Tracking and Inventory + When you're finished tracking changes on your Azure or non-Azure machines in your environment, you can stop managing them with the [Change Tracking and Inventory](overview.md) feature. To stop managing them, you will edit the saved search query `MicrosoftDefaultComputerGroup` in your Log Analytics workspace that is linked to your Automation account. ## Sign into the Azure portal |
automation | Region Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md | Title: Supported regions for linked Log Analytics workspace description: This article describes the supported region mappings between an Automation account and a Log Analytics workspace as it relates to certain features of Azure Automation. Previously updated : 07/22/2024 Last updated : 08/30/2024 |
automation | Change Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/change-tracking.md | description: This article tells how to troubleshoot and resolve issues with the Previously updated : 02/15/2021 Last updated : 08/30/2024 # Troubleshoot Change Tracking and Inventory issues + This article describes how to troubleshoot and resolve Azure Automation Change Tracking and Inventory issues. For general information about Change Tracking and Inventory, see [Change Tracking and Inventory overview](../change-tracking/overview.md). ## General errors |
automation | Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/onboarding.md | Title: Troubleshoot Azure Automation feature deployment issues description: This article tells how to troubleshoot and resolve issues that arise when deploying Azure Automation features. Previously updated : 02/11/2021 Last updated : 08/30/2024 # Troubleshoot feature deployment issues + You might receive error messages when you deploy the Azure Automation Update Management feature or the Change Tracking and Inventory feature on your VMs. This article describes the errors that might occur and how to resolve them. ## Known issues |
automation | Update Agent Issues Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md | Title: Troubleshooting Linux update agent issues in Azure Automation description: This article tells how to troubleshoot and resolve issues with the Linux Windows update agent in Update Management. Previously updated : 11/01/2021 Last updated : 08/30/2024 |
automation | Update Agent Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md | Title: Troubleshoot Windows update agent issues in Azure Automation description: This article tells how to troubleshoot and resolve issues with the Windows update agent during Update Management. Previously updated : 01/25/2020 Last updated : 08/30/2024 |
automation | Update Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md | Title: Troubleshoot Azure Automation Update Management issues description: This article tells how to troubleshoot and resolve issues with Azure Automation Update Management. Previously updated : 06/29/2024 Last updated : 08/30/2024 |
automation | Configure Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-alerts.md | Title: How to create alerts for Azure Automation Update Management description: This article tells how to configure Azure alerts to notify about the status of update assessments or deployments. Previously updated : 07/15/2024 Last updated : 08/30/2024 # How to create alerts for Update Management + Alerts in Azure proactively notify you of results from runbook jobs, service health issues, or other scenarios related to your Automation account. Azure Automation does not include pre-configured alert rules, but you can create your own based on data that it generates. This article provides guidance on creating alert rules using the metrics included with Update Management ## Available metrics |
automation | Deploy Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md | Title: How to create update deployments for Azure Automation Update Management description: This article describes how to schedule update deployments and review their status. Previously updated : 07/15/2024 Last updated : 08/30/2024 |
automation | Enable From Automation Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-automation-account.md | Title: Enable Azure Automation Update Management from Automation account description: This article tells how to enable Update Management from an Automation account. Previously updated : 07/15/2024 Last updated : 08/30/2024 |
automation | Enable From Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-portal.md | Title: Enable Azure Automation Update Management from the Azure portal description: This article tells how to enable Update Management from the Azure portal. Previously updated : 07/15/2024 Last updated : 08/30/2024 |
automation | Enable From Runbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-runbook.md | description: This article tells how to enable Update Management from a runbook. Previously updated : 07/15/2024 Last updated : 08/30/2024 # Enable Update Management from a runbook + This article describes how you can use a runbook to enable the [Update Management](overview.md) feature for VMs in your environment. To enable Azure VMs at scale, you must enable an existing VM with Update Management. > [!NOTE] |
automation | Enable From Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-template.md | |
automation | Enable From Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-vm.md | Title: Enable Azure Automation Update Management for an Azure VM description: This article tells how to enable Update Management for an Azure VM. Previously updated : 07/15/2024 Last updated : 08/30/2024 |
automation | Manage Updates For Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/manage-updates-for-vm.md | |
automation | Mecmintegration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/mecmintegration.md | Title: Integrate Azure Automation Update Management with Microsoft Configuration description: This article tells how to configure Microsoft Configuration Manager with Update Management to deploy software updates to manager clients. Previously updated : 07/15/2024 Last updated : 08/30/2024 # Integrate Update Management with Microsoft Configuration Manager + Customers who have invested in Microsoft Configuration Manager to manage PCs, servers, and mobile devices also rely on its strength and maturity in managing software updates as part of their software update management (SUM) cycle. You can report and update managed Windows servers by creating and pre-staging software update deployments in Microsoft Configuration Manager, and get detailed status of completed update deployments using [Update Management](overview.md). If you use Microsoft Configuration Manager for update compliance reporting, but not for managing update deployments with your Windows servers, you can continue reporting to Microsoft Configuration Manager while security updates are managed with Azure Automation Update Management. |
automation | Operating System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md | description: This article describes the supported Windows and Linux operating sy Previously updated : 07/15/2024 Last updated : 08/30/2024 |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md | description: This article provides an overview of the Update Management feature
Previously updated : 07/15/2024
Last updated : 08/30/2024

-> - Azure Automation Update Management will retire on **31 August 2024**. Follow the guidelines for [migration to Azure Update Manager](../../update-manager/guidance-migration-automation-update-management-azure-update-manager.md).
-> - Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA) will be [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Azure Automation Update Management solution relies on this agent and may encounter issues once the agent is retired as it does not work with Azure Monitoring Agent (AMA). Therefore, if you are using the Azure Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. All the capabilities of Azure Automation Update management solution will be available on Azure Update Manager before the retirement date. Follow the [guidance](../../update-center/guidance-migration-automation-update-management-azure-update-manager.md) to move your machines and schedules from Automation Update Management to Azure Update Manager.
+> - Azure Automation Update Management was retired on **31 August 2024**. Follow the guidelines for [migration to Azure Update Manager](../../update-manager/guidance-migration-automation-update-management-azure-update-manager.md).
+> - The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA), was [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). The Azure Automation Update Management solution relies on this agent and may encounter issues now that the agent is retired, because it does not work with the Azure Monitoring Agent (AMA). Therefore, if you are using the Azure Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. All the capabilities of the Azure Automation Update Management solution are available in Azure Update Manager. Follow the [guidance](../../update-center/guidance-migration-automation-update-management-azure-update-manager.md) to move your machines and schedules from Automation Update Management to Azure Update Manager.

You can use Update Management in Azure Automation to manage operating system updates for your Windows and Linux virtual machines in Azure, physical machines or VMs in on-premises environments, and in other cloud environments. You can quickly assess the status of available updates and manage the process of installing required updates for your machines reporting to Update Management. |
automation | Plan Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md | description: This article describes the considerations and decisions to be made Previously updated : 07/15/2024 Last updated : 08/30/2024 # Plan your Update Management deployment + ## Step 1: Automation account Update Management is an Azure Automation feature, and therefore requires an Automation account. You can use an existing Automation account in your subscription, or create a new account dedicated only for Update Management and no other Automation features. |
automation | Pre Post Scripts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/pre-post-scripts.md | Title: Manage pre-scripts and post-scripts in your Update Management deployment description: This article tells how to configure and manage pre-scripts and post-scripts for update deployments. Previously updated : 07/15/2024 Last updated : 08/30/2024 |
automation | Remove Feature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/remove-feature.md | Title: Remove Azure Automation Update Management feature description: This article tells how to stop using Update Management and unlink an Automation account from the Log Analytics workspace. Previously updated : 07/15/2024 Last updated : 08/30/2024 + # Remove Update Management from Automation account + After you enable management of updates on your virtual machines using Azure Automation Update Management, you may decide to stop using it and remove the configuration from the account and linked Log Analytics workspace. This article tells you how to completely remove Update Management from the managed VMs, your Automation account, and Log Analytics workspace. ## Sign into the Azure portal |
automation | Remove Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/remove-vms.md | |
automation | View Update Assessments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/view-update-assessments.md | Title: View Azure Automation update assessments description: This article tells how to view update assessments for Update Management deployments. Previously updated : 07/15/2024 Last updated : 08/30/2024 |
azure-arc | Troubleshoot Guest Management Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md | This article provides information on how to troubleshoot and resolve the issues ## Troubleshoot issues while enabling Guest Management -**Troubleshoot issues while enabling Guest Management on:** - # [Arc agent installation fails on a domain-joined Linux VM](#tab/linux) **Error message**: Enabling Guest Management on a domain-joined Linux VM fails with the error message **InvalidGuestLogin: Failed to authenticate to the system with the credentials**. |
azure-functions | Create First Function Azure Developer Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-azure-developer-cli.md | + + Title: Create functions in Azure using the Azure Developer CLI +description: "Learn how to use the Azure Developer CLI (azd) to create resources and deploy the local project to a Flex Consumption plan on Azure." Last updated : 08/27/2024++zone_pivot_groups: programming-languages-set-functions +#Customer intent: As a developer, I need to know how to use the Azure Developer CLI to create and deploy my function code securely to a new function app in the Flex Consumption plan in Azure by using azd templates and the azd up command. +++# Quickstart: Create and deploy functions to Azure Functions using the Azure Developer CLI ++In this Quickstart, you use Azure Developer command-line tools to create functions that respond to HTTP requests. After testing the code locally, you deploy it to a new serverless function app you create running in a Flex Consumption plan in Azure Functions. ++The project source uses the Azure Developer CLI (azd) to simplify deploying your code to Azure. This deployment follows current best practices for secure and scalable Azure Functions deployments. +++By default, the Flex Consumption plan follows a _pay-for-what-you-use_ billing model, which means to complete this quickstart incurs a small cost of a few USD cents or less in your Azure account. ++## Prerequisites +++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).+++ [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd).+++ [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).+++ [.NET 8.0 SDK](https://dotnet.microsoft.com/download). ++ [Java 17 Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure)+ + If you use another [supported version of Java](supported-languages.md?pivots=programming-language-java#languages-by-runtime-version), you must update the project's pom.xml file. + + The `JAVA_HOME` environment variable must be set to the install location of the correct version of the JDK. ++ [Apache Maven 3.8.x](https://maven.apache.org) ++ [Node.js 20](https://nodejs.org/) ++ [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows)+++ [.NET 6.0 SDK](https://dotnet.microsoft.com/download) ++ [Python 3.11](https://www.python.org/).++ A [secure HTTP test tool](functions-develop-local.md#http-test-tools) for sending requests with JSON payloads to your function endpoints. This article uses `curl`.++## Initialize the project ++You can use the `azd init` command to create a local Azure Functions code project from a template. ++1. In your local terminal or command prompt, run this `azd init` command in an empty folder: + + ```console + azd init --template functions-quickstart-dotnet-azd -e flexquickstart-dotnet + ``` ++ This command pulls the project files from the [template repository](https://github.com/Azure-Samples/functions-quickstart-dotnet-azd) and initializes the project in the current folder. The `-e` flag sets a name for the current environment. In `azd`, the environment is used to maintain a unique deployment context for your app, and you can define more than one. It's also used in the name of the resource group you create in Azure. ++1. 
Run this command to navigate to the `http` app folder: ++ ```console + cd http + ``` ++1. Create a file named _local.settings.json_ in the `http` folder that contains this JSON data: ++ ```json + { + "IsEncrypted": false, + "Values": { + "AzureWebJobsStorage": "UseDevelopmentStorage=true", + "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated" + } + } + ``` ++ This file is required when running locally. +1. In your local terminal or command prompt, run this `azd init` command in an empty folder: + + ```console + azd init --template azure-functions-java-flex-consumption-azd -e flexquickstart-java + ``` ++ This command pulls the project files from the [template repository](https://github.com/Azure-Samples/azure-functions-java-flex-consumption-azd) and initializes the project in the current folder. The `-e` flag sets a name for the current environment. In `azd`, the environment is used to maintain a unique deployment context for your app, and you can define more than one. It's also used in the name of the resource group you create in Azure. ++1. Run this command to navigate to the `http` app folder: ++ ```console + cd http + ``` ++1. Create a file named _local.settings.json_ in the `http` folder that contains this JSON data: ++ ```json + { + "IsEncrypted": false, + "Values": { + "AzureWebJobsStorage": "UseDevelopmentStorage=true", + "FUNCTIONS_WORKER_RUNTIME": "java" + } + } + ``` ++ This file is required when running locally. +1. In your local terminal or command prompt, run this `azd init` command in an empty folder: + + ```console + azd init --template functions-quickstart-javascript-azd -e flexquickstart-js + ``` ++ This command pulls the project files from the [template repository](https://github.com/Azure-Samples/functions-quickstart-javascript-azd) and initializes the project in the root folder. The `-e` flag sets a name for the current environment. In `azd`, the environment is used to maintain a unique deployment context for your app, and you can define more than one. It's also used in the name of the resource group you create in Azure. ++1. Create a file named _local.settings.json_ in the root folder that contains this JSON data: ++ ```json + { + "IsEncrypted": false, + "Values": { + "AzureWebJobsStorage": "UseDevelopmentStorage=true", + "FUNCTIONS_WORKER_RUNTIME": "node" + } + } + ``` ++ This file is required when running locally. +1. In your local terminal or command prompt, run this `azd init` command in an empty folder: + + ```console + azd init --template functions-quickstart-powershell-azd -e flexquickstart-ps + ``` ++ This command pulls the project files from the [template repository](https://github.com/Azure-Samples/functions-quickstart-powershell-azd) and initializes the project in the root folder. The `-e` flag sets a name for the current environment. In `azd`, the environment is used to maintain a unique deployment context for your app, and you can define more than one. It's also used in the name of the resource group you create in Azure. ++1. Run this command to navigate to the `src` app folder: ++ ```console + cd src + ``` ++1. Create a file named _local.settings.json_ in the `src` folder that contains this JSON data: ++ ```json + { + "IsEncrypted": false, + "Values": { + "AzureWebJobsStorage": "UseDevelopmentStorage=true", + "FUNCTIONS_WORKER_RUNTIME": "powershell", + "FUNCTIONS_WORKER_RUNTIME_VERSION": "7.2" + } + } + ``` ++ This file is required when running locally. +1. 
In your local terminal or command prompt, run this `azd init` command in an empty folder: + + ```console + azd init --template functions-quickstart-typescript-azd -e flexquickstart-ts + ``` ++ This command pulls the project files from the [template repository](https://github.com/Azure-Samples/functions-quickstart-typescript-azd) and initializes the project in the root folder. The `-e` flag sets a name for the current environment. In `azd`, the environment is used to maintain a unique deployment context for your app, and you can define more than one. It's also used in the name of the resource group you create in Azure. ++1. Create a file named _local.settings.json_ in the root folder that contains this JSON data: ++ ```json + { + "IsEncrypted": false, + "Values": { + "AzureWebJobsStorage": "UseDevelopmentStorage=true", + "FUNCTIONS_WORKER_RUNTIME": "node" + } + } + ``` ++ This file is required when running locally. +1. In your local terminal or command prompt, run this `azd init` command in an empty folder: + + ```console + azd init --template functions-quickstart-python-http-azd -e flexquickstart-py + ``` + + This command pulls the project files from the [template repository](https://github.com/Azure-Samples/functions-quickstart-python-http-azd) and initializes the project in the root folder. The `-e` flag sets a name for the current environment. In `azd`, the environment is used to maintain a unique deployment context for your app, and you can define more than one. It's also used in the name of the resource group you create in Azure. ++1. Create a file named _local.settings.json_ in the root folder that contains this JSON data: ++ ```json + { + "IsEncrypted": false, + "Values": { + "AzureWebJobsStorage": "UseDevelopmentStorage=true", + "FUNCTIONS_WORKER_RUNTIME": "python" + } + } + ``` ++ This file is required when running locally. ++## Create and activate a virtual environment ++In the root folder, run these commands to create and activate a virtual environment named `.venv`: ++### [Linux/macOS](#tab/linux) ++```bash +python3 -m venv .venv +source .venv/bin/activate +``` ++If Python didn't install the venv package on your Linux distribution, run the following command: ++```bash +sudo apt-get install python3-venv +``` ++### [Windows (bash)](#tab/windows-bash) ++```bash +py -m venv .venv +source .venv/scripts/activate +``` ++### [Windows (Cmd)](#tab/windows-cmd) ++```shell +py -m venv .venv +.venv\scripts\activate +``` +++++## Run in your local environment ++1. Run this command from your app folder in a terminal or command prompt: ++ ::: zone pivot="programming-language-csharp, programming-language-powershell,programming-language-python,programming-language-javascript" + ```console + func start + ``` + ::: zone-end + ::: zone pivot="programming-language-java" + ```console + mvn clean package + mvn azure-functions:run + ``` + ::: zone-end + ::: zone pivot="programming-language-typescript" + ```console + npm start + ``` + ::: zone-end ++ When the Functions host starts in your local project folder, it writes the URL endpoints of your HTTP triggered functions to the terminal output. ++1. In your browser, navigate to the `httpget` endpoint, which should look like this URL: ++ <http://localhost:7071/api/httpget> ++1. 
From a new terminal or command prompt window, run this `curl` command to send a POST request with a JSON payload to the `httppost` endpoint: ++ ```console + curl -i http://localhost:7071/api/httppost -H "Content-Type: text/json" -d @testdata.json + ``` ++ This command reads JSON payload data from the `testdata.json` project file. You can find examples of both HTTP requests in the `test.http` project file. ++1. When you're done, press Ctrl+C in the terminal window to stop the `func.exe` host process. +5. Run `deactivate` to shut down the virtual environment. ++## Review the code (optional) ++You can review the code that defines the two HTTP trigger function endpoints: + +### [`httpget`](#tab/get) +This `function.json` file defines the `httpget` function: +This `run.ps1` file implements the function code: + +### [`httppost`](#tab/post) + +This `function.json` file defines the `httppost` function: +This `run.ps1` file implements the function code: ++++After you verify your functions locally, it's time to publish them to Azure. +## Create Azure resources ++This project is configured to use the `azd provision` command to create a function app in a Flex Consumption plan, along with other required Azure resources. ++>[!NOTE] +>This project includes a set of Bicep files that `azd` uses to create a secure deployment to a Flex consumption plan that follows best practices. +> +>The `azd up` and `azd deploy` commands aren't currently supported for Java apps. ++1. In the root folder of the project, run this command to create the required Azure resources: ++ ```console + azd provision + ``` ++ The root folder contains the `azure.yaml` definition file required by `azd`. ++ If you aren't already signed-in, you're asked to authenticate with your Azure account. ++1. When prompted, provide these required deployment parameters: ++ | Parameter | Description | + | - | - | + | _Azure subscription_ | Subscription in which your resources are created.| + | _Azure location_ | Azure region in which to create the resource group that contains the new Azure resources. Only regions that currently support the Flex Consumption plan are shown.| + + The `azd provision` command uses your response to these prompts with the Bicep configuration files to create and configure these required Azure resources: ++ + Flex Consumption plan and function app + + Azure Storage (required) and Application Insights (recommended) + + Access policies and roles for your account + + Service-to-service connections using managed identities (instead of stored connection strings) + + Virtual network to securely run both the function app and the other Azure resources ++ After the command completes successfully, you can deploy your project code to this new function app in Azure. ++## Deploy to Azure ++You can use Core Tools to package your code and deploy it to Azure from the `target` output folder. ++1. Navigate to the app folder equivalent in the `target` output folder: ++ ```console + cd http/target/azure-functions/contoso-functions + ``` + + This folder should have a host.json file, which indicates that it's the root of your compiled Java function app. + +1. 
Run these commands to deploy your compiled Java code project to the new function app resource in Azure using Core Tools:
+
+ ### [bash](#tab/bash)
++
+ ```bash
+ APP_NAME=$(azd env get-value AZURE_FUNCTION_NAME)
+ func azure functionapp publish $APP_NAME
+ ```
++
+ ### [Cmd](#tab/cmd)
+ ```cmd
+ for /f "tokens=*" %i in ('azd env get-value AZURE_FUNCTION_NAME') do set APP_NAME=%i
+ func azure functionapp publish %APP_NAME%
+ ```
++
++ The `azd env get-value` command gets your function app name from the local environment, which is required for deployment using `func azure functionapp publish`. After publishing completes successfully, you see links to the HTTP trigger endpoints in Azure.
+## Deploy to Azure
++This project is configured to use the `azd up` command to deploy this project to a new function app in a Flex Consumption plan in Azure.
++>[!TIP]
+>This project includes a set of Bicep files that `azd` uses to create a secure deployment to a Flex consumption plan that follows best practices.
++1. Run this command to have `azd` create the required Azure resources in Azure and deploy your code project to the new function app:
++ ```console
+ azd up
+ ```
++ The root folder contains the `azure.yaml` definition file required by `azd`.
++ If you aren't already signed-in, you're asked to authenticate with your Azure account.
++1. When prompted, provide these required deployment parameters:
++ | Parameter | Description |
+ | - | - |
+ | _Azure subscription_ | Subscription in which your resources are created.|
+ | _Azure location_ | Azure region in which to create the resource group that contains the new Azure resources. Only regions that currently support the Flex Consumption plan are shown.|
+
+ The `azd up` command uses your response to these prompts with the Bicep configuration files to complete these deployment tasks:
++ + Create and configure these required Azure resources (equivalent to `azd provision`):
++ + Flex Consumption plan and function app
+ + Azure Storage (required) and Application Insights (recommended)
+ + Access policies and roles for your account
+ + Service-to-service connections using managed identities (instead of stored connection strings)
+ + Virtual network to securely run both the function app and the other Azure resources
++ + Package and deploy your code to the deployment container (equivalent to `azd deploy`). The app is then started and runs in the deployed package.
++ After the command completes successfully, you see links to the resources you created.
+## Invoke the function on Azure
++You can now invoke your function endpoints in Azure by making HTTP requests to their URLs using your HTTP test tool or from the browser (for GET requests). When your functions run in Azure, access key authorization is enforced, and you must provide a function access key with your request.
++You can use the Core Tools to obtain the URL endpoints of your functions running in Azure.
++1. In your local terminal or command prompt, run these commands to get the URL endpoint values:
+
+ ### [bash](#tab/bash)
++
+ ```bash
+ APP_NAME=$(azd env get-value AZURE_FUNCTION_NAME)
+ func azure functionapp list-functions $APP_NAME --show-keys
+ ```
++
+ ### [Cmd](#tab/cmd)
+ ```cmd
+ for /f "tokens=*" %i in ('azd env get-value AZURE_FUNCTION_NAME') do set APP_NAME=%i
+ func azure functionapp list-functions %APP_NAME% --show-keys
+ ```
++
++ The `azd env get-value` command gets your function app name from the local environment.
Using the `--show-keys` option with `func azure functionapp list-functions` means that the returned **Invoke URL:** value for each endpoint includes a function-level access key. ++1. As before, use your HTTP test tool to validate these URLs in your function app running in Azure. +## Redeploy your code ++You can run the `azd up` command as many times as you need to both provision your Azure resources and deploy code updates to your function app. ++>[!NOTE] +>Deployed code files are always overwritten by the latest deployment package. ++Your initial responses to `azd` prompts and any environment variables generated by `azd` are stored locally in your named environment. Use the `azd env get-values` command to review all of the variables in your environment that were used when creating Azure resources. +## Clean up resources ++When you're done working with your function app and related resources, you can use this command to delete the function app and its related resources from Azure and avoid incurring any further costs: ++```console +azd down --no-prompt +``` ++>[!NOTE] +>The `--no-prompt` option instructs `azd` to delete your resource group without a confirmation from you. +> +>This command doesn't affect your local code project. ++## Related content +++ [Flex Consumption plan](flex-consumption-plan.md)++ [Azure Developer CLI (azd)](/azure/developer/azure-developer-cli/)++ [azd reference](/azure/developer/azure-developer-cli/reference)++ [Azure Functions Core Tools reference](functions-core-tools-reference.md)++ [Code and test Azure Functions locally](functions-develop-local.md) |
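To validate the deployed endpoints from the command line, here is a minimal sketch with `curl`. It assumes the default `azurewebsites.net` host name; `<APP_NAME>` and `<FUNCTION_KEY>` are placeholders for the values returned by `func azure functionapp list-functions --show-keys`.

```bash
# Placeholders: replace <APP_NAME> and <FUNCTION_KEY> with the values reported by
# `func azure functionapp list-functions $APP_NAME --show-keys`.
curl -i "https://<APP_NAME>.azurewebsites.net/api/httpget?code=<FUNCTION_KEY>"

# POST the sample payload shipped with the project (testdata.json), mirroring the local test.
curl -i "https://<APP_NAME>.azurewebsites.net/api/httppost?code=<FUNCTION_KEY>" \
  -H "Content-Type: application/json" \
  -d @testdata.json
```

Passing the key in the `code` query string is one option; supplying it in an `x-functions-key` request header works as well.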
azure-functions | Dedicated Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dedicated-plan.md | Using an App Service plan, you can manually scale out by adding more VM instance ## App Service Environments -Running in an App Service Environment (ASE) lets you fully isolate your functions and take advantage of higher numbers of instances than an App Service Plan. To get started, see [Introduction to the App Service Environments](../app-service/environment/intro.md). +Running in an App Service Environment (ASE) lets you fully isolate your functions and take advantage of higher numbers of instances than an App Service Plan. To get started, see [Introduction to the App Service Environments](../app-service/environment/overview.md). If you just want to run your function app in a virtual network, you can do this using the [Premium plan](functions-premium-plan.md). To learn more, see [Establish Azure Functions private site access](functions-create-private-site-access.md). |
azure-functions | Functions Develop Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md | Title: Develop Azure Functions by using Visual Studio Code description: Learn how to develop and test Azure Functions by using the Azure Functions extension for Visual Studio Code.-+ ms.devlang: csharp # ms.devlang: csharp, java, javascript, powershell, python |
azure-functions | Recover Python Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md | There are several common build issues that can cause Python functions to not be * The agent pool must be running on Ubuntu to guarantee that packages are restored correctly from the build step. Make sure your deployment template requires an Ubuntu environment for build and deployment. -* When the function app isn't at the root of the source repo, make sure that the `pip install` step references the correct location in which to create the `.python-packages` folder. Keep in mind that this location is case sensitive, such as in this command example: +* When the function app isn't at the root of the source repo, make sure that the `pip install` step references the correct location in which to create the `.python_packages` folder. Keep in mind that this location is case sensitive, such as in this command example: ``` pip install --target="./FunctionApp1/.python_packages/lib/site-packages" -r ./FunctionApp1/requirements.txt |
azure-maps | Clustering Point Data Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md | Title: Clustering point data in the Web SDK | Microsoft Azure Maps description: Learn how to cluster point data on maps. See how to use the Azure Maps Web SDK to cluster data, react to cluster mouse events, and display cluster aggregates. Previously updated : 07/29/2019 Last updated : 08/29/2024 To display the size of the cluster on top of the bubble, use a symbol layer with For a complete working sample of how to implement displaying clusters using a bubble layer, see [Point Clusters in Bubble Layer] in the [Azure Maps Samples]. For the source code for this sample, see [Point Clusters in Bubble Layer source code]. <!- <br/> Use clustering to show the data points density while keeping a clean user interf For a complete working sample of how to implement displaying clusters using a symbol layer, see [Display clusters with a Symbol Layer] in the [Azure Maps Samples]. For the source code for this sample, see [Display clusters with a Symbol Layer source code]. <!- <br/> Heat maps are a great way to display the density of data on the map. This visual For a complete working sample that demonstrates how to create a heat map that uses clustering on the data source, see [Cluster weighted Heat Map] in the [Azure Maps Samples]. For the source code for this sample, see [Cluster weighted Heat Map source code]. <!- <br/> function clusterClicked(e) { } ``` <!- <br/> The point data that a cluster represents is spread over an area. In this sample For a complete working sample that demonstrates how to do this, see [Display cluster area with Convex Hull] in the [Azure Maps Samples]. For the source code for this sample, see [Display cluster area with Convex Hull source code]. <!- <br/> Often clusters are represented using a symbol with the number of points that are The [Cluster aggregates] sample uses an aggregate expression. The code calculates a count based on the entity type property of each data point in a cluster. When a user selects a cluster, a popup shows with additional information about the cluster. For the source code for this sample, see [Cluster aggregates source code]. <!- > [!VIDEO //codepen.io/azuremaps/embed/jgYyRL/?height=500&theme-id=0&default-tab=js,result&editable=true] |
azure-maps | Map Add Heat Map Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md | Title: Add a heat map layer to a map | Microsoft Azure Maps description: Learn how to create a heat map and customize heat map layers using the Azure Maps Web SDK. Previously updated : 06/06/2023 Last updated : 08/28/2024 map.layers.add(new atlas.layer.HeatMapLayer(datasource, null, { The [Simple Heat Map Layer] sample demonstrates how to create a simple heat map from a data set of point features. For the source code for this sample, see [Simple Heat Map Layer source code]. <! > [!VIDEO //codepen.io/azuremaps/embed/gQqdQB/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] The previous example customized the heat map by setting the radius and opacity o The [Heat Map Layer Options] sample shows how the different options of the heat map layer that affects rendering. For the source code for this sample, see [Heat Map Layer Options source code]. <! > [!VIDEO //codepen.io/azuremaps/embed/WYPaXr/?height=700&theme-id=0&default-tab=result] Scaling the radius so that it doubles with each zoom level creates a heat map th The [Consistent zoomable Heat Map] sample shows how to create a heat map where the radius of each data point covers the same physical area on the ground, creating a more consistent user experience when zooming the map. The heat map in this sample scales consistently between zoom levels 10 and 22. Each zoom level of the map has twice as many pixels vertically and horizontally as the previous zoom level. Doubling the radius with each zoom level creates a heat map that looks consistent across all zoom levels. For the source code for this sample, see [Consistent zoomable Heat Map source code]. <! > [!VIDEO //codepen.io/azuremaps/embed/OGyMZr/?height=500&theme-id=0&default-tab=js,result&editable=true] |
azure-maps | Map Add Image Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md | Title: Add an Image layer to a map | Microsoft Azure Maps description: Learn how to add images to a map. See how to use the Azure Maps Web SDK to customize image layers and overlay images on fixed sets of coordinates. Previously updated : 06/06/2023 Last updated : 08/28/2024 map.layers.add(new atlas.layer.ImageLayer({ For a fully functional sample that shows how to overlay an image of a map of Newark New Jersey from 1922 as an Image layer, see [Simple Image Layer] in the [Azure Maps Samples]. For the source code for this sample, see [Simple Image Layer source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/eQodRo/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] The code uses the static `getCoordinatesFromEdges` function from the [ImageLayer For a fully functional sample that shows how to use a KML Ground Overlay as Image Layer, see [KML Ground Overlay as Image Layer] in the [Azure Maps Samples]. For the source code for this sample, see [KML Ground Overlay as Image Layer source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/EOJgpj/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] For a fully functional sample that shows how to use a KML Ground Overlay as Imag The image layer has many styling options. For a fully functional sample that shows how the different options of the image layer affect rendering, see [Image Layer Options] in the [Azure Maps Samples]. For the source code for this sample, see [Image Layer Options source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/RqOGzx/?height=700&theme-id=0&default-tab=result] |
azure-maps | Map Add Line Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-line-layer.md | Title: Add a line layer to a map | Microsoft Azure Maps description: Learn how to add lines to maps. See examples that use the Azure Maps Web SDK to add line layers to maps and to customize lines with symbols and color gradients. Previously updated : 06/06/2023 Last updated : 08/28/2024 map.layers.add(new atlas.layer.LineLayer(dataSource, null, { The following screenshot shows a sample of the above functionality. <!-- > [!VIDEO //codepen.io/azuremaps/embed/qomaKv/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] function InitMap() This code creates a map that appears as follows: <!-- > [!VIDEO //codepen.io/azuremaps/embed/drBJwX/?height=500&theme-id=0&default-tab=js,result&editable=true] For a fully functional sample that shows how to apply a stroke gradient to a lin The Line layer has several styling options. For a fully functional sample that interactively demonstrates the line options, see [Line Layer Options] in the [Azure Maps Samples]. For the source code for this sample, see [Line Layer Options source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/GwLrgb/?height=700&theme-id=0&default-tab=result] |
azure-maps | Map Add Shape | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md | function InitMap() ``` <!-- > [!VIDEO //codepen.io/azuremaps/embed/yKbOvZ/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] function InitMap() } ``` <! > [!VIDEO //codepen.io/azuremaps/embed/aRyEPy/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] In addition to filling a polygon with a color, you may use an image pattern to f For a fully functional sample that shows how to use an image template as a fill pattern in a polygon layer, see [Fill polygon with built-in icon template] in the [Azure Maps Samples]. For the source code for this sample, see [Fill polygon with built-in icon template source code]. <! > [!VIDEO //codepen.io/azuremaps/embed/JzQpYX/?height=500&theme-id=0&default-tab=js,result] For a fully functional sample that shows how to use an image template as a fill The Polygon layer only has a few styling options. See the [Polygon Layer Options] sample map in the [Azure Maps Samples] to try them out. For the source code for this sample, see [Polygon Layer Options source code]. <! > [!VIDEO //codepen.io/azuremaps/embed/LXvxpg/?height=700&theme-id=0&default-tab=result] function InitMap() } ``` <! > [!VIDEO //codepen.io/azuremaps/embed/PRmzJX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] |
azure-maps | Map Add Tile Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md | |
azure-maps | Map Extruded Polygon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md | |
azure-maps | Map Show Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md | |
azure-maps | Webgl Custom Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md | |
azure-monitor | Vminsights Dependency Agent Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md | Title: VM Insights Dependency Agent -description: This article describes how to upgrade the VM insights Dependency agent using command-line, setup wizard, and other methods. +description: This article describes how to upgrade the VM Insights Dependency Agent using command-line, setup wizard, and other methods. Last updated 09/28/2023 > [!CAUTION] > This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). -The Dependency Agent collects data about processes running on the virtual machine and external process dependencies. Dependency Agent updates include bug fixes or support of new features or functionality. This article describes Dependency Agent requirements and how to upgrade Dependency Agent manually or through automation. +Dependency Agent collects data about processes running on the virtual machine and external process dependencies. Updates include bug fixes or support of new features or functionality. This article describes Dependency Agent requirements and how to upgrade it manually or through automation. >[!NOTE]-> The Dependency Agent sends heartbeat data to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table, for which you incur data ingestion charges. This behavior is different from Azure Monitor Agent, which sends agent health data to the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which is free from data collection charges. +> Dependency Agent sends heartbeat data to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table, for which you incur data ingestion charges. This behavior is different from Azure Monitor Agent, which sends agent health data to the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which is free from data collection charges. ## Dependency Agent requirements -* The Dependency Agent requires the Azure Monitor Agent to be installed on the same machine. -* On both the Windows and Linux versions, the Dependency Agent collects data using a user-space service and a kernel driver. - * Dependency Agent supports the same [Windows versions that Azure Monitor Agent supports](../agents/agents-overview.md#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI. - * For Linux, see [Dependency Agent Linux support](#dependency-agent-linux-support). +> [!div class="checklist"] +> * Requires the Azure Monitor Agent to be installed on the same machine. +> * Collects data using a user-space service and a kernel driver on both Windows and Linux. +> * Supports the same [Windows versions that Azure Monitor Agent supports](../agents/agents-overview.md#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI. For Linux, see [Dependency Agent Linux support](#dependency-agent-linux-support). 
## Install or upgrade Dependency Agent -You can upgrade the Dependency agent for Windows and Linux manually or automatically, depending on the deployment scenario and environment the machine is running in, using these methods: +You can upgrade Dependency Agent for Windows and Linux manually or automatically, depending on the deployment scenario and environment the machine is running in, using these methods: -|Environment |Installation method |Upgrade method | -||--|| -|Azure VM | Dependency agent VM extension for [Windows](/azure/virtual-machines/extensions/agent-dependency-windows) and [Linux](/azure/virtual-machines/extensions/agent-dependency-linux) | Agent is automatically upgraded by default unless you configured your Azure Resource Manager template to opt out by setting the property *autoUpgradeMinorVersion* to **false**. The upgrade for minor version where auto upgrade is disabled, and a major version upgrade follow the same method - uninstall and reinstall the extension. | -| Custom Azure VM images | Manual install of Dependency agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle.| -| Non-Azure VMs | Manual install of Dependency agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. | +| Environment | Installation method | Upgrade method | +|-||-| +| Azure VM | Dependency Agent VM extension for [Windows](/azure/virtual-machines/extensions/agent-dependency-windows) and [Linux](/azure/virtual-machines/extensions/agent-dependency-linux) | Agent is automatically upgraded by default unless you configured your Azure Resource Manager template to opt out by setting the property `autoUpgradeMinorVersion` to **false**. The upgrade for minor version where auto upgrade is disabled, and a major version upgrade follow the same method - uninstall and reinstall the extension. | +| Custom Azure VM images | Manual install of Dependency Agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. | +| Non-Azure VMs | Manual install of Dependency Agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. | ++> [!NOTE] +> Dependency Agent is installed automatically when VM Insights is enabled for process and connection data via the [Azure portal](vminsights-enable-portal.md), [PowerShell](vminsights-enable-powershell.md), [ARM template deployment](vminsights-enable-resource-manager.md), or [Azure policy](vminsights-enable-policy.md). +> +> If VM Insights is enabled exclusively for performance data, Dependency Agent won't be installed. ### Manually install or upgrade Dependency Agent on Windows -Update the agent on a Windows VM from the command prompt, with a script or other automation solution, or by using the InstallDependencyAgent-Windows.exe Setup Wizard. +Update the agent on a Windows VM from the command prompt, with a script or other automation solution, or by using the InstallDependencyAgent-Windows.exe Setup Wizard. 
++#### Prerequisites -[Download the latest version of the Windows agent](https://aka.ms/dependencyagentwindows). +> [!div class="checklist"] +> * Download the latest version of the Windows agent from [aka.ms/dependencyagentwindows](https://aka.ms/dependencyagentwindows). #### Using the Setup Wizard 1. Sign on to the computer with an account that has administrative rights. -2. Execute **InstallDependencyAgent-Windows.exe** to start the Setup Wizard. +1. Execute **InstallDependencyAgent-Windows.exe** to start the Setup Wizard. -3. Follow the **Dependency Agent Setup** wizard to uninstall the previous version of the dependency agent and then install the latest version. -+1. Follow the **Dependency Agent Setup** wizard to uninstall the previous version of Dependency Agent and then install the latest version. #### From the command line -1. Sign on to the computer with an account that has administrative rights. +1. Sign in on the computer using an account with administrative rights. -2. Run the following command. +1. Run the following command: ```cmd InstallDependencyAgent-Windows.exe /S /RebootMode=manual Update the agent on a Windows VM from the command prompt, with a script or other The `/RebootMode=manual` parameter prevents the upgrade from automatically rebooting the machine if some processes are using files from the previous version and have a lock on them. -3. To confirm the upgrade was successful, check the `install.log` for detailed setup information. The log directory is *%Programfiles%\Microsoft Dependency Agent\logs*. +1. To confirm the upgrade was successful, check the `install.log` for detailed setup information. The log directory is *%Programfiles%\Microsoft Dependency Agent\logs*. -### Manually install or upgrade Dependency Agent on Linux +### Manually install or upgrade Dependency Agent on Linux -Upgrade from prior versions of the Dependency Agent on Linux is supported and performed following the same command as a new installation. +Upgrading from prior versions of Dependency Agent on Linux is supported and performed following the same command as a new installation. -You can download the latest version of the Linux agent from [here](https://aka.ms/dependencyagentlinux). +#### Prerequisites -1. Sign on to the computer with an account that has administrative rights. +> [!div class="checklist"] +> * Download the latest version of the Linux agent from [aka.ms/dependencyagentlinux](https://aka.ms/dependencyagentlinux) or via curl: ++```bash +curl -L -o DependencyAgent-Linux64.bin https://aka.ms/dependencyagentlinux +``` ++> [!NOTE] +> Curl doesn't automatically set execution permissions. You need to manually set them using chmod: +> +> ```bash +> chmod +x DependencyAgent-Linux64.bin +> ``` ++#### From the command line ++1. Sign in on the computer with a user account that has sudo privileges to execute commands as root. -2. Run the following command as root. +1. Run the following command: ```bash- ./InstallDependencyAgent-Linux64.bin -s + sudo <path>/InstallDependencyAgent-Linux64.bin ``` -If the Dependency agent fails to start, check the logs for detailed error information. On Linux agents, the log directory is */var/opt/microsoft/dependency-agent/log*. +If Dependency Agent fails to start, check the logs for detailed error information. On Linux agents, the log directory is */var/opt/microsoft/dependency-agent/log*. 
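Putting the Linux steps together, a minimal end-to-end sketch: the download URL, `chmod`, installer invocation, and log path come from the steps above, and the final listing is only a quick sanity check.

```bash
# Download the latest Linux Dependency Agent installer
curl -L -o DependencyAgent-Linux64.bin https://aka.ms/dependencyagentlinux

# curl doesn't set execute permissions, so add them explicitly
chmod +x DependencyAgent-Linux64.bin

# Install (or upgrade in place) as root
sudo ./DependencyAgent-Linux64.bin

# If the agent fails to start, inspect the logs written here
ls -l /var/opt/microsoft/dependency-agent/log
```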
-## Uninstall Dependency Agent
+## Uninstall Dependency Agent
-To uninstall Dependency Agent:
+> [!NOTE]
+> If Dependency Agent was installed manually, it won't show in the Azure portal and has to be uninstalled manually. It will only show if it was installed via the [Azure portal](vminsights-enable-portal.md), [PowerShell](vminsights-enable-powershell.md), [ARM template deployment](vminsights-enable-resource-manager.md), or [Azure policy](vminsights-enable-policy.md).
 1. From the **Virtual Machines** menu in the Azure portal, select your virtual machine.+
 1. Select **Extensions + applications** > **DependencyAgentWindows** or **DependencyAgentLinux** > **Uninstall**.
 :::image type="content" source="media/vminsights-dependency-agent-maintenance/azure-monitor-uninstall-dependency-agent.png" alt-text="Screenshot showing the Extensions and applications screen for a virtual machine." lightbox="media/vminsights-dependency-agent-maintenance/azure-monitor-uninstall-dependency-agent.png":::
+### Manually uninstall Dependency Agent on Windows
++**Method 1:** In Windows, go to **Add and remove programs**, find Microsoft Dependency Agent, click on the ellipsis to open the context menu, and select **Uninstall**.
++**Method 2:** Use the uninstaller located in the Microsoft Dependency Agent folder, for example, `C:\Program Files\Microsoft Dependency Agent\Uninstall_v.w.x.y.exe` (where v.w.x.y is the version number).
++### Manually uninstall Dependency Agent on Linux
++1. Sign in on the computer with a user account that has sudo privileges to execute commands as root.
++1. Run the following command:
++ ```bash
+ sudo /opt/microsoft/dependency-agent/uninstall -s
+ ```
+ ## Dependency Agent Linux support
-Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.* the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
+Since Dependency Agent works at the kernel level, support is also dependent on the kernel version. As of Dependency Agent version 9.10.* the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for Dependency Agent.
 [!INCLUDE [dependency-agent-linux-versions](~/reusable-content/ce-skilling/azure/includes/azure-monitor/vm-insights-dependency-agent-linux-versions.md)]
 ## Next steps
-If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
+If you want to stop monitoring your VMs for a while or remove VM Insights entirely, see [Disable monitoring of your VMs in VM Insights](../vm/vminsights-optout.md). |
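For Azure VMs, the extension-based method in the table above can also be scripted with the Azure CLI. A minimal sketch, assuming a VM named `myVM` in resource group `myRG`; the extension names match the portal entries above, while the publisher string is an assumption rather than something stated in this article.

```bash
# Install (or reinstall) the Dependency Agent VM extension on a Linux Azure VM.
# Use --name DependencyAgentWindows for a Windows VM.
az vm extension set \
  --resource-group myRG \
  --vm-name myVM \
  --name DependencyAgentLinux \
  --publisher Microsoft.Azure.Monitoring.DependencyAgent
```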
azure-netapp-files | Performance Linux Filesystem Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-filesystem-cache.md | These two tunables define the amount of RAM made usable for data modified but no
 ### `vm.dirty_background_ratio | vm.dirty_background_bytes`
-These tunables define the starting point where the Linux write-back mechanism begins flushing dirty blocks to stable storage. Redhat defaults to 10% of physical memory, which, on a large memory system, is a significant amount of data to start flushing. Taking SAS GRID for example, historically the recommendation was to set `vm.dirty_background` to 1/5 size of `vm.dirty_ratio` or `vm.dirty_bytes`. Considering how aggressively the `vm.dirty_bytes` setting is set for SAS GRID, no specific value is being set here.
+These tunables define the starting point where the Linux write-back mechanism begins flushing dirty blocks to stable storage. Redhat defaults to 10% of physical memory, which, on a large memory system, is a significant amount of data to start flushing. With SAS GRID as an example, historically the recommendation was to set `vm.dirty_background` to 1/5 size of `vm.dirty_ratio` or `vm.dirty_bytes`. Considering how aggressively the `vm.dirty_bytes` setting is set for SAS GRID, no specific value is being set here.
 ### `vm.dirty_expire_centisecs`
-This tunable defines how old a dirty buffer can be before it must be tagged for asynchronously writing out. Take SAS Viya's CAS workload for example. An ephemeral write-dominant workload found that setting this value to 300 centiseconds (3 seconds) was optimal, with 3000 centiseconds (30 seconds) being the default.
+This tunable defines how old a dirty buffer can be before it must be tagged for asynchronously writing out. Take SAS Viya's CAS workload for example. An ephemeral write-dominant workload found that setting this value to 300 centiseconds (3 seconds) was optimal, with 3000 centiseconds (30 seconds) being the default.
-SAS Viya shares CAS data into multiple small chunks of a few megabytes each. Rather than closing these file handles after writing data to each shard, the handles are left open and the buffers within are memory-mapped by the application. Without a close, there's no flush until either memory pressure or 30 seconds has passed. Waiting for memory pressure proved suboptimal as did waiting for a long timer to expire. Unlike SAS GRID, which looked for the best overall throughput, SAS Viya looked to optimize write bandwidth.
+SAS Viya shares CAS data into multiple small chunks of a few megabytes each. Rather than closing these file handles after writing data to each shard, the handles are left open and the buffers within are memory-mapped by the application. Without a close, there's no flush until the passage of either memory pressure or 30 seconds. Waiting for memory pressure proved suboptimal as did waiting for a long timer to expire. Unlike SAS GRID, which looked for the best overall throughput, SAS Viya looked to optimize write bandwidth.
 ### `vm.dirty_writeback_centisecs`
 The kernel flusher thread is responsible for asynchronously flushing dirty buffe
 ## Impact of an untuned filesystem cache
-Considering the default virtual memory tunables and the amount of RAM in modern systems, write-back potentially slows down other storage-bound operations from the perspective of the specific client driving this mixed workload. 
The following symptoms may be expected from an untuned, write-heavy, cache-laden Linux machine.
+When you consider the default virtual memory tunables and the amount of RAM in modern systems, write-back potentially slows down other storage-bound operations from the perspective of the specific client driving this mixed workload. The following symptoms can be expected from an untuned, write-heavy, cache-laden Linux machine.
 * Directory lists `ls` take long enough as to appear unresponsive.
 * Read throughput against the filesystem decreases significantly in comparison to write throughput.
 Setting the filesystem cache parameters as described in this section has been sh
 To understand what is going on with virtual memory and the write-back, consider the following code snippet and output. *Dirty* represents the amount of dirty memory in the system, and *writeback* represents the amount of memory actively being written to storage.
-`# while true; do echo "###" ;date ; egrep "^Cached:|^Dirty:|^Writeback:|file" /proc/meminfo; sleep 5; done`
+```
+# while true; do echo "###" ;date ; egrep "^Cached:|^Dirty:|^Writeback:|file" /proc/meminfo; sleep 5; done
+```
 The following output comes from an experiment where the `vm.dirty_ratio` and the `vm.dirty_background` ratio were set to 2% and 1% of physical memory respectively. In this case, flushing began at 3.8 GiB, 1% of the 384-GiB memory system. Writeback closely resembled the write throughput to NFS. |
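As a concrete illustration of the tunables discussed in this entry, here is a minimal sketch of applying them with `sysctl`. The 2%/1% ratios and the 300-centisecond expiry come from the experiments described above and are illustrative values, not general recommendations.

```bash
# Runtime-only changes (lost on reboot); size these for your own RAM and workload.
sudo sysctl -w vm.dirty_ratio=2               # 2% of RAM, as in the experiment above
sudo sysctl -w vm.dirty_background_ratio=1    # start background flushing at 1% of RAM
sudo sysctl -w vm.dirty_expire_centisecs=300  # 3 seconds, the value found optimal for SAS Viya CAS

# Confirm the active values
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs
```

To keep the settings across reboots, the same key-value pairs can be placed in a drop-in file under the conventional `/etc/sysctl.d/` location.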
azure-netapp-files | Snapshots Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-delete.md | -You can delete snapshots that you no longer need to keep.
+You can delete snapshots that you no longer need.
 > [!IMPORTANT]-> The snapshot deletion operation cannot be undone. A deleted snapshot cannot be recovered.
+> You can't undo the snapshot deletion. You can't recover a deleted snapshot.
 ## Considerations
-* You can't delete a snapshot if it is part of an active file-restore operation or if it is in the process of being cloned.
+* You can't delete a snapshot if it's part of an active file-restore operation or if it's in the process of being cloned.
 * You can't delete a replication generated snapshot that is used for volume baseline data replication.
 ## Steps
-1. Go to the **Snapshots** menu of a volume. Right-click the snapshot you want to delete. Select **Delete**.
+1. Go to the **Snapshots** menu of a volume. Select the three dots at the end of the row of the snapshot you want to delete. Select **Delete**.
-    ![Screenshot that describes the right-click menu of a snapshot](./media/shared/snapshot-right-click-menu.png)
+    ![Screenshot illustrating the Snapshot overview menu.](./media/shared/snapshot-right-click-menu.png)
-2. In the Delete Snapshot window, confirm that you want to delete the snapshot by clicking **Yes**.
+2. In the Delete Snapshot window, confirm that you want to delete the snapshot by selecting **Yes**.
-    ![Screenshot that confirms snapshot deletion](./media/snapshots-delete/snapshot-confirm-delete.png)
+    ![Screenshot showing confirmed snapshot deletion.](./media/snapshots-delete/snapshot-confirm-delete.png)
 ## Next steps
 * [Learn more about snapshots](snapshots-introduction.md)-* [Azure NetApp Files Snapshot Overview](https://anfcommunity.com/2021/01/31/azure-netapp-files-snapshot-overview/)
+* [Azure NetApp Files snapshot overview](https://anfcommunity.com/2021/01/31/azure-netapp-files-snapshot-overview/) |
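Where scripting is preferred over the portal steps above, the same operation can be done with the Azure CLI using `az netappfiles snapshot delete`. A minimal sketch with hypothetical resource names (`myRG`, `myaccount`, `mypool`, `myvol`, `daily-backup`); deletion is just as permanent as it is in the portal.

```bash
# Permanently deletes the snapshot; it can't be recovered afterwards.
az netappfiles snapshot delete \
  --resource-group myRG \
  --account-name myaccount \
  --pool-name mypool \
  --volume-name myvol \
  --name daily-backup
```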
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t
 * Availability zone volume placement enhancement - [**Populate existing volumes**](manage-availability-zone-volume-placement.md#populate-an-existing-volume-with-availability-zone-information) is now generally available (GA).
-    * [Cross-zone replication](cross-zone-replication-introduction.md) is now generally available (GA). Cross-zone replication allows you to replicate your Azure NetApp Files volumes asynchronously from one Azure availability zone (AZ) to another within the same region. Using technology similar to the cross-region replication feature and Azure NetApp Files availability zone volume placement feature, cross-zone replication replicates data in-region across different zones; only changed blocks are sent over the network in a compressed, efficient format. It helps you protect your data from unforeseeable zone failures without the need for host-based data replication. This feature minimizes the amount of data required to replicate across the zones, limiting data transfers required and shortens the replication time so you can achieve a smaller Restore Point Objective (RPO). Cross-zone replication doesn't involve any network transfer costs and is highly cost-effective.
-    Cross-zone replication is available in all [AZ-enabled regions](../availability-zones/az-overview.md#azure-regions-with-availability-zones) with [Azure NetApp Files presence](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=all&rar=true).
+    Cross-zone replication is available in all [AZ-enabled regions](../reliability/availability-zones-service-support.md) with [Azure NetApp Files presence](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=all&rar=true).
 * [Transition a volume to customer-managed keys](configure-customer-managed-keys.md#transition) (Preview)
 Azure NetApp Files is updated regularly. This article provides a summary about t
 This feature is now in public preview, currently available in [16 Azure regions](azure-netapp-files-network-topologies.md). It will roll out to other regions. Stay tuned for further information as more regions become available.
-* [Azure Application Consistent Snapshot tool (AzAcSnap) 8 (GA)](azacsnap-introduction.md)
+* [Azure Application Consistent Snapshot tool (AzAcSnap) 8](azacsnap-introduction.md) is now generally available (GA)
 Version 8 of the AzAcSnap tool is now generally available. [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases in Linux environments. AzAcSnap 8 introduces the following new capabilities and improvements:
 Azure NetApp Files is updated regularly. This article provides a summary about t
 * [Troubleshooting enhancement: break file locks](troubleshoot-file-locks.md)
-    In some cases you may encounter (stale) file locks on NFS, SMB, or dual-protocol volumes that need to be cleared. With this new Azure NetApp Files feature, you can now break these locks. You can break file locks for all files in a volume or break all file locks initiated by a specified client.
+    You might sometimes encounter stale file locks on NFS, SMB, or dual-protocol volumes that need to be cleared. 
With this new Azure NetApp Files feature, you can now break these locks. You can break file locks for all files in a volume or break all file locks initiated by a specified client. ## April 2023 Azure NetApp Files is updated regularly. This article provides a summary about t * [Single-file snapshot restore](snapshots-restore-file-single.md) (Preview) - Azure NetApp Files provides ways to quickly restore data from snapshots (mainly at the volume level). See [How Azure NetApp Files snapshots work](snapshots-introduction.md). Options for user file self-restore are available via client-side data copy from the `~snapshot` (Windows) or `.snapshot` (Linux) folders. These operations require data (files and directories) to traverse the network twice (upon read and write). As such, the operations aren't time and resource efficient, especially with large data sets. If you don't want to restore the entire snapshot to a new volume, revert a volume, or copy large files across the network, you can use the single-file snapshot restore feature to restore individual files directly on the service from a volume snapshot without requiring data copy via an external client. This approach drastically reduces RTO and network resource usage when restoring large files. + Azure NetApp Files provides ways to quickly restore data from snapshots (mainly at the volume level). See [How Azure NetApp Files snapshots work](snapshots-introduction.md). Options for user file self-restore are available via client-side data copy from the `~snapshot` (Windows) or `.snapshot` (Linux) folders. These operations require data (files and directories) to traverse the network twice (upon read and write). As such, the operations aren't time and resource efficient, especially with large data sets. If you don't want to restore the entire snapshot to a new volume, revert a volume, or copy large files across the network, you can use the single-file snapshot restore feature to restore individual files directly on the service from a volume snapshot without requiring data copy via an external client. This approach drastically reduces recovery time objective (RTO) and network resource usage when restoring large files. * Features that are now generally available (GA) Azure NetApp Files is updated regularly. This article provides a summary about t * [Azure NetApp Files backup](backup-introduction.md) (Preview) - Azure NetApp Files online snapshots now support backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost-efficient and ZRS-enabled Azure storage in a fast and cost-effective way. This approach further protects your data from accidental deletion. + Azure NetApp Files online snapshots now support backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost-efficient and zone redundant Azure storage in a fast and cost-effective way. This approach further protects your data from accidental deletion. Azure NetApp Files backup extends ONTAP's built-in snapshot technology. When snapshots are vaulted to Azure storage, only changed blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots are still represented in full. You can restore them to a new volume individually and directly, eliminating the need for an iterative, full-incremental recovery process. 
This advanced technology minimizes the amount of data required to store to and retrieve from Azure storage, therefore saving data transfer and storage costs. It also shortens the backup vaulting time, so you can achieve a smaller Restore Point Objective (RPO). You can keep a minimum number of snapshots online on the Azure NetApp Files service for the most immediate, near-instantaneous data-recovery needs. In doing so, you can build up a longer history of snapshots at a lower cost for long-term retention in the Azure NetApp Files backup vault. Azure NetApp Files is updated regularly. This article provides a summary about t * [AES encryption for AD authentication](create-active-directory-connections.md#create-an-active-directory-connection) (Preview) - Azure NetApp Files now supports AES encryption on LDAP connection to DC to enable AES encryption for an SMB volume. This feature is currently in preview. + Azure NetApp Files now supports AES encryption on LDAP connections to domain controllers (DC) to enable AES encryption for an SMB volume. This feature is currently in preview. * New [metrics](azure-netapp-files-metrics.md): |
azure-resource-manager | Learn Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/learn-bicep.md | After that, you might be interested in adding your Bicep code to a deployment pi :::column-end::: :::row-end::: +## Use deployment stacks ++Learn how to manage resource lifecycles with deployment stacks. ++ [<img src="media/learn-bicep/manage-resource-lifecycles-deployment-stacks.svg" width="101" height="120" alt="The trophy for the Manage resource lifecycles with deployment stacks." role="presentation"></img>](/training/modules/manage-resource-lifecycles-deployment-stacks/) + + [Manage resource lifecycles with deployment stacks](/training/modules/manage-resource-lifecycles-deployment-stacks) + ## Next steps * For a short introduction to Bicep, see [Bicep quickstart](quickstart-create-bicep-use-visual-studio-code.md). |
azure-resource-manager | Tutorial Use Deployment Stacks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/tutorial-use-deployment-stacks.md | - Title: Use deployment stack with Bicep -description: Learn how to use Bicep to create and deploy a deployment stack. Previously updated : 05/22/2024-----# Tutorial: use deployment stack with Bicep --In this tutorial, you learn the process of creating and managing a deployment stack. The tutorial focuses on creating the deployment stack at the resource group scope. However, you can also create deployment stacks at either the subscription scope. To gain further insights into creating deployment stacks, see [Create deployment stacks](./deployment-stacks.md). --## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure PowerShell [version 12.0.0 or later](/powershell/azure/install-az-ps) or Azure CLI [version 2.61.0 or later](/cli/azure/install-azure-cli).-- [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).--## Create a Bicep file --Create a Bicep file in Visual Studio Code to create a storage account and a virtual network. This file is used to create your deployment stack. --```bicep -param resourceGroupLocation string = resourceGroup().location -param storageAccountName string = 'store${uniqueString(resourceGroup().id)}' -param vnetName string = 'vnet${uniqueString(resourceGroup().id)}' --resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = { - name: storageAccountName - location: resourceGroupLocation - kind: 'StorageV2' - sku: { - name: 'Standard_LRS' - } -} --resource virtualNetwork 'Microsoft.Network/virtualNetworks@2023-11-01' = { - name: vnetName - location: resourceGroupLocation - properties: { - addressSpace: { - addressPrefixes: [ - '10.0.0.0/16' - ] - } - subnets: [ - { - name: 'Subnet-1' - properties: { - addressPrefix: '10.0.0.0/24' - } - } - { - name: 'Subnet-2' - properties: { - addressPrefix: '10.0.1.0/24' - } - } - ] - } -} -``` --Save the Bicep file as _main.bicep_. --## Create a deployment stack --To create a resource group and a deployment stack, execute the following commands, ensuring you provide the appropriate Bicep file path based on your execution location. --# [CLI](#tab/azure-cli) --```azurecli -az group create \ - --name 'demoRg' \ - --location 'centralus' --az stack group create \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --template-file './main.bicep' \ - --action-on-unmanage 'detachAll' \ - --deny-settings-mode 'none' -``` --Use the `action-on-unmanage` switch to define what happens to resources that are no longer managed after a stack is updated or deleted. For more information, see [Control detachment and deletion](./deployment-stacks.md#control-detachment-and-deletion). The `deny-settings-mode` switch assigns a specific type of permissions to the managed resources, which prevents their deletion by unauthorized security principals. For more information, see [Protect managed resources against deletion](./deployment-stacks.md#protect-managed-resources). 
--# [PowerShell](#tab/azure-powershell) --```azurepowershell -New-AzResourceGroup ` - -Name "demoRg" ` - -Location "centralus" --New-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` - -TemplateFile "./main.bicep" ` - -ActionOnUnmanage "detachAll" ` - -DenySettingsMode "none" -``` --Use the `ActionOnUnmanage` switch to define what happens to resources that are no longer managed after a stack is updated or deleted. For more information, see [Control detachment and deletion](./deployment-stacks.md#control-detachment-and-deletion). The `DenySettingsMode` switch assigns a specific type of permissions to the managed resources, which prevents their deletion by unauthorized security principals. For more information, see [Protect managed resources against deletion](./deployment-stacks.md#protect-managed-resources). ----## List the deployment stack and the managed resources --To verify the deployment, you can list the deployment stack and list the managed resources of the deployment stack. --To list the deployed deployment stack: --# [CLI](#tab/azure-cli) --```azurecli -az stack group show \ - --resource-group 'demoRg' \ - --name 'demoStack' -``` --The output shows two managed resources - one storage account and one virtual network: --```output -{ - "actionOnUnmanage": { - "managementGroups": "detach", - "resourceGroups": "detach", - "resources": "detach" - }, - "debugSetting": null, - "deletedResources": [], - "denySettings": { - "applyToChildScopes": false, - "excludedActions": null, - "excludedPrincipals": null, - "mode": "none" - }, - "deploymentId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-24051714epybc", - "deploymentScope": null, - "description": null, - "detachedResources": [], - "duration": "PT32.5330364S", - "error": null, - "failedResources": [], - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack", - "location": null, - "name": "demoStack", - "outputs": null, - "parameters": {}, - "parametersLink": null, - "provisioningState": "succeeded", - "resourceGroup": "demoRg", - "resources": [ - { - "denyStatus": "none", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetthmimleef5fwk", - "resourceGroup": "demoRg", - "status": "managed" - }, - { - "denyStatus": "none", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storethmimleef5fwk", - "resourceGroup": "demoRg", - "status": "managed" - } - ], - "systemData": { - "createdAt": "2024-05-17T14:50:18.382948+00:00", - "createdBy": "johndoe@contoso.com", - "createdByType": "User", - "lastModifiedAt": "2024-05-17T14:50:18.382948+00:00", - "lastModifiedBy": "johndoe@contoso.com", - "lastModifiedByType": "User" - }, - "tags": {}, - "template": null, - "templateLink": null, - "type": "Microsoft.Resources/deploymentStacks" -} -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Get-AzResourceGroupDeploymentStack ` - -ResourceGroupName "demoRg" ` - -Name "demoStack" -``` --The output shows two managed resources - one storage account and one virtual network: --```output -Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack -Name : demoStack -ProvisioningState : succeeded -resourcesCleanupAction : 
detach -resourceGroupsCleanupAction : detach -managementGroupsCleanupAction : detach -CorrelationId : 62f1631c-a823-46c1-b240-9182ccf39cfa -DenySettingsMode : none -CreationTime(UTC) : 5/17/2024 3:37:42 PM -DeploymentId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-24051715b17ls -Resources : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetthmimleef5fwk - /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storethmimleef5fwk -``` ----You can also verify the deployment by listing the managed resources in the deployment stack: --# [CLI](#tab/azure-cli) --```azurecli -az stack group show \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --output 'json' -``` --The output is similar to: --```output -{ - "actionOnUnmanage": { - "managementGroups": "detach", - "resourceGroups": "detach", - "resources": "detach" - }, - "debugSetting": null, - "deletedResources": [], - "denySettings": { - "applyToChildScopes": false, - "excludedActions": null, - "excludedPrincipals": null, - "mode": "none" - }, - "deploymentId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-24051714epybc", - "deploymentScope": null, - "description": null, - "detachedResources": [], - "duration": "PT32.5330364S", - "error": null, - "failedResources": [], - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack", - "location": null, - "name": "demoStack", - "outputs": null, - "parameters": {}, - "parametersLink": null, - "provisioningState": "succeeded", - "resourceGroup": "demoRg", - "resources": [ - { - "denyStatus": "none", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetthmimleef5fwk", - "resourceGroup": "demoRg", - "status": "managed" - }, - { - "denyStatus": "none", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storethmimleef5fwk", - "resourceGroup": "demoRg", - "status": "managed" - } - ], - "systemData": { - "createdAt": "2024-05-17T14:50:18.382948+00:00", - "createdBy": "johndoe@contoso.com", - "createdByType": "User", - "lastModifiedAt": "2024-05-17T14:50:18.382948+00:00", - "lastModifiedBy": "johndoe@contoso.com", - "lastModifiedByType": "User" - }, - "tags": {}, - "template": null, - "templateLink": null, - "type": "Microsoft.Resources/deploymentStacks" -} -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources -``` --The output is similar to: --```output -Status DenyStatus Id - - -- -managed none /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetthmimleef5fwk -managed none /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storethmimleef5fwk -``` ----## Update the deployment stack --To update a deployment stack, make the necessary modifications to the underlying Bicep file, and then rerun the command for creating the deployment stack or use the set command in Azure PowerShell. 
--In this tutorial, you perform the following activities: --- Update a property of a managed resource.-- Add a resource to the stack.-- Detach a managed resource.-- Attach an existing resource to the stack.-- Delete a managed resource.--### Update a managed resource --At the end of the previous step, you have one stack with two managed resources. You will update a property of the storage account resource. --Edit the **main.bicep** file to change the sku name from `Standard_LRS` to `Standard_GRS`: --```bicep -resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = { - name: storageAccountName - location: location - kind: 'StorageV2' - sku: { - name: 'Standard_GRS' - } -} -``` --Update the managed resource by running the following command: --# [CLI](#tab/azure-cli) --```azurecli -az stack group create \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --template-file './main.bicep' \ - --action-on-unmanage 'detachAll' \ - --deny-settings-mode 'none' -``` --# [PowerShell](#tab/azure-powershell) --The following sample shows the set command. You can also use the `Create-AzResourceGroupDeploymentStack` commandlet. --```azurepowershell -Set-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` - -TemplateFile "./main.bicep" ` - -ActionOnUnmanage "detachAll" ` - -DenySettingsMode "none" -``` ----You can verify the SKU property by running the following command: --# [CLI](#tab/azure-cli) --az resource list --resource-group 'demoRg' --# [PowerShell](#tab/azure-powershell) --Get-azStorageAccount -ResourceGroupName "demoRg" ----### Add a managed resource --At the end of the previous step, you have one stack with two managed resources. You will add one more storage account resource to the stack. --Edit the **main.bicep** file to include another storage account definition: --```bicep -resource storageAccount1 'Microsoft.Storage/storageAccounts@2023-04-01' = { - name: '1${storageAccountName}' - location: location - kind: 'StorageV2' - sku: { - name: 'Standard_LRS' - } -} -``` --Update the deployment stack by running the following command: --# [CLI](#tab/azure-cli) --```azurecli -az stack group create \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --template-file './main.bicep' \ - --action-on-unmanage 'detachAll' \ - --deny-settings-mode 'none' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Set-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` - -TemplateFile "./main.bicep" ` - -ActionOnUnmanage "detachAll" ` - -DenySettingsMode "none" -``` ----You can verify the deployment by listing the managed resources in the deployment stack: --# [CLI](#tab/azure-cli) --```azurecli -az stack group show \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --output 'json' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources -``` ----You shall see the new storage account in addition to the two existing resources. --### Detach a managed resource --At the end of the previous step, you have one stack with three managed resources. You will detach one of the managed resources. After the resource is detached, it will remain in the resource group. 
--Edit the **main.bicep** file to remove the following storage account definition from the previous step: --```bicep -resource storageAccount1 'Microsoft.Storage/storageAccounts@2023-04-01' = { - name: '1${storageAccountName}' - location: location - kind: 'StorageV2' - sku: { - name: 'Standard_LRS' - } -} -``` --Update the deployment stack by running the following command: --# [CLI](#tab/azure-cli) --```azurecli -az stack group create \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --template-file './main.bicep' \ - --action-on-unmanage 'detachAll' \ - --deny-settings-mode 'none' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Set-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` - -TemplateFile "./main.bicep" ` - -ActionOnUnmanage "detachAll" ` - -DenySettingsMode "none" -``` ----You can verify the deployment by listing the managed resources in the deployment stack: --# [CLI](#tab/azure-cli) --```azurecli -az stack group show \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --output 'json' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources -``` ----You shall see two managed resources in the stack. However, the detached the resource is still listed in the resource group. You can list the resources in the resource group by running the following command: --# [CLI](#tab/azure-cli) --```azurecli -az resource list --resource-group 'demoRg' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Get-azResource -ResourceGroupName "demoRg" -``` --There are three resources in the resource group, even though the stack only contains two resources. ----### Attach an existing resource to the stack --At the end of the previous step, you have one stack with two managed resources. There is an unmanaged resource in the same resource group as the managed resources. You will attach this unmanaged resource to the stack. --Edit the **main.bicep** file to include the storage account definition of the unmanaged resource: --```bicep -resource storageAccount1 'Microsoft.Storage/storageAccounts@2023-04-01' = { - name: '1${storageAccountName}' - location: location - kind: 'StorageV2' - sku: { - name: 'Standard_LRS' - } -} -``` --Update the deployment stack by running the following command: --# [CLI](#tab/azure-cli) --```azurecli -az stack group create \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --template-file './main.bicep' \ - --action-on-unmanage 'detachAll' \ - --deny-settings-mode 'none' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Set-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` - -TemplateFile "./main.bicep" ` - -ActionOnUnmanage "detachAll" ` - -DenySettingsMode "none" -``` ----You can verify the deployment by listing the managed resources in the deployment stack: --# [CLI](#tab/azure-cli) --```azurecli -az stack group show \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --output 'json' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources -``` ----You shall see three managed resources. --### Delete a managed resource --At the end of the previous step, you have one stack with three managed resources. In one of the previous steps, you detached a managed resource. 
Sometimes, you might want to delete resources instead of detaching one. To delete a resource, you use a action-on-unmanage switch with the create/set command. --Edit the **main.bicep** file to remove the following storage account definition: --```bicep -resource storageAccount1 'Microsoft.Storage/storageAccounts@2023-04-01' = { - name: '1${storageAccountName}' - location: location - kind: 'StorageV2' - sku: { - name: 'Standard_LRS' - } -} -``` --Run the following command with the `--action-on-unmanage 'deleteResources'` switch: --# [CLI](#tab/azure-cli) --```azurecli -az stack group create \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --template-file './main.bicep' \ - --action-on-unmanage 'deleteResources' \ - --deny-settings-mode 'none' -``` --In addition to `deleteResources`, there are two other values available: `deleteAll` and `detachAll`. For more information, see [Control detachment and deletion](./deployment-stacks.md#control-detachment-and-deletion). --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Set-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` - -TemplateFile "./main.bicep" ` - -ActionOnUnmanage "deleteResources" ` - -DenySettingsMode "none" -``` --In addition to the `-ActionOnUnmanage "deleteResources"`, there are two other values available: `deleteAll` and `detachAll`. For more information, see [Control detachment and deletion](./deployment-stacks.md#control-detachment-and-deletion). ----You can verify the deployment by listing the managed resources in the deployment stack: --# [CLI](#tab/azure-cli) --```azurecli -az stack group show \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --output 'json' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources -``` ----You shall see two managed resources in the stack. The resource is also removed from the resource group. You can verify the resource group by running the following command: --# [CLI](#tab/azure-cli) --```azurecli -az resource list --resource-group 'demoRg' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Get-azResource -ResourceGroupName "demoRg" -``` ----## Configure deny settings --When creating a deployment stack, it is possible to assign a specific type of permissions to the managed resources, which prevents their deletion by unauthorized security principals. These settings are refereed as deny settings. --# [PowerShell](#tab/azure-powershell) --The Azure PowerShell includes these parameters to customize the deny assignment: --- `DenySettingsMode`: Defines the operations that are prohibited on the managed resources to safeguard against unauthorized security principals attempting to delete or update them. This restriction applies to everyone unless explicitly granted access. The values include: `None`, `DenyDelete`, and `DenyWriteAndDelete`.-- `DenySettingsApplyToChildScopes`: Deny settings are applied to child Azure management scopes.-- `DenySettingsExcludedActions`: List of role-based management operations that are excluded from the deny settings. Up to 200 actions are permitted.-- `DenySettingsExcludedPrincipals`: List of Microsoft Entra principal IDs excluded from the lock. 
Up to five principals are permitted.--# [CLI](#tab/azure-cli) --The Azure CLI includes these parameters to customize the deny assignment: --- `deny-settings-mode`: Defines the operations that are prohibited on the managed resources to safeguard against unauthorized security principals attempting to delete or update them. This restriction applies to everyone unless explicitly granted access. The values include: `none`, `denyDelete`, and `denyWriteAndDelete`.-- `deny-settings-apply-to-child-scopes`: Deny settings are applied to child Azure management scopes.-- `deny-settings-excluded-actions`: List of role-based access control (RBAC) management operations excluded from the deny settings. Up to 200 actions are allowed.-- `deny-settings-excluded-principals`: List of Microsoft Entra principal IDs excluded from the lock. Up to five principals are allowed.----In this tutorial, you configure the deny settings mode. For more information about other deny settings, see [Protect managed resources against deletion](./deployment-stacks.md#protect-managed-resources). --At the end of the previous step, you have one stack with two managed resources. --Run the following command with the deny settings mode switch set to deny-delete: --# [CLI](#tab/azure-cli) --```azurecli -az stack group create \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --template-file './main.bicep' \ - --action-on-unmanage 'detachAll' \ - --deny-settings-mode 'denyDelete' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Set-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` - -TemplateFile "./main.bicep" ` - -ActionOnUnmanage "detachAll" ` - -DenySettingsMode "DenyDelete" -``` ----The following delete command shall fail because the deny settings mode is set to deny-delete: --# [CLI](#tab/azure-cli) --```azurecli -az resource delete \ - --resource-group 'demoRg' \ - --name '<storage-account-name>' \ - --resource-type 'Microsoft.Storage/storageAccounts' -``` --# [PowerShell](#tab/azure-powershell) --Remove-AzResource ` - -ResourceGroupName "demoRg" ` - -ResourceName "\<storage-account-name\>" ` - -ResourceType "Microsoft.Storage/storageAccounts" ----Update the stack with the deny settings mode to none, so you can complete the rest of the tutorial: --# [CLI](#tab/azure-cli) --```azurecli -az stack group create \ - --name 'demoStack' \ - --resource-group 'demoRg' \ - --template-file './main.bicep' \ - --action-on-unmanage 'detachAll' \ - --deny-settings-mode 'none' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Set-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` - -TemplateFile "./main.bicep" ` - -ActionOnUnmanage "detachAll" ` - -DenySettingsMode "none" -``` ----## Export template from the stack --By exporting a deployment stack, you can generate a Bicep file. This Bicep file serves as a resource for future development and subsequent deployments. --# [CLI](#tab/azure-cli) --```azurecli -az stack group export \ - --name 'demoStack' \ - --resource-group 'demoRg' -``` --# [PowerShell](#tab/azure-powershell) --```azurepowershell -Export-AzResourceGroupDeploymentStack ` - -Name "demoStack" ` - -ResourceGroupName "demoRg" ` -``` ----You can pipe the output to a file. 
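For instance, here's a minimal sketch of piping the export output to a local file with the Azure CLI, reusing the `demoStack` and `demoRg` names from this tutorial; the output file name and extension are illustrative assumptions only:

```azurecli
# Write the exported template for the demoStack deployment stack to a local file.
# The file name used here is only an example; review the exported content before reuse.
az stack group export \
  --name 'demoStack' \
  --resource-group 'demoRg' > demoStack-exported-template.txt
```

You can then review or edit the exported template and use it as the starting point for a later deployment.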
--## Delete the deployment stack
--To delete the deployment stack and the managed resources, run the following command:
--# [CLI](#tab/azure-cli)
--```azurecli
-az stack group delete \
- --name 'demoStack' \
- --resource-group 'demoRg' \
- --action-on-unmanage 'deleteAll'
-```
--To delete the deployment stack, but detach the managed resources:
--```azurecli
-az stack group delete \
- --name 'demoStack' \
- --resource-group 'demoRg' \
- --action-on-unmanage 'detachAll'
-```
--For more information, see [Delete deployment stacks](./deployment-stacks.md#delete-deployment-stacks).
--# [PowerShell](#tab/azure-powershell)
--```azurepowershell
-Remove-AzResourceGroupDeploymentStack `
- -Name "demoStack" `
- -ResourceGroupName "demoRg" `
- -ActionOnUnmanage "deleteAll"
-```
--To delete the deployment stack, but detach the managed resources:
--```azurepowershell
-Remove-AzResourceGroupDeploymentStack `
- -Name "demoStack" `
- -ResourceGroupName "demoRg" `
- -ActionOnUnmanage "detachAll"
-```
--For more information, see [Delete deployment stacks](./deployment-stacks.md#delete-deployment-stacks).
----## Next steps
--> [!div class="nextstepaction"]
-> [Deployment stacks](./deployment-stacks.md) |
azure-resource-manager | Update Managed Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/update-managed-resources.md | Title: Update managed resources description: Describes how to work on resources in the managed resource group for an Azure managed application. Previously updated : 06/24/2024 Last updated : 08/29/2024 # Work with resources in the managed resource group for Azure managed application -This article describes how to update resources that are deployed as part of a managed application. As the publisher of a managed application, you have access to the resources in the managed resource group. To update these resources, you need to find the managed resource group associated with a managed application, and access the resource in that resource group. +This article describes how to update resources that are deployed as part of a managed application. As the publisher of a managed application, you have management access to resources in the managed resource group in the customer's Azure tenant. To update these resources, you need to sign in to the customer's subscription, find the managed resource group associated with a managed application, and access the resources in the managed resource group. For more information about permissions, see [Publisher and customer permissions](./overview.md#publisher-and-customer-permissions). This article assumes you deployed the managed application in the [Managed Web Application (IaaS) with Azure management services](https://github.com/Azure/azure-managedapp-samples/tree/master/Managed%20Application%20Sample%20Packages/201-managed-web-app) sample project. That managed application includes a **Standard_D1_v2** virtual machine. If you didn't deploy that managed application, you can still use this article to become familiar with the steps for updating a managed resource group. In this article, you use Azure CLI to: ## Get managed application and managed resource group -To get the managed applications in a resource group, use: +To get the managed applications in a resource group, use the following commands. Replace `<resourceGroupName>` with your resource group name. ```azurecli-interactive-az managedapp list --query "[?contains(resourceGroup,'DemoApp')]" +az managedapp list --query "[?contains(resourceGroup,'<resourceGroupName>')]" ``` To get the ID of the managed resource group, use: ```azurecli-interactive-az managedapp list --query "[?contains(resourceGroup,'DemoApp')].{ managedResourceGroup:managedResourceGroupId }" +az managedapp list --query "[?contains(resourceGroup,'<resourceGroupName>')].{ managedResourceGroup:managedResourceGroupId }" ``` ## Resize VMs in managed resource group -To see the virtual machines in the managed resource group, provide the name of the managed resource group. +To see the virtual machines in the managed resource group, provide the name of the managed resource group. Replace `<mrgName>` with your managed resource group's name. 
```azurecli-interactive-az vm list -g DemoApp6zkevchqk7sfq --query "[].{VMName:name,OSType:storageProfile.osDisk.osType,VMSize:hardwareProfile.vmSize}" +az vm list -g <mrgName> --query "[].{VMName:name,OSType:storageProfile.osDisk.osType,VMSize:hardwareProfile.vmSize}" ``` To update the size of the VMs, use: ```azurecli-interactive-az vm resize --size Standard_D2_v2 --ids $(az vm list -g DemoApp6zkevchqk7sfq --query "[].id" -o tsv) +az vm resize --size Standard_D2_v2 --ids $(az vm list -g <mrgName> --query "[].id" -o tsv) ``` After the operation completes, verify the application is running on Standard D2 v2. After the operation completes, verify the application is running on Standard D2 ## Apply policy to managed resource group -Get the managed resource group and assignment a policy at that scope. The policy **e56962a6-4747-49cd-b67b-bf8b01975c4c** is a built-in policy for specifying allowed locations. +Get the managed resource group and assign a policy at that scope. The policy **e56962a6-4747-49cd-b67b-bf8b01975c4c** is a built-in policy to specify allowed locations. ```azurecli-interactive-managedGroup=$(az managedapp show --name <app-name> --resource-group DemoApp --query managedResourceGroupId --output tsv) +managedGroup=$(az managedapp show --name <app-name> --resource-group <resourceGroupName> --query managedResourceGroupId --output tsv) az policy assignment create --name locationAssignment --policy e56962a6-4747-49cd-b67b-bf8b01975c4c --scope $managedGroup --params '{ "listofallowedLocations": { The policy assignment appears in the portal. ## Next steps -- For an introduction to managed applications, see [Managed application overview](overview.md).+- For an introduction to managed applications, see [Azure Managed Applications overview](overview.md). - For sample projects, see [Sample projects for Azure managed applications](sample-projects.md). |
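To confirm that the `locationAssignment` policy assignment shown earlier was created at the managed resource group scope, a check along these lines can help. This is a sketch that reuses the `$managedGroup` variable from that step; the `--query` projection is only illustrative:

```azurecli
# List policy assignments scoped to the managed resource group to confirm
# the locationAssignment policy from the previous section exists.
az policy assignment list \
  --scope $managedGroup \
  --query "[].{name:name, policy:policyDefinitionId}" \
  --output table
```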
azure-resource-manager | App Service Move Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md | This article describes the steps to move App Service resources between resource If you want to move App Services to a new region, see [Move an App Service resource to another region](../../../app-service/manage-move-across-regions.md).
+You can move App Service resources to a new resource group or subscription, but you need to delete their TLS/SSL certificates and upload them to the new resource group or subscription. Also, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates).
+
## Move across subscriptions
When you move a Web App across subscriptions, the following guidance applies:
- App Service Environments can't be moved to a new resource group or subscription.
- You can move a Web App and App Service plan hosted on an App Service Environment to a new subscription without moving the App Service Environment. The Web App and App Service plan that you move will always be associated with your initial App Service Environment. You can't move a Web App/App Service plan to a different App Service Environment.
- If you need to move a Web App and App Service plan to a new App Service Environment, you'll need to recreate these resources in your new App Service Environment. Consider using the [backup and restore feature](../../../app-service/manage-backup.md) as a way of recreating your resources in a different App Service Environment.-- You can move a certificate bound to a web without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group. However, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates).
- App Service apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate them after the move.
- App Service apps with virtual network integration cannot be moved. Remove the virtual network integration and reconnect it after the move.
- App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section. |
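A minimal sketch of how such a move could look with the Azure CLI, assuming placeholder names in angle brackets; the web app and its App Service plan are moved together, and `--destination-subscription-id` would be added for a cross-subscription move:

```azurecli
# Placeholder resource names; adjust to your environment.
appId=$(az webapp show --name <app-name> --resource-group <source-rg> --query id --output tsv)
planId=$(az appservice plan show --name <plan-name> --resource-group <source-rg> --query id --output tsv)

# Move the web app and its App Service plan together to the target resource group.
# Add --destination-subscription-id <subscription-id> to move across subscriptions.
az resource move \
  --destination-group <target-rg> \
  --ids $appId $planId
```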
cdn | Cdn Custom Ssl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md | To enable HTTPS on a custom domain, follow these steps: 3. In the list of CDN endpoints, select the endpoint containing your custom domain. - ![Endpoints list](./media/cdn-custom-ssl/cdn-select-custom-domain-endpoint.png) -+ :::image type="content" source="./media/cdn-custom-ssl/cdn-select-custom-domain-endpoint.png" alt-text="Screenshot of endpoints list."::: The **Endpoint** page appears. 4. In the list of custom domains, select the custom domain for which you want to enable HTTPS. - ![Screenshot shows the Custom domain page with the option to Use my own certificate.](./media/cdn-custom-ssl/cdn-custom-domain.png) + :::image type="content" source="./media/cdn-custom-ssl/cdn-custom-domain.png" alt-text="Screenshot of the custom domain page with the option to use my own certificate."::: The **Custom domain** page appears. To enable HTTPS on a custom domain, follow these steps: 6. Select **On** to enable HTTPS. - ![Custom domain HTTPS status](./media/cdn-custom-ssl/cdn-select-cdn-managed-certificate.png) + :::image type="content" source="./media/cdn-custom-ssl/cdn-select-cdn-managed-certificate.png" alt-text="Screen shot of the custom domain HTTPS status."::: 7. Continue to [Validate the domain](#validate-the-domain). DigiCert sends a verification email to the following email addresses. Verify tha You should receive an email in a few minutes for you to approve the request. In case you're using a spam filter, add verification@digicert.com to its allowlist. If you don't receive an email within 24 hours, contact Microsoft support. -![Domain validation email](./media/cdn-custom-ssl/domain-validation-email.png) When you select the approval link, you're directed to the following online approval form: -![Domain validation form](./media/cdn-custom-ssl/domain-validation-form.png) Follow the instructions on the form; you have two verification options: After approval, DigiCert completes the certificate creation for your custom doma After the domain name is validated, it can take up to 6-8 hours for the custom domain HTTPS feature to be activated. When the process completes, the custom HTTPS status in the Azure portal is changed to **Enabled**. The four operation steps in the custom domain dialog are marked as complete. Your custom domain is now ready to use HTTPS. -![Enable HTTPS dialog](./media/cdn-custom-ssl/cdn-enable-custom-ssl-complete.png) ### Operation progress In this section, you learn how to disable HTTPS for your custom domain. 4. Choose the custom domain for which you want to disable HTTPS. - ![Custom domains list](./media/cdn-custom-ssl/cdn-custom-domain-HTTPS-enabled.png) + :::image type="content" source="./media/cdn-custom-ssl/cdn-custom-domain-certificate-deployed.png" alt-text="Screenshot of the custom domains list."::: 5. Choose **Off** to disable HTTPS, then select **Apply**. - ![Custom HTTPS dialog](./media/cdn-custom-ssl/cdn-disable-custom-ssl.png) + :::image type="content" source="./media/cdn-custom-ssl/cdn-disable-custom-ssl.png" alt-text="Screenshot of the custom HTTPS dialog."::: ### Wait for propagation -After the custom domain HTTPS feature is disabled, it can take up to 6-8 hours for it to take effect. When the process is complete, the custom HTTPS status in the Azure portal is changed to **Disabled**. The three operation steps in the custom domain dialog are marked as complete. Your custom domain can no longer use HTTPS. 
--![Disable HTTPS dialog](./media/cdn-custom-ssl/cdn-disable-custom-ssl-complete.png) --#### Operation progress --The following table shows the operation progress that occurs when you disable HTTPS. After you disable HTTPS, three operation steps appear in the custom domain dialog. When a step becomes active, details appear under the step. After a step successfully completes, a green check mark appears next to it. --| Operation progress | Operation details | -| | | -| 1 Submitting request | Submitting your request | -| 2 Certificate deprovisioning | Deleting certificate | -| 3 Complete | Certificate deleted | +After the custom domain HTTPS feature is disabled, it can take up to 6-8 hours for it to take effect. When the process is complete, the custom HTTPS status in the Azure portal is changed to **Disabled**. Your custom domain can no longer use HTTPS. ## Frequently asked questions |
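If you prefer scripting to the portal steps above, the Azure CLI has equivalent commands. The following is a hedged sketch with placeholder resource names; the custom domain resource name is typically the hostname with dots replaced by hyphens:

```azurecli
# Disable HTTPS on a custom domain of an Azure CDN endpoint (placeholder names).
az cdn custom-domain disable-https \
  --resource-group <resource-group> \
  --profile-name <cdn-profile> \
  --endpoint-name <cdn-endpoint> \
  --name <custom-domain-resource-name>

# Counterpart to re-enable a CDN-managed certificate:
# az cdn custom-domain enable-https --resource-group <resource-group> --profile-name <cdn-profile> --endpoint-name <cdn-endpoint> --name <custom-domain-resource-name>
```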
cdn | Cdn Purge Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-purge-endpoint.md | This guide walks you through purging assets from all edge nodes of an endpoint. 3. **Root domain purge**: Purge the root of the endpoint with "/" in the path.
 > [!TIP]- > 1. Paths must be specified for purge and must be a relative URL that fit the following [RFC 3986 - Uniform Resource Identifier (URI: Generic Syntax](https://datatracker.ietf.org/doc/html/rfc3986#section-3.3).
 + > 1. Paths must be specified for purge and must be a relative URL that fits the following [RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax](https://datatracker.ietf.org/doc/html/rfc3986#section-3.3).
 > > 1. In Azure CDN from Microsoft, query strings in the purge URL path are not considered. If the path to purge is provided as `/TestCDN?myname=max`, only `/TestCDN` is considered. The query string `myname=max` is omitted. Both `TestCDN?myname=max` and `TestCDN?myname=clark` will be purged. |
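As a complement to the path rules above, a minimal sketch of a purge call with the Azure CLI, using placeholder profile and endpoint names and example paths:

```azurecli
# Purge a specific relative path, a wildcard path, and the root of the endpoint.
az cdn endpoint purge \
  --resource-group <resource-group> \
  --profile-name <cdn-profile> \
  --name <cdn-endpoint> \
  --content-paths '/TestCDN' '/images/*' '/'
```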
cloud-shell | Faq Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/faq-troubleshooting.md | description: This article answers common questions and explains how to troubleshoot Cloud Shell issues. -ms.contributor: jahelmic Previously updated : 08/22/2024 Last updated : 08/29/2024 tags: azure-resource-manager command that requires elevated permissions. - **Details**: When creating the Cloud Shell storage account for first-time users, it's unsuccessful due to an Azure Policy assignment placed by your admin. The error message includes: - > The resource action 'Microsoft.Storage/storageAccounts/write' is disallowed by - > one or more policies. + ``` + The resource action 'Microsoft.Storage/storageAccounts/write' is disallowed by + one or more policies. + ``` - **Resolution**: Contact your Azure administrator to remove or update the Azure Policy assignment denying storage creation. command that requires elevated permissions. following domains: - `*.console.azure.com` - `*.servicebus.windows.net`+ - `*.servicebus.usgovcloudapi.net` for Azure Government Cloud -### Accessing Cloud Shell from VNET Isolation with a Private DNS Zone - Failed to request a terminal +### Failed to request a terminal - Accessing Cloud Shell from a network that uses a private DNS resolver - **Details**: Cloud Shell uses Azure Relay for terminal connections. Cloud Shell can fail to request a terminal due to DNS resolution problems. This failure can be caused when you launch a- nonisolated Cloud Shell session from within a VNet-isolated environment that includes a private - DNS Zone for the servicebus domain. + Cloud Shell session from a host in a network that has a private DNS Zone for the servicebus + domain. This error can also occur if you're using a private on-premises DNS server. -- **Resolution**: There are two ways to resolve this problem. You can follow the instructions in- [Deploy Cloud Shell in a virtual network][01]. Or, you can add a DNS record for the Azure Relay - instance that Cloud Shell uses. +- **Resolution**: You can add a DNS record for the Azure Relay instance that Cloud Shell uses. The following steps show you how to identify the DNS name of the Cloud Shell instance and how to create a DNS record for that name. command that requires elevated permissions. corner. Search for `terminals?` to find the request for a Cloud Shell terminal. Select the one of the request entries found by the search. In the **Headers** tab, find the hostname in the **Request URL**. The name is similar to- `ccon-prod-<region-name>-aci-XX.servicebus.windows.net`. + `ccon-prod-<region-name>-aci-XX.servicebus.windows.net`. For Azure Government Cloud, the + hostname ends with `servicebus.usgovcloudapi.net`. The following screenshot shows the Developer Tools in Microsoft Edge for a successful request for a terminal. The hostname is `ccon-prod-southcentalus-aci-02.servicebus.windows.net`. In command that requires elevated permissions. [![Screenshot of the browser developer tools.](media/faq-troubleshooting/devtools-small.png)](media/faq-troubleshooting/devtools-large.png#lightbox) + For information about accessing the Developer Tools in other browsers, see + [Capture a browser trace for troubleshooting][03]. + 1. From a host outside of your private network, run the `nslookup` command to find the IP address of the hostname as found in the previous step. command that requires elevated permissions. 
```Output Server: 168.63.129.16- Address: 168.63.129.16#53 + Address: 168.63.129.16 Non-authoritative answer: ccon-prod-southcentralus-aci-02.servicebus.windows.net canonical name = ns-sb2-prod-sn3-012.cloudapp.net. command that requires elevated permissions. Address: 40.84.152.91 ``` - 1. Add an A record for the public IP in the Private DNS Zone of the VNET isolated setup. For this + 1. Add an A record for the public IP in the Private DNS Zone of your private network. For this example, the DNS record would have the following properties: - Name: ccon-prod-southcentralus-aci-02 command that requires elevated permissions. For more information about creating DNS records in a private DNS zone, see [Manage DNS record sets and records with Azure DNS][02]. + > [!NOTE] + > This IP address is subject to change periodically. You might need to repeat this process to + > discover the new IP address. ++ Alternately, you can deploy your own private Cloud Shell instance. For more information, see + [Deploy Cloud Shell in a virtual network][01]. + ## Managing Cloud Shell ### Manage personal data Use the following steps to delete your user settings. <!-- link references --> [01]: /azure/cloud-shell/vnet/overview [02]: /azure/dns/dns-operations-recordsets-portal+[03]: /azure/azure-portal/capture-browser-trace |
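For the A record step above, if the private DNS zone is managed in Azure, a command along these lines could create the record. This is a sketch that assumes a private DNS zone named `servicebus.windows.net` in a placeholder resource group; use the hostname and IP address returned by your own `nslookup`, and remember that the IP address can change periodically:

```azurecli
# Create the A record for the Cloud Shell relay hostname in the private DNS zone.
az network private-dns record-set a add-record \
  --resource-group <resource-group> \
  --zone-name servicebus.windows.net \
  --record-set-name ccon-prod-southcentralus-aci-02 \
  --ipv4-address 40.84.152.91
```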
communication-services | Sdk Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md | Publishing locations for individual SDK packages: ##### Android platform support -The Android ecosystem is extensive, encompassing various versions and specialized platforms designed for diverse types of devices. The next table lists the Android platforms currently available: +The Android ecosystem is extensive, encompassing various versions and specialized platforms designed for diverse types of devices. The next table lists the Android platforms currently supported: | Devices | Description | Support | | -- | --| -- | | Phones and tablets | Standard devices running [Android Commercial](https://developer.android.com/get-started). | Fully support with [the video resolution](./voice-video-calling/calling-sdk-features.md?#supported-video-resolutions). |-| TV apps or gaming | Apps running running [Android TV](https://developer.android.com/tv), optimized for the TV experience, focused on streaming services and gaming. |Audio-only support | -| Smartwatches or wearables devices | Simple user interface and lower power consumption, designed to operate on small screens with limited hardware, using [Wear OS](https://wearos.google.com/). |Audio-only support | -| Automobile | Car head units running [Android Automotive OS (AAOS)](https://source.android.com/docs/automotive/start/what_automotive). |Audio-only support | -| Mirror auto applications | Apps that allow driver to mirror their phone to a carΓÇÖs built-in screens, running [Android Auto](https://www.android.com/auto/). | Audio-only support | -| Custom devices | Custom devices or applications using [Android Open Source Project (AOSP)](https://source.android.com/), running custom operating systems for specialized hardware, like ruggedized devices, kiosks, or smart glasses; devices where performance, security, or customization is critical. |Audio-only support | > [!NOTE] > We **only support video calls on phones and tablets**. For use cases involving video on non-standard devices or platforms (such as smart glasses or custom devices), we suggest [contacting us](https://github.com/Azure/communication) early in your development process to help determine the most suitable integration approach. |
communication-services | Call Recording | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md | An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` "sampleRate": <number>, // sample rate for audio recording "bitRate": <number>, // bitrate for audio recording "channels": <number> // number of audio channels in output recording+ }, + "videoConfiguration": { + "longerSideLength": <number>, // longerSideLength for video recording + "shorterSideLength": <number>, // shorterSideLength for video recording + "frameRate": <number>, // frameRate for video recording + "bitRate": <number> // bitrate for video recording } }, "participants": [ |
container-apps | Azure Resource Manager Api Spec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md | -This article describes the ARM and YAML configurations for frequently used Container Apps resources. For a complete list of Container Apps resources see [Azure Resource Manager templates for Container Apps](/azure/templates/microsoft.app/containerapps?pivots=deployment-language-arm-template). +This article includes examples of the ARM and YAML configurations for frequently used Container Apps resources. For a complete list of Container Apps resources see [Azure Resource Manager templates for Container Apps](/azure/templates/microsoft.app/containerapps?pivots=deployment-language-arm-template). The code listed in this article is for example purposes only. For full schema and type information, see the JSON definitions for your required API version. ## API versions |
container-apps | Java Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-admin.md | When running Admin for Spring in Azure Container Apps, be aware of the following Before you begin to work with the Admin for Spring, you first need to create the required resources. +### [Azure CLI](#tab/azure-cli) + The following commands help you create your resource group and Container Apps environment. 1. Create variables to support your application configuration. These values are provided for you for the purposes of this lesson. ```bash export LOCATION=eastus- export RESOURCE_GROUP=my-demo-resource-group + export RESOURCE_GROUP=my-resource-group export ENVIRONMENT=my-environment export JAVA_COMPONENT_NAME=admin export APP_NAME=sample-admin-client The following commands help you create your resource group and Container Apps en --location $LOCATION ``` +### [Azure portal](#tab/azure-portal) ++Use the following steps to create each of the resources necessary to create a container app. ++1. Search for **Container Apps** in the Azure portal and select **Create**. ++1. Enter the following values to *Basics* tab. ++ | Property | Value | + ||| + | **Subscription** | Select your Azure subscription. | + | **Resource group** | Select **Create new** link to create a new resource group named **my-resource-group**. | + | **Container app name** | Enter **sample-admin-client**. | + | **Deployment source** | Select **Container image**. | + | **Region** | Select the region nearest you. | + | **Container Apps environment** | Select the **Create new** link to create a new environment. | ++1. In the *Create Container Apps environment* window, enter the following values. ++ | Property | Value | + ||| + | **Environment name** | Enter **my-environment**. | + | **Zone redundancy** | Select **Disabled**. | + + Select the **Create** button, and then select the **Container** tab. + +1. In *Container* tab, enter the following values. ++ | Property | Value | + ||| + | **Name** | Enter **sample-admin-client**. | + | **Image source** | Select **Docker Hub or other registries**. | + | **Image type** | Select **Public**. | + | **Registry login server** | Enter **mcr.microsoft.com**. | + | **Image and tag** | Enter **javacomponents/samples/sample-admin-for-spring-client:latest**. | + + Select the **Ingress** tab. ++1. In *Ingress* tab, enter the following and leave the rest of the form with their default values. ++ | Property | Value | + ||| + | **Ingress** | Select **Enabled**. | + | **Ingress traffic** | Select **Accept traffic from anywhere**. | + | **Ingress type** | Select **HTTP**. | + | **Target port** | Enter **8080**. | + + Select **Review + create**. ++1. Once the validation checks pass, select **Create** to create your container app. +++ ## Use the component +### [Azure CLI](#tab/azure-cli) + Now that you have an existing environment, you can create your container app and bind it to a Java component instance of Admin for Spring component. 1. Create the Admin for Spring Java component. Now that you have an existing environment, you can create your container app and --max-replicas 2 ``` +### [Azure portal](#tab/azure-portal) ++Now that you have an existing environment and admin client container app, you can create a Java component instance of Admin for Spring. ++1. Go to your container app's environment in the portal. ++1. From the left menu, under *Services* category, select **Services**. ++1. Select **+ Configure** drop down, and select **Java component**. ++1. 
In the *Configure Java component* panel, enter the following values. ++ | Property | Value | + ||| + | **Java component type** | Select **Admin for Spring**. | + | **Java component name** | Enter **admin**. | ++1. Select **Next**. ++1. On the *Review* tab, select **Configure**. ++++## Bind your container app to the Admin for Spring Java component ++### [Azure CLI](#tab/azure-cli) + 1. Create the container app and bind to the Admin for Spring. ```azurecli Now that you have an existing environment, you can create your container app and --bind $JAVA_COMPONENT_NAME ``` - The `--bind` parameter binds the container app to the Admin for Spring Java component. The container app can now read the configuration values from environment variables, primarily the `SPRING_BOOT_ADMIN_CLIENT_URL` property and connect to the Admin for Spring. +### [Azure portal](#tab/azure-portal) - The binding also injects the following property: +1. Go to your container app environment in the portal. - ```bash - "SPRING_BOOT_ADMIN_CLIENT_INSTANCE_PREFER-IP": "true", - ``` +1. From the left menu, under *Services* category, select **Services**. ++1. From the list, select **admin**. ++1. Under *Bindings*, select the *App name* drop-down and select **sample-admin-client**. ++1. Select the **Review** tab. ++1. Select the **Configure** button. ++1. Return to your container app in the portal and copy the URL of your app to a text editor so you can use it in a coming step. ++++The bind operation binds the container app to the Admin for Spring Java component. The container app can now read the configuration values from environment variables, primarily the `SPRING_BOOT_ADMIN_CLIENT_URL` property and connect to the Admin for Spring. ++The binding also injects the following property: ++```bash +"SPRING_BOOT_ADMIN_CLIENT_INSTANCE_PREFER-IP": "true", +``` - This property indicates that the Admin for Spring component client should prefer the IP address of the container app instance when connecting to the Admin for Spring server. +This property indicates that the Admin for Spring component client should prefer the IP address of the container app instance when connecting to the Admin for Spring server. - You can also [remove a binding](java-admin-for-spring-usage.md#unbind) from your application. +## (Optional) Unbind your container app from the Admin for Spring Java component ++### [Azure CLI](#tab/azure-cli) ++To remove a binding from a container app, use the `--unbind` option. ++``` azurecli + az containerapp update \ + --name $APP_NAME \ + --unbind $JAVA_COMPONENT_NAME \ + --resource-group $RESOURCE_GROUP +``` ++### [Azure portal](#tab/azure-portal) ++1. Go to your container app environment in the portal. ++1. From the left menu, under *Services* category, select **Services**. ++1. From the list, select **admin**. ++1. Under *Bindings*, find the line for *sample-admin-client* select and select **Delete**. ++1. Select **Next**. ++1. Select the **Review** tab. ++1. Select the **Configure** button. ++ ## View the dashboard |
container-apps | Java Config Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-config-server.md | If you want to customize your own `SPRING_CONFIG_IMPORT`, you can refer to the e You can also remove a binding from your application. -## Unbind your container app from the Config Server for Spring Java component +## (Optional) Unbind your container app from the Config Server for Spring Java component ### [Azure CLI](#tab/azure-cli) To remove a binding from a container app, use the `--unbind` option. |
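The full unbind command isn't shown in this excerpt; a sketch of what it could look like, assuming the `$APP_NAME` and `$RESOURCE_GROUP` variables used earlier in these Container Apps articles and a placeholder for the Config Server component name:

```azurecli
# Remove the binding between the container app and the Config Server for Spring component.
az containerapp update \
  --name $APP_NAME \
  --unbind <config-server-component-name> \
  --resource-group $RESOURCE_GROUP
```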
container-apps | Java Eureka Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-eureka-server.md | Use the following steps to create each of the resources necessary to create a co | Property | Value | |||- | **Name** | Enter **my-config-client**. | + | **Name** | Enter **my-eureka-client**. | | **Image source** | Select **Docker Hub or other registries**. | | **Image type** | Select **Public**. | | **Registry login server** | Enter **mcr.microsoft.com**. | Now that you have an existing environment, you can create your container app and Now that you have an existing environment and eureka client container app, you can create a Java component instance of Eureka Server for Spring. -Now that you have an existing environment and config server client container app, you can create a Java component instance of Config Server for Spring. - 1. Go to your container app's environment in the portal. 1. From the left menu, under *Services* category, select **Services**. Now that you have an existing environment and config server client container app | **Java component type** | Select **Eureka Server for Spring**. | | **Java component name** | Enter **eureka**. | -1. In the *Bindings* section, select the *App name* drop-down and select **my-component-app**. - 1. Select **Next**. 1. On the *Review* tab, select **Configure**. The `eureka.client.register-with-eureka` property is set to `true` to enforce re The `eureka.instance.prefer-ip-address` is set to `true` due to the specific DNS resolution rule in the container app environment. Don't modify this value so you don't break the binding. -## Unbind your container app from the Eureka Server for Spring Java component +## (Optional) Unbind your container app from the Eureka Server for Spring Java component ### [Azure CLI](#tab/azure-cli) |
container-apps | Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/samples.md | Refer to the following samples to learn how to use Azure Container Apps in diffe | [ASP.NET Core front-end with two back-end APIs on Azure Container Apps](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-on-Azure-Container-Apps)<br /> | This sample demonstrates ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps. | | [ASP.NET Core front-end with two back-end APIs on Azure Container Apps (with Dapr)](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-with-DAPR-on-Azure-Container-Apps)<br /> | Demonstrates how ASP.NET Core 6.0 is used to build a cloud-native application hosted in Azure Container Apps using Dapr. | | [Deploy Drupal on Azure Container Apps](https://github.com/Azure-Samples/drupal-on-azure-container-apps) | Demonstrates how to deploy a Drupal site to Azure Container Apps, with Azure Database for MariaDB, and Azure Files to store static assets.|+| [Launch Your First Java app](https://github.com/spring-projects/spring-petclinic) |A monolithic Java application called PetClinic built with Spring Framework. PetClinic is a well-known sample application provided by the Spring Framework community. | |
cost-management-billing | Azure Openai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/azure-openai.md | You can save money on Azure OpenAI provisioned throughput by committing to a res To purchase an Azure OpenAI reservation, you choose an Azure region, quantity, and then add the Azure OpenAI SKU to your cart. Then you choose the quantity of provisioned throughput units that you want to purchase.
-When you purchase a reservation, the Azure OpenAI provisioned throughput usage that matches the reservation attributes is no longer charged at the pay-as-you-go rates.
+When you purchase a reservation, the Azure OpenAI provisioned throughput usage that matches the reservation attributes is no longer charged at the hourly rates.
A reservation applies to provisioned deployments only and doesn't include other offerings such as standard deployments or fine tuning. Azure OpenAI Service Provisioned Reservations also don't guarantee capacity availability. To ensure capacity availability, the recommended best practice is to create your deployments before you buy your reservation.
-When the reservation expires, Azure OpenAI deployments continue to run but are billed at the pay-as-you-go rate.
+When the reservation expires, Azure OpenAI deployments continue to run but are billed at the hourly rate.
You can choose to enable automatic renewal of reservations by selecting the option in the renewal settings or at the time of purchase. With Azure OpenAI reservation auto renewal, the reservation renews using the same reservation order ID, and a new reservation doesn't get purchased. You can also choose to replace this reservation with a new reservation purchase in renewal settings and a replacement reservation is purchased when the reservation expires. By default, the replacement reservation has the same attributes as the expiring reservation. You can optionally change the name, billing frequency, term, or quantity in the renewal settings. Any user with owner access on the reservation and the subscription used for billing can set up renewal.
For more information about how enterprise customers and pay-as-you-go customers The Azure OpenAI reservation size should be based on the total provisioned throughput units that you consume via deployments. Reservation purchases are made in increments of one provisioned throughput unit.
-For example, assume that your total consumption of provisioned throughput units is 64 units. You want to purchase a reservation for all of it, so you should purchase 64 of reservation quantity.
+For example, assume that your total consumption of provisioned throughput units is 100 units. You want to purchase a reservation for all of it, so you should purchase a reservation quantity of 100.
## Buy a Microsoft Azure OpenAI reservation |
cost-management-billing | Understand Suse Reservation Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-suse-reservation-charges.md | For example, if your usage has product **Red Hat Enterprise Linux - 1-4 vCPU VM Get the product name from your usage data and buy the SUSE plan with the same type and size. -For example, if your usage is for product **SUSE Linux Enterprise Server Priority - 2-4 vCPU VM Support**, you should purchase **SUSE Linux Enterprise Server Priority** for **2-4 vCPU**. -+For example, if your usage is for product **SUSE for SAP Linux Enterprise Server** **- 2-4 vCPU VM Support**, you should purchase **SUSE for SAP Linux Enterprise Server** for **2-4 vCPU**. ## Discount applies to different VM sizes for SUSE plans The following tables show the software plans you can buy a reservation for, thei |SUSE Linux Enterprise Server for HPC 3-4 vCPUs|4ed70d2d-e2bb-4dcd-b6fa-42da71861a1c|1.92308|D4s_v3| |SUSE Linux Enterprise Server for HPC 5+ vCPUs |907a85de-024f-4dd6-969c-347d47a1bdff|2.92308|D8s_v3| -### SUSE Linux Enterprise Server for SAP applications +### SUSE for SAP Linux Enterprise Server |SUSE VM | MeterId | Ratio|Example VM size| | - || | |-|SUSE Linux Enterprise Server for SAP applications 1-2 vCPUs|497fe0b6-fa3c-4e3d-a66b-836097244142|1|D2s_v3| -|SUSE Linux Enterprise Server for SAP applications 3-4 vCPUs |847887de-68ce-4adc-8a33-7a3f4133312f|2|D4s_v3| -|SUSE Linux Enterprise Server for SAP applications 5+ vCPUs |18ae79cd-dfce-48c9-897b-ebd3053c6058|2.41176|D8s_v3| +|SUSE for SAP Linux Enterprise Server 1-2 vCPUs|797618eb-cecb-59e7-a10e-1ee1e4e62d32|1|D2s_v3| +|SUSE for SAP Linux Enterprise Server 3-4 vCPUs |1c0fb48a-e518-53c2-ab56-6feddadbb9a3|2|D4s_v3| +|SUSE for SAP Linux Enterprise Server 5+ vCPUs |3ce5649c-142b-5a59-9b2a-6889da9b56f5|2.41176|D8s_v3| ### SUSE Linux Enterprise Server |
data-factory | Connector Deprecation Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md | The following connectors are scheduled for deprecation on December 31, 2024. You - [Phoenix](connector-phoenix.md)
-## Connectors deprecated
+## Connectors that are deprecated
The following connector was deprecated.
-- [Amazon Marketplace Web Service (MWS)](connector-amazon-marketplace-web-service.md)+- [Amazon Marketplace Web Service](connector-amazon-marketplace-web-service.md)
## Options to replace deprecated connectors |
deployment-environments | Concept Azure Developer Cli With Deployment Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-azure-developer-cli-with-deployment-environments.md | Title: Use Azure Developer CLI with Azure Deployment Environments description: Understand ADE and `azd` work together to provision application infrastructure and deploy application code to the new infrastructure. -+ Last updated 02/24/2024 With ADE, you can create environments from an environment definition in a catalo ## How does `azd` work with ADE? -`azd` works with ADE to enable you to create environments from where youΓÇÖre working. +`azd` works with ADE to enable you to create environments from where you're working. With ADE and `azd`, individual developers working with unique infrastructure and code that they want to upload to the cloud can create an environment from a local folder. They can use `azd` to provision an environment and deploy their code seamlessly. |
deployment-environments | Concept Deployment Environments Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-deployment-environments-role-based-access-control.md | Title: Azure role-based access control description: Learn how Azure Deployment Environments provides protection with Azure role-based access control (Azure RBAC) integration.--+ |
deployment-environments | Concept Environment Yaml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environment-yaml.md | Title: environment.yaml schema description: Learn how to use environment.yaml to define parameters in your environment definition. -+ Last updated 11/17/2023 |
deployment-environments | How To Configure Azure Developer Cli Deployment Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-azure-developer-cli-deployment-environments.md | Title: Configure Azure Developer CLI templates for use with ADE description: Understand how ADE and AZD work together to provision application infrastructure and deploy application code to the new infrastructure. -+ Last updated 03/26/2024 Sign in to Azure at the CLI using the following command: #### Enable AZD support for ADE -When `platform.type` is set to `devcenter`, all AZD remote environment state and provisioning uses dev center components. AZD uses one of the infrastructure templates defined in your dev center catalog for resource provisioning. In this configuration, the *infra* folder in your local templates isnΓÇÖt used. +When `platform.type` is set to `devcenter`, all AZD remote environment state and provisioning uses dev center components. AZD uses one of the infrastructure templates defined in your dev center catalog for resource provisioning. In this configuration, the *infra* folder in your local templates isn't used. # [Visual Studio Code](#tab/visual-studio-code) |
deployment-environments | How To Request Quota Increase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-request-quota-increase.md | Title: Request a quota limit increase for Azure Deployment Environments resources description: Learn how to request a quota increase to extend the number of Deployment Environments resources you can use in your subscription. --+ |
dev-box | Quickstart Create Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md | You can create and manage multiple dev boxes as a dev box user. Create a dev box To complete this quickstart, you need: - Your organization must have configured Microsoft Dev Box with at least one project and dev box pool before you can create a dev box. - - Platform engineers can follow these steps to configure Microsoft Dev Box: [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md) - + - Platform engineers can follow these steps to configure Microsoft Dev Box: [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md). - You must have permissions as a [Dev Box User](quickstart-configure-dev-box-service.md#provide-access-to-a-dev-box-project) for a project that has an available dev box pool. If you don't have permissions to a project, contact your administrator. ## Create a dev box To create a dev box in the Microsoft Dev Box developer portal: 1. Sign in to the [Microsoft Dev Box developer portal](https://aka.ms/devbox-portal). -1. Select **Add a dev box**. -- :::image type="content" source="./media/quickstart-create-dev-box/welcome-to-developer-portal.png" alt-text="Screenshot of the developer portal and the button for adding a dev box." lightbox="./media/quickstart-create-dev-box/welcome-to-developer-portal.png"::: +1. Select **New** > **New dev box**. 1. In **Add a dev box**, enter the following values: To create a dev box in the Microsoft Dev Box developer portal: 1. Select **Create** to begin creating your dev box. 1. Use the dev box tile in the developer portal to track the progress of creation.-- > [!Note] - > If you encounter a vCPU quota error with a *QuotaExceeded* message, ask your administrator to [request an increased quota limit](/azure/dev-box/how-to-request-quota-increase). If your admin can't increase the quota limit at this time, try selecting another pool with a region close to your location. :::image type="content" source="./media/quickstart-create-dev-box/dev-box-tile-creating.png" alt-text="Screenshot of the developer portal that shows the dev box card with a status of Creating." lightbox="./media/quickstart-create-dev-box/dev-box-tile-creating.png":::-+ + > [!Note] + > If you encounter a vCPU quota error with a *QuotaExceeded* message, ask your administrator to [request an increased quota limit](/azure/dev-box/how-to-request-quota-increase). If your admin can't increase the quota limit at this time, try selecting another pool with a region close to your location. [!INCLUDE [dev box runs on creation note](./includes/note-dev-box-runs-on-creation.md)] |
dns | Dns Operations Recordsets Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets-cli.md | The record set name given must be a *relative* name, meaning it must exclude the If a new record set is created, a default time-to-live (TTL) of 3600 is used. For instructions on how to use different TTLs, see [Create a DNS record set](#create-a-dns-record-set). -The following example creates an A record called *www* in the zone *contoso.com* in the resource group *MyResourceGroup*. The IP address of the A record is *1.2.3.4*. +The following example creates an A record called *www* in the zone *contoso.com* in the resource group *MyResourceGroup*. The IP address of the A record is *203.0.113.11*. ```azurecli-interactive-az network dns record-set a add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name www --ipv4-address 1.2.3.4 +az network dns record-set a add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name www --ipv4-address 203.0.113.11 ``` To create a record set in the apex of the zone (in this case, "contoso.com"), use the record name "\@", including the quotation marks: ```azurecli-interactive-az network dns record-set a add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name "@" --ipv4-address 1.2.3.4 +az network dns record-set a add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name "@" --ipv4-address 203.0.113.11 ``` ## Create a DNS record set There's no example for create an SOA record set, since SOAs are created and dele ### Create an AAAA record ```azurecli-interactive-az network dns record-set aaaa add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name test-aaaa --ipv6-address 2607:f8b0:4009:1803::1005 +az network dns record-set aaaa add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name test-aaaa --ipv6-address FD00::1 ``` ### Create a CAA record az network dns record-set mx add-record --resource-group myresourcegroup --zone- ### Create an NS record ```azurecli-interactive-az network dns record-set ns add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name test-ns --nsdname ns1.contoso.com +az network dns record-set ns add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name test-ns --nsdname ns1.fabrikam.com ``` ### Create a PTR record This command deletes a DNS record from a record set. If the last record in a rec When you use the `az network dns record-set <record-type> add-record` command, you need to specify the record getting deleted and the zone to delete from. These parameters are described in [Create a DNS record](#create-a-dns-record) and [Create records of other types](#create-records-of-other-types) above. -The following example deletes the A record with value '1.2.3.4' from the record set named *www* in the zone *contoso.com*, in the resource group *MyResourceGroup*. +The following example deletes the A record with value '203.0.113.11' from the record set named *www* in the zone *contoso.com*, in the resource group *MyResourceGroup*. 
```azurecli-interactive-az network dns record-set a remove-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name "www" --ipv4-address 1.2.3.4 +az network dns record-set a remove-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name "www" --ipv4-address 203.0.113.11 ``` ## Modify an existing record set Each record set contains a [time-to-live (TTL)](dns-zones-records.md#time-to-liv To modify an existing record of type A, AAAA, CAA, MX, NS, PTR, SRV, or TXT, you should first add a new record and then delete the existing record. For detailed instructions on how to delete and add records, see the earlier sections of this article. -The following example shows how to modify an 'A' record, from IP address 1.2.3.4 to IP address 5.6.7.8: +The following example shows how to modify an 'A' record, from IP address 203.0.113.11 to IP address 203.0.113.22: ```azurecli-interactive-az network dns record-set a add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name www --ipv4-address 5.6.7.8 -az network dns record-set a remove-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name www --ipv4-address 1.2.3.4 +az network dns record-set a add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name www --ipv4-address 203.0.113.22 +az network dns record-set a remove-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name www --ipv4-address 203.0.113.11 ``` You can't add, remove, or modify the records in the automatically created NS record set at the zone apex (`--Name "@"`, including quote marks). For this record set, the only changes permitted are to modify the record set TTL and metadata. This restriction applies only to the NS record set at the zone apex. Other NS re The following example shows how to add another name server to the NS record set at the zone apex: ```azurecli-interactive-az network dns record-set ns add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name "@" --nsdname ns1.myotherdnsprovider.com +az network dns record-set ns add-record --resource-group myresourcegroup --zone-name contoso.com --record-set-name "@" --nsdname ns1.fabrikam.com ``` ### To modify the TTL of an existing record set |
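The row above ends at the heading for modifying a record set's TTL without the example itself. Here's a minimal sketch consistent with the commands in this row, assuming the generic `--set` update argument; the 3600-second value is only illustrative.

```azurecli-interactive
# Sketch: change the TTL of the existing 'www' A record set to 3600 seconds.
az network dns record-set a update --resource-group myresourcegroup --zone-name contoso.com --name www --set ttl=3600
```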
energy-data-services | Concepts Reference Data Values | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-reference-data-values.md | Within the OSDU Data Platform framework, reference data values play a crucial ro In addition to enabling data interpretation and collaboration, reference data is required for data ingestion via the OSDU manifest ingestion workflow. Manifests provide a specific container for reference data values, which are then used to validate the ingested data and generate metadata for later discovery and use. To learn more about manifest-based ingestion, see [Manifest-based ingestion concepts](concepts-manifest-ingestion.md). The OSDU Data Platform categorizes Reference data values into the following three buckets:-* **FIXED** values, which are universally recognized and used across OSDU deployments and the energy sector. These values can't be extended or changed except by OSDU community governance updates -* **OPEN** values. The OSDU community provides an initial list of OPEN values upon which you can extend but not otherwise change -* **LOCAL** values. The OSDU community provides an initial list of LOCAL values that you can freely change, extend, or entirely replace +* **FIXED** values: This set of reference values is universally recognized and used across OSDU deployments and the energy sector. These values can't be extended or changed except by OSDU community governance updates +* **OPEN** values: The OSDU community provides an initial list of OPEN values upon which you can extend but not otherwise change +* **LOCAL** values: The OSDU community provides an initial list of LOCAL values that you can freely change, extend, or entirely replace For more information about OSDU reference data values and their different types, see [OSDU Data Definitions / Data Definitions / Reference Data](https://community.opengroup.org/osdu/dat#22-reference-data). If you extend OPEN values after instance creation, we recommend creating and usi **NameAlias updates** don't require a separate entitlement. Updates to the `NameAlias` field are governed by the same access control mechanisms as updates to any other part of a storage record. In effect, OWNER access confers the entitlement to update the `NameAlias` field. ## Current scope of Azure Data Manager for Energy reference data value syncing-Currently, Azure Data Manager for Energy syncs reference data values at instance creation and at new partition creation. Reference values are synced to those from the OSDU community, corresponding to the OSDU milestone supported by Azure Data Manager for Energy at the time of instance or partition creation. For information on the current milestone supported by and available OSDU service in Azure Data Manager for Energy, refer [OSDU services available in Azure Data Manager for Energy](osdu-services-on-adme.md). +Currently, Azure Data Manager for Energy syncs reference data values at instance creation and at new partition creation for newly created instances after feature enablement. Reference values are synced to those from the OSDU community, corresponding to the OSDU milestone supported by Azure Data Manager for Energy at the time of instance or partition creation. For information on the current milestone supported by and available OSDU service in Azure Data Manager for Energy, refer [OSDU services available in Azure Data Manager for Energy](osdu-services-on-adme.md). 
## Next steps - [Quickstart: Create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) |
energy-data-services | How To Set Up Private Links | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md | You can connect to an Azure Data Manager for Energy instance that's configured w This article describes how to set up a private endpoint for Azure Data Manager for Energy. +> [!NOTE] +> To enable a private endpoint, public access must be disabled for Azure Data Manager for Energy. If public access is enabled and a private endpoint is created, the instance can be accessed only through the private endpoint, not through public access. + > [!NOTE] > Terraform currently does not support private endpoint creation for Azure Data Manager for Energy. |
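To illustrate the note in the row above, the following is a hedged sketch of creating a private endpoint for an Azure Data Manager for Energy instance with the Azure CLI. The resource IDs, network names, and group ID are placeholders; list the valid group IDs for your instance before creating the endpoint.

```azurecli-interactive
# List the private-link group IDs exposed by the instance (resource ID is a placeholder).
az network private-link-resource list --id /subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OpenEnergyPlatform/energyServices/<instanceName>

# Create the private endpoint in an existing virtual network and subnet, using a group ID from the previous command.
az network private-endpoint create \
    --resource-group <rg> \
    --name adme-private-endpoint \
    --vnet-name <vnetName> \
    --subnet <subnetName> \
    --private-connection-resource-id /subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OpenEnergyPlatform/energyServices/<instanceName> \
    --group-id <groupId> \
    --connection-name adme-pe-connection
```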
energy-data-services | Osdu Services On Adme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/osdu-services-on-adme.md | description: This article provides an overview of the OSDU services available on - Previously updated : 06/14/2024 Last updated : 08/30/2024 -# OSDU® M18 services available on Azure Data Manager for Energy -Azure Data Manager for Energy is currently compliant with the M18 OSDU® milestone release. Below you'll find an overview of the OSDU® services that are currently available on Azure Data Manager for Energy. This page will be regularly updated as service versions and availability evolve. +# OSDU® M23 services available on Azure Data Manager for Energy +Azure Data Manager for Energy is currently compliant with the M23 OSDU® milestone release. Below you'll find an overview of the OSDU® services that are currently available on Azure Data Manager for Energy. This page will be regularly updated as service versions and availability evolve. ### Core and helper services - **CRS Catalog**: Provides API endpoints to work with geodetic reference data, allowing developers to retrieve CRS definitions, select appropriate CRSs for data ingestion, and search for CRSs based on various constraints. - **CRS Conversion**: Enables the conversion of coordinates from one coordinate reference system (CRS) to another.-- **CSV Parser DAG**: Helps in parsing CSV files into a format for ingestion and processing. - **Dataset**: Provides internal and external API endpoints to allow an application or user to fetch storage/retrieval instructions for various types of datasets. - **Entitlements**: Used to enable authorization in OSDU Data Ecosystem. The service allows for the creation of groups. A group name defines a permission. Users who are added to that group obtain that permission. The main motivation for entitlements service is data authorization, but the functionality enables three use cases: Data groups used for data authorization, Service groups used for service authorization, User groups used for hierarchical grouping of user and service identities. - **File**: Provides internal and external API endpoints to let the application or user fetch any records from the system or request file location data. Azure Data Manager for Energy is currently compliant with the M18 OSDU® milesto - **Register**: Allow an application to register an action (the function to be triggered). It expects data (context) to come from OSDU to enable the action, and the application can register a filter (enable/disable) to say what data can be used with this action. - **Schema**: Enables a centralized governance and management of schema in the Data Ecosystem. It offers an implementation of the schema standard. Schema Service provides all necessary APIs to Fetch, create, update, and mark a schema obsolete. - **Search**: Provides a mechanism for searching indexes. 
Supports full-text search on string fields, range queries on dates, numeric, or string fields, etc., along with geo-spatial search.-- **Secret**: Facilitates the storage and retrieval of various types of secrets in a specified repository(ies) so that secrets can be secure, separated from the secrets in the infrastructure repository, and be managed easily by interfacing applications.+- **Secret (Preview)**: Facilitates the storage and retrieval of various types of secrets in a specified repository(ies) so that secrets can be secure, separated from the secrets in the infrastructure repository, and be managed easily by interfacing applications. - **Seismic File Metadata**: Manages metadata associated with seismic data. It annotates dimensions, value channels, and generic key/value pairs. - **Storage**: Provides a set of APIs to manage the entire metadata life-cycle such as ingestion (persistence), modification, deletion, versioning, and data schema. - **Unit**: Provides dimension/measurement and unit definitions. Azure Data Manager for Energy is currently compliant with the M18 OSDU® milesto - **EDS Fetch & Ingest DAG**: Facilitates fetching data from external providers and ingesting it into the OSDU platform. It involves steps like registering with providers, creating data jobs, and triggering ingestion. - **EDS Scheduler DAG**: Automates data fetching based on predefined schedules and sends emails to recipients as needed. It ensures data remains current without manual intervention. - **Ingestion Workflow**: Initiates business processes within the system. During the prototype phase, it facilitates CRUD operations on workflow metadata and triggers workflows in Apache Airflow. Additionally, the service manages process startup records, acting as a wrapper around Airflow functions.+- **Manifest Ingestion DAG**: Used for ingesting single or multiple metadata artifacts about datasets in Azure Data Manager for Energy instance. Learn more about [Manifest-based ingestion](concepts-manifest-ingestion.md). +- **CSV Parser DAG**: Helps in parsing CSV files into a format for ingestion and processing. - **osdu-airflow-lib**: A library that enables user context ingestion within the Airflow workflows. - **osdu-ingestion-lib**: A library that supports user context ingestion and includes various fixes related to Python versioning and authority replacement. - **SegY-to-oVDS DAG**: Converts SegY file formats to oVDS. Azure Data Manager for Energy is currently compliant with the M18 OSDU® milesto ## OSDU® services unavailable on Azure Data Manager for Energy Note: The following OSDU® services are currently unavailable on Azure Data Manager for Energy.+- **Reservoir DDMS** - **EDS Naturalization DAG**-- **Energistics Parser DAG**-- **Geospatial Consumption Zone** - **Manifest Ingestion by Reference DAG**+- **Seismic DDMS v4 APIs** +- **Rock and Fluid Sample DDMS** +- **Production DDMS - Historian** +- **WITSML Parser DAG** +- **Energistics Parser DAG (WITSML Parser v2, Resqml Parser, ProdML Parser)** +- **Geospatial Consumption Zone** - **Partition** Note: Operations can still be performed using the available data partition APIs or through Azure portal. - **Policy Service**-- **Reservoir DDMS**-- **WITSML Parser DAG**+- **Schema Upgrade** +- **Wellbore Domain Services Worker** |
energy-data-services | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md | This page is updated with the details about the upcoming release approximately a <hr width = 100%> +## August 2024 ++### Compliant with M23 OSDU® release +Azure Data Manager for Energy has now been upgraded with the supported set of services with the M23 OSDU® milestone release. With this release, you can take advantage of the key improvements made in the OSDU® latest + community features and capabilities available in the [OSDU® M23](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M23-Release-Notes) The upgrade with the OSDU® M23 release is limited to the services available and supported and you can refer [here](osdu-services-on-adme.md) for a detailed list of services available and unavailable on Azure Data Manager for Energy. See the [updated API Swaggers here](https://microsoft.github.io/adme-samples/). ++### Syncing Reference Values +We are releasing a Limited Preview for syncing Reference Values with your Azure Data Manager for Energy data partitions. Note that this feature is currently only available for newly created Azure Data Manager for Energy after feature enablement for your Azure subscription. Learn more about [Reference Values on Azure Data Manager for Energy](concepts-reference-data-values.md). + ## June 2024 ### Azure Data Manager for Energy Developer Tier Price Update |
event-grid | Webhook Event Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/webhook-event-delivery.md | Webhooks are one of the many ways to receive events from Azure Event Grid. When Like many other services that support webhooks, Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. ++## Endpoint validation with CloudEvents v1.0 +CloudEvents v1.0 implements its own abuse protection semantics using the **HTTP OPTIONS** method. You can read more about it [here](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When you use the CloudEvents schema for output, Event Grid uses the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism. + ## Endpoint validation with Event Grid events When you use any of the following three Azure services, the Azure infrastructure automatically handles this validation: If you're using any other type of endpoint, such as an HTTP trigger based Azure - **Synchronous handshake**: At the time of event subscription creation, Event Grid sends a subscription validation event to your endpoint. The schema of this event is similar to any other Event Grid event. The data portion of this event includes a `validationCode` property. Your application verifies that the validation request is for an expected event subscription, and returns the validation code in the response synchronously. This handshake mechanism is supported in all Event Grid versions. -- **Asynchronous handshake**: In certain cases, you can't return the `validationCode` in response synchronously. For example, if you use a third-party service (like [`Zapier`](https://zapier.com) or [IFTTT](https://ifttt.com/)), you can't programmatically respond with the validation code.+- **Asynchronous handshake**: In certain cases, you can't return the `validationCode` in response synchronously. For example, if you use a non-Microsoft service (like [`Zapier`](https://zapier.com) or [IFTTT](https://ifttt.com/)), you can't programmatically respond with the validation code. Event Grid supports a manual validation handshake. If you're creating an event subscription with an SDK or tool that uses API version 2018-05-01-preview or later, Event Grid sends a `validationUrl` property in the data portion of the subscription validation event. To complete the handshake, find that URL in the event data and do a GET request to it. You can use either a REST client or your web browser. If you're using any other type of endpoint, such as an HTTP trigger based Azure - The `data` property of the event includes a `validationCode` property with a randomly generated string. For example, `validationCode: acb13…`. - The event data also includes a `validationUrl` property with a URL for manually validating the subscription. - The array contains only the validation event. Other events are sent in a separate request after you echo back the validation code.-- The EventGrid data plane SDKs have classes corresponding to the subscription validation event data and subscription validation response.+- The Event Grid data plane SDKs have classes corresponding to the subscription validation event data and subscription validation response. 
An example SubscriptionValidationEvent is shown in the following example: To prove endpoint ownership, echo back the validation code in the `validationRes And, follow one of these steps: -- You must return an **HTTP 200 OK** response status code. **HTTP 202 Accepted** isn't recognized as a valid Event Grid subscription validation response. The HTTP request must complete within 30 seconds. If the operation doesn't finish within 30 seconds, then the operation will be canceled and it may be reattempted after 5 seconds. If all the attempts fail, then it's treated as validation handshake error.+- You must return an **HTTP 200 OK** response status code. **HTTP 202 Accepted** isn't recognized as a valid Event Grid subscription validation response. The HTTP request must complete within 30 seconds. If the operation doesn't finish within 30 seconds, then the operation will be canceled and it's reattempted after 5 seconds. If all the attempts fail, then it's treated as validation handshake error. The fact that your application is prepared to handle and return the validation code indicates that you created the event subscription and expected to receive the event. Imagine the scenario that there's no handshake validation supported and a hacker gets to know your application URL. The hacker can create a topic and an event subscription with your application's URL, and start conducting a DoS attack to your application by sending a lot of events. The handshake validation prevents that to happen. And, follow one of these steps: For an example of handling the subscription validation handshake, see a [C# sample](https://github.com/Azure-Samples/event-grid-dotnet-publish-consume-events/blob/master/EventGridConsumer/EventGridConsumer/Function1.cs). -## Endpoint validation with CloudEvents v1.0 -CloudEvents v1.0 implements its own abuse protection semantics using the **HTTP OPTIONS** method. You can read more about it [here](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When you use the CloudEvents schema for output, Event Grid uses the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism. - ## Event schema compatibility When a topic is created, an incoming event schema is defined. And, when a subscription is created, an outgoing event schema is defined. The following table shows you the compatibility allowed when creating a subscription. When a topic is created, an incoming event schema is defined. And, when a subscr | | Cloud Events v1.0 schema | Yes | | | Custom input schema | Yes | + ## Next steps See the following article to learn how to troubleshoot event subscription validations: [Troubleshoot event subscription validations](troubleshoot-subscription-validation.md). |
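As a small illustration of the manual (asynchronous) handshake described in the row above, the `validationUrl` taken from the validation event's data can simply be requested with a GET; the placeholder below stands in for the real URL, which typically expires after a short period.

```bash
# Manual validation handshake: copy the validationUrl from the data portion of the
# subscription validation event and issue a GET request to it.
curl "<validationUrl copied from the validation event>"
```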
governance | Create Management Group Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-portal.md | directory. You receive a notification when the process is complete. For more inf on this management group. This identifier isn't editable after creation as it's used throughout the Azure system to identify this group. The [root management group](./overview.md#root-management-group-for-each-directory) is- automatically created with an ID that is the Azure Active Directory ID. For all other + automatically created with an ID that is the Microsoft Entra tenant ID. For all other management groups, assign a unique ID. - The display name field is the name that is displayed within the Azure portal. A separate display name is an optional field when creating the management group and can be changed at any time. |
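The portal steps in the row above have a CLI equivalent. As a sketch (the group ID and display name are placeholders), a management group with a custom ID and display name can be created as follows.

```azurecli-interactive
# Sketch: create a management group whose ID (name) is 'ContosoGroup', with a friendly display name.
az account management-group create --name ContosoGroup --display-name "Contoso Group"
```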
governance | Remediation Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/remediation-structure.md | Title: Details of the policy remediation task structure description: Describes the policy remediation task definition used by Azure Policy to bring resources into compliance. Previously updated : 11/03/2022 Last updated : 08/30/2024 + # Azure Policy remediation task structure -The Azure Policy remediation task feature is used to bring resources into compliance established from a definition and assignment. Resources that are non-compliant to a [modify](./effects.md#modify) or [deployIfNotExist](./effects.md#deployifnotexists) definition assignment, can be brought into compliance using a remediation task. Remediation task deploys the deployIFNotExist template or the modify operations to the selected non-compliant resources using the identity specified in the assignment. See [policy assignment structure](./assignment-structure.md#identity). to understand how the identity is define and [remediate non-compliant resources tutorial](../how-to/remediate-resources.md#configure-the-managed-identity) to configure the identity. +The Azure Policy remediation task feature is used to bring resources into compliance established from a definition and assignment. Resources that are non-compliant to a [modify](./effect-modify.md) or [deployIfNotExists](./effect-deploy-if-not-exists.md) definition assignment, can be brought into compliance using a remediation task. A remediation task deploys the `deployIfNotExists` template or the `modify` operations to the selected non-compliant resources using the identity specified in the assignment. For more information, see [policy assignment structure](./assignment-structure.md#identity) to understand how the identity is defined and [remediate non-compliant resources tutorial](../how-to/remediate-resources.md#configure-the-managed-identity) to configure the identity. ++Remediation tasks remediate existing resources that aren't compliant. Resources that are newly created or updated that are applicable to a `deployIfNotExists` or `modify` definition assignment are automatically remediated. > [!NOTE]-> Remediation tasks remediate exisiting resources that are not compliant. Resources that are newly created or updated that are applicable to a deployIfNotExist or modify definition assignment are automatically remediated. +> The Azure Policy service deletes remediation task resources 60 days after their last modification. You use JavaScript Object Notation (JSON) to create a policy remediation task. The policy remediation task contains elements for: You use JavaScript Object Notation (JSON) to create a policy remediation task. T - [provisioning state and deployment summary](#provisioning-state-and-deployment-summary) -For example, the following JSON shows a policy remediation task for policy definition named `requiredTags` a part of -an initiative assignment named `resourceShouldBeCompliantInit` with all default settings. +For example, the following JSON shows a policy remediation task for policy definition named `requiredTags` a part of an initiative assignment named `resourceShouldBeCompliantInit` with all default settings. 
```json {- "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.PolicyInsights/remediations/remediateNotCompliant", - "apiVersion": "2021-10-01", - "name": "remediateNotCompliant", - "type": "Microsoft.PolicyInsights/remediations", - "properties": { - "policyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit", - "policyDefinitionReferenceIds": "requiredTags", - "resourceCount": 42, - "parallelDeployments": 6, - "failureThreshold": { - "percentage": 0.1 - } + "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.PolicyInsights/remediations/remediateNotCompliant", + "apiVersion": "2021-10-01", + "name": "remediateNotCompliant", + "type": "Microsoft.PolicyInsights/remediations", + "properties": { + "policyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit", + "policyDefinitionReferenceId": "requiredTags", + "resourceCount": 42, + "parallelDeployments": 6, + "failureThreshold": { + "percentage": 0.1 }+ } } ```-Steps on how to trigger a remediation task at [how to remediate non-compliant resources guide](../how-to/remediate-resources.md) --> [!NOTE] -> These settings cannot be changed once the remediation task has started. +Steps on how to trigger a remediation task at [how to remediate non-compliant resources guide](../how-to/remediate-resources.md). These settings can't be changed after the remediation task begins. ## Display name and description -You use **displayName** and **description** to identify the policy remediation task and provide context for -its use. **displayName** has a maximum length of _128_ characters and -**description** a maximum length of _512_ characters. +You use `displayName` and `description` to identify the policy remediation task and provide context for its use. `displayName` has a maximum length of _128_ characters and `description` a maximum length of _512_ characters. ## Policy assignment ID -This field must be the full path name of either a policy assignment or an initiative assignment. -`policyAssignmentId` is a string and not an array. This property defines which assignment the parent -resource hierarchy or individual resource to remediate. +This field must be the full path name of either a policy assignment or an initiative assignment. `policyAssignmentId` is a string and not an array. This property defines which assignment the parent resource hierarchy or individual resource to remediate. ## Policy definition ID -If the `policyAssignmentId` is for an initiative assignment, the **policyDefinitionReferenceId** property must be used to specify which policy definition in the initiative the subject resource(s) are to be remediated. As a remediation can only remediate in a scope of one definition, -this property is a _string_ and not an array. The value must match the value in the initiative definition in the -`policyDefinitions.policyDefinitionReferenceId` field instead of the global identifier for policy definition `Id`. +If the `policyAssignmentId` is for an initiative assignment, the `policyDefinitionReferenceId` property must be used to specify which policy definition in the initiative the subject resources are to be remediated. As a remediation can only remediate in a scope of one definition, this property is a _string_ and not an array. 
The value must match the value in the initiative definition in the `policyDefinitions.policyDefinitionReferenceId` field instead of the global identifier for policy definition `Id`. ## Resource count and parallel deployments -Use **resource count** to determine how many non-compliant resources to remediate in a given remediation task. The default value is 500, with the maximum number being 50,000. **Parallel deployments** determines how many of those resources to remediate at the same time. The allowed range is between 1 to 30 with the default value being 10. +Use `resourceCount` to determine how many non-compliant resources to remediate in a given remediation task. The default value is 500, with the maximum number being 50,000. `parallelDeployments` determines how many of those resources to remediate at the same time. The allowed range is between 1 to 30 with the default value being 10. -> [!NOTE] -> Parallel deployments are the number of deployments within a singular remediation task with a maximum of 30. There can be a maximum of 100 remediation tasks running in parallel for a single policy definition or policy reference within an initiative. +Parallel deployments are the number of deployments within a singular remediation task with a maximum of 30. There can be a maximum of 100 remediation tasks running in parallel for a single policy definition or policy reference within an initiative. ## Failure threshold -An optional property used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. The **failure threshold** is represented as a percentage number from 0 to 100. By default, the failure threshold is 100%, meaning that the remediation task will continue to remediate other resources even if resources fail to remediate. +An optional property used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. The `failureThreshold` is represented as a percentage number from 0 to 100. By default, the failure threshold is 100%, meaning that the remediation task continues to remediate other resources even if resources fail to remediate. -## Remediation filters +## Remediation filters -An optional property refines what resources are applicable to the remediation task. The allowed filter is resource location. Unless specified, resources from any region can be remediated. +An optional property refines what resources are applicable to the remediation task. The allowed filter is resource location. Unless specified, resources from any region can be remediated. ## Resource discovery mode -This property decides how to discover resources that are eligible for remediation. For a resource to be eligible, it must be non-compliant. By default, this property is set to `ExistingNonCompliant`. It could also be set to `ReEvaluateCompliance`, which will trigger a new compliance scan for that assignment and remediate any resources that are found non-compliant. +This property decides how to discover resources that are eligible for remediation. For a resource to be eligible, it must be non-compliant. By default, this property is set to `ExistingNonCompliant`. It could also be set to `ReEvaluateCompliance`, which triggers a new compliance scan for that assignment and remediate any resources that are found non-compliant. ## Provisioning state and deployment summary -Once a remediation task is created, **provisioning state** and **deployment summary** properties are populated. 
**Provisioning state** indicates the status of the remediation task. Allow values are `Running`, `Canceled`, `Cancelling`, `Failed`, `Complete`, or `Succeeded`. **Deployment summary** is an array property indicating the number of deployments along with number of successful and failed deployments. +Once a remediation task is created, `ProvisioningState` and `DeploymentSummary` properties are populated. The `ProvisioningState` indicates the status of the remediation task. Allow values are `Running`, `Canceled`, `Cancelling`, `Failed`, `Complete`, or `Succeeded`. The `DeploymentSummary` is an array property indicating the number of deployments along with number of successful and failed deployments. -Sample of remediation task that completed successfully: +Sample of remediation task that completed successfully: ```json {- "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.PolicyInsights/remediations/remediateNotCompliant", - "Type": "Microsoft.PolicyInsights/remediations", - "Name": "remediateNotCompliant", - "PolicyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit", - "policyDefinitionReferenceIds": "requiredTags", - "resourceCount": 42, - "parallelDeployments": 6, - "failureThreshold": { - "percentage": 0.1 - }, - "ProvisioningState": "Succeeded", - "DeploymentSummary": { - "TotalDeployments": 42, - "SuccessfulDeployments": 42, - "FailedDeployments": 0 - }, + "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.PolicyInsights/remediations/remediateNotCompliant", + "Type": "Microsoft.PolicyInsights/remediations", + "Name": "remediateNotCompliant", + "PolicyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit", + "policyDefinitionReferenceId": "requiredTags", + "resourceCount": 42, + "parallelDeployments": 6, + "failureThreshold": { + "percentage": 0.1 + }, + "ProvisioningState": "Succeeded", + "DeploymentSummary": { + "TotalDeployments": 42, + "SuccessfulDeployments": 42, + "FailedDeployments": 0 + }, } ``` Sample of remediation task that completed successfully: - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Understand how to [react to Azure Policy state change events](./event-overview.md).-- Learn about the [policy definition structure](./definition-structure.md).+- Learn about the [policy definition structure](./definition-structure-basics.md). - Learn about the [policy assignment structure](./assignment-structure.md). |
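The JSON in the row above describes the remediation task resource itself. As a hedged sketch, a comparable task can be started from the Azure CLI; the assignment name and definition reference ID reuse the placeholders from the example, and the flags shown are a small common subset of `az policy remediation create`.

```azurecli-interactive
# Sketch: start a remediation task for the 'requiredTags' definition within the
# 'resourceShouldBeCompliantInit' initiative assignment, re-evaluating compliance first.
az policy remediation create \
    --name remediateNotCompliant \
    --policy-assignment resourceShouldBeCompliantInit \
    --definition-reference-id requiredTags \
    --resource-discovery-mode ReEvaluateCompliance
```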
healthcare-apis | Dicom Services Conformance Statement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md | Title: DICOM Conformance Statement version 1 for Azure Health Data Services description: Read about the features and specifications of the DICOM service v1 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard. -+ Last updated 10/13/2023-+ # DICOM Conformance Statement v1 The Medical Imaging Server for DICOM® supports a subset of the DICOMweb Stan * [Request Cancellation](#request-cancellation) * [Search Workitems](#search-workitems) -Additionally, the following nonstandard API(s) are supported: +Additionally, the following nonstandard APIs are supported: * [Change Feed](change-feed-overview.md) * [Extended Query Tags](dicom-extended-query-tags-overview.md) The service uses REST API versioning. The version of the REST API must be explic This version of the conformance statement corresponds to the `v1` version of the REST APIs. -For more information on how to specify the version when making requests, see the [API Versioning Documentation](api-versioning-dicom-service.md). +For information on how to specify the version when making requests, see the [API Versioning Documentation](api-versioning-dicom-service.md). You can find example requests for supported transactions in the [Postman collection](https://github.com/microsoft/dicom-server/blob/main/docs/resources/Conformance-as-Postman.postman_collection.json). The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/p This transaction uses the POST or PUT method to store representations of studies, series, and instances contained in the request payload. | Method | Path | Description |-| :-- | :-- | :- | +| | | -- | | POST | ../studies | Store instances. | | POST | ../studies/{study} | Store instances for a specific study. | | PUT | ../studies | Upsert instances. | | PUT | ../studies/{study} | Upsert instances for a specific study. | -Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If specified, any instance that doesn't belong to the provided study is rejected with a `43265` warning code. +The parameter `study` corresponds to the DICOM attribute `StudyInstanceUID`. If specified, any instance that doesn't belong to the provided study is rejected with a `43265` warning code. -The following `Accept` header(s) for the response are supported: +The following `Accept` header for the response is supported: * `application/dicom+json` -The following `Content-Type` header(s) are supported: +The following `Content-Type` headers are supported: * `multipart/related; type="application/dicom"` * `application/dicom` Only transfer syntaxes with explicit Value Representations are accepted. #### Store response status codes | Code | Description |-| :-- | :- | +| | -- | | `200 (OK)` | All the SOP instances in the request are stored. | | `202 (Accepted)` | Some instances in the request are stored but others failed. | | `204 (No Content)` | No content was provided in the store transaction request. | Only transfer syntaxes with explicit Value Representations are accepted. ### Store response payload -The response payload populates a DICOM dataset with the following elements: +The response payload populates a DICOM dataset with the following elements. 
| Tag | Name | Description |-| :-- | :-- | :- | -| (0008, 1190) | `RetrieveURL` | The Retrieve URL of the study if the StudyInstanceUID was provided in the store request and at least one instance is successfully stored. | -| (0008, 1198) | `FailedSOPSequence` | The sequence of instances that failed to store. | -| (0008, 1199) | `ReferencedSOPSequence` | The sequence of stored instances. | +| | | -- | +| (0008, 1190) | `RetrieveURL` | The Retrieve URL of the study, if the `StudyInstanceUID` was provided in the store request and at least one instance is successfully stored | +| (0008, 1198) | `FailedSOPSequence` | The sequence of instances that failed to store | +| (0008, 1199) | `ReferencedSOPSequence` | The sequence of stored instances | -Each dataset in the `FailedSOPSequence` has the following elements (if the DICOM file attempting to be stored could be read): +Each dataset in the `FailedSOPSequence` has the following elements (if the DICOM file attempting to be stored could be read). | Tag | Name | Description |-| :-- | :-- | :- | -| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that failed to store. | -| (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that failed to store. | -| (0008, 1197) | `FailureReason` | The reason code why this instance failed to store. | -| (0074, 1048) | `FailedAttributesSequence` | The sequence of `ErrorComment` that includes the reason for each failed attribute. | +| | | -- | +| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that failed to store | +| (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that failed to store | +| (0008, 1197) | `FailureReason` | The reason code why this instance failed to store | +| (0074, 1048) | `FailedAttributesSequence` | The sequence of `ErrorComment` that includes the reason for each failed attribute | -Each dataset in the `ReferencedSOPSequence` has the following elements: +Each dataset in the `ReferencedSOPSequence` has the following elements. | Tag | Name | Description |-| :-- | :-- | :- | -| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that failed to store. | -| (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that failed to store. | -| (0008, 1190) | `RetrieveURL` | The retrieve URL of this instance on the DICOM server. | +| | | -- | +| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that failed to store | +| (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that failed to store | +| (0008, 1190) | `RetrieveURL` | The retrieve URL of this instance on the DICOM server | An example response with `Accept` header `application/dicom+json`: An example response with `Accept` header `application/dicom+json`: #### Store failure reason codes | Code | Description |-| :- | :- | +| -- | -- | | `272` | The store transaction didn't store the instance because of a general failure in processing the operation. | | `43264` | The DICOM instance failed the validation. | | `43265` | The provided instance `StudyInstanceUID` didn't match the specified `StudyInstanceUID` in the store request. 
| An example response with `Accept` header `application/dicom+json`: #### Store warning reason codes | Code | Description |-| :- | :- | +| -- | -- | | `45063` | A DICOM instance Data Set doesn't match SOP Class. The Studies Store Transaction (Section 10.5) observed that the Data Set didn't match the constraints of the SOP Class during storage of the instance. | ### Store Error Codes | Code | Description |-| :- | :- | +| -- | -- | | `100` | The provided instance attributes didn't meet the validation criteria. | ### Retrieve (WADO-RS) An example response with `Accept` header `application/dicom+json`: This Retrieve Transaction offers support for retrieving stored studies, series, instances, and frames by reference. | Method | Path | Description |-| :-- | :| :- | -| GET | ../studies/{study} | Retrieves all instances within a study. | -| GET | ../studies/{study}/metadata | Retrieves the metadata for all instances within a study. | -| GET | ../studies/{study}/series/{series} | Retrieves all instances within a series. | -| GET | ../studies/{study}/series/{series}/metadata | Retrieves the metadata for all instances within a series. | -| GET | ../studies/{study}/series/{series}/instances/{instance} | Retrieves a single instance. | -| GET | ../studies/{study}/series/{series}/instances/{instance}/metadata | Retrieves the metadata for a single instance. | +| | -| -- | +| GET | ../studies/{study} | Retrieves all instances within a study | +| GET | ../studies/{study}/metadata | Retrieves the metadata for all instances within a study | +| GET | ../studies/{study}/series/{series} | Retrieves all instances within a series | +| GET | ../studies/{study}/series/{series}/metadata | Retrieves the metadata for all instances within a series | +| GET | ../studies/{study}/series/{series}/instances/{instance} | Retrieves a single instance | +| GET | ../studies/{study}/series/{series}/instances/{instance}/metadata | Retrieves the metadata for a single instance | | GET | ../studies/{study}/series/{series}/instances/{instance}/rendered | Retrieves an instance rendered into an image format |-| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frames} | Retrieves one or many frames from a single instance. To specify more than one frame, use a comma to separate each frame to return. For example, `/studies/1/series/2/instance/3/frames/4,5,6`. | -| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frame}/rendered | Retrieves a single frame rendered into an image format. | +| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frames} | Retrieves one or many frames from a single instance; To specify more than one frame, use a comma to separate each frame to return. For example, `/studies/1/series/2/instance/3/frames/4,5,6`. | +| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frame}/rendered | Retrieves a single frame rendered into an image format | #### Retrieve instances within study or series -The following `Accept` header(s) are supported for retrieving instances within a study or a series: +The following `Accept` headers are supported for retrieving instances within a study or a series. 
* `multipart/related; type="application/dicom"; transfer-syntax=*` * `multipart/related; type="application/dicom";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default) The following `Accept` header(s) are supported for retrieving instances within a #### Retrieve an Instance -The following `Accept` header(s) are supported for retrieving a specific instance: +The following `Accept` headers are supported for retrieving a specific instance: * `application/dicom; transfer-syntax=*` * `multipart/related; type="application/dicom"; transfer-syntax=*` The following `Accept` header(s) are supported for retrieving a specific instanc #### Retrieve Frames -The following `Accept` headers are supported for retrieving frames: +The following `Accept` headers are supported for retrieving frames. * `multipart/related; type="application/octet-stream"; transfer-syntax=*` * `multipart/related; type="application/octet-stream";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default) The following `Accept` headers are supported for retrieving frames: #### Retrieve transfer syntax -When the requested transfer syntax is different from original file, the original file is transcoded to requested transfer syntax. The original file needs to be one of the following formats for transcoding to succeed, otherwise transcoding might fail: +When the requested transfer syntax is different from original file, the original file is transcoded to the requested transfer syntax. The original file needs to be one of the following formats for transcoding to succeed, otherwise transcoding might fail. * 1.2.840.10008.1.2 (Little Endian Implicit) * 1.2.840.10008.1.2.1 (Little Endian Explicit) An unsupported `transfer-syntax` results in `406 Not Acceptable`. ### Retrieve metadata (for study, series, or instance) -The following `Accept` header is supported for retrieving metadata for a study, a series, or an instance: +The following `Accept` header is supported for retrieving metadata for a study, a series, or an instance. * `application/dicom+json` -Retrieving metadata doesn't return attributes with the following value representations: +Retrieving metadata doesn't return attributes with the following value representations. | VR Name | Description | | : | : | Retrieving metadata doesn't return attributes with the following value represent Cache validation is supported using the `ETag` mechanism. In the response to a metadata request, ETag is returned as one of the headers. This ETag can be cached and added as `If-None-Match` header in the later requests for the same metadata. Two types of responses are possible if the data exists: -* Data is unchanged since the last request: `HTTP 304 (Not Modified)` response is sent with no response body. -* Data changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is also returned as part of the body. +* Data is unchanged since the last request: the `HTTP 304 (Not Modified)` response is sent with no response body. +* Data has changed since the last request: the `HTTP 200 (OK)` response is sent with updated ETag. Required data is also returned as part of the body. ### Retrieve rendered image (for instance or frame)-The following `Accept` header(s) are supported for retrieving a rendered image an instance or a frame: +The following `Accept` headers are supported for retrieving a rendered image an instance or a frame. 
- `image/jpeg` - `image/png` -In the case that no `Accept` header is specified the service renders an `image/jpeg` by default. +In the case that no `Accept` header is specified, the service renders an `image/jpeg` by default. -The service only supports rendering of a single frame. If rendering is requested for an instance with multiple frames, then only the first frame is rendered as an image by default. +The service only supports rendering of a single frame. If rendering is requested for an instance with multiple frames, then by default only the first frame is rendered as an image. When specifying a particular frame to return, frame indexing starts at 1. -The `quality` query parameter is also supported. An integer value between `1` and `100` inclusive (1 being worst quality, and 100 being best quality) might be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified the parameter defaults to `100`. +The `quality` query parameter is also supported. An integer value from `1` to `100` inclusive (1 being worst quality, and 100 being best quality) might be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified, the parameter defaults to `100`. ### Retrieve response status codes | Code | Description |-| : | :- | +| - | -- | | `200 (OK)` | All requested data was retrieved. |-| `304 (Not Modified)` | The requested data hasn't been modified since the last request. Content isn't added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. | +| `304 (Not Modified)` | The requested data hasn't been modified since the last request. In this case, content isn't added to the response body. For more information, see the preceding section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. | | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |-| `404 (Not Found)` | The specified DICOM resource couldn't be found, or for rendered request the instance didn't contain pixel data. | +| `404 (Not Found)` | The specified DICOM resource couldn't be found, or for a rendered request the instance didn't contain pixel data. | | `406 (Not Acceptable)` | The specified `Accept` header isn't supported, or for rendered and transcodes requests the file requested was too large. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. | The `quality` query parameter is also supported. An integer value between `1` an Query based on ID for DICOM Objects (QIDO) enables you to search for studies, series, and instances by attributes. | Method | Path | Description |-| :-- | :-- | : | -| *Search for Studies* | +| | | - | +| *Search for Studies* | | | | GET | ../studies?... | Search for studies |-| *Search for Series* | +| *Search for Series* | | | | GET | ../series?... | Search for series | | GET |../studies/{study}/series?... | Search for series in a study |-| *Search for Instances* | +| *Search for Instances* | | | | GET |../instances?... | Search for instances | | GET |../studies/{study}/instances?... 
| Search for instances in a study | | GET |../studies/{study}/series/{series}/instances?... | Search for instances in a series | -The following `Accept` header(s) are supported for searching: +The following `Accept` header is supported for searching: * `application/dicom+json` ### Supported search parameters -The following parameters for each query are supported: +The following parameters for each query are supported. -| Key | Support Value(s) | Allowed Count | Description | -| :-- | :-- | : | :- | -| `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. | -| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Both, public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. | -| `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response.<br/>Value can be between the range 1 >= x <= 200. Defaulted to 100. | -| `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response is returned. | -| `fuzzymatching=` | `true` / `false` | 0..1 | If true fuzzy matching is applied to PatientName attribute. It does a prefix word match of any name part inside PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" all match. However, "ohn" doesn't match. | +| Key | Support Values | Allowed Count | Description | +| | | - | -- | +| `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query | +| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response; Both public and private tags are supported.<br/>When `all` is provided. Refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. | +| `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response; <br/>Value can be between the range 1 >= x <= 200, defaulted to 100. | +| `offset=` | `{value}` | 0..1 | Skip `{value}` results; <br/>If an offset larger than the number of search query results is provided, a `204 (no content)` response is returned. | +| `fuzzymatching=` | `true` / `false` | 0..1 | If true fuzzy matching is applied to PatientName attribute; It does a prefix word match of any name part inside PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" all match. However, "ohn" doesn't match. | #### Searchable attributes We support searching the following attributes and search types. | Attribute Keyword | All Studies | All Series | All Instances | Study's Series | Study's Instances | Study Series' Instances |-| :- | :: | :-: | :: | :: | :-: | :: | +| -- | -- | | -- | -- | | -- | | `StudyInstanceUID` | X | X | X | | | | | `PatientName` | X | X | X | | | | | `PatientID` | X | X | X | | | | We support searching the following attributes and search types. We support the following matching types. | Search Type | Supported Attribute | Example |-| :- | : | : | -| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. 
For date/ time values, we support an inclusive range on the tag. This is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. | +| -- | - | - | +| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag, which is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. | | Exact Match | All supported attributes | `{attributeID}={value1}` | | Fuzzy Match | `PatientName`, `ReferringPhysicianName` | Matches any component of the name that starts with the value. | #### Attribute ID -Tags can be encoded in several ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported: +Tags can be encoded in several ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported. | Value | Example |-| : | : | +| - | - | | `{group}{element}` | `0020000D` | | `{dicomKeyword}` | `StudyInstanceUID` | -Example query searching for instances: +Here's an example query searching for instances: `../instances?Modality=CT&00280011=512&includefield=00280010&limit=5&offset=0` ### Search response -The response is an array of DICOM datasets. Depending on the resource, by *default* the following attributes are returned: +The response is an array of DICOM datasets. Depending on the resource, by *default* the following attributes are returned. #### Default Study tags | Tag | Attribute Name |-| :-- | :- | +| | -- | | (0008, 0005) | `SpecificCharacterSet` | | (0008, 0020) | `StudyDate` | | (0008, 0030) | `StudyTime` | The response is an array of DICOM datasets. Depending on the resource, by *defau #### Default Series tags | Tag | Attribute Name |-| :-- | :- | +| | -- | | (0008, 0005) | `SpecificCharacterSet` | | (0008, 0060) | `Modality` | | (0008, 0201) | `TimezoneOffsetFromUTC` | The response is an array of DICOM datasets. 
Depending on the resource, by *defau #### Default Instance tags | Tag | Attribute Name |-| :-- | :- | +| | -- | | (0008, 0005) | `SpecificCharacterSet` | | (0008, 0016) | `SOPClassUID` | | (0008, 0018) | `SOPInstanceUID` | If `includefield=all`, the following attributes are included along with default #### Extra Study tags | Tag | Attribute Name |-| :-- | :- | +| | -- | | (0008, 1030) | `Study Description` | | (0008, 0063) | `AnatomicRegionsInStudyCodeSequence` | | (0008, 1032) | `ProcedureCodeSequence` | If `includefield=all`, the following attributes are included along with default #### Other Series tags | Tag | Attribute Name |-| :-- | :- | +| | -- | | (0020, 0011) | `SeriesNumber` | | (0020, 0060) | `Laterality` | | (0008, 0021) | `SeriesDate` | The following attributes are returned: ### Search response codes -The query API returns one of the following status codes in the response: +The query API returns one of the following status codes in the response. | Code | Description |-| : | :- | +| - | -- | | `200 (OK)` | The response payload contains all the matching resources. | | `204 (No Content)` | The search completed successfully but returned no results. |-| `400 (Bad Request)` | The server was unable to perform the query because the query component was invalid. Response body contains details of the failure. | +| `400 (Bad Request)` | The server was unable to perform the query because the query component was invalid. The response body contains details of the failure. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. | The query API returns one of the following status codes in the response: * Querying using the `TimezoneOffsetFromUTC (00080201)` isn't supported. * The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range is resolved.-* When target resource is Study/Series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest wins and you can search only on the latest data. -* Paged results are optimized to return matched _newest_ instance first, this might result in duplicate records in subsequent pages if newer data matching the query was added. -* Matching is case in-sensitive and accent in-sensitive for PN VR types. -* Matching is case in-sensitive and accent sensitive for other string VR types. -* Only the first value is indexed of a single valued data element that incorrectly has multiple values. +* When the target resource is Study/Series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest wins, and you can search only on the latest data. +* Paged results are optimized to return the matched _newest_ instance first, this might result in duplicate records in subsequent pages if newer data matching the query was added. +* Matching is _not_ case sensitive, and _not_ accent sensitive for PN VR types. +* Matching is _not_ case sensitive, and _is_ accent sensitive for other string VR types. +* Only the first value is indexed if a single valued data element incorrectly has multiple values. 
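To make the search flow concrete, here's a minimal sketch (not part of the conformance statement itself) that issues the example instance query shown earlier using a plain C# `HttpClient`. The service URL, API version, and access token are placeholders to substitute with your own values; the `Accept` header and query string come straight from the preceding sections.

```c#
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class QidoSearchSketch
{
    static async Task Main()
    {
        // Placeholders - replace with your DICOM service URL, API version, and a valid access token.
        string serviceUrl = "https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v<version>";
        string accessToken = "<access-token>";

        using var http = new HttpClient();
        var request = new HttpRequestMessage(
            HttpMethod.Get,
            $"{serviceUrl}/instances?Modality=CT&00280011=512&includefield=00280010&limit=5&offset=0");

        // Searching supports only application/dicom+json responses.
        request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/dicom+json"));
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        HttpResponseMessage response = await http.SendAsync(request);

        // 200 (OK) carries the matching datasets; 204 (No Content) means the search matched nothing.
        Console.WriteLine($"Status: {(int)response.StatusCode}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

The same pattern applies to study and series searches; only the request path changes.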
### Delete This transaction isn't part of the official DICOMwe Standard. It uses the DELETE method to remove representations of Studies, Series, and Instances from the store. | Method | Path | Description |-| :-- | : | :- | -| DELETE | ../studies/{study} | Delete all instances for a specific study. | -| DELETE | ../studies/{study}/series/{series} | Delete all instances for a specific series within a study. | -| DELETE | ../studies/{study}/series/{series}/instances/{instance} | Delete a specific instance within a series. | +| | - | -- | +| DELETE | ../studies/{study} | Delete all instances for a specific study | +| DELETE | ../studies/{study}/series/{series} | Delete all instances for a specific series within a study | +| DELETE | ../studies/{study}/series/{series}/instances/{instance} | Delete a specific instance within a series | -Parameters `study`, `series`, and `instance` correspond to the DICOM attributes `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` respectively. +The parameters `study`, `series`, and `instance` correspond to the DICOM attributes `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` respectively. There are no restrictions on the request's `Accept` header, `Content-Type` header or body content. There are no restrictions on the request's `Accept` header, `Content-Type` heade ### Response status codes | Code | Description |-| : | :- | -| `204 (No Content)` | When all the SOP instances are deleted. | -| `400 (Bad Request)` | The request was badly formatted. | -| `401 (Unauthorized)` | The client isn't authenticated. | -| `403 (Forbidden)` | The user isn't authorized. | -| `404 (Not Found)` | When the specified series wasn't found within a study or the specified instance wasn't found within the series. | +| - | -- | +| `204 (No Content)` | When all the SOP instances are deleted | +| `400 (Bad Request)` | The request was badly formatted | +| `401 (Unauthorized)` | The client isn't authenticated | +| `403 (Forbidden)` | The user isn't authorized | +| `404 (Not Found)` | When the specified series wasn't found within a study, or the specified instance wasn't found within the series | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. 
| ### Delete response payload Throughout, the variable `{workitem}` in a URI template stands for a Workitem UI Available UPS-RS endpoints include: |Verb| Path | Description |-|: |: |: | -|POST| {s}/workitems{?AffectedSOPInstanceUID}| Create a work item| -|POST| {s}/workitems/{instance}{?transaction}| Update a work item -|GET| {s}/workitems{?query*} | Search for work items -|GET| {s}/workitems/{instance}| Retrieve a work item -|PUT| {s}/workitems/{instance}/state| Change work item state -|POST| {s}/workitems/{instance}/cancelrequest | Cancel work item| +|- |- |- | +|POST| {s}/workitems{?AffectedSOPInstanceUID}| Create a work item | +|POST| {s}/workitems/{instance}{?transaction}| Update a work item | +|GET| {s}/workitems{?query*} | Search for work items | +|GET| {s}/workitems/{instance}| Retrieve a work item | +|PUT| {s}/workitems/{instance}/state| Change work item state | +|POST| {s}/workitems/{instance}/cancelrequest | Cancel work item | |POST |{s}/workitems/{instance}/subscribers/{AETitle}{?deletionlock} | Create subscription| |POST| {s}/workitems/1.2.840.10008.5.1.4.34.5/ | Suspend subscription|-|DELETE | {s}/workitems/{instance}/subscribers/{AETitle} | Delete subscription +|DELETE | {s}/workitems/{instance}/subscribers/{AETitle} | Delete subscription | |GET | {s}/subscribers/{AETitle}| Open subscription channel | ### Create Workitem Available UPS-RS endpoints include: This transaction uses the POST method to create a new Workitem. | Method | Path | Description |-| :-- | :-- | :- | -| POST | ../workitems | Create a Workitem. | +| | | -- | +| POST | ../workitems | Create a Workitem | | POST | ../workitems?{workitem} | Creates a Workitem with the specified UID. | If not specified in the URI, the payload dataset must contain the Workitem in the `SOPInstanceUID` attribute. required to be present, required to not be present, required to be empty, or req found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3). > [!NOTE]-> Although the reference table says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMWeb. SOP Instance UID should be present in the dataset if not in the URI. +> Although the reference table says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMWeb. The SOP Instance UID should be present in the dataset if not in the URI. > [!NOTE] > All the conditional requirement codes including 1C and 2C are treated as optional. found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/p #### Create response status codes | Code | Description |-| :-- | :- | +| | -- | | `201 (Created)` | The target Workitem was successfully created. | | `400 (Bad Request)` | There was a problem with the request. For example, the request payload didn't satisfy the requirements. | | `401 (Unauthorized)` | The client isn't authenticated. | A failure response payload contains a message describing the failure. This transaction enables the user to request cancellation of a nonowned Workitem. -There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.1.1-1): +There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.1.1-1). * `SCHEDULED` * `IN PROGRESS` * `CANCELED` * `COMPLETED` -This transaction only succeeds against Workitems in the `SCHEDULED` state. 
Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` returns failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction. +This transaction only succeeds against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` return a failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction. | Method | Path | Description |-| : | :- | :-- | +| - | -- | | | POST | ../workitems/{workitem}/cancelrequest | Request the cancellation of a scheduled Workitem | The `Content-Type` header is required, and must have the value `application/dicom+json`. The request payload might include Action Information as [defined in the DICOM St #### Request cancellation response status codes | Code | Description |-| : | :- | +| - | -- | | `202 (Accepted)` | The request was accepted by the server, but the Target Workitem state is unchanged. | | `400 (Bad Request)` | There was a problem with the syntax of the request. | | `401 (Unauthorized)` | The client isn't authenticated. | This transaction retrieves a Workitem. It corresponds to the UPS DIMSE N-GET ope Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.5 -If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) Attribute. This is necessary to preserve the attribute's role as an access lock. +If the Workitem exists on the origin server, the Workitem is returned in an Acceptable Media Type. The returned Workitem won't contain the Transaction UID (0008,1195) Attribute. This is necessary to preserve the attribute's role as an access lock. | Method | Path | Description |-| : | :- | : | +| - | -- | - | | GET | ../workitems/{workitem} | Request to retrieve a Workitem | The `Accept` header is required and must have the value `application/dicom+json`. The `Accept` header is required and must have the value `application/dicom+json` #### Retrieve Workitem response status codes | Code | Description |-| :- | :- | -| 200 (OK) | Workitem Instance was successfully retrieved. | -| 400 (Bad Request) | There was a problem with the request. | -| 401 (Unauthorized) | The client isn't authenticated. | -| 403 (Forbidden) | The user isn't authorized. | -| 404 (Not Found) | The Target Workitem wasn't found. | +| -- | -- | +| 200 (OK) | Workitem Instance was successfully retrieved. | +| 400 (Bad Request) | There was a problem with the request. | +| 401 (Unauthorized) | The client isn't authenticated. | +| 403 (Forbidden) | The user isn't authorized. | +| 404 (Not Found) | The Target Workitem wasn't found. 
| #### Retrieve Workitem response payload * A success response has a single part payload containing the requested Workitem in the Selected Media Type.-* The returned Workitem shall not contain the Transaction UID (0008, 1195) attribute of the Workitem, since that should only be known to the Owner. +* The returned Workitem won't contain the Transaction UID (0008, 1195) attribute of the Workitem, since that should only be known to the Owner. ### Update Workitem This transaction modifies attributes of an existing Workitem. It corresponds to Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.6 -To update a Workitem currently in the `SCHEDULED` state, the `Transaction UID` attribute shall not be present. For a Workitem in the `IN PROGRESS` state, the request must include the current Transaction UID as a query parameter. If the Workitem is already in the `COMPLETED` or `CANCELED` states, the response is `400 (Bad Request)`. +To update a Workitem currently in the `SCHEDULED` state, the `Transaction UID` attribute shouldn't be present. For a Workitem in the `IN PROGRESS` state, the request must include the current Transaction UID as a query parameter. If the Workitem is already in the `COMPLETED` or `CANCELED` states, the response is `400 (Bad Request)`. | Method | Path | Description |-| : | : | :-- | +| - | - | | | POST | ../workitems/{workitem}?{transaction-uid} | Update Workitem Transaction | The `Content-Type` header is required, and must have the value `application/dicom+json`. found in [this table](https://dicom.nema.org/medical/dicom/current/output/html/p #### Update Workitem transaction response status codes | Code | Description |-| :- | :- | +| -- | -- | | `200 (OK)` | The Target Workitem was updated. |-| `400 (Bad Request)` | There was a problem with the request. For example: (1) the Target Workitem was in the `COMPLETED` or `CANCELED` state. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect. (4) the dataset didn't conform to the requirements. +| `400 (Bad Request)` | There was a problem with the request. For example: (1) The Target Workitem was in the `COMPLETED` or `CANCELED` state. (2) The Transaction UID is missing. (3) The Transaction UID is incorrect. (4) The dataset didn't conform to the requirements. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `404 (Not Found)` | The Target Workitem wasn't found. | found in [this table](https://dicom.nema.org/medical/dicom/current/output/html/p #### Update Workitem transaction response payload -The origin server shall support header fields as required in [Table 11.6.3-2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#table_11.6.3-2). +The origin server supports header fields as required in [Table 11.6.3-2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#table_11.6.3-2). -A success response shall have either no payload or a payload containing a Status Report document. +A success response has either no payload, or a payload containing a Status Report document. A failure response payload might contain a Status Report describing any failures, warnings, or other useful information. This transaction is used to change the state of a Workitem. It corresponds to th Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7 -If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. 
The returned Workitem shall not contain the Transaction UID (0008,1195) attribute. This is necessary to preserve this Attribute's role as an access lock as described [here.](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#sect_CC.1.1) +If the Workitem exists on the origin server, the Workitem is returned in an Acceptable Media Type. The returned Workitem won't contain the Transaction UID (0008,1195) attribute. This is necessary to preserve this Attribute's role as an access lock as described [here.](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#sect_CC.1.1) | Method | Path | Description |-| : | : | :-- | +| - | - | | | PUT | ../workitems/{workitem}/state | Change Workitem State | The `Accept` header is required, and must have the value `application/dicom+json`. -The request payload shall contain the Change UPS State Data Elements. These data elements are: +The request payload contains the Change UPS State Data Elements. These data elements are: -* **Transaction UID (0008, 1195)**. The request payload shall include a Transaction UID. The user agent creates the Transaction UID when requesting a transition to the `IN PROGRESS` state for a given Workitem. The user agent provides that Transaction UID in subsequent transactions with that Workitem. +* **Transaction UID (0008, 1195)**. The request payload includes a Transaction UID. The user agent creates the Transaction UID when requesting a transition to the `IN PROGRESS` state for a given Workitem. The user agent provides that Transaction UID in subsequent transactions with that Workitem. * **Procedure Step State (0074, 1000)**. The legal values correspond to the requested state transition. They are: `IN PROGRESS`, `COMPLETED`, or `CANCELED`. #### Change Workitem state response status codes | Code | Description |-| :- | :- | -| `200 (OK)` | Workitem Instance was successfully retrieved. | -| `400 (Bad Request)` | The request can't be performed for one of the following reasons: (1) the request isn't valid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect | +| -- | -- | +| `200 (OK)` | The Workitem Instance was successfully retrieved. | +| `400 (Bad Request)` | The request can't be performed for one of the following reasons. (1) The request isn't valid given the current state of the Target Workitem. (2) The Transaction UID is missing. (3) The Transaction UID is incorrect | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `404 (Not Found)` | The Target Workitem wasn't found. | The request payload shall contain the Change UPS State Data Elements. These data #### Change Workitem state response payload * Responses include the header fields specified in [section 11.7.3.2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7.3.2).-* A success response shall have no payload. +* A success response has no payload. * A failure response payload might contain a Status Report describing any failures, warnings, or other useful information. ### Search Workitems The request payload shall contain the Change UPS State Data Elements. These data This transaction enables you to search for Workitems by attributes. | Method | Path | Description |-| :-- | :- | :-- | +| | -- | | | GET | ../workitems? | Search for Workitems | -The following `Accept` header(s) are supported for searching: +The following `Accept` header is supported for searching. 
* `application/dicom+json` #### Supported Search Parameters -The following parameters for each query are supported: +The following parameters for each query are supported. -| Key | Support Value(s) | Allowed Count | Description | -| : | :- | : | :- | -| `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. | -| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Only top-level attributes can be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes are returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. | -| `limit=` | `{value}` | 0...1 | Integer value to limit the number of values returned in the response. Value can be between the range `1 >= x <= 200`. Defaulted to `100`. | -| `offset=` | `{value}` | 0...1 | Skip {value} results. If an offset is provided larger than the number of search query results, a `204 (no content)` response is returned. | -| `fuzzymatching=` | `true` \| `false` | 0...1 | If true fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It does a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` all match. However `ohn` does **not** match. | +| Key | Support Values | Allowed Count | Description | +| - | -- | - | -- | +| `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query | +| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response; Only top-level attributes can be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes are returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. | +| `limit=` | `{value}` | 0...1 | Integer value to limit the number of values returned in the response; The Value can be between the range `1 >= x <= 200`, defaulted to `100`. | +| `offset=` | `{value}` | 0...1 | Skip {value} results; If an offset larger than the number of search query results is provided, a `204 (no content)` response is returned. | +| `fuzzymatching=` | `true` \| `false` | 0...1 | If true fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR); A prefix word match of any name part inside these attributes is performed. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` all match. However `ohn` does **not** match. | ##### Searchable Attributes -We support searching on these attributes: +We support searching on these attributes. |Attribute Keyword|-|:| +|-| |`PatientName`| |`PatientID`| |`ReferencedRequestSequence.AccessionNumber`| We support searching on these attributes: ##### Search Matching -We support these matching types: +We support these matching types. | Search Type | Supported Attribute | Example |-| :- | : | : | +| -- | - | - | | Range Query | `ScheduledΓÇïProcedureΓÇïStepΓÇïStartΓÇïDateΓÇïTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. 
This is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values must be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. | | Exact Match | All supported attributes | `{attributeID}={value1}` |-| Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value. | +| Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value | > [!NOTE] > While we don't support full sequence matching, we do support exact match on the attributes listed that are contained in a sequence. ##### Attribute ID -Tags can be encoded in many ways for the query parameter. We partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported: +Tags can be encoded in many ways for the query parameter. We partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported. | Value | Example |-| :-- | : | -| `{group}{element}` | `00100010` | +| | - | +| `{group}{element}` | `00100010` | | `{dicomKeyword}` | `PatientName` | Example query: Example query: The response is an array of `0...N` DICOM datasets with the following attributes returned: -* All attributes in [DICOM PowerShell 3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1 or 2 -* All attributes in [DICOM PowerShell 3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1C for which the conditional requirements are met -* All other Workitem attributes passed as match parameters -* All other Workitem attributes passed as `includefield` parameter values +* All attributes in [DICOM PowerShell 3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1 or 2. +* All attributes in [DICOM PowerShell 3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1C for which the conditional requirements are met. +* All other Workitem attributes passed as match parameters. +* All other Workitem attributes passed as `includefield` parameter values. #### Search Response Codes -The query API returns one of the following status codes in the response: +The query API returns one of the following status codes in the response. | Code | Description |-| :-- | :- | +| | -- | | `200 (OK)` | The response payload contains all the matching resource. | | `206 (Partial Content)` | The response payload contains only some of the search results, and the rest can be requested through the appropriate request. | | `204 (No Content)` | The search completed successfully but returned no results. | The query API returns one of the following status codes in the response: #### Other Notes -The query API doesn't `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. 
Anything requested within the acceptable range is resolved. +The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range is resolved. * Paged results are optimized to return matched newest instance first, which might result in duplicate records in subsequent pages if newer data matching the query was added.-* Matching is case insensitive and accent insensitive for PN VR types. -* Matching is case insensitive and accent sensitive for other string VR types. -* If there's a scenario where canceling a Workitem and querying the same happens at the same time, then the query likely excludes the Workitem that's getting updated and the response code is `206 (Partial Content)`. +* Matching is _not_ case sensitive, and _not_ accent sensitive for PN VR types. +* Matching is _not_ case sensitive, and _is_ accent sensitive for other string VR types. +* If there's a scenario where canceling a Workitem and querying the same Workitem happens at the same time, then the query likely excludes the Workitem that's getting updated, and the response code is `206 (Partial Content)`. [!INCLUDE [DICOM trademark statement](../includes/healthcare-apis-dicom-trademark.md)] |
healthcare-apis | Dicomweb Standard Apis C Sharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-c-sharp.md | Title: Use C# and DICOMweb Standard APIs in Azure Health Data Services description: Learn how to use C# and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. -+ Last updated 10/18/2023-+ # Use C# and DICOMweb Standard APIs With the DicomWebClient, we can now perform the Store, Retrieve, Search, and Del ## Store DICOM instances (STOW) -By using the DicomWebClient, we can now store DICOM files. +Using the DicomWebClient, we can now store DICOM files. ### Store single instance DicomWebResponse response = await client.StoreAsync(new[] { dicomFile }); ### Store instances for a specific study -Store instances for a specific study demonstrate how to upload a DICOM file into a specified study. +Store instances for a specific study demonstrates how to upload a DICOM file into a specified study. _Details:_ Before moving on to the next part of the tutorial, upload the `green-square.dcm` The code snippets show how to perform each of the retrieve queries using the DicomWebClient created previously. -The variables are used throughout the rest of the examples: +The variables are used throughout the rest of the examples. ```c# string studyInstanceUid = "1.2.826.0.1.3680043.8.498.13230779778012324449356534479549187420"; //StudyInstanceUID for all 3 examples _Details:_ DicomWebResponse response = await client.RetrieveSeriesMetadataAsync(studyInstanceUid, seriesInstanceUid); ``` -This series has two instances (green-square and red-triangle), so the response should return metadata for both instances. Validate that the response has a status code of OK and that both instances of the metadata are returned. +The series has two instances (green-square and red-triangle), so the response should return metadata for both instances. Validate that the response has a status code of OK and that both instances of the metadata are returned. ### Retrieve a single instance within a series of a study _Details:_ DicomWebResponse response = await client.RetrieveInstanceAsync(studyInstanceUid, seriesInstanceUid, sopInstanceUid); ``` -This response should only return the instance red-triangle. Validate that the response has a status code of OK and that the instance is returned. +The response should only return the instance red-triangle. Validate that the response has a status code of OK and that the instance is returned. ### Retrieve metadata of a single instance within a series of a study _Details:_ DicomWebResponse response = await client.RetrieveInstanceMetadataAsync(studyInstanceUid, seriesInstanceUid, sopInstanceUid); ``` -This response should only return the metadata for the instance red-triangle. Validate that the response has a status code of OK and that the metadata is returned. +The response should only return the metadata for the instance red-triangle. Validate that the response has a status code of OK and that the metadata is returned. ### Retrieve one or more frames from a single instance DicomWebResponse response = await client.RetrieveFramesAsync(studyInstanceUid, s ``` -This response should return the only frame from the red-triangle. Validate that the response has a status code of OK and that the frame is returned. +The response should return the only frame from the red-triangle. Validate that the response has a status code of OK and that the frame is returned. 
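Because each retrieve example above ends with the same "validate that the response has a status code of OK" step, it can help to factor that check into a small guard. The following is a minimal sketch that assumes the client library's namespace is `Microsoft.Health.Dicom.Client` and that `DicomWebResponse` exposes the underlying HTTP status code through a `StatusCode` property; if your client version surfaces it differently, adjust accordingly.

```c#
using System;
using System.Net;
using Microsoft.Health.Dicom.Client; // assumed namespace for DicomWebClient/DicomWebResponse

static class DicomResponseGuard
{
    // Assumption: DicomWebResponse exposes the HTTP status code as StatusCode.
    public static void EnsureOk(DicomWebResponse response)
    {
        if (response.StatusCode != HttpStatusCode.OK)
        {
            throw new InvalidOperationException(
                $"Expected 200 (OK) but the DICOM service returned {(int)response.StatusCode}.");
        }
    }
}

// Example usage after any retrieve call shown above:
// DicomWebResponse response = await client.RetrieveInstanceAsync(studyInstanceUid, seriesInstanceUid, sopInstanceUid);
// DicomResponseGuard.EnsureOk(response);
```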
## Query DICOM (QIDO) string query = $"/studies?StudyInstanceUID={studyInstanceUid}"; DicomWebResponse response = await client.QueryStudyAsync(query); ``` -Validates that the response includes one study, and that the response code is OK. +Validate that the response includes one study, and that the response code is OK. ### Search for series string query = $"/series?SeriesInstanceUID={seriesInstanceUid}"; DicomWebResponse response = await client.QuerySeriesAsync(query); ``` -Validates that the response includes one series, and that the response code is OK. +Validate that the response includes one series, and that the response code is OK. ### Search for series within a study string query = $"/studies/{studyInstanceUid}/series?SeriesInstanceUID={seriesIns DicomWebResponse response = await client.QueryStudySeriesAsync(studyInstanceUid, query); ``` -Validates that the response includes one series, and that the response code is OK. +Validate that the response includes one series, and that the response code is OK. ### Search for instances string query = $"/instances?SOPInstanceUID={sopInstanceUid}"; DicomWebResponse response = await client.QueryInstancesAsync(query); ``` -Validates that the response includes one instance, and that the response code is OK. +Validate that the response includes one instance, and that the response code is OK. ### Search for instances within a study string query = $"/studies/{studyInstanceUid}/instances?SOPInstanceUID={sopInstan DicomWebResponse response = await client.QueryStudyInstanceAsync(studyInstanceUid, query); ``` -Validates that the response includes one instance, and that the response code is OK. +Validate that the response includes one instance, and that the response code is OK. ### Search for instances within a study and series string query = $"/studies/{studyInstanceUid}/series/{seriesInstanceUid}/instance DicomWebResponse response = await client.QueryStudySeriesInstanceAsync(studyInstanceUid, seriesInstanceUid, query); ``` -Validates that the response includes one instance, and that the response code is OK. +Validate that the response includes one instance, and that the response code is OK. ## Delete DICOM string sopInstanceUidRed = "1.2.826.0.1.3680043.8.498.47359123102728459884412887 DicomWebResponse response = await client.DeleteInstanceAsync(studyInstanceUid, seriesInstanceUid, sopInstanceUidRed); ``` -This repose deletes the red-triangle instance from the server. If it's successful, the response status code contains no content. +This response deletes the red-triangle instance from the server. If it's successful, the response status code contains no content. ### Delete a specific series within a study _Details:_ DicomWebResponse response = await client.DeleteSeriesAsync(studyInstanceUid, seriesInstanceUid); ``` -This response deletes the green-square instance (it's the only element left in the series) from the server. If it's successful, the response status code contains no content. +The response deletes the green-square instance from the server (it's the only element left in the series). If it's successful, the response status code contains no content. ### Delete a specific study _Details:_ DicomWebResponse response = await client.DeleteStudyAsync(studyInstanceUid); ``` -This response deletes the blue-circle instance (it's the only element left in the series) from the server. If it's successful, the response status code contains no content. +The response deletes the blue-circle instance from the server (it's the only element left in the series). 
If it's successful, the response has a `204 (No Content)` status code and no response body. [!INCLUDE [DICOM trademark statement](../includes/healthcare-apis-dicom-trademark.md)] |
healthcare-apis | Dicomweb Standard Apis Curl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-curl.md | Title: Use cURL and DICOMweb Standard APIs in Azure Health Data Services description: Use cURL and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. -+ Last updated 10/18/2023-+ # Use DICOMweb Standard APIs with cURL The filename, studyUID, seriesUID, and instanceUID of the sample DICOM files are To use the DICOM Standard APIs, you must have an instance of the DICOM service deployed. For more information, see [Deploy the DICOM service using the Azure portal](deploy-dicom-services-in-azure.md). -After you deploy an instance of the DICOM service, retrieve the URL for your App service. +After deploying an instance of the DICOM service, retrieve the URL for your App service. 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Search **Recent resources** and select your DICOM service instance. For this code, we access a Public Preview Azure service. It's important that you The DICOMweb Standard makes heavy use of `multipart/related` HTTP requests combined with DICOM specific accept headers. Developers familiar with other REST-based APIs often find working with the DICOMweb Standard awkward. However, after you get it up and running, it's easy to use. It just takes a little familiarity to get started. -The cURL commands each contain at least one, and sometimes two, variables that must be replaced. To simplify running the commands, search and replace the following variables by replacing them with your specific values: +The cURL commands each contain at least one, and sometimes two, variables that must be replaced. To simplify running the commands, search and replace the following variables your specific values. * {Service URL} The service URL is the URL to access your DICOM service that you provisioned in the Azure portal, for example, ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. Make sure to specify the version as part of the url when making requests. More information can be found in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md). * {path-to-dicoms} - The path to the directory that contains the red-triangle.dcm file, such as `C:/dicom-server/docs/dcms`- * Ensure to use forward slashes as separators and end the directory _without_ a trailing forward slash. + * Ensure using forward slashes as separators and end the directory _without_ a trailing forward slash. ## Upload DICOM instances (STOW) _Details:_ * Body: * Content-Type: application/dicom for each file uploaded, separated by a boundary value -Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those tools, you might need to use a slightly modified Content-Type header. These tools can be used successfully. +Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those tools, you might need to use a slightly modified Content-Type header. The following tools can be used successfully. * Content-Type: multipart/related; type="application/dicom"; boundary=ABCD1234 * Content-Type: multipart/related; boundary=ABCD1234 * Content-Type: multipart/related _Details:_ * Body: * Content-Type: application/dicom for each file uploaded, separated by a boundary value -Some programming languages and tools behave differently. 
For instance, some require you to define your own boundary. For those languages and tools, you might need to use a slightly modified Content-Type header. These tools can be used successfully. +Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those languages and tools, you might need to use a slightly modified Content-Type header. The following tools can be used successfully. * Content-Type: multipart/related; type="application/dicom"; boundary=ABCD1234 * Content-Type: multipart/related; boundary=ABCD1234 _Details:_ * Body: * Content-Type: application/dicom for each file uploaded, separated by a boundary value -Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those tools, you might need to use a slightly modified Content-Type header. These tools can be used successfully: +Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those tools, you might need to use a slightly modified Content-Type header. The following tools can be used successfully. * Content-Type: multipart/related; type="application/dicom"; boundary=ABCD1234 * Content-Type: multipart/related; boundary=ABCD1234 * Content-Type: multipart/related _Details:_ * Body: * Content-Type: application/dicom for each file uploaded, separated by a boundary value -Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those languages and tools, you might need to use a slightly modified Content-Type header. These tools can be used successfully: +Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those languages and tools, you might need to use a slightly modified Content-Type header. The following tools can be used successfully. * Content-Type: multipart/related; type="application/dicom"; boundary=ABCD1234 * Content-Type: multipart/related; boundary=ABCD1234 curl --request PUT "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.498.1 ### Upsert single instance > [!NOTE]-> This is a non-standard API that allows the upsert of a single DICOM files. +> This is a non-standard API that allows the upsert of a single DICOM file. -Use this method to upload a single DICOM file: +Use this method to upload a single DICOM file. _Details:_ * Path: ../studies curl --request GET "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.498.1 ### Retrieve one or more frames from a single instance -This request retrieves one or more frames from a single instance, and returns them as a collection of multipart/related bytes. Multiple frames can be retrieved by passing a comma-separated list of frame numbers. All DICOM instances with images have at minimum one frame, which is often just the image associated with the instance itself. +This request retrieves one or more frames from a single instance, and returns them as a collection of multipart/related bytes. Multiple frames can be retrieved by passing a comma-separated list of frame numbers. All DICOM instances with images have at minimum one frame, which is often simply the image associated with the instance itself. _Details:_ * Path: ../studies/{study}/series{series}/instances/{instance}/frames/1,2,3 |
healthcare-apis | Dicomweb Standard Apis Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-python.md | Title: Use Python and DICOMweb Standard APIs in Azure Health Data Services description: Use Python and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. -+ Last updated 02/15/2022-+ # Use DICOMweb Standard APIs with Python The filename, studyUID, seriesUID, and instanceUID of the sample DICOM files are |blue-circle.dcm|1.2.826.0.1.3680043.8.498.13230779778012324449356534479549187420|1.2.826.0.1.3680043.8.498.77033797676425927098669402985243398207|1.2.826.0.1.3680043.8.498.13273713909719068980354078852867170114| > [!NOTE]-> Each of these files represents a single instance and is part of the same study. Also,the green-square and red-triangle are part of the same series, while the blue-circle is in a separate series. +> Each of these files represents a single instance and is part of the same study. Also, the green-square and red-triangle are part of the same series, while the blue-circle is in a separate series. ## Prerequisites instance_uid = "1.2.826.0.1.3680043.8.498.47359123102728459884412887463296905395 ### Authenticate to Azure and get a token -`DefaultAzureCredential` allows us to use various ways to get tokens to log into the service. In this example, use the `AzureCliCredential` to get a token to log into the service. There are other credential providers such as `ManagedIdentityCredential` and `EnvironmentCredential` that are also possible to use. To use the AzureCliCredential, you need to sign in to Azure from the CLI before running this code. For more information, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md). Alternatively, copy and paste the token retrieved while signing in from the CLI. +`DefaultAzureCredential` allows us to use various ways to get tokens to log into the service. In this example, use the `AzureCliCredential` to get a token to log into the service. There are other credential providers such as `ManagedIdentityCredential` and `EnvironmentCredential` that you may use. To use the AzureCliCredential, you need to sign in to Azure from the CLI before running this code. For more information, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md). Alternatively, copy and paste the token retrieved while signing in from the CLI. > [!NOTE]-> `DefaultAzureCredential` returns several different Credential objects. We reference the `AzureCliCredential` as the 5th item in the returned collection. This may not be consistent. If so, uncomment the `print(credential.credential)` line. This will list all the items. Find the correct index, recalling that Python uses zero-based indexing. +> `DefaultAzureCredential` returns several different Credential objects. We reference the `AzureCliCredential` as the 5th item in the returned collection. This may not always be the case. If not, uncomment the `print(credential.credential)` line. This will list all the items. Find the correct index, recalling that Python uses zero-based indexing. > [!NOTE] > If you have not logged into Azure using the CLI, this will fail. You must be logged into Azure from the CLI for this to work. bearer_token = f'Bearer {token.token}' The `Requests` libraries (and most Python libraries) don't work with `multipart\related` in a way that supports DICOMweb. 
Because of these libraries, we must add a few methods to support working with DICOM files. -`encode_multipart_related` takes a set of fields (in the DICOM case, these libraries are generally Part 10 dam files) and an optional user-defined boundary. It returns both the full body, along with the content_type, which it can be used. +`encode_multipart_related` takes a set of fields (in the DICOM case, these libraries are generally Part 10 dam files) and an optional user-defined boundary. It returns both the full body, along with the content_type, which can be used. ```python def encode_multipart_related(fields, boundary=None): The following examples highlight persisting DICOM files. ### Store instances using `multipart/related` -This example demonstrates how to upload a single DICOM file, and it uses a bit of a Python to preload the DICOM file (as bytes) into memory. When an array of files is passed to the fields parameter of `encode_multipart_related`, multiple files can be uploaded in a single POST. It's sometimes used to upload several instances inside a complete series or study. +This example demonstrates how to upload a single DICOM file, and it uses Python to preload the DICOM file into memory as bytes. When an array of files is passed to the fields parameter `encode_multipart_related`, multiple files can be uploaded in a single POST. It's sometimes used to upload several instances inside a complete series or study. _Details:_ response = client.post(url, body, headers=headers, verify=False) ### Store instances for a specific study -This example demonstrates how to upload multiple DICOM files into the specified study. It uses a bit of a Python to preload the DICOM file (as bytes) into memory. +This example demonstrates how to upload multiple DICOM files into the specified study. It uses Python to preload the DICOM file into memory as bytes. -When an array of files is passed to the fields parameter of `encode_multipart_related`, multiple files can be uploaded in a single POST. It's sometimes used to upload a complete series or study. +When an array of files is passed to the fields parameter `encode_multipart_related`, multiple files can be uploaded in a single POST. It's sometimes used to upload a complete series or study. _Details:_ * Path: ../studies/{study} _Details:_ * Accept: multipart/related; type="application/dicom"; transfer-syntax=* * Authorization: Bearer $token" -All three of the dcm files that we uploaded previously are part of the same study so the response should return all three instances. Validate that the response has a status code of OK and that all three instances are returned. +All three of the dcm files that uploaded previously are part of the same study, so the response should return all three instances. Validate that the response has a status code of OK and that all three instances are returned. ```python url = f'{base_url}/studies/{study_uid}' response = client.get(url, headers=headers) #, verify=False) ### Use the retrieved instances -The instances are retrieved as binary bytes. You can loop through the returned items and convert the bytes into a file that `pydicom` can read. +The instances are retrieved as binary bytes. You can loop through the returned items and convert the bytes into a file that `pydicom` can read as follows. ```python _Details:_ * Accept: application/dicom+json * Authorization: Bearer $token" -This series has two instances (green-square and red-triangle), so the response should return for both instances. 
Validate that the response has a status code of OK and that both instances metadata are returned. +This series has two instances (green-square and red-triangle), so the response should return for both instances. Validate that the response has a status code of OK and that the metadata of both instances are returned. ```python url = f'{base_url}/studies/{study_uid}/series/{series_uid}/metadata' response = client.get(url, headers=headers) #, verify=False) In the following examples, we search for items using their unique identifiers. You can also search for other attributes, such as PatientName. -Refer to the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md#supported-search-parameters) document for supported DICOM attributes. +Refer to the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md#supported-search-parameters) for supported DICOM attributes. ### Search for studies response = client.get(url, headers=headers, params=params) #, verify=False) > [!NOTE] > Delete is not part of the DICOM standard, but it has been added for convenience. -A 204 response code is returned when the deletion is successful. A 404 response code is returned if the item(s) never existed or are already deleted. +A 204 response code is returned when the deletion is successful. A 404 response code is returned if the items never existed or are already deleted. ### Delete a specific instance within a study and series _Details:_ * Headers: * Authorization: Bearer $token -This request deletes the red-triangle instance from the server. If it's successful, the response status code contains no content. +This request deletes the red-triangle instance from the server. If successful, the response status code contains no content. ```python headers = {"Authorization":bearer_token} _Details:_ * Headers: * Authorization: Bearer $token -This code example deletes the green-square instance (it's the only element left in the series) from the server. If it's successful, the response status code doesn't delete content. +This code example deletes the green-square instance from the server (it's the only element left in the series). If successful, the response status code doesn't delete content. ```python headers = {"Authorization":bearer_token} |
healthcare-apis | Dicomweb Standard Apis With Dicom Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md | Title: Access DICOMweb APIs to manage DICOM data in Azure Health Data Services description: Learn how to use DICOMweb APIs to store, review, search, and delete DICOM objects. Learn how to use custom APIs to track changes and assign unique tags to DICOM data.-+ Last updated 05/29/2024-+ # Access DICOMweb APIs to manage DICOM data -The DICOM® service allows you to store, review, search, and delete DICOM objects by using a subset of DICOMweb APIs. The DICOMweb APIs are web-based services that follow the DICOM standard. By using these APIs, you can access and manage your organization's DICOM data without requiring complex protocols or formats. +The DICOM® service allows you to store, review, search, and delete DICOM objects by using a subset of DICOMweb APIs. The DICOMweb APIs are web-based services that follow the DICOM standard. Using these APIs, you can access and manage your organization's DICOM data without requiring complex protocols or formats. The supported services are: In addition to the subset of DICOMweb APIs, the DICOM service supports these cus The DICOM service provides a web-based interface that follows REST (representational state transfer) principles. The REST API allows different applications or systems to communicate with each other using standard methods like GET, POST, PUT, and DELETE. To interact with the DICOM service, use any programming language that supports HTTP requests and responses. -Refer to the language-specific examples. You can view Postman collection examples in several languages including: +Refer to the language-specific examples. You can view Postman collection examples in several languages including the following. - Go - Java For more information about how to use Python with the DICOM service, see [Using Postman is an excellent tool for designing, building, and testing REST APIs. [Download Postman](https://www.postman.com/downloads/) to get started. For more information, see [Postman learning site](https://learning.postman.com/). -One important caveat with Postman and the DICOMweb standard is that Postman only supports uploading DICOM files by using the single-part payload defined in the DICOM standard. This caveat is because Postman can't support custom separators in a multipart/related POST request. For more information, see [Multipart POST not working for me # 576](https://github.com/postmanlabs/postman-app-support/issues/576). All examples in the Postman collection for uploading DICOM documents by using a multipart request are prefixed with **[won't work - see description]**. The examples for uploading by using a single-part request are included in the collection and are prefixed with **Store-Single-Instance**. +One important caution with Postman and the DICOMweb standard is that Postman only supports uploading DICOM files by using the single-part payload defined in the DICOM standard. This is because Postman can't support custom separators in a multipart/related POST request. For more information, see [Multipart POST not working for me # 576](https://github.com/postmanlabs/postman-app-support/issues/576). All examples in the Postman collection for uploading DICOM documents by using a multipart request are prefixed with **[won't work - see description]**. 
The examples for uploading by using a single-part request are included in the collection and are prefixed with **Store-Single-Instance**. To use the Postman collection, download it locally and then import it into Postman. To access the collection, see [Postman Collection Examples](https://github.com/microsoft/dicom-server/blob/main/docs/resources/Conformance-as-Postman.postman_collection.json). |
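Postman aside, the same single-part upload can be scripted directly. Here is a hedged Python sketch of a single-part STOW-RS store request, assuming the service accepts the `application/dicom` content type for single-part uploads as described in the conformance statement; the service URL, token, and file name are placeholders.

```python
import requests

base_url = "https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v2"
bearer_token = "<access-token>"  # placeholder - obtain a real token first

headers = {
    "Authorization": f"Bearer {bearer_token}",
    "Accept": "application/dicom+json",
    "Content-Type": "application/dicom",  # single-part payload, no multipart separators needed
}

# Read a local DICOM file (placeholder name) and post it as the raw request body.
with open("green-square.dcm", "rb") as dicom_file:
    response = requests.post(f"{base_url}/studies", headers=headers, data=dicom_file.read())

print(response.status_code)  # 200 indicates the instance was stored
print(response.json())       # DICOM JSON describing the stored instance
```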
healthcare-apis | Enable Diagnostic Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/enable-diagnostic-logging.md | Title: Enable diagnostic logging in the DICOM service - Azure Health Data Services description: This article explains how to enable diagnostic logging in the DICOM service.-+ Last updated 10/13/2023-+ # Enable audit and diagnostic logging in the DICOM service -In this article, you'll learn how to enable diagnostic logging in DICOM service and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements is a must. The feature in DICOM service enables diagnostic logs is the [Diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal. +In this article, you'll learn how to enable diagnostic logging in DICOM® service and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements is required. The feature in DICOM service that enables diagnostic logs is the [Diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal. ## Enable logs 1. To enable logging DICOM service, select your DICOM service in the Azure portal. 2. Select the **Activity log** on the left pane, and then select **Diagnostic settings**. - [ ![Screenshot of Azure activity log.](media/dicom-activity-log.png) ](media/dicom-activity-log.png#lightbox) + [![Screenshot of Azure activity log.](media/dicom-activity-log.png)](media/dicom-activity-log.png#lightbox) 3. Select **+ Add diagnostic setting**. - [ ![Screenshot of Add Diagnostic settings.](media/add-diagnostic-settings.png) ](media/add-diagnostic-settings.png#lightbox) + [![Screenshot of Add Diagnostic settings.](media/add-diagnostic-settings.png)](media/add-diagnostic-settings.png#lightbox) 4. Enter the **Diagnostic settings name**. - [ ![Screenshot of Configure Diagnostic settings.](media/configure-diagnostic-settings.png) ](media/configure-diagnostic-settings.png#lightbox) + [![Screenshot of Configure Diagnostic settings.](media/configure-diagnostic-settings.png)](media/configure-diagnostic-settings.png#lightbox) -5. Select the **Category** and **Destination** details for accessing the diagnostic logs. +5. Select the **Category** and **Destination** details for accessing the diagnostic logs: * **Send to Log Analytics workspace** in the Azure Monitor. You need to create your Logs Analytics workspace before you can select this option. For more information about the platform logs, see [Overview of Azure platform logs](../../azure-monitor/essentials/platform-logs-overview.md).- * **Archive to a storage account** for auditing or manual inspection. The storage account you want to use needs to be already created. + * **Archive to a storage account** for auditing or manual inspection. The storage account you want to use needs to already be created. * **Stream to an event hub** for ingestion by a third-party service or custom analytic solution. You need to create an event hub namespace and event hub policy before you can configure this step.- * **Send to partner solution** that you're working with as partner organization in Azure. 
For information about potential partner integrations, see [Azure partner solutions documentation](../../partner-solutions/overview.md) + * **Send to partner solution** that you're working with as a partner organization in Azure. For information about potential partner integrations, see [Azure partner solutions documentation](../../partner-solutions/overview.md) For information about supported metrics, see [Supported metrics with Azure Monitor](.././../azure-monitor/essentials/metrics-supported.md). 6. Select **Save**. - > [!Note] - > It might take up to 15 minutes for the first Logs to show in Log Analytics. Also, if the DICOM service is moved from one resource group or subscription to another, update the settings once the move is complete. + > It might take up to 15 minutes for the first Logs to appear in Log Analytics. Also, if the DICOM service is moved from one resource group or subscription to another, update the settings once the move is complete. For information on how to work with diagnostic logs, see [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md) ## Log details-The log schema used differs based on the destination. Log Analytics has a schema that differs from other destinations. Each log type has a schema that differs. +The log schema used differs based on the destination. Log Analytics has a schema that differs from other destinations. Each log type has a different schema. ### Audit log details #### Raw logs -The DICOM service returns the following fields in the audit log as seen when streamed outside of Log Analytics: +The DICOM service returns the following fields in the audit log as seen when streamed outside of Log Analytics. |Field Name |Type |Notes | ||||-|correlationId|String|Correlation ID -|operationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.) -|time|DateTime|Date and time of the event. -|resourceId|String| Azure path to the resource. -|identity|Dynamic|A generic property bag containing identity information (currently doesn't apply to DICOM). -|location|String|The location of the server that processed the request. -|uri|String|The request URI. -|resultType|String| The available values currently are Started, Succeeded, or Failed. -|resultSignature|Int|The HTTP Status Code (for example, 200) -|type|String|Type of log (it's always MicrosoftHealthcareApisAuditLog in this case). -|level|String|Log level (Informational, Error). +| correlationId | String | Correlation ID | +| operationName | String | Describes the type of operation (for example, Retrieve, Store, Query, etc.) | +|time | DateTime | Date and time of the event. | +|resourceId | String | Azure path to the resource. | +|identity | Dynamic | A generic property bag containing identity information (currently doesn't apply to DICOM). | +| location | String | The location of the server that processed the request. | +| uri | String | The request URI. | +| resultType | String | The available values currently are Started, Succeeded, or Failed. | +| resultSignature | Int | The HTTP Status Code (for example, 200) | +| type | String | Type of log (it's always MicrosoftHealthcareApisAuditLog in this case). | +| level | String | Log level (Informational, Error). | #### Log Analytics logs -The DICOM service returns the following fields in the audit sign-in Log Analytics: +The DICOM service returns the following fields in the audit sign-in Log Analytics. 
|Field Name |Type |Notes | ||||-|CorrelationId|String|Correlation ID -|OperationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.) -|TimeGenerated [UTC]|DateTime|Date and time of the event. -|_ResourceId|String| Azure path to the resource. -|Identity|Dynamic|A generic property bag containing identity information (currently doesn't apply to DICOM). -|Uri|String|The request URI. -|ResultType|String| The available values currently are Started, Succeeded, or Failed. -|StatusCode|Int|The HTTP Status Code (for example, 200) -|Type|String|Type of log (it's always AHDSDicomAuditLogs in this case). -|Level|String|Log level (Informational, Error). -|TenantId|String| Tenant ID. +| CorrelationId | String | Correlation ID | +| OperationName | String | Describes the type of operation (for example, Retrieve, Store, Query, etc.) | +| TimeGenerated [UTC] | DateTime | Date and time of the event. | +| _ResourceId | String | Azure path to the resource. | +| Identity | Dynamic | A generic property bag containing identity information (currently doesn't apply to DICOM). | +| Uri | String | The request URI. | +| ResultType | String | The available values currently are Started, Succeeded, or Failed. | +| StatusCode | Int | The HTTP Status Code (for example, 200) | +| Type|String | Type of log (it's always AHDSDicomAuditLogs in this case). | +| Level | String | Log level (Informational, Error). | +| TenantId | String | Tenant ID. | ### Diagnostic log details #### Raw logs -The DICOM service returns the following fields in the audit log as seen when streamed outside of Log Analytics: +The DICOM service returns the following fields in the audit log as seen when streamed outside of Log Analytics. |Field Name |Type |Notes | ||||-|correlationId|String|Correlation ID -|operationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.) -|time|DateTime|Date and time of the event. -|resultDescription|String|Description of the log entry. An example is a diagnostic log with a validation warning message when storing a file. -|resourceId|String| Azure path to the resource. -|identity|Dynamic|A generic property bag containing identity information (currently doesn't apply to DICOM). -|location|String|The location of the server that processed the request. -|properties|String|Additional information about the event in JSON array format. Examples include DICOM identifiers present in the request. -|level|String|Log level (Informational, Error). +| correlationId | String | Correlation ID | +| operationName | String | Describes the type of operation (for example, Retrieve, Store, Query, etc.) | +| time | DateTime | Date and time of the event. | +| resultDescription | String | Description of the log entry. An example is a diagnostic log with a validation warning message when storing a file. | +| resourceId | String | Azure path to the resource. | +| identity | Dynamic | A generic property bag containing identity information (currently doesn't apply to DICOM). | +| location | String | The location of the server that processed the request. | +| properties | String | Additional information about the event in JSON array format. Examples include DICOM identifiers present in the request. | +| level | String | Log level (Informational, Error). | #### Log Analytics logs -The DICOM service returns the following fields in the audit sign-in Log Analytics: +The DICOM service returns the following fields in the audit sign-in Log Analytics. 
|Field Name |Type |Notes | ||||-|CorrelationId|String|Correlation ID -|OperationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.) -|TimeGenerated|DateTime|Date and time of the event. -|Message|String|Description of the log entry. An example is a diagnostic log with a validation warning message when storing a file. -|Location|String|The location of the server that processed the request. -|Properties|String|Additional information about the event in JSON array format. Examples include DICOM identifiers present in the request. -|LogLevel|String|Log level (Informational, Error). +| CorrelationId | String | Correlation ID | +| OperationName | String | Describes the type of operation (for example, Retrieve, Store, Query, etc.) | +| TimeGenerated | DateTime | Date and time of the event. | +| Message | String | Description of the log entry. An example is a diagnostic log with a validation warning message when storing a file. | +| Location | String | The location of the server that processed the request. | +| Properties | String | Additional information about the event in JSON array format. Examples include DICOM identifiers present in the request. | +| LogLevel | String | Log level (Informational, Error). | ## Sample Log Analytics queries |
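Beyond running the sample queries interactively, the same tables can be queried programmatically. The following sketch uses the `azure-monitor-query` and `azure-identity` packages to count DICOM audit entries by operation in the `AHDSDicomAuditLogs` table described above; the workspace ID is a placeholder and the query itself is illustrative.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

workspace_id = "<log-analytics-workspace-id>"  # placeholder

# Count audit log entries per operation over the last 24 hours.
query = """
AHDSDicomAuditLogs
| summarize Count = count() by OperationName
| order by Count desc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))

# Assumes a fully successful query; each table exposes column names and rows.
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```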
healthcare-apis | Export Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-files.md | Title: Export DICOM files by using the export API of the DICOM service description: This how-to guide explains how to export DICOM files to an Azure Blob Storage account.-+ Last updated 10/30/2023-+ # Export DICOM files There are three steps to exporting data from the DICOM service: The first step to export data from the DICOM service is to enable a system-assigned managed identity. This managed identity is used to authenticate the DICOM service and give permission to the storage account used as the destination for export. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). -1. In the Azure portal, browse to the DICOM service that you want to export from and select **Identity**. +1. In the Azure portal, browse to the DICOM service that you want to export from, and select **Identity**. :::image type="content" source="media/dicom-export-identity.png" alt-text="Screenshot that shows selection of Identity view." lightbox="media/dicom-export-identity.png"::: The export API exposes one `POST` endpoint for exporting data. POST <dicom-service-url>/<version>/export ``` -Given a *source*, the set of data to be exported, and a *destination*, the location to which data is exported, the endpoint returns a reference to a new, long-running export operation. The duration of this operation depends on the volume of data to be exported. For more information about monitoring progress of export operations, see the [Operation status](#operation-status) section. +Given a *source*, the set of data to be exported, and a *destination* (the location to which data is exported), the endpoint returns a reference to a new, long-running export operation. The duration of this operation depends on the volume of data to be exported. For more information about monitoring progress of export operations, see the [Operation status](#operation-status) section. -Any errors encountered while you attempt to export are recorded in an error log. For more information, see the [Errors](#errors) section. +Any errors encountered while attempting to export are recorded in an error log. For more information, see the [Errors](#errors) section. #### Request The request body consists of the export source and destination. The only setting is the list of identifiers to export. | Property | Required | Default | Description |-| :- | :- | : | :- | +| -- | -- | - | -- | | `Values` | Yes | | A list of one or more DICOM studies, series, and/or SOP instance identifiers in the format of `"<StudyInstanceUID>[/<SeriesInstanceUID>[/<SOPInstanceUID>]]"` | #### Destination settings The only setting is the list of identifiers to export. The connection to the Blob Storage account is specified with `BlobContainerUri`. | Property | Required | Default | Description |-| :- | :- | : | :- | +| -- | -- | - | -- | | `BlobContainerUri` | No | `""` | The complete URI for the blob container | | `UseManagedIdentity` | Yes | `false` | A required flag that indicates whether managed identity should be used to authenticate to the blob container | Content-Type: application/json #### Response -The export API returns a `202` status code when an export operation is started successfully. 
The body of the response contains a reference to the operation, while the value of the `Location` header is the URL for the export operation's status (the same as `href` in the body). +The export API returns a `202` status code when an export operation is started successfully. The body of the response contains a reference to the operation. The value of the `Location` header is the URL for the export operation's status (the same as `href` in the body). Inside the destination container, use the path format `<operation id>/results/<study>/<series>/<sop instance>.dcm` to find the DCM files. Content-Type: application/json ## Errors -If there are any user errors when you export a DICOM file, the file is skipped and its corresponding error is logged. This error log is also exported alongside the DICOM files and the caller can review it. You can find the error log at `<export blob container uri>/<operation ID>/errors.log`. +If there are any user errors exporting a DICOM file, the file is skipped and its corresponding error is logged. This error log is also exported alongside the DICOM files, and the caller can review it. You can find the error log at `<export blob container uri>/<operation ID>/errors.log`. #### Format -Each line of the error log is a JSON object with the following properties. A given error identifier might appear multiple times in the log as each update to the log is processed *at least once*. +Each line of the error log is a JSON object with the following properties. A given error identifier might appear multiple times in the log, as each update to the log is processed *at least once*. | Property | Description | | | -- | |
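To tie the request and response together, here is a rough Python sketch that starts an export and polls the operation URL returned in the `Location` header. The service URL, token, identifiers, and container URI are placeholders, and the JSON envelope (`source`/`destination` objects with `type` and `settings`, and the property casing) is an assumption pieced together from the settings tables above, so confirm it against the article's full request example.

```python
import time

import requests

base_url = "https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v1"
headers = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

# Assumed body shape: a list of study/series/instance identifiers and a blob destination.
body = {
    "source": {
        "type": "identifiers",
        "settings": {
            # "<StudyInstanceUID>[/<SeriesInstanceUID>[/<SOPInstanceUID>]]"
            "Values": ["1.2.840.113619.2.1.1.1"]
        },
    },
    "destination": {
        "type": "azureblob",
        "settings": {
            "BlobContainerUri": "https://<storage-account>.blob.core.windows.net/<container>",
            "UseManagedIdentity": True,
        },
    },
}

response = requests.post(f"{base_url}/export", headers=headers, json=body)
response.raise_for_status()                # a 202 status code indicates the operation started
status_url = response.headers["Location"]  # URL for the export operation's status

# Poll the long-running operation until it leaves a non-terminal state.
while True:
    status = requests.get(status_url, headers={"Authorization": headers["Authorization"]}).json()
    print(status.get("status"))
    if status.get("status") not in ("notstarted", "running"):
        break
    time.sleep(10)
```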
healthcare-apis | Get Access Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-access-token.md | Title: Get an access token for the DICOM service in Azure Health Data Services description: Find out how to secure your access to the DICOM service with a token. Use the Azure command-line tool and unique identifiers to manage your medical images.-+ Last updated 10/13/2023-+ # Get an access token -To use the DICOM® service, users and applications need to prove their identity and permissions by getting an access token. An access token is a string that identifies a user or an application and grants them permission to access a resource. Using access tokens enhances security by preventing unauthorized access and reducing the need for repeated authentication. +To use the DICOM® service, users and applications need to prove their identity and permissions by getting an access token. An access token is a string that identifies a user or an application, and grants them permission to access a resource. Using access tokens enhances security by preventing unauthorized access and reducing the need for repeated authentication. ## Use the Azure command-line interface To assign roles and grant access to the DICOM service: 1. Register a client application in Microsoft Entra ID that acts as your identity provider and authentication mechanism. Use Azure portal, PowerShell, or Azure CLI to [register an application](dicom-register-application.md). 1. Assign one of the built-in roles for the DICOM data plane to the client application. The roles are: -- **DICOM Data Owner**. Gives full access to DICOM data.-- **DICOM Data Reader**. Allows read and search operations on DICOM data. +- **DICOM Data Owner**. Gives full access to DICOM data +- **DICOM Data Reader**. Allows read and search operations on DICOM data ## Get a token To get an access token using Azure CLI: #### Store a token in a variable -The DICOM service uses a `resource` or `Audience` with uniform resource identifier (URI) equal to the URI of the DICOM server `https://dicom.healthcareapis.azure.com`. You can obtain a token and store it in a variable (named `$token`) with the following command: +The DICOM service uses a `resource` or `Audience`, with uniform resource identifier (URI) equal to the URI of the DICOM server `https://dicom.healthcareapis.azure.com`. You can obtain a token and store it in a variable (named `$token`) with the following command. ```cURL $token=$(az account get-access-token --resource=https://dicom.healthcareapis.azure.com --query accessToken --output tsv) $token=$(az account get-access-token --resource=https://dicom.healthcareapis.azu * If you're using a local installation, sign in to the Azure CLI with the [az login](/cli/azure/reference-index#az-login) command. To finish authentication, follow the on-screen steps. For more information, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli). -* If prompted, install Azure CLI extensions on first use. For more information, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview). +* If prompted on first use, install Azure CLI extensions. For more information, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview). * Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade). 
## Use a token with the DICOM service -You can use a token with the DICOM service [using cURL](dicomweb-standard-apis-curl.md). Here's an example: +You can use a token with the DICOM service [using cURL](dicomweb-standard-apis-curl.md). Following is an example. ```cURL -X GET --header "Authorization: Bearer $token" https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v<version of REST API>/changefeed |
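The same token can also be acquired in code instead of with the Azure CLI. Here is an illustrative sketch (not from the article) using the `azure-identity` package with the DICOM audience URI shown above; the service URL is a placeholder.

```python
import requests
from azure.identity import DefaultAzureCredential

# The DICOM service audience; ".default" requests the resource's default scope.
credential = DefaultAzureCredential()
token = credential.get_token("https://dicom.healthcareapis.azure.com/.default")

# Placeholder service URL - replace with your workspace and DICOM service names.
url = "https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v2/changefeed"
response = requests.get(url, headers={"Authorization": f"Bearer {token.token}"})

print(response.status_code)
print(response.json())
```

If no other credential is available locally, signing in with `az login` as described above also makes the Azure CLI credential usable by `DefaultAzureCredential`.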
healthcare-apis | Get Started With Analytics Dicom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-analytics-dicom.md | Title: Get started using DICOM data in analytics workloads - Azure Health Data Services description: Learn how to use Azure Data Factory and Microsoft Fabric to perform analytics on DICOM data. -+ Last updated 10/13/2023-+ # Get started using DICOM data in analytics workloads -This article describes how to get started by using DICOM® data in analytics workloads with Azure Data Factory and Microsoft Fabric. +This article describes how to get started using DICOM® data in analytics workloads with Azure Data Factory and Microsoft Fabric. ## Prerequisites The pipeline in this example reads data from a DICOM service and writes its outp 1. For **Authentication type**, select **System Assigned Managed Identity**. -1. Enter the storage account details by entering the URL to the storage account manually. Or you can select the Azure subscription and storage account from dropdowns. +1. Enter the storage account details by entering the URL to the storage account manually. You can also select the Azure subscription and storage account from the dropdowns. 1. After you fill in the required fields, select **Test connection** to ensure the identity's roles are correctly configured. Data Factory pipelines are a collection of _activities_ that perform a task, lik ### Create a pipeline for DICOM data -If you created the DICOM service with Azure Data Lake Storage, you need to use a custom template to include a new `fileName` parameter in the metadata pipeline. Instead of using the template from the template gallery, follow these steps to configure the pipeline. +If you created the DICOM service with Azure Data Lake Storage, instead of using the template from the template gallery, you need to use a custom template to include a new `fileName` parameter in the metadata pipeline. To configure the pipeline follow these steps. 1. Download the [template](https://github.com/microsoft/dicom-server/blob/main/samples/templates/Copy%20DICOM%20Metadata%20Changes%20to%20ADLS%20Gen2%20in%20Delta%20Format.zip) from GitHub. The template file is a compressed (zipped) folder. You don't need to extract the files because they're already uploaded in compressed form. If you created the DICOM service with Azure Data Lake Storage, you need to use a ## Schedule a pipeline -Pipelines are scheduled by _triggers_. There are different types of triggers. _Schedule triggers_ allow pipelines to be triggered on a wall-clock schedule, which means they run at specific times of the day, such as every hour or every day at midnight. _Manual triggers_ trigger pipelines on demand, which means they run whenever you want them to. +Pipelines are scheduled by _triggers_. There are different types of triggers. _Schedule triggers_ allow pipelines to be triggered to run at specific times of the day, such as every hour, or every day at midnight. _Manual triggers_ trigger pipelines on demand, which means they run whenever you want them to. In this example, a _tumbling window trigger_ is used to periodically run the pipeline given a starting point and regular time interval. For more information about triggers, see [Pipeline execution and triggers in Azure Data Factory or Azure Synapse Analytics](../../data-factory/concepts-pipeline-execution-triggers.md). 
In this example, a _tumbling window trigger_ is used to periodically run the pip ### Configure trigger run parameters -Triggers define when to run a pipeline. They also include [parameters](../../data-factory/how-to-use-trigger-parameterization.md) that are passed to the pipeline execution. The **Copy DICOM Metadata Changes to Delta** template defines a few parameters that are described in the following table. If no value is supplied during configuration, the listed default value is used for each parameter. +Triggers define when a pipeline runs. They also include [parameters](../../data-factory/how-to-use-trigger-parameterization.md) that are passed to the pipeline execution. The **Copy DICOM Metadata Changes to Delta** template defines parameters that are described in the following table. If no value is supplied during configuration, the listed default value is used for each parameter. | Parameter name | Description | Default value |-| :- | :- | : | +| -- | -- | - | | BatchSize | The maximum number of changes to retrieve at a time from the change feed (maximum 200) | `200` | | ApiVersion | The API version for the Azure DICOM service (minimum 2) | `2` | | StartTime | The inclusive start time for DICOM changes | `0001-01-01T00:00:00Z` | Triggers define when to run a pipeline. They also include [parameters](../../dat > [!NOTE] > Only tumbling window triggers support the system variables:- > * `@trigger().outputs.windowStartTime` and - > * `@trigger().outputs.windowEndTime` + > * `@trigger().outputs.windowStartTime` and + > * `@trigger().outputs.windowEndTime`. > > Schedule triggers use different system variables: > * `@trigger().scheduledTime` and - > * `@trigger().startTime` + > * `@trigger().startTime`. > > Learn more about [trigger types](../../data-factory/concepts-pipeline-execution-triggers.md#trigger-type-comparison). After the trigger is published, it can be triggered manually by using the **Trig ## Monitor pipeline runs -You can monitor trigger runs and their associated pipeline runs on the **Monitor** tab. Here, you can browse when each pipeline ran and how long it took to run. You can also potentially debug any problems that arose. +You can monitor triggered runs and their associated pipeline runs on the **Monitor** tab. Here, you can browse when each pipeline ran and how long it took to run. You can also potentially debug any problems that arose. :::image type="content" source="media/data-factory-monitor.png" alt-text="Screenshot that shows the Monitor view with a list of pipeline runs." lightbox="media/data-factory-monitor.png"::: ## Microsoft Fabric -[Fabric](https://www.microsoft.com/microsoft-fabric) is an all-in-one analytics solution that sits on top of [Microsoft OneLake](/fabric/onelake/onelake-overview). With the use of a [Fabric lakehouse](/fabric/data-engineering/lakehouse-overview), you can manage, structure, and analyze data in OneLake in a single location. Any data outside of OneLake, written to Data Lake Storage Gen2, can be connected to OneLake as shortcuts to take advantage of Fabric's suite of tools. +[Fabric](https://www.microsoft.com/microsoft-fabric) is an all-in-one analytics solution that sits on top of [Microsoft OneLake](/fabric/onelake/onelake-overview). With the use of a [Fabric lakehouse](/fabric/data-engineering/lakehouse-overview), you can manage, structure, and analyze data in OneLake in a single location. 
Any data outside of OneLake, written to Data Lake Storage Gen2, can be connected to OneLake using shortcuts to take advantage of Fabric's suite of tools. ### Create shortcuts to metadata tables If you're using a [DICOM service with Data Lake Storage](dicom-data-lake.md), yo 1. Enter a **Shortcut Name** that describes the DICOM data. For example, **contoso-dicom-files**. -1. Enter the **Sub Path** that matches the name of the storage container and folder used by the DICOM service. For example, if you wanted to link to the root folder the Sub Path would be **/dicom/AHDS**. Note that the root folder is always `AHDS`, but you can optionally link to a child folder for a specific workspace or DICOM service instance. +1. Enter the **Sub Path** that matches the name of the storage container and folder used by the DICOM service. For example, if you wanted to link to the root folder the Sub Path would be **/dicom/AHDS**. The root folder is always `AHDS`, but you can optionally link to a child folder for a specific workspace or DICOM service instance. 1. Select **Create** to create the shortcut. If you're using a [DICOM service with Data Lake Storage](dicom-data-lake.md), yo After the tables are created in the lakehouse, you can query them from [Fabric notebooks](/fabric/data-engineering/how-to-use-notebook). You can create notebooks directly from the lakehouse by selecting **Open Notebook** from the menu bar. -On the notebook page, the contents of the lakehouse can still be viewed on the left side, including the newly added tables. At the top of the page, select the language for the notebook. The language can also be configured for individual cells. The following example uses Spark SQL. +On the notebook page, the contents of the lakehouse can be viewed on the left side, including newly added tables. At the top of the page, select the language for the notebook. The language can also be configured for individual cells. The following example uses Spark SQL. #### Query tables by using Spark SQL This query selects all the contents from the `instance` table. When you're ready :::image type="content" source="media/fabric-notebook.png" alt-text="Screenshot that shows a notebook with a sample Spark SQL query." lightbox="media/fabric-notebook.png"::: -After a few seconds, the results of the query appear in a table underneath the cell like the example shown here. The amount of time might be longer if this Spark query is the first in the session because the Spark context needs to be initialized. +After a few seconds, the results of the query appear in a table underneath the cell like the following example shown. The time might be longer if this Spark query is the first in the session because the Spark context needs to be initialized. :::image type="content" source="media/fabric-notebook-results.png" alt-text="Screenshot that shows a notebook with a sample Spark SQL query and results." lightbox="media/fabric-notebook-results.png"::: #### Access DICOM file data in notebooks -If you used the template to create the pipeline and created a shortcut to the DICOM file data, you can use the `filePath` column in the `instance` table to correlate instance metadata to file data. +If you used a template to create the pipeline and created a shortcut to the DICOM file data, you can use the `filePath` column in the `instance` table to correlate instance metadata to the file data. ``` SQL SELECT sopInstanceUid, filePath from instance |
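As a sketch of the notebook step described above, the following PySpark cell queries the `instance` table and collects the `filePath` values used to correlate metadata with the underlying DICOM files. It assumes it runs inside a Fabric notebook where the `spark` session is already provided, and the example path in the comment is illustrative.

```python
# Runs inside a Fabric notebook, where the `spark` session is provided.
# Query the Delta table created by the pipeline for instance metadata.
df = spark.sql("SELECT sopInstanceUid, filePath FROM instance LIMIT 10")
df.show(truncate=False)

# filePath points at the DICOM file reachable through the lakehouse shortcut
# (for example, Files/contoso-dicom-files/...), so it can be used to pair
# metadata rows with the underlying .dcm files.
paths = [row["filePath"] for row in df.collect()]
print(paths)
```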
healthcare-apis | Import Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/import-files.md | Title: Import DICOM files into the DICOM service description: Learn how to import DICOM files by using bulk import in Azure Health Data Services.-+ Last updated 10/05/2023-+ # Import DICOM files (preview) Bulk import is a quick way to add data to the DICOM® service. Importing DICOM files with the bulk import capability enables: -- **Backup and migration**: For example, your organization might have many DICOM instances stored in local or on-premises systems that you want to back up or migrate to the cloud for better security, scalability, and availability. Rather than uploading the data one by one, use bulk import to transfer the data faster and more efficiently. +- **Backup and migration**. For example, your organization might have many DICOM instances stored in local or on-premises systems, which you want to back up or migrate to the cloud for better security, scalability, and availability. Rather than uploading the data one by one, use bulk import to transfer the data faster and more efficiently. -- **Machine learning development**: For example, your organization might have a large dataset of DICOM instances that you want to use for training machine learning models. With bulk import, you can upload the data to the DICOM service and then access it from [Microsoft Fabric](get-started-with-analytics-dicom.md), [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning), or other tools.+- **Machine learning development**. For example, your organization might have a large dataset of DICOM instances that you want to use for training machine learning models. With bulk import, you can upload the data to the DICOM service and then access it from [Microsoft Fabric](get-started-with-analytics-dicom.md), [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning), or other tools. ## Prerequisites Before you perform a bulk import, you need to enable a system-assigned managed i You need to enable bulk import before you import data. -#### Use the Azure portal +### Use the Azure portal 1. In the Azure portal, go to the DICOM service and then select **Bulk Import** from the left pane. You need to enable bulk import before you import data. :::image type="content" source="media/import-enable.png" alt-text="Screenshot that shows the Bulk Import page with the toggle set to Enabled." lightbox="media/import-enable.png"::: -#### Use an Azure Resource Manager template +### Use an Azure Resource Manager template When you use an Azure Resource Manager template (ARM template), enable bulk import with the property named `bulkImportConfiguration`. -Here's an example of how to configure bulk import in an ARM template: +Following is an example of how to configure bulk import in an ARM template. ``` json { Within the new resource group, two resources are created: DICOM images are added to the DICOM service by copying them into the `import-container`. Bulk import monitors this container for new images and adds them to the DICOM service. If there are errors that prevent a file from being added successfully, the errors are copied to the `error-container` and an error message is written to the `error-queue`. 
-#### Grant write access to the import container +### Grant write access to the import container The user or account that adds DICOM images to the import container needs write access to the container by using the `Data Owner` role. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml). -#### Upload DICOM images to the import container +### Upload DICOM images to the import container -Data is uploaded to Azure Storage containers in many ways: +Data is uploaded to Azure Storage containers in many ways. - [Upload a blob with Azure Storage Explorer](../../storage/blobs/quickstart-storage-explorer.md#upload-blobs-to-the-container) - [Upload a blob with AzCopy](../../storage/common/storage-use-azcopy-blobs-upload.md) |
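Besides Storage Explorer and AzCopy, uploads can also be scripted. Here is a hedged sketch using the `azure-storage-blob` and `azure-identity` packages; the storage account URL and local folder are placeholders, while the `import-container` name follows the container naming described above.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.storage.blob import ContainerClient

# The caller needs write access on the import container (see the role note above).
account_url = "https://<storage-account>.blob.core.windows.net"
container = ContainerClient(account_url, "import-container", credential=DefaultAzureCredential())

# Upload every .dcm file in a local folder; bulk import picks up new blobs from this container.
local_folder = "./dicom-files"  # placeholder path
for name in os.listdir(local_folder):
    if name.lower().endswith(".dcm"):
        with open(os.path.join(local_folder, name), "rb") as data:
            container.upload_blob(name=name, data=data, overwrite=True)
            print(f"Uploaded {name}")
```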
healthcare-apis | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/overview.md | Title: Overview of the DICOM service in Azure Health Data Services description: The DICOM service is a cloud-based solution for storing, managing, and exchanging medical imaging data securely and efficiently with any DICOMwebΓäó-enabled systems or applications. Learn more about its benefits and use cases.-+ Last updated 10/13/2023-+ # What is the DICOM service? The DICOM service offers many benefits, including: ## Use imaging data to enable healthcare scenarios -To effectively treat patients, research treatments, diagnose illnesses, or get an overview of a patient's health history, organizations need to integrate data across several sources. The DICOM service enables imaging data to persist in the Microsoft cloud and allows it to reside with electronic health records (EHR) and healthcare device (IoT) data in the same Azure subscription. +To effectively treat patients, research treatments, diagnose illnesses, or get an overview of a patient's health history, organizations need to integrate data across several sources. The DICOM service enables imaging data to persist in the Microsoft cloud and allows it to reside with electronic health records (EHR) and healthcare device (IoT) data in the same Azure subscription. FHIR® supports integration of other types of data directly, or through references. With the DICOM service, organizations are able to store references to imaging data in FHIR and enable queries that cross clinical and imaging datasets. This capability enables organizations to deliver better healthcare. For example: -- **Image back-up**. Research institutions, clinics, imaging centers, veterinary clinics, pathology institutions, retailers, or organizations can use the DICOM service to back up their images with unlimited storage and access. There's no need to deidentify PHI data because the service is validated for PHI compliance.+- **Image back-up**. Research institutions, clinics, imaging centers, veterinary clinics, pathology institutions, retailers, or other organizations can use the DICOM service to back up their images with unlimited storage and access. There's no need to de-identify PHI data because the service is validated for PHI compliance. - **Image exchange and collaboration**. Share an image, a subset of images, or an entire image library instantly with or without related EHR data. The DICOM service enables organizations to manage medical imaging data with seve - **Studies Service support**. The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM studies, series, and instances. Microsoft includes the nonstandard delete transaction to enable a full resource lifecycle. -- **Worklist Service support**. The DICOM service supports the Push and Pull SOPs of the [Worklist Service (UPS-RS)](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_11). This service provides access to one Worklist containing Workitems, each of which represents a Unified Procedure Step (UPS).Studies Service+- **Worklist Service support**. The DICOM service supports the Push and Pull SOPs of the [Worklist Service (UPS-RS)](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_11). This service provides access to one Worklist containing Workitems, each of which represents a Unified Procedure Step (UPS) Studies Service. 
- **Extended query tags**. The DICOM service allows you to expand the list of tags specified in the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md) so you can index DICOM studies, series, and instances on standard or private DICOM tags. -- **Change feed**. The DICOM service enables you to access ordered, guaranteed, immutable, read-only logs of all changes that occur in the DICOM service. Client applications can read these logs at any time independently, in parallel and at their own pace.+- **Change feed**. The DICOM service enables you to access ordered, guaranteed, immutable, read-only logs of all changes that occur in the DICOM service. Client applications can read these logs at any time independently, in parallel, and at their own pace. - **DICOMcast**. DICOMcast is an [open-source capability](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md) that can be self-hosted in Azure. DICOMcast enables a single source of truth for clinical data and imaging metadata. With DICOMcast, the DICOM service can inject DICOM metadata into a FHIR service or FHIR server as an imaging study resource. -- **Export files**. The DICOM service allows you to [export DICOM data](export-dicom-files.md) in a file format, simplifying the process of using medical imaging in external workflows such as AI and machine learning. +- **Export files**. The DICOM service allows you to [export DICOM data](export-dicom-files.md) in a file format, simplifying the process of using medical imaging in external workflows, such as AI and machine learning. ## Prerequisites to deploy the DICOM service -Your organization needs an Azure subscription to configure and run the components required for the DICOM service. By default, the components are created inside of an Azure resource group to simplify management. Additionally, a Microsoft Entra account is required. For each instance of the DICOM service, Microsoft creates a combination of isolated and multitenant resources. +Your organization needs an Azure subscription to configure and run the components required for the DICOM service. To simplify management, by default the components are created inside an Azure resource group. Additionally, a Microsoft Entra account is required. For each instance of the DICOM service, Microsoft creates a combination of isolated and multitenant resources. ## Next steps |
healthcare-apis | Pull Dicom Changes From Change Feed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/pull-dicom-changes-from-change-feed.md | Title: Access DICOM Change Feed logs by using C# and the DICOM client package in Azure Health Data Services description: Learn how to use C# code to consume Change Feed, a feature of the DICOM service that provides logs of all the changes in your organization's medical imaging data. The code example uses the DICOM client package to access and process the Change Feed.-+ Last updated 1/18/2024-+ # Access DICOM Change Feed logs by using C# and the DICOM client package -The Change Feed capability enables you to go through the history of the DICOM® service and then act on the create and delete events. +The Change Feed capability enables you to go through the history of a DICOM® service and act on the create and delete events. You access the Change Feed by using REST APIs. These APIs, along with sample usage of Change Feed, are documented in the [DICOM Change Feed overview](change-feed-overview.md). The version of the REST API should be explicitly specified in the request URL as described in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md). ## Consume Change Feed -The C# code example shows how to consume Change Feed using the DICOM client package. +The following C# code example shows how to consume Change Feed using the DICOM client package. ```csharp const int limit = 10; |
healthcare-apis | References For Dicom Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md | Title: References for DICOM service - Azure Health Data Services description: This reference provides related resources for the DICOM service.-+ Last updated 06/03/2022-+ # DICOM service open-source projects This article describes our open-source projects on GitHub that provide source co ### DICOM server -* [Medical imaging server for DICOM](https://github.com/microsoft/dicom-server): Open-source version of the Azure Health Data Services DICOM service managed service. +* [Medical imaging server for DICOM](https://github.com/microsoft/dicom-server): An open-source version of the Azure Health Data Services DICOM service managed service. ### DICOM cast -* [Integrate clinical and imaging data](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md): DICOM cast allows synchronizing the data from the DICOM service to the FHIR service, which allows healthcare organization to integrate clinical and imaging data. DICOM cast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning. +* [Integrate clinical and imaging data](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md): DICOM cast allows synchronizing the data from the DICOM service to the FHIR® service, which allows healthcare organization to integrate clinical and imaging data. DICOM cast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data, and the ability to effectively create cohorts for medical studies, analytics, and machine learning. ### DICOM data anonymization -* [Anonymize DICOM metadata](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/DICOM-anonymization.md): A DICOM file not only contains a viewable image but also a header with a large variety of data elements. These meta-data elements include identifiable information about the patient, the study, and the institution. Sharing such sensitive data demands proper protection to ensure data safety and maintain patient privacy. DICOM Anonymization Tool helps anonymize metadata in DICOM files for this purpose. +* [Anonymize DICOM metadata](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/DICOM-anonymization.md): A DICOM file not only contains a viewable image but also a header with a large variety of data elements. These meta-data elements include identifiable information about the patient, the study, and the institution. Sharing sensitive data demands proper protection to ensure data safety and maintain patient privacy. The DICOM Anonymization Tool helps anonymize metadata in DICOM files for this purpose. ### Access imaging study resources on Power BI, Power Apps, and Dynamics 365 Customer Insights -* [Connect to a FHIR service from Power Query Desktop](/power-query/connectors/fhir/fhir): After provisioning DICOM service, FHIR service and synchronizing imaging study for a given patient via DICOM cast, you can use the POWER Query connector for FHIR to import and shape data from the FHIR server including imaging study resource. 
+* [Connect to a FHIR service from Power Query Desktop](/power-query/connectors/fhir/fhir): After provisioning DICOM service, FHIR service, and synchronizing an imaging study for a given patient via DICOM cast, you can use the POWER Query connector for FHIR to import and shape data from the FHIR server including imaging study resources. ### Convert imaging study data to hierarchical parquet files -* [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md): After you provision a DICOM service, FHIR service and synchronizing imaging study for a given patient via DICOM cast, you can use FHIR to Synapse Sync Agent to perform Analytics and Machine Learning on imaging study data by moving FHIR data to Azure Data Lake in near real time and making it available to a Synapse workspace. +* [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md): After you provision a DICOM service, FHIR service and synchronizing imaging study for a given patient via DICOM cast, you can use FHIR to Synapse Sync Agent to perform Analytics and Machine Learning on imaging study data by moving FHIR data to Azure Data Lake in near real time, and making it available to a Synapse workspace. ### Health Data Services workshop -* [Azure Health Data Services Workshop](https://github.com/microsoft/azure-health-data-services-workshop): This workshop presents a series of hands-on activities to help users gain new skills working with Azure Health Data Services capabilities. The DICOM service challenge includes deployment of the service, exploration of the core API capabilities, a Postman collection to simplify exploration, and instructions for configuring a ZFP DICOM viewer. +* [Azure Health Data Services Workshop](https://github.com/microsoft/azure-health-data-services-workshop): This workshop presents a series of hands-on activities to help users gain new skills working with Azure Health Data Services capabilities. The DICOM service challenge includes deployment of the service, exploration of the core API capabilities, a Postman collection to simplify exploration, and instructions for configuring a ZFP DICOM viewer. ### Using the DICOM service with the OHIF viewer -* [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, nondiagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides the guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service. +* [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, nondiagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides the guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service. ### Medical imaging network demo environment-* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-premises radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow. 
-+* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab/demo highlights how an organization with existing on-premises radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow. ## Next steps [Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md) |
healthcare-apis | Update Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/update-files.md | Title: Update files in the DICOM service in Azure Health Data Services description: Learn how to use the bulk update API in Azure Health Data Services to modify DICOM attributes for multiple files in the DICOM service. This article explains the benefits, requirements, and steps of the bulk update operation.-+ Last updated 1/18/2024-+ # Update DICOM files There are a few limitations when you use the bulk update operation: - A maximum of 50 studies can be updated in a single operation. - Only one bulk update operation can be performed at a time.-- You can't delete only the latest version of a study or revert back to the original version. +- You can't delete only the latest version of a study, or revert back to the original version. - You can't update any field from non-null to a null value. ## Use the bulk update operation-Bulk update is an asynchronous, long-running operation available at the studies endpoint. The request payload includes one or more studies to update, the set of attributes to update, and the new values for those attributes. +Bulk update is an asynchronous, long-running operation available at a study's endpoint. The request payload includes one or more studies to update, the set of attributes to update, and the new values for those attributes. ### Update instances in multiple studies The bulk update endpoint starts a long-running operation that updates all instances in each study with the specified attributes. Content-Type: application/json } ``` -If the operation fails to start successfully, the response includes information about the failure in the errors list, including UIDs of the failing instance(s). +If the operation fails to start successfully, the response includes information about the failure in the errors list, including UIDs of one or more failing instances. ```http { GET {dicom-service-url}/{version}/operations/{operationId} | Name | Type | Description | | | | -- |-| 200 (OK) | Operation | The operation with the specified ID is complete | -| 202 (Accepted) | Operation | The operation with the specified ID is running | -| 404 (Not Found) | | Operation not found | +| 200 (OK) | Operation | The operation with the specified ID is complete. | +| 202 (Accepted) | Operation | The operation with the specified ID is running. | +| 404 (Not Found) | | Operation not found | ## Retrieving study versions-The [Retrieve (WADO-RS)](dicom-services-conformance-statement-v2.md#retrieve-wado-rs) transaction allows you to retrieve both the original and latest version of a study, series, or instance. The latest version of a study, series, or instance is always returned by default. The original version is returned by setting the `msdicom-request-original` header to `true`. Here's an example request: +The [Retrieve (WADO-RS)](dicom-services-conformance-statement-v2.md#retrieve-wado-rs) transaction allows you to retrieve both the original and latest version of a study, series, or instance. By default, the latest version of a study, series, or instance is returned. The original version is returned by setting the `msdicom-request-original` header to `true`. An example request follows. 
```http GET {dicom-service-url}/{version}/studies/{study}/series/{series}/instances/{instance} Any attributes in the [Patient Identification Module](https://dicom.nema.org/dic ### Attributes automatically changed in bulk updates -When you perform a bulk update, the DICOM service updates the requested attributes and also two additional metadata fields. Here is the information that is updated automatically: +When you perform a bulk update, the DICOM service updates the requested attributes and two additional metadata fields automatically. Following is the information that is updated automatically. -| Tag | Attribute name | Description | Value +| Tag | Attribute name | Description | Value | | --| | | --| | (0002,0012) | Implementation Class UID | Uniquely identifies the implementation that wrote this file and its content. | 1.3.6.1.4.1.311.129 |-| (0002,0013) | Implementation Version Name | Identifies a version for an Implementation Class UID (0002,0012) | Assembly version of the DICOM service (e.g. 0.1.4785) | +| (0002,0013) | Implementation Version Name | Identifies a version for an Implementation Class UID (0002,0012) | Assembly version of the DICOM service (for example, 0.1.4785) | -Here, the UID `1.3.6.1.4.1.311.129` is a registered under [Microsoft OID arc](https://oidref.com/1.3.6.1.4.1.311) in IANA. +The UID `1.3.6.1.4.1.311.129` is a registered under [Microsoft OID arc](https://oidref.com/1.3.6.1.4.1.311) in IANA. #### Patient identification module attributes | Attribute Name | Tag | Description | | - | --| | | Patient's Name | (0010,0010) | Patient's full name |-| Patient ID | (0010,0020) | Primary hospital identification number or code for the patient. | -| Other Patient IDs| (0010,1000) | Other identification numbers or codes used to identify the patient. -| Type of Patient ID| (0010,0022) | The type of identifier in this item. Enumerated Values: TEXT RFID BARCODE Note that the identifier is coded as a string regardless of the type, not as a binary value. -| Other Patient Names| (0010,1001) | Other names used to identify the patient. -| Patient's Birth Name| (0010,1005) | Patient's birth name. -| Patient's Mother's Birth Name| (0010,1060) | Birth name of patient's mother. -| Medical Record Locator | (0010,1090)| An identifier used to find the patient's existing medical record (for example, film jacket). +| Patient ID | (0010,0020) | Primary hospital identification number or code for the patient | +| Other Patient IDs| (0010,1000) | Other identification numbers or codes used to identify the patient | +| Type of Patient ID| (0010,0022) | The type of identifier in this item; Enumerated Values: TEXT RFID BARCODE. Note that the identifier is coded as a string, _not_ as a binary value, regardless of the type. | +| Other Patient Names| (0010,1001) | Other names used to identify the patient | +| Patient's Birth Name| (0010,1005) | Patient's birth name | +| Patient's Mother's Birth Name| (0010,1060) | Birth name of patient's mother | +| Medical Record Locator | (0010,1090)| An identifier used to find the patient's existing medical record (for example, film jacket). | #### Patient demographic module attributes | Attribute Name | Tag | Description | | - | --| |-| Patient's Age | (0010,1010) | Age of the Patient. | -| Occupation | (0010,2180) | Occupation of the Patient. 
| -| Confidentiality Constraint on Patient Data Description | (0040,3001) | Special indication to the modality operator about confidentiality of patient information (for example, that they shouldn't use the patients name where other patients are present). | +| Patient's Age | (0010,1010) | Age of the Patient | +| Occupation | (0010,2180) | Occupation of the Patient | +| Confidentiality Constraint on Patient Data Description | (0040,3001) | Special indication to the modality operator about confidentiality of patient information; For example, that they shouldn't use the patient's name when other patients are present. | | Patient's Birth Date | (0010,0030) | Date of birth of the named patient | | Patient's Birth Time | (0010,0032) | Time of birth of the named patient |-| Patient's Sex | (0010,0040) | Sex of the named patient. | +| Patient's Sex | (0010,0040) | Sex of the named patient | | Quality Control Subject |(0010,0200) | Indicates whether or not the subject is a quality control phantom. | | Patient's Size | (0010,1020) | Patient's height or length in meters | | Patient's Weight | (0010,1030) | Weight of the patient in kilograms |-| Patient's Address | (0010,1040) | Legal address of the named patient | +| Patient's Address | (0010,1040) | Legal address of the named patient | | Military Rank | (0010,1080) | Military rank of patient |-| Branch of Service | (0010,1081) | Branch of the military. The country or regional allegiance might also be included (for example, U.S. Army). | -| Country of Residence | (0010,2150) | Country where a patient currently resides | -| Region of Residence | (0010,2152) | Region within patient's country of residence | -| Patient's Telephone Numbers | (0010,2154) | Telephone numbers at which the patient can be reached | -| Ethnic Group | (0010,2160) | Ethnic group or race of patient | -| Patient's Religious Preference | (0010,21F0) | The religious preference of the patient | +| Branch of Service | (0010,1081) | Branch of the military; The country or regional allegiance might also be included (for example, U.S. Army). | +| Country of Residence | (0010,2150) | Country where a patient currently resides | +| Region of Residence | (0010,2152) | Region within patient's country of residence | +| Patient's Telephone Numbers | (0010,2154) | Telephone numbers at which the patient can be reached | +| Ethnic Group | (0010,2160) | Ethnic group or race of patient | +| Patient's Religious Preference | (0010,21F0) | The religious preference of the patient | | Patient Comments | (0010,4000) | User-defined comments about the patient | -| Responsible Person | (0010,2297) | Name of person with medical decision making authority for the patient. | -| Responsible Person Role | (0010,2298) | Relationship of Responsible Person to the patient. | -| Responsible Organization | (0010,2299) | Name of organization with medical decision making authority for the patient. | -| Patient Species Description | (0010,2201) | The species of the patient. | -| Patient Breed Description | (0010,2292) | The breed of the patient. See Section C.7.1.1.1.1. | -| Breed Registration Number | (0010,2295) | Identification number of a veterinary patient within the registry. | -| Issuer of Patient ID | (0010,0021) | Identifier of the Assigning Authority (system, organization, agency, or department) that issued the Patient ID. +| Responsible Person | (0010,2297) | Name of a person with medical decision making authority for the patient. 
| +| Responsible Person Role | (0010,2298) | Relationship of Responsible Person to the patient | +| Responsible Organization | (0010,2299) | Name of an organization with medical decision making authority for the patient | +| Patient Species Description | (0010,2201) | The species of the patient | +| Patient Breed Description | (0010,2292) | The breed of the patient; See Section C.7.1.1.1.1. | +| Breed Registration Number | (0010,2295) | Identification number of a veterinary patient within the registry | +| Issuer of Patient ID | (0010,0021) | Identifier of the Assigning Authority (system, organization, agency, or department) that issued the Patient ID | #### General study module | Attribute Name | Tag | Description | | - | --| |-| Referring Physician's Name | (0008,0090) | Name of the patient's referring physician. | -| Accession Number | (0008,0050) | A RIS generated number that identifies the order for the Study. | -| Study Description | (0008,1030) | Institution-generated description or classification of the Study (component) performed. | +| Referring Physician's Name | (0008,0090) | Name of the patient's referring physician | +| Accession Number | (0008,0050) | A RIS generated number that identifies the order for the Study | +| Study Description | (0008,1030) | Institution-generated description or classification of the Study (component) performed | [!INCLUDE [DICOM trademark statements](../includes/healthcare-apis-dicom-trademark.md)] |
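For context on the retrieve call quoted at the top of this entry, the following sketch pulls a single instance's metadata and reads one of the patient attributes listed above to confirm a bulk update took effect. The service URL, API version, and UIDs are placeholders, and the `/metadata` suffix and token audience are assumptions based on the standard DICOMweb pattern rather than values taken from the article itself.

```powershell
# Placeholder values - replace with your DICOM service URL, API version, and UIDs
$dicomServiceUrl = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com"
$version  = "v2"
$study    = "<study-uid>"
$series   = "<series-uid>"
$instance = "<instance-uid>"

# Token for the DICOM service audience (Az.Accounts module)
$token = (Get-AzAccessToken -ResourceUrl "https://dicom.healthcareapis.azure.com").Token

# Retrieve the instance metadata as DICOM JSON (standard DICOMweb /metadata suffix assumed)
$uri = "$dicomServiceUrl/$version/studies/$study/series/$series/instances/$instance/metadata"
$headers = @{ Authorization = "Bearer $token"; Accept = "application/dicom+json" }
$metadata = Invoke-RestMethod -Uri $uri -Headers $headers

# Check an updatable attribute from the tables above, for example Patient's Name (0010,0010)
$metadata[0].'00100010'.Value[0].Alphabetic
```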
healthcare-apis | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md | Refer to the table for details about resolution dates or possible workarounds. |Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- |+|Customers accessing the FHIR Service via a private endpoint are experiencing difficulties, specifically receiving a 403 error when making API calls from within the vNet. This problem affects those with FHIR instances created post-August 19th that utilize private link.|August 22, 2024 11:00 am PST|Suggested workaround to unblock: (1) Create a Private DNS Zone for azurehealthcareapis.com under the same VNET. (2) Create a new recordset for the targeted FHIR service. | --| |FHIR Applications were down in EUS2 region|January 8, 2024 2 pm PST|--|January 8, 2024 4:15 pm PST| |API queries to FHIR service returned Internal Server error in UK south region |August 10, 2023 9:53 am PST|--|August 10, 2023 10:43 am PST|-|FHIR resources aren't queryable by custom search parameters even after reindex is successful.| July 2023| Suggested workaround is to create a support ticket to update the status of custom search parameters after reindex is successful.|--| + ## Related content |
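The private endpoint workaround above can be scripted. The sketch below, with hypothetical resource group, virtual network, record name, and IP address values, creates the private DNS zone and an A record with the Az.PrivateDns cmdlets; adjust the zone and record names to match your FHIR service host name and private endpoint.

```powershell
# Hypothetical names - replace with your resource group, virtual network, FHIR host name, and private IP
$rg     = "MyResourceGroup"
$vnetId = (Get-AzVirtualNetwork -Name "MyVNet" -ResourceGroupName $rg).Id

# 1. Create a Private DNS Zone for azurehealthcareapis.com and link it to the same virtual network
$zone = New-AzPrivateDnsZone -ResourceGroupName $rg -Name "azurehealthcareapis.com"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $rg -ZoneName $zone.Name -Name "fhir-vnet-link" -VirtualNetworkId $vnetId

# 2. Create a record set that points the FHIR service host at the private endpoint IP
$record = New-AzPrivateDnsRecordConfig -Ipv4Address "10.0.0.4"
New-AzPrivateDnsRecordSet -ResourceGroupName $rg -ZoneName $zone.Name -Name "myworkspace-myfhir.fhir" `
    -RecordType A -Ttl 3600 -PrivateDnsRecords $record
```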
iot-dps | How To Control Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-control-access.md | Here are the service functions exposed on the endpoints: | `{your-service}.azure-devices-provisioning.net/enrollmentGroups` |Provides operations for managing device enrollment groups. | | `{your-service}.azure-devices-provisioning.net/registrations/{id}` |Provides operations for retrieving and managing the status of device registrations. | - As an example, a service generated using a pre-created shared access policy called `enrollmentread` would create a token with the following parameters: * resource URI: `{mydps}.azure-devices-provisioning.net`, As an example, a service generated using a pre-created shared access policy call * policy name: `enrollmentread`, * any expiration time. -![Create a shared access policy for your Device Provisioning Service instance in the portal][img-add-shared-access-policy] - ```javascript var endpoint ="mydps.azure-devices-provisioning.net"; var policyName = 'enrollmentread'; The following table lists the permissions you can use to control access to your <!-- links and images --> -[img-add-shared-access-policy]: ./media/how-to-control-access/how-to-add-shared-access-policy.PNG [lnk-sdks]: ../iot-hub/iot-hub-devguide-sdks.md [lnk-azure-resource-manager]: ../azure-resource-manager/management/overview.md [lnk-resource-provider-apis]: /rest/api/iot-dps/ |
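The token parameters listed in this entry (resource URI, policy name, expiration time) map onto the shared access signature format used by the Device Provisioning Service endpoints. The sketch below builds such a token in PowerShell; the key value is a placeholder and the one-hour expiry is an arbitrary choice.

```powershell
# Placeholder values - the resource URI and policy name follow the example above
$resourceUri = "mydps.azure-devices-provisioning.net"
$policyName  = "enrollmentread"
$key         = "<base64-shared-access-key>"
$expiry      = [DateTimeOffset]::UtcNow.AddHours(1).ToUnixTimeSeconds()

# Sign "<url-encoded resource URI>`n<expiry>" with the policy key (HMAC-SHA256)
$stringToSign = [Uri]::EscapeDataString($resourceUri) + "`n" + $expiry
$hmac = [System.Security.Cryptography.HMACSHA256]::new([Convert]::FromBase64String($key))
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

# Assemble the SAS token expected by the service endpoints listed above
"SharedAccessSignature sr=$([Uri]::EscapeDataString($resourceUri))&sig=$([Uri]::EscapeDataString($signature))&se=$expiry&skn=$policyName"
```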
iot-hub-device-update | Device Update Configuration File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configuration-file.md | When installing Debian agent on an IoT Device with a Linux OS, modify the `/etc/ "do" ], "iotHubProtocol": "mqtt",- "compatPropertyNames":"manufacturer,model,location,language" <The property values must be in lower case only>, + "compatPropertyNames":"manufacturer,model,location,environment" <The property values must be in lower case only>, "manufacturer": <Place your device info manufacturer here>, "model": <Place your device info model here>, "agents": [ |
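Because the `compatPropertyNames` values must be lowercase only, a quick check like the following can catch mistakes before restarting the agent. The config path is an assumption (a common location for the Debian agent); adjust it for your device.

```powershell
# Assumed Device Update agent config path - adjust for your device
$configPath = "/etc/adu/du-config.json"
$config = Get-Content -Raw -Path $configPath | ConvertFrom-Json

# compatPropertyNames is a comma-separated string; flag any entry that isn't all lowercase
$config.compatPropertyNames.Split(',') |
    Where-Object { $_ -cne $_.ToLowerInvariant() } |
    ForEach-Object { Write-Warning "compatPropertyNames entry '$_' must be lowercase" }
```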
iot | Concepts Model Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-model-discovery.md | Title: Use IoT Plug and Play models in a solution | Microsoft Docs description: As a solution builder, learn about how you can use IoT Plug and Play models in your IoT solution. Previously updated : 03/13/2024 Last updated : 08/30/2024 content-encoding:utf-8 ## Retrieve a model definition -A solution uses model ID identified above to retrieve the corresponding model definition. +A solution uses the model ID identified previously to retrieve the corresponding model definition. A solution can get the model definition by using one of the following options: dtmi:com:example:Thermostat;1 dtmi:azure:DeviceManagement:DeviceInformation;1 ``` -The `ModelsRepositoryClient` can be configured to query a custom DMR -available through http(s)- and to specify the dependency resolution by using the `ModelDependencyResolution` flag: +The `ModelsRepositoryClient` can be configured to query a custom DMR--available through https--and to specify the dependency resolution by using the `ModelDependencyResolution` flag: - Disabled. Returns the specified interface only, without any dependency. - Enabled. Returns all the interfaces in the dependency chain After you identify the model ID for a new device connection, follow these steps: ## Next steps -Now that you've learned how to integrate IoT Plug and Play models in an IoT solution, some suggested next steps are: +Now that you learned how to integrate IoT Plug and Play models in an IoT solution, some suggested next steps are: - [Interact with a device from your solution](tutorial-service.md) - [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin) |
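As a rough illustration of resolving a model ID against a DMR that is reachable over HTTPS, the sketch below applies the repository path convention (lowercase, `:` becomes `/`, `;` becomes `-`) and fetches the interface; the public endpoint `https://devicemodels.azure.com` is used here only as an example target.

```powershell
# Convert a DTMI into its repository path: lowercase, ':' -> '/', ';' -> '-'
$dtmi = "dtmi:com:example:Thermostat;1"
$path = $dtmi.ToLowerInvariant().Replace(":", "/").Replace(";", "-") + ".json"

# Fetch the interface from a DMR endpoint available through https
$repository = "https://devicemodels.azure.com"
$model = Invoke-RestMethod -Uri "$repository/$path"

# Inspect the model ID and any component dependencies that would need resolving
$model.'@id'
$model.contents | Where-Object { $_.'@type' -contains 'Component' } | Select-Object name, schema
```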
iot | Concepts Model Parser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-model-parser.md | - Title: Understand the Azure Digital Twins model parser | Microsoft Docs -description: As a developer, learn how to use the DTDL parser to validate models. -- Previously updated : 1/23/2024------# Understand the digital twins model parser --The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL v2 or v3 model. The DTDL model may be defined in multiple files. --## Install the DTDL model parser --The parser is available in NuGet.org with the ID: [DTDLParser](https://www.nuget.org/packages/DTDLParser). To install the parser, use any compatible NuGet package manager such as the one in Visual Studio or in the `dotnet` CLI. --```bash -dotnet add package DTDLParser -``` --> [!NOTE] -> At the time of writing, the parser version is `1.0.52`. --## Use the parser to validate and inspect a model --The DTDLParser is a library that you can use to: --- Determine whether one or more models are valid according to the language v2 or v3 specifications.-- Identify specific modeling errors.-- Inspect model contents.--A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files that define a model and then validate all the files as a whole, including any references between the files. --The [DTDLParser for .NET](https://github.com/digitaltwinconsortium/DTDLParser) repository includes the following samples that illustrate the use of the parser: --- [DTDLParserResolveSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserResolveSample) shows how to parse an interface with external references, resolve the dependencies using the `Azure.IoT.ModelsRepository` client.-- [DTDLParserJSInteropSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserJSInteropSample) shows how to use the DTDL Parser from JavaScript running in the browser, using .NET JSInterop.--The DTDLParser for .NET repository also includes a [collection of tutorials](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/tutorials/README.md) that show you how to use the parser to validate and inspect models. --## Next steps --The model parser API reviewed in this article enables many scenarios to automate or validate tasks that depend on DTDL models. For example, you could dynamically build a UI from the information in the model. |
iot | Concepts Modeling Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-modeling-guide.md | Title: Understand IoT Plug and Play device models | Microsoft Docs description: Understand the Digital Twins Definition Language (DTDL) modeling language for IoT Plug and Play devices. The article describes primitive and complex datatypes, reuse patterns that use components and inheritance, and semantic types. The article provides guidance on the choice of device twin model identifier and tooling support for model authoring. Previously updated : 1/23/2024 Last updated : 08/30/2024 Because the geospatial types are array-based, they can't currently be used in pr ## Semantic types -The data type of a property or telemetry definition specifies the format of the data that a device exchanges with a service. The semantic type provides information about telemetry and properties that an application can use to determine how to process or display a value. Each semantic type has one or more associated units. For example, celsius and fahrenheit are units for the temperature semantic type. IoT Central dashboards and analytics can use the semantic type information to determine how to plot telemetry or property values and display units. To learn how you can use the model parser to read the semantic types, see [Understand the digital twins model parser](concepts-model-parser.md). +The data type of a property or telemetry definition specifies the format of the data that a device exchanges with a service. The semantic type provides information about telemetry and properties that an application can use to determine how to process or display a value. Each semantic type has one or more associated units. For example, celsius and fahrenheit are units for the temperature semantic type. IoT Central dashboards and analytics can use the semantic type information to determine how to plot telemetry or property values and display units. To learn how you can use the model parser to read the semantic types, see [Understand the Digital Twins model parser](#understand-the-digital-twins-model-parser). The following snippet shows an example telemetry definition that includes semantic type information. The semantic type `Temperature` is added to the `@type` array, and the `unit` value, `degreeCelsius` is one of the valid units for the semantic type: The following snippet shows an example telemetry definition that includes semant ## Localization -Applications, such as IoT Central, use information in the model to dynamically build a UI around the data that's exchanged with an IoT Plug and Play device. For example, tiles on a dashboard can display names and descriptions for telemetry, properties, and commands. +Applications, such as IoT Central, use information in the model to dynamically build a UI around the data exchanged with an IoT Plug and Play device. For example, tiles on a dashboard can display names and descriptions for telemetry, properties, and commands. The optional `description` and `displayName` fields in the model hold strings intended for use in a UI. These fields can hold localized strings that an application can use to render a localized UI. There's a DTDL authoring extension for VS Code that supports both DTDL v2 and DT To install the DTDL extension for VS Code, go to [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl). You can also search for **DTDL** in the **Extensions** view in VS Code. 
-When you've installed the extension, use it to help you author DTDL model files in VS Code: +After you install the extension, use it to help you author DTDL model files in VS Code: - The extension provides syntax validation in DTDL model files, highlighting errors as shown on the following screenshot: Applications, such as IoT Central, use device models. In IoT Central, a model is > [!NOTE] > IoT Central defines some extensions to the DTDL language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). -A custom solution can use the [digital twins model parser](concepts-model-parser.md) to understand the capabilities of a device that implements the model. To learn more, see [Use IoT Plug and Play models in an IoT solution](concepts-model-discovery.md). +A custom solution can use the [Digital Twins model parser](#understand-the-digital-twins-model-parser) to understand the capabilities of a device that implements the model. To learn more, see [Use IoT Plug and Play models in an IoT solution](concepts-model-discovery.md). ### Version IoT Central implements more versioning rules for device models. If you version a ### Publish -As of February 2024, the Azure Certified Device program has been retired. Therefore, Microsoft is no longer accepting submissions of DTDL models to the[Azure IoT plug and play models](https://github.com/Azure/iot-plugandplay-models) repository. +As of February 2024, the Azure Certified Device program is retired. Therefore, Microsoft is no longer accepting submissions of DTDL models to the[Azure IoT plug and play models](https://github.com/Azure/iot-plugandplay-models) repository. If you want to set up your own model repository, you can use the [Azure IoT plug and play models tools](https://github.com/Azure/iot-plugandplay-models-tools) repository. This repository includes the code for the `dmr-client` CLI tool that can validate, import, and expand DTDL models. This tool also lets you index model repositories that follow the device model repository conventions. The following list summarizes some key constraints and limits on models: - An interface can extend at most two other interfaces. - A component can't contain another component. +## Understand the Digital Twins model parser ++The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL v2 or v3 model. The DTDL model can be defined in multiple files. ++### Install the DTDL model parser ++The parser is available in NuGet.org with the ID: [DTDLParser](https://www.nuget.org/packages/DTDLParser). To install the parser, use any compatible NuGet package manager such as the one in Visual Studio or in the `dotnet` CLI. ++```bash +dotnet add package DTDLParser +``` ++> [!NOTE] +> At the time of writing, the parser version is `1.0.52`. ++### Use the parser to validate and inspect a model ++The DTDLParser is a library that you can use to: ++- Determine whether one or more models are valid according to the language v2 or v3 specifications. +- Identify specific modeling errors. +- Inspect model contents. ++A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files that define a model and then validate all the files as a whole, including any references between the files. 
++The [DTDLParser for .NET](https://github.com/digitaltwinconsortium/DTDLParser) repository includes the following samples that illustrate the use of the parser: ++- [DTDLParserResolveSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserResolveSample) shows how to parse an interface with external references, resolve the dependencies using the `Azure.IoT.ModelsRepository` client. +- [DTDLParserJSInteropSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserJSInteropSample) shows how to use the DTDL Parser from JavaScript running in the browser, using .NET JSInterop. ++The DTDLParser for .NET repository also includes a [collection of tutorials](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/tutorials/README.md) that show you how to use the parser to validate and inspect models. ++The model parser API enables many scenarios to automate or validate tasks that depend on DTDL models. For example, you could dynamically build a UI from the information in the model. + ## Next steps -Now that you've learned about device modeling, here are some more resources: +Now that you learned about device modeling, here are some more resources: - [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) - [Model repositories](./concepts-model-discovery.md) |
lab-services | Add Lab Creator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/add-lab-creator.md | |
lab-services | Approaches For Custom Image Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/approaches-for-custom-image-creation.md | |
lab-services | Azure Polices For Lab Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/azure-polices-for-lab-services.md | Title: Azure Policies for Lab Services description: Learn how to use Azure Policy to use built-in policies for Azure Lab Services to make sure your labs are compliant with your requirements. --++ Last updated 11/08/2022 |
lab-services | Class Type Adobe Creative Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-adobe-creative-cloud.md | |
lab-services | Class Type Autodesk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-autodesk.md | |
lab-services | Class Type Big Data Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-big-data-analytics.md | |
lab-services | Class Type Rstudio Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-windows.md | |
lab-services | Class Type Solidworks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-solidworks.md | |
lab-services | Class Type Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-sql-server.md | |
lab-services | Class Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-types.md | description: Learn about different example class types for which you can set up --++ Last updated 04/24/2023 |
lab-services | Classroom Labs Fundamentals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md | |
lab-services | Concept Lab Accounts Versus Lab Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-accounts-versus-lab-plans.md | |
lab-services | Concept Lab Services Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-role-based-access-control.md | |
lab-services | Concept Lab Services Supported Networking Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-supported-networking-scenarios.md | |
lab-services | Concept Migrate From Lab Accounts Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-migrate-from-lab-accounts-roles.md | |
lab-services | Concept Migrating Physical Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-migrating-physical-labs.md | |
lab-services | Connect Virtual Machine Linux X2go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-linux-x2go.md | description: Learn how to use X2Go for Linux virtual machines in a lab in Azure --++ Last updated 04/24/2023 |
lab-services | Create And Configure Labs Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/create-and-configure-labs-admin.md | Title: Configure regions for labs description: Learn how to change the region of a lab. --++ Last updated 06/17/2022 |
lab-services | Hackathon Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/hackathon-labs.md | Title: Use Azure Lab Services for hackathon description: Learn how to use Azure Lab Services for creating labs that you can use for running hackathons. --++ Last updated 05/22/2023 |
lab-services | How To Add Lab Creator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-add-lab-creator.md | description: Learn how to grant a user access to create labs. --++ Last updated 06/27/2024 |
lab-services | How To Bring Custom Linux Image Vhd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-bring-custom-linux-image-vhd.md | description: Learn how to import a Linux custom image from your physical lab env --++ Last updated 05/22/2023 |
lab-services | How To Bring Custom Windows Image Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-bring-custom-windows-image-azure-vm.md | Title: Create a lab from a Windows Azure VM description: Learn how to create a lab in Azure Lab Services from an existing Windows-based Azure virtual machine. --++ Last updated 05/17/2023 |
lab-services | How To Configure Auto Shutdown Lab Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-auto-shutdown-lab-plans.md | description: Learn how to enable or disable automatic shutdown of lab VMs in Azu --++ Last updated 03/01/2023 |
lab-services | How To Configure Canvas For Lab Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-canvas-for-lab-plans.md | Title: Configure Canvas to use Azure Lab Services description: Learn how to configure Canvas to use Azure Lab Services. Last updated 12/16/2022--++ |
lab-services | How To Configure Teams For Lab Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-teams-for-lab-plans.md | Title: Configure Teams to use Azure Lab Services description: Learn how to configure Microsoft Teams to use Azure Lab Services. Last updated 11/15/2022--++ |
lab-services | How To Connect Peer Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-peer-virtual-network.md | |
lab-services | How To Connect Vnet Injection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-vnet-injection.md | |
lab-services | How To Determine Your Quota Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-determine-your-quota-usage.md | Title: How to determine your quota usage description: Learn how to determine where the cores for your subscription are used and if you have any spare capacity against your quota. --++ Last updated 10/11/2022 |
lab-services | How To Enable Shutdown Disconnect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-shutdown-disconnect.md | description: Learn how to enable or disable automatic shutdown of lab VMs in Azu --++ Last updated 03/01/2023 |
lab-services | How To Manage Lab Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-lab-plans.md | |
lab-services | How To Manage Lab Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-lab-users.md | |
lab-services | How To Manage Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-labs.md | Title: View and manage labs description: Learn how to create a lab, configure a lab, view all the labs, or delete a lab. --++ Last updated 07/04/2023 |
lab-services | How To Manage Vm Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-vm-pool.md | |
lab-services | How To Migrate Lab Acounts To Lab Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-migrate-lab-acounts-to-lab-plans.md | Title: Migrate lab accounts to lab plans description: 'Learn how to migrate lab accounts to lab plans in Azure Lab Services.' --++ Last updated 08/07/2023 |
lab-services | How To Prepare Windows Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md | description: Prepare a Windows-based lab template in Azure Lab Services. Configu --++ Last updated 05/17/2023 |
lab-services | How To Reset And Redeploy Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-reset-and-redeploy-vm.md | description: Learn how you can troubleshoot your lab VM in Azure Lab Services by --++ Last updated 09/28/2023 <!-- As a student, I want to be able to troubleshoot connectivity problems with my VM so that I can get back up and running quickly, without having to escalate an issue --> |
lab-services | How To Setup Lab Gpu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu.md | |
lab-services | How To Use Restrict Allowed Virtual Machine Sku Sizes Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md | Title: Restrict allowed lab VM sizes description: Learn how to use the Lab Services should restrict allowed virtual machine SKU sizes Azure Policy to restrict educators to specified virtual machine sizes for their labs. --++ Last updated 08/28/2023 |
lab-services | How To Use Shared Image Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-shared-image-gallery.md | Title: Use an Azure compute gallery in Azure Lab Services description: Learn how to use an Azure compute gallery in a lab plan. A compute gallery lets you share a VM image, which can be reused to create new labs. Last updated 03/06/2022--++ # Use an Azure compute gallery in Azure Lab Services |
lab-services | How To Windows Shutdown | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-windows-shutdown.md | Title: Control shutdown for Windows lab VMs description: Remove the shutdown command from the Windows Start menu in a lab virtual machine in Azure Lab Services. --++ Last updated 06/02/2023 |
lab-services | Lab Services Within Canvas Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-within-canvas-overview.md | Title: Azure Lab Services within Canvas description: Learn about the benefits of using Azure Lab Services in Canvas. --++ Last updated 06/02/2023 |
lab-services | Lab Services Within Teams Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-within-teams-overview.md | Title: Azure Lab Services within Microsoft Teams description: Learn about the benefits of using Azure Lab Services in Microsoft Teams. --++ Last updated 06/02/2023 |
lab-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md | Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 02/06/2024 --++ |
lab-services | Troubleshoot Access Lab Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot-access-lab-vm.md | |
lab-services | Troubleshoot Lab Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot-lab-creation.md | |
lab-services | Tutorial Access Lab Virtual Machine Teams Canvas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-access-lab-virtual-machine-teams-canvas.md | |
lab-services | Tutorial Connect Lab Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-connect-lab-virtual-machine.md | |
lab-services | Tutorial Setup Lab Teams Canvas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-teams-canvas.md | |
lab-services | Upload Custom Image Shared Image Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/upload-custom-image-shared-image-gallery.md | |
load-balancer | Load Balancer Nat Pool Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-nat-pool-migration.md | Title: Azure Load Balancer NAT Pool to NAT Rule Migration -description: Process for migrating NAT Pools to NAT Rules on Azure Load Balancer. + Title: Migrate from Inbound NAT rules version 1 to version 2 +description: Learn how to migrate Azure Load balancer from inbound NAT rules version 1 to version 2. -+ Previously updated : 06/26/2024-- Last updated : 08/22/2024+ -# Tutorial: Migrate from Inbound NAT Pools to NAT Rules +# Migrate from Inbound NAT rules version 1 to version 2 -Azure Load Balancer NAT Pools are the legacy approach for automatically assigning Load Balancer front end ports to each instance in a Virtual Machine Scale Set. [NAT Rules](inbound-nat-rules.md) on Standard SKU Load Balancers have replaced this functionality with an approach that is both easier to manage and faster to configure. +An [inbound NAT rule](inbound-nat-rules.md) is used to forward traffic from a load balancer's frontend to one or more instances in the backend pool. These rules provide a 1:1 mapping between the load balancer's frontend IP address and backend instances. There are currently two versions of Inbound NAT rules, version 1 and version 2. -## Why Migrate to NAT Rules? +## NAT rule version 1 -NAT Rules provide the same functionality as NAT Pools, but have the following advantages: -* NAT Rules can be managed using the Portal -* NAT Rules can leverage Backend Pools, simplifying configuration -* NAT Rules configuration changes apply more quickly than NAT Pools -* NAT Pools cannot be used in conjunction with user-configured NAT Rules +[Version 1](inbound-nat-rules.md) is the legacy approach for assigning an Azure Load Balancer's frontend port to each backend instance. Rules are applied to the backend instance's network interface card (NIC). For Azure Virtual Machine Scale Sets instances, inbound NAT rules are automatically created/deleted as new instances are scaled up/down. -## Migration Process +## NAT rule version 2 -The migration process will create a new Backend Pool for each Inbound NAT Pool existing on the target Load Balancer. A corresponding NAT Rule will be created for each NAT Pool and associated with the new Backend Pool. Existing Backend Pool membership will be retained. +[Version 2](inbound-nat-rules.md) of Inbound NAT rules provides the same feature set as version 1, with extra benefits. -> [!IMPORTANT] -> The migration process removes the Virtual Machine Scale Set(s) from the NAT Pools before associating the Virtual Machine Scale Set(s) with the new NAT Rules. This requires an update to the Virtual Machine Scale Set(s) model, which may cause a brief downtime while instances are upgraded with the model. +- Simplified deployment experience and optimized updates. + - Inbound NAT rules now target the backend pool of the load balancer and no longer require a reference on the virtual machine's NIC. Previously on version 1, both the load balancer and the virtual machine's NIC needed to be updated whenever the Inbound NAT rule was changed. Version 2 only requires a single call on the load balancer's configuration, resulting in optimized updates. +- Easily retrieve port mapping between Inbound NAT rules and backend instances. + - With the legacy offering, to retrieve the port mapping between an Inbound NAT rule and a virtual machine instance, the rule would need to be correlated with the virtual machine's NIC.
Version 2 injects the port mapping between the rule and backend instance directly into the load balancer's configuration. -> [!NOTE] -> Frontend port mapping to Virtual Machine Scale Set instances may change with the move to NAT Rules, especially in situations where a single NAT Pool has multiple associated Virtual Machine Scale Sets. The new port assignment will align sequentially to instance ID numbers; when there are multiple Virtual Machine Scale Sets, ports will be assigned to all instances in one scale set, then the next, continuing. +## How do I know if I'm using version 1 of Inbound NAT rules? -> [!NOTE] -> Service Fabric Clusters take significantly longer to update the Virtual Machine Scale Set model (up to an hour). +The easiest way to identify if your deployments are using version 1 of the feature is by inspecting the load balancer's configuration. If either the `InboundNATPool` property or the `backendIPConfiguration` property within the `InboundNATRule` configuration is populated, then the deployment is version 1 of Inbound NAT rules. -### Prerequisites +## How to migrate from version 1 to version 2? -* In order to migrate a Load Balancer's NAT Pools to NAT Rules, the Load Balancer SKU must be 'Standard'. To automate this upgrade process, see the steps provided in [Upgrade a Basic Load Balancer to Standard with PowerShell](upgrade-basic-standard-with-powershell.md). -* Virtual Machine Scale Sets associated with the target Load Balancer must use either a 'Manual' or 'Automatic' upgrade policy--'Rolling' upgrade policy is not supported. For more information, see [Virtual Machine Scale Sets Upgrade Policies](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy) -* Install the latest version of [PowerShell](/powershell/scripting/install/installing-powershell) -* Install the [Azure PowerShell modules](/powershell/azure/install-azure-powershell) +Prior to migrating, it's important to review the following information: -### Install the 'AzureLoadBalancerNATPoolMigration' module +- Migrating to version 2 of Inbound NAT rules causes downtime to active traffic that is flowing through the NAT rules. Traffic flowing through [load balancer rules](components.md) or [outbound rules](components.md) isn't impacted during the migration process. +- Plan out the max number of instances in a backend pool. Since version 2 targets the load balancer's backend pool, a sufficient number of ports need to be allocated for the NAT rule's frontend. +- Each backend instance is exposed on the port configured in the new NAT rule. +- Multiple NAT rules can't exist if they have an overlapping port range or have the same backend port. +- NAT rules and load balancing rules can't share the same backend port. -Install the module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureLoadBalancerNATPoolMigration) +### Manual Migration -```azurepowershell -Install-Module -Name AzureLoadBalancerNATPoolMigration -Scope CurrentUser -Repository PSGallery -Force +The following three steps need to be performed to migrate to version 2 of inbound NAT rules: ++1. Delete the version 1 inbound NAT rules on the load balancer's configuration. +2. Remove the reference to the NAT rule on the virtual machine or virtual machine scale set configuration. + 1. All virtual machine scale set instances need to be updated. +3. Deploy version 2 of Inbound NAT rules.
++### Virtual Machine ++The following steps are used to migrate from version 1 to version 2 of Inbound NAT rules for a virtual machine. ++# [Azure CLI](#tab/azure-cli) ++```azurecli ++az network lb inbound-nat-rule delete -g MyResourceGroup --lb-name MyLoadBalancer --name NATruleV1 ++az network nic ip-config inbound-nat-rule remove -g MyResourceGroup --nic-name MyNic -n MyIpConfig --inbound-nat-rule MyNatRule ++az network lb inbound-nat-rule create -g MyResourceGroup --lb-name MyLoadBalancer -n MyNatRule --protocol Tcp --frontend-port-range-start 201 --frontend-port-range-end 500 --backend-port 22 ++``` ++# [PowerShell](#tab/powershell) ++```powershell ++$slb = Get-AzLoadBalancer -Name "MyLoadBalancer" -ResourceGroupName "MyResourceGroup" ++Remove-AzLoadBalancerInboundNatRuleConfig -Name "myinboundnatrule" -LoadBalancer $loadbalancer ++Set-AzLoadBalancer -LoadBalancer $slb ++$nic = Get-AzNetworkInterface -Name "myNIC" -ResourceGroupName "MyResourceGroup" ++$nic.IpConfigurations[0].LoadBalancerInboundNatRule ΓÇ»= $null ++Set-AzNetworkInterface -NetworkInterface $nic ++$slb | Add-AzLoadBalancerInboundNatRuleConfig -Name "NewNatRuleV2" -FrontendIPConfiguration $slb.FrontendIpConfigurations[0] -Protocol "Tcp" -FrontendPortRangeStart 201-FrontendPortRangeEnd 500 -BackendAddressPool $slb.BackendAddressPools[0] -BackendPort 22 +$slb | Set-AzLoadBalancer +++``` ++++### Virtual Machine Scale Set ++The following steps are used to migrate from version 1 to version 2 of Inbound NAT rules for a virtual machine scale set. It assumes the virtual machine scale set's upgrade mode is set to Manual. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes) ++++# [Azure CLI](#tab/azure-cli) ++```azurecli ++az network lb inbound-nat-pool delete ΓÇ»-g MyResourceGroup --lb-name MyLoadBalancer -n MyNatPool ++az vmss update -g MyResourceGroup -n MyVMScaleSet --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools ++az vmss update-instances --instance-ids '*' --resource-group MyResourceGroup --name MyVMScaleSet ++az network lb inbound-nat-rule create -g MyResourceGroup --lb-name MyLoadBalancer -n MyNatRule --protocol Tcp --frontend-port-range-start 201 --frontend-port-range-end 500 --backend-port 22 ++``` ++# [PowerShell](#tab/powershell) ++```powershell ++# Remove the Inbound NAT rule ++$slb = Get-AzLoadBalancer -Name "MyLoadBalancer" -ResourceGroupName "MyResourceGroup" ++Remove-AzLoadBalancerInboundNatPoolConfig -Name myinboundnatpool -LoadBalancer $slb ++Set-AzLoadBalancer -LoadBalancer $slb ++# Remove the Inbound NAT pool association ++$vmss = Get-AzVmss -ResourceGroupName "MyResourceGroup" -VMScaleSetName "MyVMScaleSet" ++$vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0].loadBalancerInboundNatPools = $null ++# Upgrade all instances in the VMSS ++Update-AzVmssInstance -ResourceGroupName $resourceGroupName -VMScaleSetName $vmssName -InstanceId "*" ++$slb | Add-AzLoadBalancerInboundNatRuleConfig -Name "NewNatRuleV2" -FrontendIPConfiguration $slb.FrontendIpConfigurations[0] -Protocol "Tcp" -FrontendPortRangeStart 201-FrontendPortRangeEnd 500 -BackendAddressPool $slb.BackendAddressPools[0] -BackendPort 22 +$slb | Set-AzLoadBalancer + ```+ -### Use the module to upgrade NAT Pools to NAT Rules +## Migration with automation script for Virtual Machine Scale Set -1. 
Connect to Azure with `Connect-AzAccount` -1. Find the target Load Balancer for the NAT Rules upgrade and note its name and Resource Group name -1. Run the migration command +The migration process will reuse existing backend pools with membership matching the NAT Pools to be migrated; if no matching backend pool is found, the script will exit (without making changes). Alternatively, use the `-backendPoolReuseStrategy` parameter to either always create new backend pools (`NoReuse`) or create a new backend pool if a matching one doesn't exist (`OptionalFirstMatch`). Backend pools and NAT Rule associations can be updated post migration to match your preference. +### Prerequisites -#### Example: specify the Load Balancer name and Resource Group name - ```azurepowershell - Start-AzNATPoolMigration -ResourceGroupName <loadBalancerResourceGroupName> -LoadBalancerName <LoadBalancerName> - ``` +Before beginning the migration process, ensure the following prerequisites are met: -#### Example: pass a Load Balancer from the pipeline - ```azurepowershell - Get-AzLoadBalancer -ResourceGroupName <loadBalancerResourceGroupName> -Name <LoadBalancerName> | Start-AzNATPoolMigration - ``` +- The load balancer's SKU must be **Standard** to migrate a load balancer's NAT Pools to NAT Rules. To automate this upgrade process, see the steps provided inΓÇ»[Upgrade a Basic Load Balancer to Standard with PowerShell](upgrade-basic-standard-with-powershell.md). +- The Virtual Machine Scale Sets associated with the target Load Balancer must use either a 'Manual' or 'Automatic' upgrade policy--'Rolling' upgrade policy isn't supported. For more information, seeΓÇ»[Virtual Machine Scale Sets Upgrade Policies](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy). +- Install the latest version ofΓÇ»[PowerShell](/powershell/scripting/install/installing-powershell). +- Install theΓÇ»[Azure PowerShell modules](/powershell/azure/install-azure-powershell). -## Common Questions +### Install the `AzureLoadBalancerNATPoolMigration` module -### Will migration cause downtime to my NAT ports? +With the following command, install the `AzureLoadBalancerNATPoolMigration` module from the PowerShell Gallery: ++```powershell +# Install the AzureLoadBalancerNATPoolMigration module ++Install-Module -Name AzureLoadBalancerNATPoolMigration -Scope CurrentUser -Repository PSGallery -Force +``` -Yes, because we must first remove the NAT Pools before we can create the NAT Rules, there will be a brief time where there is no mapping of the front end port to a back end port. +### Upgrade NAT Pools to NAT Rules -> [!NOTE] -> Downtime for NAT'ed port on Service Fabric clusters will be significantly longer--up to an hour for a Silver cluster in testing. +With the `azureLoadBalancerNATPoolMigration` module installed, upgrade your NAT Pools to NAT Rules with the following steps: -### Do I need to keep both the new Backend Pools created during the migration and my existing Backend Pools if the membership is the same? +1. Connect to Azure with `Connect-AzAccount`. +2. Collect the names of the **target load balancer** for the NAT Rules upgrade and its **Resource Group** name. +3. Run the migration command with your resource names replacing the placeholders of `<loadBalancerResourceGroupName>` and `<loadBalancerName>`: -No, following the migration, you can review the new backend pools. 
If the membership is the same between backend pools, you can replace the new backend pool in the NAT Rule with an existing backend pool, then remove the new backend pool. + ```powershell + # Run the migration command + + Start-AzNATPoolMigration -ResourceGroupName <loadBalancerResourceGroupName> -LoadBalancerName <loadBalancerName> + + ``` ## Next steps |
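After either migration path, a quick way to confirm the result is to inspect the load balancer and make sure no NAT pools remain and the new rules carry the expected port ranges. The names below are placeholders; the port-range properties follow the `Add-AzLoadBalancerInboundNatRuleConfig` parameters used in this entry.

```powershell
# Placeholder names - replace with your load balancer and resource group
$lb = Get-AzLoadBalancer -Name "MyLoadBalancer" -ResourceGroupName "MyResourceGroup"

# No inbound NAT pools should remain after the migration
$lb.InboundNatPools.Count

# Review the migrated NAT rules and their frontend port ranges and backend ports
$lb.InboundNatRules |
    Select-Object Name, Protocol, FrontendPortRangeStart, FrontendPortRangeEnd, BackendPort
```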
load-balancer | Upgrade Basic Standard With Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-with-powershell.md | -> [!VIDEO https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6] ++> [!VIDEO 8e203b99-41ff-4454-9cbd-58856708f1c6] - 03:06 - <a href="https://learn-video.azurefd.net/vod/player?id=8e203b99-41ff-4454-9cbd-58856708f1c6?#time=0h3m06s" target="_blank">Step-by-step</a> - 32:54 - <a href="https://learn-video.azurefd.net/vod/player?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h32m45s" target="_blank">Recovery</a> For internal Load Balancers, Outbound Rules are not an option because there is n The module is designed to accommodate failures, either due to unhandled errors or unexpected script termination. The failure design is a 'fail forward' approach, where instead of attempting to move back to the Basic Load Balancer, you should correct the issue causing the failure (see the error output or log file), and retry the migration again, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerBackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters. For public load balancers, because the Public IP Address SKU has been updated to Standard, moving the same IP back to a Basic Load Balancer won't be possible. Watch a video of the recovery process: -> [!VIDEO https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6] ++> [!VIDEO 8e203b99-41ff-4454-9cbd-58856708f1c6] If your failed migration was targeting multiple load balancers at the same time, using the `-MultiLBConfig` parameter, recover each Load Balancer individually using the same process as below. |
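For the fail-forward retry described above, the call shape looks roughly like the following. The cmdlet name is an assumption based on the upgrade module's entry point (verify it against the module you installed); the two retry parameters are the ones named in this entry, and the backup file paths are placeholders written by the earlier, failed run.

```powershell
# Assumed entry-point cmdlet of the Basic-to-Standard upgrade module; file paths are placeholders
Start-AzBasicLoadBalancerUpgrade `
    -FailedMigrationRetryFilePathLB   'C:\migration\state\MyLoadBalancer_backup.json' `
    -FailedMigrationRetryFilePathVMSS 'C:\migration\state\MyVMSS_backup.json'
```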
logic-apps | Azure Arc Enabled Logic Apps Create Deploy Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/azure-arc-enabled-logic-apps-create-deploy-workflows.md | The following example describes a sample Azure Arc-enabled Logic Apps resource d } ``` +> [!NOTE] +> +> By default, the **FUNCTIONS_WORKER_RUNTIME** app setting for your logic app is **`dotnet`**. +> Previously, **`node`** was the default value. However, **`dotnet`** is now the default +> value for all new and existing deployed Arc-enabled logic apps, even for apps that had +> a different value. This change shouldn't affect your workflow's runtime, and everything +> should work the same way as before. For more information, see the +> [**FUNCTIONS_WORKER_RUNTIME** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). +> +> The **APP_KIND** app setting for your logic app is set to **workflowapp**, but in some scenarios, +> this app setting is missing, for example, due to Azure Resource Manager templates or other scenarios +> where the setting might not be included. If certain actions, such as the +> **Execute JavaScript Code** action, don't work, or the workflow stops working, check that the +> **APP_KIND** app setting exists and is set to **workflowapp**. For more information, see the +> [**APP_KIND** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). + ### Container deployment If you prefer to use container tools and deployment processes, you can containerize your logic apps and deploy them to Azure Arc-enabled Logic Apps. For this scenario, complete the following high-level tasks when you set up your infrastructure: |
logic-apps | Create Single Tenant Workflows Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md | More workflows in your logic app raise the risk of longer load times, which nega > for all new and existing deployed Standard logic apps, even for apps that had a different value. > This change shouldn't affect your workflow's runtime, and everything should work the same way > as before. For more information, see the [**FUNCTIONS_WORKER_RUNTIME** app setting](edit-app-settings-host-settings.md#reference-local-settings-json).+ > + > The **APP_KIND** app setting for your Standard logic app is set to **workflowApp**, but in some + > scenarios, this app setting is missing, for example, due to automation using Azure Resource Manager + > templates or other scenarios where the setting isn't included. If certain actions, such as the + > **Execute JavaScript Code** action, don't work, or the workflow stops working, check that the + > **APP_KIND** app setting exists and is set to **workflowApp**. For more information, see the + > [**APP_KIND** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). 1. When you finish, select **Next: Storage**. |
logic-apps | Create Single Tenant Workflows Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md | Before you can create your logic app, create a local project so that you can man > [!NOTE] > You might get an error named **azureLogicAppsStandard.createNewProject** with the error message, > **Unable to write to Workspace Settings because azureFunctions.suppressProject is not a registered configuration**. - > If you do, try installing the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions), either directly from the Visual Studio Marketplace or from inside Visual Studio Code. + > If you do, try installing the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions), + > either directly from the Visual Studio Marketplace or from inside Visual Studio Code. 1. If Visual Studio Code prompts you to open your project in the current Visual Studio Code or in a new Visual Studio Code window, select **Open in current window**. Otherwise, select **Open in new window**. Before you can create your logic app, create a local project so that you can man [!INCLUDE [Visual Studio Code - logic app project structure](../../includes/logic-apps-single-tenant-project-structure-visual-studio-code.md)] + > [!NOTE] + > + > By default, in your **local.settings.json** file, the language worker runtime value for your + > Standard logic app is **`dotnet`**. Previously, **`node`** was the default value. However, + > **`dotnet`** is now the default value for all new and existing deployed Standard logic apps, + > even for apps that had a different value. This change shouldn't affect your workflow's runtime, + > and everything should work the same way as before. For more information, see the + > [**FUNCTIONS_WORKER_RUNTIME** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). + > + > The **APP_KIND** app setting for your Standard logic app is set to **workflowApp**, but in some + > scenarios, this app setting is missing, for example, due to automation using Azure Resource Manager + > templates or other scenarios where the setting isn't included. If certain actions, such as the + > **Execute JavaScript Code** action, don't work, or the workflow stops working, check that the + > **APP_KIND** app setting exists and is set to **workflowApp**. For more information, see the + > [**APP_KIND** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). + <a name="convert-project-nuget"></a> ## Convert your project to NuGet package-based (.NET) |
logic-apps | Edit App Settings Host Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md | App settings in Azure Logic Apps work similarly to app settings in Azure Functio | Setting | Default value | Description | |||-|+| `APP_KIND` | `workflowApp` | Sets the app type for the Azure resource. | | `AzureWebJobsStorage` | None | Sets the connection string for an Azure storage account. For more information, see [AzureWebJobsStorage](../azure-functions/functions-app-settings.md#azurewebjobsstorage) | | `FUNCTIONS_WORKER_RUNTIME` | `dotnet` | Sets the language worker runtime to use with your logic app resource and workflows. However, this setting is no longer necessary due to automatically enabled multi-language support. <br><br>**Note**: Previously, this setting's default value was **`node`**. Now, **`dotnet`** is the default value for all new and existing deployed Standard logic apps, even for apps that had a different value. This change shouldn't affect your workflow's runtime, and everything should work the same way as before.<br><br>For more information, see [FUNCTIONS_WORKER_RUNTIME](../azure-functions/functions-app-settings.md#functions_worker_runtime). | | `ServiceProviders.Sftp.FileUploadBufferTimeForTrigger` | `00:00:20` <br>(20 seconds) | Sets the buffer time to ignore files that have a last modified timestamp that's greater than the current time. This setting is useful when large file writes take a long time and avoids fetching data for a partially written file. | |
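Because a missing **APP_KIND** setting is called out repeatedly in the entries above, a quick way to check and restore it is sketched below. It assumes the Standard logic app can be read and updated through the Az.Functions app-setting cmdlets, since the app builds on the Functions runtime; the app and resource group names are placeholders, so verify this approach in your environment.

```powershell
# Placeholder names; assumes the Standard logic app is manageable through Az.Functions
$settings = Get-AzFunctionAppSetting -Name "my-standard-logic-app" -ResourceGroupName "MyResourceGroup"
$settings["APP_KIND"]
$settings["FUNCTIONS_WORKER_RUNTIME"]

# Restore APP_KIND if it's missing; existing settings are preserved
if (-not $settings["APP_KIND"]) {
    Update-AzFunctionAppSetting -Name "my-standard-logic-app" -ResourceGroupName "MyResourceGroup" `
        -AppSetting @{ APP_KIND = "workflowApp" }
}
```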
migrate | Migrate Support Matrix Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md | For Linux servers, based on the features you want to perform, you can create a u Operating system | Versions | Red Hat Enterprise Linux | 5.1, 5.3, 5.11, 6.x, 7.x, 8.x, 9.x- Ubuntu | 12.04, 14.04, 16.04, 18.04, 20.04 + Ubuntu | 12.04, 14.04, 16.04, 18.04, 20.04, 22.04 Oracle Linux | 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 SUSE Linux | 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 Debian | 7, 8, 9, 10, 11 |
migrate | Tutorial Discover Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md | Check that the zipped file is secure, before you deploy it. **Scenario*** | **Download** | **SHA256** | | - Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] + Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value](includes/security-hash-value.md)] ### 3. Create an appliance |
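To act on the security-hash guidance above, compare the published SHA256 value with the hash of the downloaded file before deploying it. The file path and expected hash below are placeholders.

```powershell
# Placeholders - use the hash published in the table and your actual download path
$expectedHash = "<sha256-value-from-the-table>"
$actualHash   = (Get-FileHash -Path "$env:USERPROFILE\Downloads\AzureMigrateAppliance.zip" -Algorithm SHA256).Hash

if ($actualHash -eq $expectedHash) { "Hash check passed" } else { Write-Warning "Hash mismatch - do not deploy this file" }
```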
migrate | How To Set Up Appliance Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/how-to-set-up-appliance-vmware.md | After you create the appliance, check if the appliance can connect to Azure Migr To set up the appliance by using an OVA template, you'll complete these steps, which are described in detail in this section: > [!NOTE]-> OVA templates are not available for soverign clouds. +> OVA templates are not available for sovereign clouds. > [!NOTE] > Do not clone or create a VM template out of an appliance deployed using OVA template. This scenario is unsupported and may result in deployment failures within the Migrate Service. |
migrate | Migrate Support Matrix Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware.md | Support | Details | Supported servers | You can enable agentless dependency analysis on up to 1,000 servers (across multiple vCenter Servers) discovered per appliance. Windows servers | Windows Server 2022 <br/> Windows Server 2019<br /> Windows Server 2016<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br /> Windows Server 2008 (32-bit)-Linux servers | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04 <br /> OracleLinux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11 +Linux servers | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04, 22.04 <br /> OracleLinux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11 Server requirements | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers. vCenter Server account | The read-only account used by Azure Migrate and Modernize for assessment must have privileges for guest operations on VMware VMs. Windows server access | A user account (local or domain) with administrator permissions on servers. |
operational-excellence | Overview Relocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md | The following tables provide links to each Azure service relocation document. Th | | | | | [Azure Automation](./relocation-automation.md)| ✅ | ✅| ❌ | [Azure IoT Hub](/azure/iot-hub/iot-hub-how-to-clone?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |+[Azure NetApp Files](./relocation-netapp.md)| ✅ | ✅| ❌ | [Azure Static Web Apps](./relocation-static-web-apps.md) | ✅ |❌ | ❌ | [Power BI](/power-bi/admin/service-admin-region-move?toc=/azure/operational-excellence/toc.json)| ✅ |❌ | ❌ | |
operational-excellence | Relocation Netapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-netapp.md | + + Title: Relocate Azure NetApp Files volume to another region +description: Learn how to relocate an Azure NetApp Files volume to another region +++ Last updated : 08/14/2024++++ - subject-relocation +++# Relocate Azure NetApp Files volume to another region ++This article covers guidance for relocating [Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) volumes to another region. ++++## Prerequisites ++Before you begin the relocation planning stage, first review the following prerequisites: ++- The target NetApp account instance should already be created. ++- Source and target regions must be paired regions. To see if they're paired, see [Supported cross-region replication pairs](../azure-netapp-files/cross-region-replication-introduction.md?#supported-region-pairs). ++- Understand all dependent resources. Some of the resources could be: + - Microsoft Entra ID + - [Virtual Network](./relocation-virtual-network.md) + - Azure DNS + - [Storage services](./relocation-storage-account.md) + - [Capacity pools](../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md) +++## Prepare ++Before you begin the relocation process, make sure to complete the following preparations: ++- The target Microsoft Entra ID connection must have access to the DNS servers, AD DS Domain Controllers, or Microsoft Entra Domain Services Domain Controllers that are reachable from the delegated subnet in the target region. ++- The network configurations (including separate subnets if needed and IP ranges) should already be planned and prepared ++- Turn off replication procedures to disaster recovery region. If you've established a disaster recovery (DR) solution using replication to a DR region, turn off replication to the DR site before initiating relocation procedures. ++- Understand the following considerations in regards to replication: + + - SMB, NFS, and dual-protocol volumes are supported. Replication of SMB volumes requires a Microsoft Entra ID connection in the source and target NetApp accounts. + + - The replication destination volume is read-only until the entire move is complete. + + - Azure NetApp Files replication doesn't currently support multiple subscriptions. All replications must be performed under a single subscription. + + - There are resource limits for the maximum number of cross-region replication destination volumes. For more information, see [Resource limits for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-resource-limits.md). + +## Redeploy ++**To redeploy your NetApp resources:** ++1. [Create the target NetApp account](../azure-netapp-files/azure-netapp-files-create-netapp-account.md). ++1. [Create the target capacity pool](../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md). ++1. [Delegate a subnet in the target region](../azure-netapp-files/azure-netapp-files-delegate-subnet.md). Azure NetApp Files creates a system route to the delegated subnet. Peering and endpoints can be used to connect to the target as needed. ++1. Create a data replication volume by following the directions in [Create volume replication for Azure NetApp Files](../azure-netapp-files/cross-region-replication-create-peering.md). ++1. [Verify that the health status](../azure-netapp-files/cross-region-replication-display-health-status.md) of replication is healthy. 
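As an illustration, the first three redeploy steps can also be scripted with the Azure CLI. The sketch below uses placeholder resource names, region, and address prefix, and the `az netappfiles` parameters shown (for example, the pool `--size` in TiB) should be verified against your installed CLI version:

```
# Illustrative sketch only: names, region, and address prefix are placeholders.
RG=target-rg
LOC=targetregion

# 1. Create the target NetApp account in the target region
az netappfiles account create --resource-group $RG --name target-anf-account --location $LOC

# 2. Create the target capacity pool (size is specified in TiB; verify against your CLI version)
az netappfiles pool create --resource-group $RG --account-name target-anf-account \
    --name target-pool --location $LOC --service-level Premium --size 4

# 3. Delegate a subnet to Microsoft.NetApp/volumes in the target virtual network
az network vnet subnet create --resource-group $RG --vnet-name target-vnet \
    --name anf-delegated-subnet --address-prefixes 10.1.0.0/24 \
    --delegations "Microsoft.NetApp/volumes"
```

Creating the data replication volume itself (step 4) requires the resource ID of the source volume, so it's easiest to follow the linked volume replication article for that step.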
+++## Cleanup ++Once the replication is complete, you can then safely delete the replication peering and the source volume. ++To learn how to clean up a replication, see [Delete volume replications or volumes](/azure/azure-netapp-files/cross-region-replication-delete). +++## Related content +++- [Cross-region replication of Azure NetApp Files volumes](../azure-netapp-files/cross-region-replication-introduction.md) ++To learn more about moving resources between regions and disaster recovery in Azure, refer to: ++- [Requirements for Active Directory Connections](/azure/azure-netapp-files/create-active-directory-connections#requirements-for-active-directory-connections) + +- [Guidelines for Azure NetApp Files network planning](/azure/azure-netapp-files/azure-netapp-files-network-topologies) + +- [Fail over to the destination region](/azure/azure-netapp-files/cross-region-replication-manage-disaster-recovery#fail-over-to-destination-volume) ++- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md) ++- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md) |
operator-nexus | Howto Kubernetes Cluster Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-features.md | Before proceeding with this how-to guide, it's recommended that you: * Refer to the Nexus Kubernetes cluster [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-cli.md) for a comprehensive overview and steps involved. * Ensure that you meet the outlined prerequisites to ensure smooth implementation of the guide.-* Minimum required `networkcloud` az-cli extension version: `3.0.0b1` +* Minimum required `networkcloud` az-cli extension version: `2.0.b3` ## Limitations |
operator-nexus | Howto Kubernetes Cluster Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-upgrade.md | This article provides instructions on how to upgrade an Operator Nexus Kubernete * An Azure Operator Nexus Kubernetes cluster deployed in a resource group in your Azure subscription. * If you're using Azure CLI, this article requires that you're running the latest Azure CLI version. If you need to install or upgrade, see [Install Azure CLI](./howto-install-cli-extensions.md)-* Minimum required `networkcloud` az-cli extension version: `3.0.0b1` +* Minimum required `networkcloud` az-cli extension version: `2.0.b3` * Understand the version bundles concept. For more information, see [Nexus Kubernetes version bundles](./reference-nexus-kubernetes-cluster-supported-versions.md#version-bundles). ## Check for available upgrades Max surge and max unavailable can be configured at the same time, in which case * Learn more about [Nexus Kubernetes version bundles](./reference-nexus-kubernetes-cluster-supported-versions.md#version-bundles). <!-- LINKS - external -->-[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ +[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ |
operator-service-manager | Best Practices Onboard Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md | Delete publisher resources in the following order to make sure no orphaned resou ## Considerations if your NF runs cert-manager -With release 1.0.2728-50 and later , AOSM now uses cert-manager to store and rotate certificates. As part of this change, AOSM deploys a cert-manager operator, and associate CRDs, in the azurehybridnetwork namespace. Since having multiple cert-manager operators, even deployed in separate namespaces, will watch across all namespaces, only one cert-manager can be effectively run on the cluster. +> [!IMPORTANT] +> This guidance applies only to certain releases. Check your version for proper behavior. ++From release 1.0.2728-50 to release Version 2.0.2777-132, AOSM uses cert-manager to store and rotate certificates. As part of this change, AOSM deploys a cert-manager operator, and associated CRDs, in the azurehybridnetwork namespace. Because cert-manager operators watch across all namespaces, even when deployed in separate namespaces, only one cert-manager can effectively run on the cluster. Any user trying to install cert-manager on the cluster, as part of a workload deployment, will get a deployment failure with an error that the CRD “exists and cannot be imported into the current release.” To avoid this error, the recommendation is to skip installing cert-manager and instead take a dependency on the cert-manager operator and CRDs already installed by AOSM. |
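For example, if a CNF Helm chart pulls in cert-manager as an optional subchart, the subchart can usually be disabled at deployment time so the workload relies on the cert-manager instance that AOSM already installed. The value name used below (`cert-manager.enabled`) is an assumption about how the chart declares the subchart condition; check the chart's values.yaml for the actual flag.

```
# Assumption: the chart gates its cert-manager subchart behind a condition flag
# (commonly cert-manager.enabled); adjust the value name to the actual chart.
helm install my-cnf ./my-cnf-chart \
  --namespace my-cnf-ns --create-namespace \
  --set cert-manager.enabled=false   # reuse the cert-manager operator and CRDs installed by AOSM
```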
operator-service-manager | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/release-notes.md | Title: Azure Operator Service Manager Release Notes -description: Tracking of major and minor releases of Azure Operator Service Manager. + Title: Release notes for Azure Operator Service Manager +description: Official documentation and tracking for major and minor releases. Last updated 08/14/2024 The following bug fixes, or other defect resolutions, are delivered with this re None -## Release Version 2.0.2788-135 +## Release 2.0.2788-135 Document Revision 1.1 Azure Operator Service Manager is a cloud orchestration service that enables aut * Release Version: Version 2.0.2788-135 * Release Date: August 21, 2024 * Is NFO update required: YES, Update only-* Dependency Versions: Go/1.22.4 Helm/3.15.2 +* Dependency Versions: Go/1.22.4 Helm/3.15.2 ### Release Installation This release can be installed as an update on top of release 2.0.2783-134. |
operator-service-manager | Safe Upgrade Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/safe-upgrade-practices.md | Title: Safe Upgrade Practices for CNFs -description: Safely execute complex upgrades of workloads with Azure Operator Service Manager. + Title: Get started with Azure Operator Service Manager Safe Upgrade Practices +description: Safely execute complex upgrades of CNF workloads on Azure Operator Nexus Previously updated : 08/16/2024 Last updated : 08/30/2024 |
operator-service-manager | Safe Upgrades Nf Level Rollback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/safe-upgrades-nf-level-rollback.md | + + Title: Control upgrade failure behavior with Azure Operator Service Manager +description: Learn about recovery behaviors including pause on failure and rollback on failure. ++ Last updated : 08/30/2024+++++# Control upgrade failure behavior ++## Overview +This guide describes the Azure Operator Service Manager (AOSM) upgrade failure behavior features for container network functions (CNFs). These features, as part of the AOSM safe upgrade practices initiative, offer a choice between faster retries (pause on failure) and a return to the starting point (rollback on failure). ++## Pause on failure +Any upgrade using AOSM starts with a site network service (SNS) reput operation. The reput operation processes the network function applications (NfApps) found in the network function design version (NFDV). The reput operation implements the following default logic: +* NfApps are processed following either updateDependsOn ordering or the sequential order in which they appear. +* NfApps with parameter "applicationEnabled" set to disable are skipped. +* NfApps present, but not referenced by the new NFDV, are deleted. +* The execution sequence is paused if any of the NfApp upgrades fail and a rollback is considered. +* The failure leaves the NF resource in a failed state. ++With pause on failure, AOSM rolls back only the failed NfApp, via the testOptions, installOptions, or upgradeOptions parameters. No action is taken on any NfApps that precede the failed NfApp. This method allows the end user to troubleshoot the failed NfApp and then restart the upgrade from that point forward. As the default behavior, this method is the most efficient, but it may cause network function (NF) inconsistencies while in a mixed version state. ++## Rollback on failure +To address the risk of mismatched NfApp versions, AOSM now supports NF-level rollback on failure. With this option enabled, if an NfApp operation fails, both the failed NfApp and all prior completed NfApps can be rolled back to their initial version state. This method minimizes, or eliminates, the amount of time the NF is exposed to NfApp version mismatches. The optional rollback on failure feature works as follows: +* A user initiates an SNS reput operation and enables rollback on failure. +* A snapshot of the current NfApp versions is captured and stored. +* The snapshot is used to determine the individual NfApp actions taken to reverse actions that completed successfully: + - "helm install" action on deleted components, + - "helm rollback" action on upgraded components, + - "helm delete" action on newly installed components +* When an NfApp failure occurs, AOSM restores the NfApps to the snapshot version state before the upgrade, with the most recent actions reverted first. ++> [!NOTE] +> * AOSM doesn't create a snapshot if a user doesn't enable rollback on failure. +> * A rollback on failure only applies to the successfully completed NfApps. +> - Use the testOptions, installOptions, or upgradeOptions parameters to control rollback of the failed NfApp. 
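The snapshot-driven reversal maps onto standard Helm operations. The following sketch is only an illustration of those actions, with placeholder release names, namespace, and revision number; AOSM performs the equivalent steps internally rather than through user-run commands:

```
# Inspect the revision history of an NfApp's Helm release
helm history nfapp1-release -n nf-namespace

# "helm rollback": return an upgraded component to its pre-upgrade revision
helm rollback nfapp1-release 3 -n nf-namespace

# "helm delete": remove a component that the failed upgrade newly installed
helm uninstall nfapp2-release -n nf-namespace

# "helm install": reinstall a component that the upgrade had deleted
helm install nfapp3-release ./nfapp3-chart -n nf-namespace
```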
++AOSM returns the following operational status and messages, given the respective results: +``` + - Upgrade Succeeded + - Provisioning State: Succeeded + - Message: <empty> +``` +``` + - Upgrade Failed, Rollback Succeeded + - Provisioning State: Failed + - Message: Application(<ComponentName>) : <Failure Reason>; Rollback succeeded +``` +``` + - Upgrade Failed, Rollback Failed + - Provisioning State: Failed + - Message: Application(<ComponentName>) : <Failure reason>; Rollback Failed (<RollbackComponentName>) : <Rollback Failure reason> +``` +## How to configure rollback on failure +The most flexible method to control failure behavior is to extend a new configuration group schema (CGS) parameter, rollbackEnabled, to allow for configuration group value (CGV) control via roleOverrideValues in the NF payload. First, define the CGS parameter: +``` +{ + "description": "NF configuration", + "type": "object", + "properties": { + "nfConfiguration": { + "type": "object", + "properties": { + "rollbackEnabled": { + "type": "boolean" + } + }, + "required": [ + "rollbackEnabled" + ] + } + } +} +``` +> [!NOTE] +> * If the nfConfiguration isn't provided through the roleOverrideValues parameter, by default the rollback is disabled. ++With the new rollbackEnabled parameter defined, the Operator can now provide a runtime value, under roleOverrideValues, as part of the NF reput payload. +``` +example: +{ + "location": "eastus", + "properties": { + // ... + "roleOverrideValues": [ + "{\"nfConfiguration\":{\"rollbackEnabled\":true}}", + "{\"name\":\"nfApp1\",\"deployParametersMappingRuleProfile\":{\"applicationEnablement\" : \"Disabled\"}}", + "{\"name\":\"nfApp2\",\"deployParametersMappingRuleProfile\":{\"applicationEnablement\" : \"Disabled\"}}", + //... other nfapps overrides + ] + } +} +``` +> [!NOTE] +> * Each roleOverrideValues entry overrides the default behavior of the NfApps. +> * If multiple entries of nfConfiguration are found in the roleOverrideValues, then the NF reput is returned as a bad request. ++## How to troubleshoot rollback on failure +### Understand pod states +Understanding the different pod states is crucial for effective troubleshooting. The following are the most common pod states: +* Pending: Pod scheduling is in progress by Kubernetes. +* Running: All containers in the pod are running and healthy. +* Failed: One or more containers in the pod are terminated with a nonzero exit code. +* CrashLoopBackOff: A container within the pod is repeatedly crashing, and Kubernetes is backing off before restarting it again. +* ContainerCreating: Container creation is in progress by the container runtime. ++### Check pod status and logs +First, start by checking pod status and logs using a kubectl command: +``` +$ kubectl get pods +$ kubectl logs <pod-name> +``` +The get pods command lists all the pods in the current namespace, along with their current status. The logs command retrieves the logs for a specific pod, allowing you to inspect any errors or exceptions. To troubleshoot networking problems, use the following commands: +``` +$ kubectl get services +$ kubectl describe service <service-name> +``` +The get services command displays all the services in the current namespace. The describe command provides details about a specific service, including the associated endpoints, and any relevant error messages. 
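When a pod is stuck in a state such as Pending or CrashLoopBackOff during a rollback, the namespace events and the pod description usually reveal the underlying cause, such as image pull errors, failed scheduling, or probe failures. A generic sketch with placeholder names:

```
# Show recent events in the namespace, oldest first
$ kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp

# Describe the failing pod to see container states, restart counts, and events
$ kubectl describe pod <pod-name> -n <namespace>

# Fetch logs from the previous (crashed) container instance
$ kubectl logs <pod-name> -n <namespace> --previous
```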
If you're encountering issues with PVCs, you can use the following commands to debug them: +``` +$ kubectl get persistentvolumeclaims +$ kubectl describe persistentvolumeclaims <pvc-name> +``` +The "get persistentvolumeclaims" command lists all the PVCs in the current namespace. The describe command provides detailed information about a specific PVC, including the status, associated storage class, and any relevant events or errors. |
oracle | Database Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md | Oracle Database@Azure is available in the following locations. Oracle Database@A |France Central |✓ | | |UK South |✓ |✓ | |Canada Central |✓ |✓ |-|Australia East |✓ | | +|Australia East |✓ |✓ | ## Azure Support scope and contact information |
role-based-access-control | Role Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments.md | Title: Understand Azure role assignments - Azure RBAC description: Learn about Azure role assignments in Azure role-based access control (Azure RBAC) for fine-grained access management of Azure resources.-+ Previously updated : 08/01/2024- Last updated : 08/30/2024+ # Understand Azure role assignments If you have a Microsoft Entra ID P2 or Microsoft Entra ID Governance license, [M The assignment type options available to you might vary depending on your PIM policy. For example, PIM policy defines whether permanent assignments can be created, maximum duration for time-bound assignments, role activation requirements (approval, multifactor authentication, or Conditional Access authentication context), and other settings. For more information, see [Configure Azure resource role settings in Privileged Identity Management](/entra/id-governance/privileged-identity-management/pim-resource-roles-configure-role-settings). +If you don't want to use the PIM functionality, select the **Active** assignment type and **Permanent** assignment duration options. These settings create a role assignment where the principal always has permissions in the role. + :::image type="content" source="./media/shared/assignment-type-eligible.png" alt-text="Screenshot of Add role assignment with Assignment type options displayed." lightbox="./media/shared/assignment-type-eligible.png"::: To better understand PIM, you should review the following terms. |
sap | Dbms Guide Ibm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ibm.md | keywords: 'Azure, Db2, SAP, IBM' Previously updated : 03/07/2024 Last updated : 08/29/2024 Remote shared volumes like the Azure services in the listed scenarios are suppor * Hosting Linux guest OS based Db2 data and log files on NFS shares hosted on Azure NetApp Files is supported! -If you're using disks based on Azure Page BLOB Storage or Managed Disks, the statements made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) apply to deployments with the Db2 DBMS as well. +If you're using disks based on Azure Page BLOB Storage or Managed Disks, the statements made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) apply to deployments with the Db2 DBMS (Database Management System) as well. -As explained earlier in the general part of the document, quotas on IOPS throughput for Azure disks exist. The exact quotas are depending on the VM type used. A list of VM types with their quotas can be found [here (Linux)](/azure/virtual-machines/sizes) and [here (Windows)](/azure/virtual-machines/sizes). +As explained earlier in the general part of the document, quotas on IOPS (I/O operations per second) throughput for Azure disks exist. The exact quotas are depending on the VM type used. A list of VM types with their quotas can be found [here (Linux)](/azure/virtual-machines/sizes) and [here (Windows)](/azure/virtual-machines/sizes). As long as the current IOPS quota per disk is sufficient, it's possible to store all the database files on one single mounted disk. Whereas you always should separate the data files and transaction log files on different disks/VHDs. For performance considerations, also refer to chapter 'Data Safety and Performance Considerations for Database Directories' in SAP installation guides. -Alternatively, you can use Windows Storage Pools, which are only available in Windows Server 2012 and higher as described [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md). On Linux you can use LVM or mdadm to create one large logical device over multiple disks. +Alternatively, you can use Windows Storage Pools, which are only available in Windows Server 2012 and higher as described [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md). On Linux, you can use LVM or mdadm to create one large logical device over multiple disks. <!-- log_dir, sapdata and saptmp are terms in the SAP and DB2 world and now spelling errors --> On Windows using Storage pools for Db2 storage paths for `log_dir`, `sapdata` an IBM Db2 for SAP NetWeaver Applications is supported on any VM type listed in SAP support note [1928533]. Recommended VM families for running IBM Db2 database are Esd_v4/Eas_v4/Es_v3 and M/M_v2-series for large multi-terabyte databases. The IBM Db2 transaction log disk write performance can be improved by enabling the M-series Write Accelerator. -Following is a baseline configuration for various sizes and uses of SAP on Db2 deployments from small to large. The list is based on Azure premium storage. However, Azure Ultra disk is fully supported with Db2 as well and can be used as well. Use the values for capacity, burst throughput, and burst IOPS to define the Ultra disk configuration. You can limit the IOPS for the /db2/```<SID>```/log_dir at around 5000 IOPS. 
+Following is a baseline configuration for various sizes and uses of SAP on Db2 deployments from small to x-large. ++>[!IMPORTANT] +> The VM types listed below are examples that meet the vCPU and memory critiera of each of the categories. The storage configuration is based on Azure premium storage v1. Premium SSD v2 and Azure Ultra disk is fully supported with IBM Db2 as well and can be used for deployments. Use the values for capacity, burst throughput, and burst IOPS to define the Ultra disk or Premium SSD v2 configuration. You can limit the IOPS for the /db2/```<SID>```/log_dir at around 5000 IOPS. Adjust the throughput and IOPS to the specific workload if these baseline recommendations don't meet the requirements #### Extra small SAP system: database size 50 - 200 GB: example Solution Manager-| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | +| VM Size / Examples |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | | | | | :: | : | : | : | : | : | : | : |-|E4ds_v4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || | -|vCPU: 4 |/db2/```<SID>```/sapdata |P10 |2 |1,000 |200 |256 |7,000 |340 |256<br />KB |ReadOnly | -|RAM: 32 GiB |/db2/```<SID>```/saptmp |P6 |1 |240 |50 |128 |3,500 |170 | || -| |/db2/```<SID>```/log_dir |P6 |2 |480 |100 |128 |7,000 |340 |64<br />KB || -| |/db2/```<SID>```/offline_log_dir |P10 |1 |500 |100 |128 |3,500 |170 || | +| vCPU: 4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || | +| RAM: ~32 GiB |/db2/```<SID>```/sapdata |P10 |2 |1,000 |200 |256 |7,000 |340 |256<br />KB |ReadOnly | +| E4(d)s_v5|/db2/```<SID>```/saptmp |P6 |1 |240 |50 |128 |3,500 |170 | || +| E4(d)as_v5 |/db2/```<SID>```/log_dir |P6 |2 |480 |100 |128 |7,000 |340 |64<br />KB || +| ... |/db2/```<SID>```/offline_log_dir |P10 |1 |500 |100 |128 |3,500 |170 || | #### Small SAP system: database size 200 - 750 GB: small Business Suite-| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | +| VM Size / Examples |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | | | | | :: | : | : | : | : | : | : | : |-|E16ds_v4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || | -|vCPU: 16 |/db2/```<SID>```/sapdata |P15 |4 |4,400 |500 |1.024 |14,000 |680 |256 KB |ReadOnly | -|RAM: 128 GiB |/db2/```<SID>```/saptmp |P6 |2 |480 |100 |128 |7,000 |340 |128 KB || -| |/db2/```<SID>```/log_dir |P15 |2 |2,200 |250 |512 |7,000 |340 |64<br />KB || -| |/db2/```<SID>```/offline_log_dir |P10 |1 |500 |100 |128 |3,500 |170 ||| +| vCPU: 16 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || | +| RAM: ~128 GiB |/db2/```<SID>```/sapdata |P15 |4 |4,400 |500 |1.024 |14,000 |680 |256 KB |ReadOnly | +| E16(d)s_v5 |/db2/```<SID>```/saptmp |P6 |2 |480 |100 |128 |7,000 |340 |128 KB || +| E16(d)as_v5 |/db2/```<SID>```/log_dir |P15 |2 |2,200 |250 |512 |7,000 |340 |64<br />KB || +| ... 
|/db2/```<SID>```/offline_log_dir |P10 |1 |500 |100 |128 |3,500 |170 ||| #### Medium SAP system: database size 500 - 1000 GB: small Business Suite-| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | +| VM Size / Examples |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | | | | | :: | : | : | : | : | : | : | : |-|E32ds_v4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || | -|vCPU: 32 |/db2/```<SID>```/sapdata |P30 |2 |10,000 |400 |2.048 |10,000 |400 |256 KB |ReadOnly | -|RAM: 256 GiB |/db2/```<SID>```/saptmp |P10 |2 |1,000 |200 |256 |7,000 |340 |128 KB || -| |/db2/```<SID>```/log_dir |P20 |2 |4,600 |300 |1.024 |7,000 |340 |64<br />KB || -| |/db2/```<SID>```/offline_log_dir |P15 |1 |1,100 |125 |256 |3,500 |170 ||| +| vCPU: 32 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || | +| RAM: ~256 GiB |/db2/```<SID>```/sapdata |P30 |2 |10,000 |400 |2.048 |10,000 |400 |256 KB |ReadOnly | +| E32(d)s_v5 |/db2/```<SID>```/saptmp |P10 |2 |1,000 |200 |256 |7,000 |340 |128 KB || +| E32(d)as_v5 |/db2/```<SID>```/log_dir |P20 |2 |4,600 |300 |1.024 |7,000 |340 |64<br />KB || +| M32ls |/db2/```<SID>```/offline_log_dir |P15 |1 |1,100 |125 |256 |3,500 |170 ||| #### Large SAP system: database size 750 - 2000 GB: Business Suite-| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | +| VM Size / Examples |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | | | | | :: | : | : | : | : | : | : | : |-|E64ds_v4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || | -|vCPU: 64 |/db2/```<SID>```/sapdata |P30 |4 |20,000 |800 |4.096 |20,000 |800 |256 KB |ReadOnly | -|RAM: 504 GiB |/db2/```<SID>```/saptmp |P15 |2 |2,200 |250 |512 |7,000 |340 |128 KB || -| |/db2/```<SID>```/log_dir |P20 |4 |9,200 |600 |2.048 |14,000 |680 |64<br />KB || -| |/db2/```<SID>```/offline_log_dir |P20 |1 |2,300 |150 |512 |3,500 |170 || | +| vCPU: 64 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || | +| RAM: ~512 GiB |/db2/```<SID>```/sapdata |P30 |4 |20,000 |800 |4.096 |20,000 |800 |256 KB |ReadOnly | +| E64(d)s_v5 |/db2/```<SID>```/saptmp |P15 |2 |2,200 |250 |512 |7,000 |340 |128 KB || +| E64(d)as_v5 |/db2/```<SID>```/log_dir |P20 |4 |9,200 |600 |2.048 |14,000 |680 |64<br />KB || +| M64ls |/db2/```<SID>```/offline_log_dir |P20 |1 |2,300 |150 |512 |3,500 |170 || | #### Large multi-terabyte SAP system: database size 2 TB+: Global Business Suite system+Especially for such larger systems it's important to evaluate the infrastructure that the system is currently running on and the resource consumption data of those systems to find the best match of Azure compute and storage infrastructure and configuration. 
+ | VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching | | | | | :: | : | : | : | : | : | : | : |-|M128s |/db2 |P10 |1 |500 |100 |128 |3,500 |170 || | -|vCPU: 128 |/db2/```<SID>```/sapdata |P40 |4 |30,000 |1.000 |8.192 |30,000 |1.000 |256 KB |ReadOnly | -|RAM: 2,048 GiB |/db2/```<SID>```/saptmp |P20 |2 |4,600 |300 |1.024 |7,000 |340 |128 KB || -| |/db2/```<SID>```/log_dir |P30 |4 |20,000 |800 |4.096 |20,000 |800 |64<br />KB |Write-<br />Accelerator | -| |/db2/```<SID>```/offline_log_dir |P30 |1 |5,000 |200 |1.024 |5,000 |200 || | +| vCPU: =>128 |/db2 |P10 |1 |500 |100 |128 |3,500 |170 || | +| RAM: =>2,048 GiB |/db2/```<SID>```/sapdata |P40 |4 |30,000 |1.000 |8.192 |30,000 |1.000 |256 KB |ReadOnly | +| M128s_v2 |/db2/```<SID>```/saptmp |P20 |2 |4,600 |300 |1.024 |7,000 |340 |128 KB || +| M176s_2_v3 |/db2/```<SID>```/log_dir |P30 |4 |20,000 |800 |4.096 |20,000 |800 |64<br />KB |Write-<br />Accelerator | +| M176s_3_v3,<br />M176s_4_v3 |/db2/```<SID>```/offline_log_dir |P30 |1 |5,000 |200 |1.024 |5,000 |200 || | ### Using Azure NetApp Files |
sentinel | Ops Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ops-guide.md | Title: Operational guide - Microsoft Sentinel description: Learn about the operational recommendations to help security operations teams to plan and run security activities. Previously updated : 06/28/2024 Last updated : 08/30/2024 appliesto: - Microsoft Sentinel in the Azure portal and the Microsoft Defender portal -#Customer intent: As a security operations (SOC) team member or security administrator, I want to know what operational activities I should plan to do daily, weekly, and monthly with Microsoft Sentinel to help keep my organization's environment secure. +#Customer intent: As a security operations (SOC) team member or security administrator, I want to know what operational activities I should plan to do daily, weekly, and monthly with Microsoft Sentinel to help keep my organization's environment secure. # Microsoft Sentinel operational guide -This article lists the operational activities that we recommend security operations (SOC) teams and security administrators plan for and run as part of their regular security activities with Microsoft Sentinel. +This article lists the operational activities that we recommend security operations (SOC) teams and security administrators plan for and run as part of their regular security activities with Microsoft Sentinel. For more information about managing your security operations, see [Security operations overview](/security/operations/overview). ## Daily tasks Schedule the following activities monthly. ## Related content -- [Deployment guide for Microsoft Sentinel](deploy-overview.md)+- [Security operations overview](/security/operations/overview) +- [Implement Microsoft Sentinel and Microsoft Defender XDR for Zero Trust](/security/operations/siem-xdr-overview) +- [Deployment guide for Microsoft Sentinel](deploy-overview.md) |
site-recovery | Azure To Azure Common Questions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md | description: This article answers common questions about Azure virtual machine d Previously updated : 04/18/2024 Last updated : 08/30/2024 Yes, you can create a Capacity Reservation for your virtual machine SKU in the d ### Why should I reserve capacity using Capacity Reservation at the destination location? -While Site Recovery makes a best effort to ensure that capacity is available in the recovery region, it does not guarantee the same. Site Recovery's best effort is backed by a 2-hour RTO SLA. But if you require further assurance and _guaranteed compute capacity,_ then we recommend you to purchase [Capacity Reservations](https://aka.ms/on-demand-capacity-reservations-docs) +While Site Recovery makes a best effort to ensure that capacity is available in the recovery region, it does not guarantee the same. Site Recovery's best effort is backed by a 1-hour RTO SLA. But if you require further assurance and _guaranteed compute capacity,_ then we recommend you to purchase [Capacity Reservations](https://aka.ms/on-demand-capacity-reservations-docs) ### Does Site Recovery work with reserved instances? |
site-recovery | Site Recovery Deployment Planner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner.md | If you have a previous version of Deployment Planner, do either of the following ## Version history -The latest Site Recovery Deployment Planner tool version is 2.5. +The latest Site Recovery Deployment Planner tool version is 3.0. See the [Site Recovery Deployment Planner version history](./site-recovery-deployment-planner-history.md) page for the fixes that are added in each update. ## Next steps |
storage | Data Lake Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-introduction.md | -Data Lake Storage converges the capabilities of [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml) with Azure Blob Storage. For example, Data Lake Storage provides file system semantics, file-level security, and scale. Because these capabilities are built on Blob storage, you also get low-cost, tiered storage, with high availability/disaster recovery capabilities. +Azure Data Lake Storage converges the capabilities of [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml) with Azure Blob Storage. For example, Data Lake Storage provides file system semantics, file-level security, and scale. Because these capabilities are built on Blob storage, you also get low-cost, tiered storage, with high availability/disaster recovery capabilities. Data Lake Storage makes Azure Storage the foundation for building enterprise data lakes on Azure. Designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of throughput, Data Lake Storage allows you to easily manage massive amounts of data. _Azure Data Lake Storage_ is a cloud-based, enterprise data lake solution. It's ## Data Lake Storage -_Azure Data Lake Storage_ refers to the current implementation of Azure's Data Lake Storage solution. The previous implementation, _Azure Data Lake Storage Gen1_ will be retired on February 29, 2024. --Unlike Data Lake Storage Gen1, Data Lake Storage isn't a dedicated service or account type. Instead, it's implemented as a set of capabilities that you use with the Blob Storage service of your Azure Storage account. You can unlock these capabilities by enabling the hierarchical namespace setting. +Azure Data Lake Storage isn't a dedicated service or account type. Instead, it's implemented as a set of capabilities that you use with the Blob Storage service of your Azure Storage account. You can unlock these capabilities by enabling the hierarchical namespace setting. Data Lake Storage includes the following capabilities. |
storage | Classic Account Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md | For step-by-step instructions for migrating your classic storage accounts, see [ ### Can I create new classic accounts? -Depending on when your subscription was created, you may no longer be able to create classic storage accounts: --- Subscriptions created after August 31, 2022 can no longer create classic storage accounts.-- Subscriptions created before September 1, 2022 will be able to create classic storage accounts until September 1, 2023.-- Also, beginning August 31, 2022, the ability to create classic storage accounts has been discontinued in additional phases based on the last time a classic storage account was created.+After August 16, 2024, customers can no longer create classic storage accounts. We recommend creating storage accounts only in Azure Resource Manager from this point forward. ### What happens to existing classic storage accounts after August 31, 2024? -After August 31, 2024, you'll no longer be able to manage data in your classic storage accounts through Azure Service Manager. The data will be preserved but we highly recommend migrating these accounts to Azure Resource Manager to avoid any service interruptions. +After August 31, 2024, you'll no longer be able to manage data in your classic storage accounts through Azure Service Manager and the classic management plane APIs. The data will be preserved but we highly recommend migrating these accounts to Azure Resource Manager to avoid any service interruptions. ### Will there be downtime when migrating my storage account from Classic to Resource Manager? |
update-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/overview.md | -> On 31 August 2024, both Azure Automation Update Management and the Log Analytics agent it uses [will be retired](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Therefore, if you are using the Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. Follow the [guidance](guidance-migration-automation-update-management-azure-update-manager.md#migration-scripts) to move your machines and schedules from Automation Update Management to Azure Update Manager. +> Both Azure Automation Update Management and the Log Analytics agent it uses [were retired on 31 August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Therefore, if you are using the Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. Follow the [guidance](guidance-migration-automation-update-management-azure-update-manager.md#migration-scripts) to move your machines and schedules from Automation Update Management to Azure Update Manager. > For more information, see the [FAQs on retirement](update-manager-faq.md#impact-of-log-analytics-agent-retirement). You can [sign up](https://developer.microsoft.com/reactor/?search=Azure+Update+Manager&page=1) for monthly live sessions on migration including Q&A sessions. |
virtual-desktop | Whats New Msixmgr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-msixmgr.md | Title: What's new in the MSIXMGR tool - Azure Virtual Desktop description: Learn about what's new in the release notes for the MSIXMGR tool. -+ Last updated 04/18/2023 |
virtual-wan | Vpn Over Expressroute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-over-expressroute.md | The device configuration file contains the settings to use when you're configuri }, "vpnSiteConfiguration":{ "Name":"VPN-over-INet-site",- "IPAddress":"13.75.195.234", + "IPAddress":"198.51.100.122", "LinkName":"VPN-over-INet" }, "vpnSiteConnections":[{ The device configuration file contains the settings to use when you're configuri }, "gatewayConfiguration":{ "IpAddresses":{- "Instance0":"51.143.63.104", - "Instance1":"52.137.90.89" + "Instance0":"203.0.113.186", + "Instance1":"203.0.113.195" } }, "connectionConfiguration":{ |