Updates from: 11/30/2023 02:10:47
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md
Congratulations! You've learned to use Document Intelligence models to analyze v
> [Explore the Document Intelligence REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) ::: moniker-end - ::: moniker range="doc-intel-2.1.0" In this how-to guide, you learn how to add Document Intelligence to your applications and workflows. Use a programming language of your choice or the REST API. Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, and tables from your documents. We recommend that you use the free service while you learn the technology. Remember that the number of free pages is limited to 500 per month.
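As a quick orientation, here is a minimal sketch of a v3.x analyze request with curl (endpoint, key, and document URL are placeholders; the `documentModels/{modelId}:analyze` route from the REST reference linked above is assumed):

```bash
# Submit a document to the prebuilt layout model for analysis (placeholder endpoint and key)
curl -X POST "https://<resource-name>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://example.com/sample-invoice.pdf"}'

# The response's Operation-Location header points to the result URL; poll it with GET to retrieve the analysis.
```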
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
Annotations are currently in preview for Completions and Chat Completions (GPT m
# [OpenAI Python 0.28.1](#tab/python) - ```python # os.getenv() for the endpoint and key assumes that you are using environment variables.
main().catch((err) => {
console.error("The sample encountered an error:", err); }); ```+
+# [PowerShell](#tab/powershell)
+
+```powershell-interactive
+# Env: for the endpoint and key assumes that you are using environment variables.
+$openai = @{
+ api_key = $Env:AZURE_OPENAI_KEY
+ api_base = $Env:AZURE_OPENAI_ENDPOINT # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
+ api_version = '2023-10-01-preview' # this may change in the future
+ name = 'YOUR-DEPLOYMENT-NAME-HERE' #This will correspond to the custom name you chose for your deployment when you deployed a model.
+}
+
+$prompt = 'Example prompt where a severity level of low is detected'
+ # Content that is detected at severity level medium or high is filtered,
+ # while content detected at severity level low isn't filtered by the content filters.
+
+$headers = [ordered]@{
+ 'api-key' = $openai.api_key
+}
+
+$body = [ordered]@{
+ prompt = $prompt
+ model = $openai.name
+} | ConvertTo-Json
+
+# Send a completion call to generate an answer
+$url = "$($openai.api_base)/openai/deployments/$($openai.name)/completions?api-version=$($openai.api_version)"
+
+$response = Invoke-RestMethod -Uri $url -Headers $headers -Body $body -Method Post -ContentType 'application/json'
+return $response.prompt_filter_results.content_filter_results | Format-List
+```
+
+The `$response` object contains a property named `prompt_filter_results` that contains annotations
+about the filter results. If you prefer JSON to a .NET object, pipe the output to `ConvertTo-Json`
+instead of `Format-List`.
+
+```output
+hate : @{filtered=False; severity=safe}
+self_harm : @{filtered=False; severity=safe}
+sexual : @{filtered=False; severity=safe}
+violence : @{filtered=False; severity=safe}
+```
+ For details on the inference REST API endpoints for Azure OpenAI and how to create Chat and Completions requests, see the [Azure OpenAI Service REST API reference guidance](../reference.md). Annotations are returned for all scenarios when using `2023-06-01-preview`.
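If you're calling the REST API directly, here is a minimal sketch of the same completions request with curl, mirroring the PowerShell example above (`jq` and the environment variable names are assumptions):

```bash
# Same completions request as the PowerShell example, inspecting the prompt filter annotations
curl "$AZURE_OPENAI_ENDPOINT/openai/deployments/<deployment-name>/completions?api-version=2023-10-01-preview" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Example prompt where a severity level of low is detected"}' \
  | jq '.prompt_filter_results[].content_filter_results'
```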
As part of your application design, consider the following best practices to del
- Azure OpenAI content filtering is powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety). - Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context). - Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).--
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
ai-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription.md
To use the batch transcription REST API:
1. [Get batch transcription results](batch-transcription-get.md) - Check transcription status and retrieve transcription results asynchronously. > [!IMPORTANT]
-> Batch transcription jobs are scheduled on a best-effort basis. At pick hours it may take up to 30 minutes or longer for a transcription job to start processing. See how to check the current status of a batch transcription job in [this section](batch-transcription-get.md#get-transcription-status).
+> Batch transcription jobs are scheduled on a best-effort basis. At peak hours it may take up to 30 minutes or longer for a transcription job to start processing. See how to check the current status of a batch transcription job in [this section](batch-transcription-get.md#get-transcription-status).
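For example, checking a job's status is a single `GET` on the transcription resource; here is a minimal sketch with curl (region, key, and transcription ID are placeholders, and the v3.1 route and `jq` are assumptions):

```bash
# Poll the status of a batch transcription job (placeholders for region, key, and job ID)
curl -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/<transcription-id>" \
  | jq '.status'   # NotStarted, Running, Succeeded, or Failed
```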
## Next steps
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
The `--pod-cidr` parameter is required when upgrading from legacy CNI because th
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-You must register the `Microsoft.ContainerService` `AzureOverlayDualStackPreview` feature flag.
+You must have the latest aks-preview Azure CLI extension installed and register the `Microsoft.ContainerService` `AzureOverlayDualStackPreview` feature flag.
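A minimal sketch of that setup with the Azure CLI (assuming the standard feature-registration flow):

```azurecli
# Install or update the aks-preview extension
az extension add --name aks-preview --upgrade

# Register the feature flag and wait for it to show "Registered"
az feature register --namespace Microsoft.ContainerService --name AzureOverlayDualStackPreview
az feature show --namespace Microsoft.ContainerService --name AzureOverlayDualStackPreview --query properties.state

# Propagate the registration
az provider register --namespace Microsoft.ContainerService
```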
Update an existing Kubenet cluster to use Azure CNI Overlay using the [`az aks update`][az-aks-update] command.
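A sketch of that call (resource names and the pod CIDR are placeholders; note the `--pod-cidr` requirement described above when upgrading from legacy CNI):

```azurecli
# Upgrade an existing cluster to Azure CNI Overlay (placeholder names and CIDR)
az aks update --resource-group <resource-group> --name <cluster-name> \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```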
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
aks Windows Aks Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-customer-stories.md
+
+ Title: Windows AKS customer stories
+
+description: Learn how customers are using Windows Containers on AKS.
+ Last updated : 11/29/2023++
+# Windows AKS customer stories
+
+Explore how various industries are using Windows Containers on Azure Kubernetes Service (AKS) for seamless Kubernetes integration with minimal code modifications.
+
+Learn directly from the customer stories listed here.
+
+## Customer stories
+- [Relativity](#relativity)
+- [Duck Creek](#duck-creek)
+- [Forza (Xbox Game Studios)](#forza)
+- [Microsoft Experience + Devices](#microsoft-experience--devices)
+
+### Relativity
+
+![Logo of Relativity.](./media/windows-aks-customer-stories/relativity.png)
+
+Relativity transitioned from virtual machines to Windows containers on Azure Kubernetes Service (AKS) to modernize its Windows code base, streamline development, and improve scalability.
+
+This shift enabled faster, more cost-effective deployment of their products and services without rewriting millions of lines of code. The transition to a containerized architecture significantly reduced deployment cycles from six months to a single day, enhancing the speed and flexibility of Relativity's engineering teams and leading to better performance and security in their application delivery.
+
+For more information, visit [Relativity's Windows AKS customer story](https://customers.microsoft.com/story/1516554049543037694-windows-containers-helps-relativity-boost-reliability-security).
+
+
+### Duck Creek
+
+![Logo of Duck Creek.](./media/windows-aks-customer-stories/duck-creek.png)
+
+Duck Creek Technologies modernized its insurance software solutions by adopting Windows containers on Azure Kubernetes Service (AKS), significantly enhancing operational efficiency and reducing time to market for new features. This transition to AKS enabled Duck Creek to offer scalable, reliable, and up-to-date SaaS solutions to its insurance clients, supporting rapid deployment and active delivery of updates.
+
+By containerizing their applications with Windows containers, Duck Creek could maintain the flexibility and robustness of their products without extensive code rewriting, thereby ensuring high availability and scalability, especially critical during peak demand periods like natural disasters. This move represents Duck Creek's commitment to leveraging cutting-edge technology for Insurtech innovation.
+
+For more information, visit [Duck Creek's Windows AKS customer story](https://customers.microsoft.com/story/1547298699206424647-duck-creek-insurance-core-systems-provide-evergreen-saas-solutions-using-windows-containers-aks).
+
+### Forza
+
+![Logo of Forza.](./media/windows-aks-customer-stories/forza.png)
+
+Forza Horizon 5, developed by Turn 10 Studios, achieved remarkable performance and scalability by transitioning to Azure Kubernetes Service (AKS) with Windows-based containers. This shift allowed the team to adapt swiftly to demand spikes, handling over 10 million players at launch, the biggest first week in Xbox Game Studios history.
+
+By utilizing Windows AKS, they were able to significantly reduce infrastructure management tasks, enhancing both the development process and the gaming experience. The move to containerized architecture enabled rapid scaling from 600,000 to 3 million concurrent users and reduced infrastructure costs, demonstrating the effectiveness of AKS in high-demand, low-latency environments like gaming.
+
+For more information, visit [Forza's Windows AKS customer story](https://customers.microsoft.com/story/1498781140435260527-forza-horizon-5-crosses-finish-line-fueled-by-azure-kubernetes-service).
+
+### Microsoft Experience + Devices
+
+![Logo of Microsoft.](./media/windows-aks-customer-stories/microsoft.png)
+
+Microsoft's E+D group, responsible for supporting products such as Teams and Office, modernized the Microsoft 365 infrastructure by transitioning to Windows containers on Azure Kubernetes Service (AKS), aiming for more consistent, efficient DevOps within strict security and compliance frameworks.
+
+The transition enabled Microsoft 365 developers to focus more on innovation and iterating quickly, leveraging the benefits of AKS like security-optimized hosting, automated compliance checks, and centralized capacity management, thereby accelerating development while optimizing resource utilization and costs.
+
+For more information, visit [Microsoft's E+D Windows AKS customer story](https://customers.microsoft.com/story/1536483517282553662-modernizing-microsoft-365-windows-containers-azure-kubernetes-service).
api-management Api Version Retirement Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/api-version-retirement-sep-2023.md
documentationcenter: ''
Previously updated : 07/25/2022 Last updated : 11/06/2023
After 30 September 2023, if you prefer not to update your tools, scripts, and pr
## Required action
+Update your tools, scripts, and programs using the details in the following section.
+
+We also recommend setting the **Minimum API version** in your API Management instance.
+
+### Update your tools, scripts, and programs
+ * **ARM, Bicep, or Terraform templates** - Update the template to use API version 2021-08-01 or later. * **Azure CLI** - Run `az version` to check your version. If you're running version 2.42.0 or later, no action is required. Use the `az upgrade` command to upgrade the Azure CLI if necessary. For more information, see [How to update the Azure CLI](/cli/azure/update-azure-cli).
After 30 September 2023, if you prefer not to update your tools, scripts, and pr
* Python: 3.0.0 - JavaScript: 8.0.1 - Java: 1.0.0-beta3
- ## More information
+
+### Update Minimum API version setting on your API Management instance
+
+We recommend setting the **Minimum API version** for your API Management instance using the Azure portal. This setting limits control plane API calls to your instance to API versions equal to or newer than this value. Currently, you can set it to **2019-12-01**. If you prefer to script the change, see the CLI sketch after the following steps.
+
+To set the **Minimum API version** in the portal:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **Deployment + infrastructure**, select **Management API**.
+1. Select the **Management API settings** tab.
+1. Under **Prevent users with read-only permissions from accessing service secrets**, select **Yes**. The **Minimum API version** appears.
+1. Select **Save**.
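If you'd rather script the setting, here is a minimal sketch with `az rest` (assuming the ARM `apiVersionConstraint.minApiVersion` property; subscription, resource group, and service names are placeholders):

```azurecli
# Set the minimum control plane API version on an API Management instance (placeholder names)
az rest --method patch \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>?api-version=2021-08-01" \
    --body '{"properties": {"apiVersionConstraint": {"minApiVersion": "2019-12-01"}}}'
```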
+
+## More information
* [Azure CLI](/cli/azure/update-azure-cli) * [Azure PowerShell](/powershell/azure/install-azure-powershell)
After 30 September 2023, if you prefer not to update your tools, scripts, and pr
* [Bicep](../../azure-resource-manager/bicep/overview.md) * [Microsoft Q&A](/answers/topics/azure-api-management.html)
-## Next steps
+## Related content
See all [upcoming breaking changes and feature retirements](overview.md).
api-management Integrate Vnet Outbound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/integrate-vnet-outbound.md
Previously updated : 09/20/2023 Last updated : 11/20/2023 # Integrate an Azure API Management instance with a private VNet for outbound connections (preview)
When an API Management instance is integrated with a virtual network for outboun
- The network must be deployed in the same region and subscription as your API Management instance - (Optional) For testing, a sample backend API hosted within a different subnet in the virtual network. For example, see [Tutorial: Establish Azure Functions private site access](../azure-functions/functions-create-private-site-access.md).
+### Permissions
+
+You must have at least the following role-based access control permissions on the subnet or at a higher level to configure virtual network integration:
+
+| Action | Description |
+|-|-|
+| Microsoft.Network/virtualNetworks/read | Read the virtual network definition |
+| Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition |
+| Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network |
+
+### Register Microsoft.Web resource provider
+
+Ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
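You can also register the provider from the Azure CLI:

```azurecli
# Register the Microsoft.Web resource provider on the subscription that has the virtual network
az provider register --namespace Microsoft.Web

# Check the registration state
az provider show --namespace Microsoft.Web --query registrationState
```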
+ ## Delegate the subnet The subnet used for integration must be delegated to the **Microsoft.Web/serverFarms** service. In the subnet settings, in **Delegate subnet to a service**, select **Microsoft.Web/serverFarms**. :::image type="content" source="media/integrate-vnet-outbound/delegate-subnet.png" alt-text="Screenshot of delegating the subnet to a service in the portal.":::
-For details, see [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md).
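The same delegation sketched with the Azure CLI (resource names are placeholders):

```azurecli
# Delegate the integration subnet to Microsoft.Web/serverFarms (placeholder names)
az network vnet subnet update --resource-group <resource-group> \
    --vnet-name <vnet-name> --name <subnet-name> \
    --delegations Microsoft.Web/serverFarms
```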
- ## Enable VNet integration This section will guide you through the process of enabling VNet integration for your Azure API Management instance.
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
Title: 'Tutorial: Linux Java app with Quarkus and PostgreSQL' description: Learn how to get a data-driven Linux Quarkus app working in Azure App Service, with connection to a PostgreSQL running in Azure.--++ ms.devlang: java Previously updated : 5/27/2022 Last updated : 11/30/2023 # Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL
-This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on Azure.
-When you are finished, you will have a [Quarkus](https://quarkus.io) application storing data in [PostgreSQL](../postgresql/index.yml) database running on [Azure App Service on Linux](overview.md).
--
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a App Service on Azure
-> * Create a PostgreSQL database on Azure
-> * Deploy the sample app to Azure App Service
-> * Connect a sample app to the database
-> * Stream diagnostic logs from App Service
-> * Add additional instances to scale out the sample app
--
-## Prerequisites
-
-* [Azure CLI](/cli/azure/overview), installed on your own computer.
-* [Git](https://git-scm.com/)
-* [Java JDK](/azure/developer/java/fundamentals/java-support-on-azure)
-* [Maven](https://maven.apache.org)
-
-## Clone the sample app and prepare the repo
-
-This tutorial uses a sample Fruits list app with a web UI that calls a Quarkus REST API backed by [Azure Database for PostgreSQL](../postgresql/index.yml). The code for the app is available [on GitHub](https://github.com/quarkusio/quarkus-quickstarts/tree/main/hibernate-orm-panache-quickstart). To learn more about writing Java apps using Quarkus and PostgreSQL, see the [Quarkus Hibernate ORM with Panache Guide](https://quarkus.io/guides/hibernate-orm-panache) and the [Quarkus Datasource Guide](https://quarkus.io/guides/datasource).
-
-Run the following commands in your terminal to clone the sample repo and set up the sample app environment.
-
-```bash
-git clone https://github.com/quarkusio/quarkus-quickstarts
-cd quarkus-quickstarts/hibernate-orm-panache-quickstart
+This tutorial shows how to build, configure, and deploy a secure [Quarkus](https://quarkus.io) application in Azure App Service that's connected to a PostgreSQL database (using [Azure Database for PostgreSQL](../postgresql/index.yml)). Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a Quarkus app running on [Azure App Service on Linux](overview.md).
++
+**To complete this tutorial, you'll need:**
+
+* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java/).
+* Knowledge of Java with [Quarkus](https://quarkus.io) development.
+
+## 1. Run the sample application
+
+The tutorial uses [Quarkus sample: Hibernate ORM with Panache and RESTEasy](https://github.com/Azure-Samples/msdocs-quarkus-postgresql-sample-app), which comes with a [dev container](https://docs.github.com/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers) configuration. The easiest way to run it is in a GitHub codespace.
+
+ :::column span="2":::
+ **Step 1:** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-quarkus-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-quarkus-postgresql-sample-app).
+ 1. Select **Fork**.
+ 1. Select **Create fork**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the GitHub fork, select **Code** > **Create codespace on main**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-2.png" alt-text="A screenshot showing how create a codespace in GitHub." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** In the codespace terminal:
+ 1. Run `mvn quarkus:dev`.
+ 1. When you see the notification `Your application running on port 8080 is available.`, select **Open in Browser**. If you see a notification with port 5005, skip it.
+ You should see the sample application in a new browser tab.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-3.png" alt-text="A screenshot showing how to run the sample application inside the GitHub codespace." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-3.png":::
+ :::column-end:::
+
+For more information on how the Quarkus sample application is created, see Quarkus documentation [Simplified Hibernate ORM with Panache](https://quarkus.io/guides/hibernate-orm-panache) and [Configure data sources in Quarkus](https://quarkus.io/guides/datasource).
+
+## 2. Create App Service and PostgreSQL
+
+First, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for PostgreSQL. For the creation process, you'll specify:
+
+* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`.
+* The **Region** to run the app physically in the world.
+* The **Runtime stack** for the app. It's where you select the version of Java to use for your app.
+* The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app.
+* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
+
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+
+ :::column span="2":::
+ **Step 1:** In the Azure portal:
+ 1. Enter "web app database" in the search bar at the top of the Azure portal.
+ 1. Select the item labeled **Web App + Database** under the **Marketplace** heading.
+ You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the **Create Web App + Database** page, fill out the form as follows.
+ 1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-quarkus-postgres-tutorial**.
+ 1. *Region* &rarr; Any Azure region near you.
+ 1. *Name* &rarr; **msdocs-quarkus-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
+ 1. *Runtime stack* &rarr; **Java 17**.
+ 1. *Java web server stack* &rarr; **Java SE (Embedded Web Server)**.
+ 1. *Database* &rarr; **PostgreSQL - Flexible Server**. The server name and database name are set by default to appropriate values.
+ 1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
+ 1. Select **Review + create**.
+ 1. After validation completes, select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ - **Resource group** &rarr; The container for all the created resources.
+ - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
+ - **App Service** &rarr; Represents your app and runs in the App Service plan.
+ - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
+ - **Azure Database for PostgreSQL flexible server** &rarr; Accessible only from within the virtual network. A database and a user are created for you on the server.
+ - **Private DNS zone** &rarr; Enables DNS resolution of the PostgreSQL server in the virtual network.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-3.png":::
+ :::column-end:::
+
+## 3. Verify connection settings
+
+The creation wizard generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings). App settings are one way to keep connection secrets out of your code repository. When you're ready to move your secrets to a more secure location, you can use [Key Vault references](app-service-key-vault-references.md) instead.
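If you'd rather verify from the Azure CLI, here is a quick sketch (resource names are placeholders):

```azurecli
# Confirm the generated connection string app setting exists (placeholder names)
az webapp config appsettings list --resource-group <resource-group> --name <app-name> \
    --query "[?name=='AZURE_POSTGRESQL_CONNECTIONSTRING'].name" -o tsv
```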
+
+ :::column span="2":::
+ **Step 1:** In the App Service page, in the left menu, select **Configuration**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-get-connection-string-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. It's injected at runtime as an environment variable.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-get-connection-string-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** In the **Application settings** tab of the **Configuration** page, select **New application setting**. Name the setting `PORT` and set its value to `8080`, which is the default port of the Quarkus application. Select **OK**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-app-service-app-setting.png" alt-text="A screenshot showing how to set the PORT app setting in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-app-service-app-setting.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4:** Select **Save**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-app-service-app-setting-save.png" alt-text="A screenshot showing how to save the PORT app setting in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-app-service-app-setting-save.png":::
+ :::column-end:::
++
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 4. Deploy sample code
+
+In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action.
+
+Note the following:
+
+- Your deployed Java package must be an [Uber-Jar](https://quarkus.io/guides/maven-tooling#uber-jar-maven).
+- For simplicity of the tutorial, you'll disable tests during the deployment process. The GitHub Actions runners don't have access to the PostgreSQL database in Azure, so any integration tests that require database access fail, as is the case with the Quarkus sample application.
+
+ :::column span="2":::
+ **Step 1:** Back in the App Service page, in the left menu, select **Deployment Center**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-1.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+ 1. In **Repository**, select **msdocs-quarkus-postgresql-sample-app**.
+ 1. In **Branch**, select **main**.
+ 1. In **Authentication type**, select **User-assigned identity (Preview)**.
+ 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** Back in the GitHub codespace of your sample fork,
+ 1. Open *src/main/resources/application.properties* in the explorer. Quarkus uses this file to load Java properties.
+ 1. Add a production property `%prod.quarkus.datasource.jdbc.url=${AZURE_POSTGRESQL_CONNECTIONSTRING}`.
+ This property sets the production data source URL to the app setting that the creation wizard generated for you.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-3.png" alt-text="A screenshot showing a GitHub codespace and the application.properties file opened." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4:**
+ 1. Open *.github/workflows/main_msdocs-quarkus-postgres-XYZ.yml* in the explorer. This file was created by the App Service create wizard.
+ 1. Under the `Build with Maven` step, change the Maven command to `mvn clean install -DskipTests -Dquarkus.package.type=uber-jar`.
+ `-DskipTests` skips the tests in your Quarkus project, and `-Dquarkus.package.type=uber-jar` creates an Uber-Jar that App Service needs.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing a GitHub codespace and a GitHub workflow YAML opened." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5:**
+ 1. Select the **Source Control** extension.
+ 1. In the textbox, type a commit message like `Configure DB and deployment workflow`.
+ 1. Select **Commit**, then confirm with **Yes**.
+ 1. Select **Sync changes 2**, then confirm with **OK**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-5.png" alt-text="A screenshot showing the changes being committed and pushed to GitHub." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6:** Back in the Deployment Center page in the Azure portal:
+ 1. Select **Logs**. A new deployment run is already started from your committed changes.
+ 1. In the log item for the deployment run, select the **Build/Deploy Logs** entry with the latest timestamp.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-6.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 7:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-7.png":::
+ :::column-end:::
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 5. Browse to the app
+
+ :::column span="2":::
+ **Step 1:** In the App Service page:
+ 1. From the left menu, select **Overview**.
+ 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** Add a few fruits to the list.
+ Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Quarkus web app with PostgreSQL running in Azure showing a list of fruits." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-2.png":::
+ :::column-end:::
+
+## 6. Stream diagnostic logs
+
+Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample application includes standard JBoss logging statements to demonstrate this capability as shown below.
++
+ :::column span="2":::
+ **Step 1:** In the App Service page:
+ 1. From the left menu, select **App Service logs**.
+ 1. Under **Application logging**, select **File System**.
+ 1. In the top menu, select **Save**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-stream-diagnostic-logs-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-stream-diagnostic-logs-2.png":::
+ :::column-end:::
+
+Learn more about logging in Java apps in the series on [Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](../azure-monitor/app/opentelemetry-enable.md?tabs=java).
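The same logging setup can be scripted; here is a minimal sketch with the Azure CLI (resource names are placeholders):

```azurecli
# Enable filesystem application logging, then stream the logs (placeholder names)
az webapp log config --resource-group <resource-group> --name <app-name> \
    --application-logging filesystem
az webapp log tail --resource-group <resource-group> --name <app-name>
```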
+
+## 7. Clean up resources
+
+When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
+
+ :::column span="2":::
+ **Step 1:** In the search bar at the top of the Azure portal:
+ 1. Enter the resource group name.
+ 1. Select the resource group.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the resource group page, select **Delete resource group**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:**
+ 1. Enter the resource group name to confirm your deletion.
+ 1. Select **Delete**.
+ 1. Confirm with **Delete** again.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-3.png":::
+ :::column-end:::
+
+## Troubleshooting
+
+#### I see the error log "ERROR [org.acm.hib.orm.pan.ent.FruitEntityResource] (vert.x-eventloop-thread-0) Failed to handle request: jakarta.ws.rs.NotFoundException: HTTP 404 Not Found".
+
+This is a Vert.x error (see [Quarkus Reactive Architecture](https://quarkus.io/guides/quarkus-reactive-architecture)), indicating that the client requested an unknown path. This error happens on every app startup because App Service verifies that the app starts by sending a `GET` request to `/robots933456.txt`.
+
+#### The app failed to start and shows the following error in log: "Model classes are defined for the default persistence unit \<default> but configured datasource \<default> not found: the default EntityManagerFactory will not be created."
+
+This Quarkus error is most likely because the app can't connect to the Azure database. Make sure that the app setting `AZURE_POSTGRESQL_CONNECTIONSTRING` hasn't been changed, and that *application.properties* is using the app setting properly.
+
+## Frequently asked questions
+
+- [How much does this setup cost?](#how-much-does-this-setup-cost)
+- [How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-postgresql-server-thats-secured-behind-the-virtual-network-with-other-tools)
+- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions)
+- [What if I want to run tests with PostgreSQL during the GitHub workflow?](#what-if-i-want-to-run-tests-with-postgresql-during-the-github-workflow)
+
+#### How much does this setup cost?
+
+Pricing for the created resources is as follows:
+
+- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
+- The PostgreSQL flexible server is created in the lowest burstable tier **Standard_B1ms**, with the minimum storage size, which can be scaled up or down. See [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
+- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
+- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
+
+#### How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?
+
+- For basic access from a command-line tool, you can run `psql` from the app's SSH terminal (see the sketch after this list).
+- To connect from a desktop tool, your machine must be within the virtual network. For example, it could be an Azure VM in one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
+- You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network.
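For the first option, here is a minimal sketch (assuming the `psql` client is available in the App Service container; all names are placeholders):

```bash
# Open an SSH session into the running App Service container (placeholder names)
az webapp ssh --resource-group <resource-group> --name <app-name>

# Inside the SSH session, connect to the flexible server (placeholder host, database, and user)
psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=<database> user=<user> sslmode=require"
```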
+
+#### How does local app development work with GitHub Actions?
+
+Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push to GitHub. For example:
+
+```terminal
+git add .
+git commit -m "<some-message>"
+git push origin main
```
-## Create an App Service on Azure
-
-1. Sign in to your Azure CLI, and optionally set your subscription if you have more than one connected to your sign-in credentials.
-
- ```azurecli
- az login
- az account set -s <your-subscription-id>
- ```
-
-2. Create an Azure Resource Group, noting the resource group name (referred to with `$RESOURCE_GROUP` later on)
-
- ```azurecli
- az group create \
- --name <a-resource-group-name> \
- --location <a-resource-group-region>
- ```
-
-3. Create an App Service Plan. The App Service Plan is the compute container, it determines your cores, memory, price, and scale.
-
- ```azurecli
- az appservice plan create \
- --name "quarkus-tutorial-app-service-plan" \
- --resource-group $RESOURCE_GROUP \
- --sku B2 \
- --is-linux
- ```
-
-4. Create an app service within the App Service Plan.
-
- ```azurecli
- WEBAPP_NAME=<a unique name>
- az webapp create \
- --name $WEBAPP_NAME \
- --resource-group $RESOURCE_GROUP \
- --runtime "JAVA|11-java11" \
- --plan "quarkus-tutorial-app-service-plan"
- ```
-
-> [!IMPORTANT]
-> The `WEBAPP_NAME` must be **unique across all Azure**. A good pattern is to use a combination of your company name or initials of your name along with a good webapp name, for example `johndoe-quarkus-app`.
-
-## Create an Azure PostgreSQL Database
-
-Follow these steps to create an Azure PostgreSQL database in your subscription. The Quarkus Fruits app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
-
-1. Create the database service.
-
- ```azurecli
- DB_SERVER_NAME='msdocs-quarkus-postgres-webapp-db'
- ADMIN_USERNAME='demoadmin'
- ADMIN_PASSWORD='<admin-password>'
-
- az postgres server create \
- --resource-group $RESOURCE_GROUP \
- --name $DB_SERVER_NAME \
- --location $LOCATION \
- --admin-user $ADMIN_USERNAME \
- --admin-password $ADMIN_PASSWORD \
- --sku-name GP_Gen5_2
- ```
-
- The following parameters are used in the above Azure CLI command:
-
- * *resource-group* &rarr; Use the same resource group name in which you created the web app, for example `msdocs-quarkus-postgres-webapp-rg`.
- * *name* &rarr; The PostgreSQL database server name. This name must be **unique across all Azure** (the server endpoint becomes `https://<name>.postgres.database.azure.com`). Allowed characters are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and server identifier. (`msdocs-quarkus-postgres-webapp-db`)
- * *location* &rarr; Use the same location used for the web app.
- * *admin-user* &rarr; Username for the administrator account. It can't be `azure_superuser`, `admin`, `administrator`, `root`, `guest`, or `public`. For example, `demoadmin` is okay.
- * *admin-password* Password of the administrator user. It must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
-
- > [!IMPORTANT]
- > When creating usernames or passwords **do not** use the `$` character. Later you create environment variables with these values where the `$` character has special meaning within the Linux container used to run Java apps.
-
- * *public-access* &rarr; `None` which sets the server in public access mode with no firewall rules. Rules will be created in a later step.
- * *sku-name* &rarr; The name of the pricing tier and compute configuration, for example `GP_Gen5_2`. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
-
-2. Configure the firewall rules on your server by using the [az postgres server firewall-rule create](/cli/azure/postgres/flexible-server/firewall-rule) command to give your local environment access to connect to the server.
-
- ```azurecli
- az postgres server firewall-rule create \
- --resource-group $RESOURCE_GROUP_NAME \
- --server-name $DB_SERVER_NAME \
- --name AllowMyIP \
- --start-ip-address <your IP> \
- --end-ip-address <your IP>
+#### What if I want to run tests with PostgreSQL during the GitHub workflow?
+
+The default Quarkus sample application includes tests with database connectivity. To avoid connection errors, you added the `-DskipTests` property. If you want, you can run the tests against a PostgreSQL service container. For example, in the automatically generated workflow file in your GitHub fork (*.github/workflows/main_cephalin-quarkus.yml*), make the following changes:
+
+1. Add YAML code for the PostgreSQL container to the `build` job, as shown in the following snippet.
+
+ ```yml
+    ...
+    jobs:
+      build:
+        runs-on: ubuntu-latest
+
+        # BEGIN CODE ADDITION
+        container: ubuntu
+
+        services:
+          # Hostname for the PostgreSQL container
+          postgresdb:
+            image: postgres
+            env:
+              POSTGRES_PASSWORD: postgres
+              POSTGRES_USER: postgres
+              POSTGRES_DB: postgres
+            # Set health checks to wait until postgres has started
+            options: >-
+              --health-cmd pg_isready
+              --health-interval 10s
+              --health-timeout 5s
+              --health-retries 5
+
+        # END CODE ADDITION
+
+        steps:
+          - uses: actions/checkout@v4
+          ...
```
+
+ `container: ubuntu` tells GitHub to run the `build` job in a container, and the `services` block defines the `postgresdb` service container. This way, the connection string in your dev environment `jdbc:postgresql://postgresdb:5432/postgres` can work as-is when the workflow runs. For more information about PostgreSQL connectivity in GitHub Actions, see [Creating PostgreSQL service containers](https://docs.github.com/en/actions/using-containerized-services/creating-postgresql-service-containers).
- Also, once your application runs on App Service, you'll need to give it access as well. run the following command to allow access to the database from services within Azure:
+1. In the `Build with Maven` step, remove `-DskipTests`. For example:
- ```azurecli
- az postgres server firewall-rule create \
- --resource-group $RESOURCE_GROUP_NAME \
- --server-name $DB_SERVER_NAME \
- --name AllowAllWindowsAzureIps \
- --start-ip-address 0.0.0.0 \
- --end-ip-address 0.0.0.0
+ ```yml
+ - name: Build with Maven
+ run: mvn clean install -Dquarkus.package.type=uber-jar
```
-3. Create a database named `fruits` within the Postgres service with this command:
-
- ```azurecli
- az postgres db create \
- --resource-group $RESOURCE_GROUP \
- --server-name $DB_SERVER_NAME \
- --name fruits
- ```
-
-## Configure the Quarkus app properties
-
-Quarkus configuration is located in the `src/main/resources/application.properties` file. Open this file in your editor, and observe several default properties. The properties prefixed with `%prod` are only used when the application is built and deployed, for example when deployed to Azure App Service. When the application runs locally, `%prod` properties are ignored. Similarly, `%dev` properties are used in Quarkus' Live Coding / Dev mode, and `%test` properties are used during continuous testing.
-
-Delete the existing content in `application.properties` and replace with the following to configure our database for dev, test, and production modes:
-
-```properties
-quarkus.package.type=uber-jar
-%dev.quarkus.datasource.db-kind=h2
-%dev.quarkus.datasource.jdbc.url=jdbc:h2:mem:fruits
-
-%test.quarkus.datasource.db-kind=h2
-%test.quarkus.datasource.jdbc.url=jdbc:h2:mem:fruits
-
-%prod.quarkus.datasource.db-kind=postgresql
-%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${DBHOST}.postgres.database.azure.com:5432/${DBNAME}?user=${DBUSER}@${DBHOST}&password=${DBPASS}
-%prod.quarkus.hibernate-orm.sql-load-script=import.sql
-
-quarkus.hibernate-orm.database.generation=drop-and-create
-```
-
-> [!IMPORTANT]
-> Be sure to keep the dollar signs and braces intact when copying and pasting the above for the variables `${DBHOST}`, `${DBNAME}`, `${DBUSER}`, and `${DBPASS}`. We'll set the actual values later in our environment so that we don't expose them hard-coded in the properties file, and so that we can change them without having to re-deploy the app.
-
-## Run the sample app locally
-
-Use Maven to run the sample.
-
-```bash
-mvn quarkus:dev
-```
-
-> [!IMPORTANT]
-> Be sure you have the H2 JDBC driver installed. You can add it using the following Maven command: `./mvnw quarkus:add-extension -Dextensions="jdbc-h2"`.
-
-This will build the app, run its unit tests, and then start the application in developer live coding. You should see:
-
-```output
-__ ____ __ _____ ___ __ ____ ______
- --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
- -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
\___\_\____/_/ |_/_/|_/_/|_|\____/___/
-INFO [io.quarkus] (Quarkus Main Thread) hibernate-orm-panache-quickstart 1.0.0-SNAPSHOT on JVM (powered by Quarkus x.x.x.Final) started in x.xxxs. Listening on: http://localhost:8080
-
-INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
-INFO [io.quarkus] (Quarkus Main Thread) Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, jdbc-h2, jdbc-postgresql, narayana-jta, resteasy-reactive, resteasy-reactive-jackson, smallrye-context-propagation, vertx]
-```
-
-You can access Quarkus app locally by typing the `w` character into the console, or using this link once the app is started: `http://localhost:8080/`.
--
-If you see exceptions in the output, double-check that the configuration values for `%dev` are correct.
-
-> [!TIP]
-> You can enable continuous testing by typing `r` into the terminal. This will continuously run tests as you develop the application. You can also use Quarkus' *Live Coding* to see changes to your Java or `pom.xml` immediately. Simlply edit code and reload the browser.
-
-When you're done testing locally, shut down the application with `CTRL-C` or type `q` in the terminal.
-
-## Configure App Service for Database
-
-Our Quarkus app is expecting various environment variables to configure the database. Add these to the App Service environment with the following command:
-
-```azurecli
-az webapp config appsettings set \
- -g $RESOURCE_GROUP \
- -n $WEBAPP_NAME \
- --settings \
- 'DBHOST=$DB_SERVER_NAME' \
- 'DBNAME=fruits' \
- 'DBUSER=$ADMIN_USERNAME' \
- 'DBPASS=$ADMIN_PASSWORD' \
- 'PORT=8080' \
- 'WEBSITES_PORT=8080'
-```
-
-> [!NOTE]
-> The use of single quotes (`'`) to surround the settings is required if your password has special characters.
-
-Be sure to replace the values for `$RESOURCE_GROUP`, `$WEBAPP_NAME`, `$DB_SERVER_NAME`, `$ADMIN_USERNAME`, and `$ADMIN_PASSWORD` with the relevant values from previous steps.
-
-## Deploy to App Service on Linux
-
-Build the production JAR file using the following command:
-
-```azurecli
-mvn clean package
-```
-
-The final result will be a JAR file in the `target/` subfolder.
-
-To deploy applications to Azure App Service, developers can use the [Maven Plugin for App Service](/training/modules/publish-web-app-with-maven-plugin-for-azure-app-service/), [VSCode Extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice), or the Azure CLI to deploy apps. Use the following command to deploy our app to the App Service:
-
-```azurecli
-az webapp deploy \
- --resource-group $RESOURCE_GROUP \
- --name $WEBAPP_NAME \
- --src-path target/*.jar --type jar
-```
-
-You can then access the application using the following command:
-
-```azurecli
-az webapp browse \
- --resource-group $RESOURCE_GROUP \
- --name $WEBAPP_NAME
-```
-
-> [!TIP]
-> You can also manually open the location in your browser at `http://<webapp-name>.azurewebsites.net`. It may take a minute or so to upload the app and restart the App Service.
-
-You should see the app running with the remote URL in the address bar:
--
-If you see errors, use the following section to access the log file from the running app:
-
-## Stream diagnostic logs
--
-## Scale out the app
-
-Scale out the application by adding another worker:
-
-```azurecli
-az appservice plan update --number-of-workers 2 \
- --name quarkus-tutorial-app-service-plan \
- --resource-group $RESOURCE_GROUP
-```
-
-## Clean up resources
-
-If you don't need these resources for another tutorial (see [Next steps](#next-steps)), you can delete them by running the following command in the Cloud Shell or on your local terminal:
-
-```azurecli
-az group delete --name $RESOURCE_GROUP --yes
-```
- ## Next steps
-[Azure for Java Developers](/java/azure/)
-[Quarkus](https://quarkus.io),
-[Getting Started with Quarkus](https://quarkus.io/get-started/),
-and
-[App Service Linux](overview.md).
+- [Azure for Java Developers](/java/azure/)
+- [Quarkus](https://quarkus.io)
+- [Getting Started with Quarkus](https://quarkus.io/get-started/)
-Learn more about running Java apps on App Service on Linux in the developer guide.
+Learn more about running Java apps on App Service in the developer guide.
> [!div class="nextstepaction"]
-> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
+> [Configure a Java app in Azure App Service](configure-language-java.md?pivots=platform-linux)
Learn how to secure your app with a custom domain and certificate.
app-service Tutorial Secure Domain Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-domain-certificate.md
For more information on app scaling, see [Scale up an app in Azure App Service](
1. Don't select **Validate** yet. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-secure-domain-certificate/configure-custom-domain.png" alt-text="A screenshot showing how to configure a new custom domain, along with a managed certificate." lightbox="./media/tutorial-secure-domain-certificate/add-custom-domain.png" border="true":::
+ :::image type="content" source="./media/tutorial-secure-domain-certificate/configure-custom-domain.png" alt-text="A screenshot showing how to configure a new custom domain, along with a managed certificate." lightbox="./media/tutorial-secure-domain-certificate/configure-custom-domain.png" border="true":::
:::column-end::: :::row-end:::
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023 #
azure-arc Deliver Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md
The status of the selected machines changes to **Enabled**.
If any problems occur during the enablement process, see [Troubleshoot delivery of Extended Security Updates for Windows Server 2012](troubleshoot-extended-security-updates.md) for assistance.
+## At-scale Azure Policy
+
+For at-scale linking of servers to an Azure Arc Extended Security Update license, and for locking down license creation or modification, consider using the following built-in Azure policies:
+
+- [Enable Extended Security Updates (ESUs) license to keep Windows 2012 machines protected after their support lifecycle has ended (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4864134f-d306-4ff5-94d8-ea4553b18c97)
+
+- [Deny Extended Security Updates (ESUs) license creation or modification (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4c660f31-eafb-408d-a2b3-6ed2260bd26c)
+
+Azure Policy assignments can be scoped to a target subscription or resource group for both auditing and management scenarios.
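For example, the first definition above could be assigned at resource-group scope with the Azure CLI. This is a sketch with placeholder scope values; the GUID is the policy definition ID taken from the link above:

```azurecli
az policy assignment create \
    --name 'enable-esu-licenses' \
    --policy '4864134f-d306-4ff5-94d8-ea4553b18c97' \
    --scope '/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>'
```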
+ ## Additional scenarios There are some scenarios in which you may be eligible to receive Extended Security Updates patches at no additional cost. Two of these scenarios supported by Azure Arc include the following:
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-arc Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/disaster-recovery.md
Title: Recover from accidental deletion of resource bridge VM
description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled System Center Virtual Machine Manager disaster scenarios. Previously updated : 11/15/2023 Last updated : 11/29/2023 ms.
# Recover from accidental deletion of resource bridge virtual machine
-In this article, you'll learn how to recover the Azure Arc resource bridge connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail.
+In this article, you learn how to recover the Azure Arc resource bridge connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail.
## Recover the Arc resource bridge in case of virtual machine deletion To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps.
+>[!Note]
+> This note is applicable only if you're performing this recovery operation to upgrade your Arc resource bridge.<br><br>
+> If you have VMs that are still in the older version, i.e., have *Enabled (Deprecated)* set under the *Virtual hardware operations* column in the Virtual Machines inventory of your SCVMM server in Azure, switch them to the new version by following the steps in [this article](./switch-to-the-new-version-scvmm.md#switch-to-the-new-version-existing-customer) before proceeding with the steps for resource bridge recovery.
+ 1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and SCVMM Azure resources. 2. Find and delete the old Arc resource bridge resource under the [Resource Bridges tab from the Azure Arc center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/resourceBridges).
To recover from Arc resource bridge VM deletion, you need to deploy a new resour
$vmmserverName= <SCVMM-name-in-azure> ```
-4. [Run the onboarding script](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#download-the-onboarding-script) again with the `--force` parameter.
+4. [Run the onboarding script](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#download-the-onboarding-script) again with the `-Force` parameter.
``` powershell-interactive
- ./resource-bridge-onboarding-script.ps1 --force
+ ./resource-bridge-onboarding-script.ps1 -Force
``` 5. [Provide the inputs](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#script-runtime) as prompted. 6. In the same machine, run the following scripts, as applicable:
- - [Download the script](https://download.microsoft.com/download/6/b/4/6b4a5009-fed8-46c2-b22b-b24a4d0a06e3/arcvmm-appliance-dr.ps1) if you are running the script from a Windows machine
- - [Download the script](https://download.microsoft.com/download/0/5/c/05c2bcb8-87f8-4ead-9757-a87a0759071c/arcvmm-appliance-dr.sh) if you are running the script from a Linux machine
+ - [Download the script](https://download.microsoft.com/download/6/b/4/6b4a5009-fed8-46c2-b22b-b24a4d0a06e3/arcvmm-appliance-dr.ps1) if you're running the script from a Windows machine
+ - [Download the script](https://download.microsoft.com/download/0/5/c/05c2bcb8-87f8-4ead-9757-a87a0759071c/arcvmm-appliance-dr.sh) if you're running the script from a Linux machine
-7. Once the script is run successfully, the old Resource Bridge will be recovered and the connection is re-established to the existing Azure-enabled SCVMM resources.
+7. After the script runs successfully, the old resource bridge is recovered and the connection to the existing Azure-enabled SCVMM resources is re-established.
## Next steps
azure-arc Install Arc Agents Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script.md
Title: Install Arc agent using a script for SCVMM VMs description: Learn how to enable guest management using a script for Arc enabled SCVMM VMs. Previously updated : 11/15/2023 Last updated : 11/29/2023
# Install Arc agents using a script
-In this article, you'll learn how to install Arc agents on Azure-enabled SCVMM VMs using a script.
+In this article, you learn how to install Arc agents on Azure-enabled SCVMM VMs using a script.
## Prerequisites
Ensure the following before you install Arc agents using a script for SCVMM VMs:
- Is running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). - Is able to connect through the firewall to communicate over the Internet and [these URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. - Has Azure CLI [installed](https://learn.microsoft.com/cli/azure/install-azure-cli).
- - Has the Arc agent installation script downloaded from [here](https://download.microsoft.com/download/7/1/6/7164490e-6d8c-450c-8511-f8191f6ec110/arcscvmm-enable-guest-management.ps1).
+ - Has the Arc agent installation script downloaded from [here](https://download.microsoft.com/download/7/1/6/7164490e-6d8c-450c-8511-f8191f6ec110/arcscvmm-enable-guest-management.ps1) for a Windows VM or from [here](https://download.microsoft.com/download/0/9/b/09bd9ef4-a7af-49e5-ad5f-9e8f85fae75b/arcscvmm-enable-guest-management.sh) for a Linux VM.
>[!NOTE] >- If you're using a Linux VM, the account must not prompt for a password on sudo commands. To override the prompt, from a terminal, run `sudo visudo` and add `<username> ALL=(ALL) NOPASSWD:ALL` at the end of the file. Ensure you replace `<username>`.
Ensure the following before you install Arc agents using a script for SCVMM VMs:
## Steps to install Arc agents using a script
-1. Log in to the target VM as an administrator.
+1. Sign in to the target VM as an administrator.
2. Run the Azure CLI with the `az` command from either Windows Command Prompt or PowerShell.
-3. Log in to your Azure account in Azure CLI using `az login --use-device-code`
-4. Run the downloaded script *arcscvmm-enable-guest-management.ps1*. The `vmmServerId` parameter should denote your VMM Server's ARM ID.
+3. Sign in to your Azure account in Azure CLI using `az login --use-device-code`.
+4. Run the downloaded script *arcscvmm-enable-guest-management.ps1* or *arcscvmm-enable-guest-management.sh*, as applicable, using the following commands. The `vmmServerId` parameter should denote your VMM Server's ARM ID.
-```azurecli
-./arcscvmm-enable-guest-management.ps1 -<vmmServerId> '/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.ScVmm/vmmServers/<vmmServerName>
-```
+ **For a Windows VM:**
+
+ ```azurecli
+ ./arcscvmm-enable-guest-management.ps1 -vmmServerId '/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.ScVmm/vmmServers/<vmmServerName>'
+ ```
+
+ **For a Linux VM:**
+
+ ```azurecli
+ ./arcscvmm-enable-guest-management.sh -vmmServerId '/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.ScVmm/vmmServers/<vmmServerName>'
+ ```
## Next steps
azure-arc Switch To The New Version Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/switch-to-the-new-version-scvmm.md
Previously updated : 11/15/2023 Last updated : 11/29/2023 keywords: "VMM, Arc, Azure" #Customer intent: As a VI admin, I want to switch to the new version of Arc-enabled SCVMM and leverage the associated capabilities
If you've onboarded to Arc-enabled SCVMM before September 22, 2023, for VMs that
> If you had enabled guest management on any of the VMs, [disconnect](/azure/azure-arc/servers/manage-agent?tabs=windows#step-2-disconnect-the-server-from-azure-arc) and [uninstall agents](/azure/azure-arc/servers/manage-agent?tabs=windows#step-3a-uninstall-the-windows-agent). 1. From your browser, go to the SCVMM management servers blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the SCVMM management server resource.
-2. Select all the virtual machines that are Azure enabled with the older version.
+2. Select all the virtual machines that are Azure-enabled with the older version. Virtual machines on the older version have *Enabled (Deprecated)* set under the *Virtual hardware management* column.
3. Select **Remove from Azure**. :::image type="Virtual Machines" source="media/switch-to-the-new-version-scvmm/virtual-machines.png" alt-text="Screenshot of virtual machines."::: 4. After successful removal from Azure, enable the same resources again in Azure.
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
Title: Configure monitoring for Azure Functions description: Learn how to connect your function app to Application Insights for monitoring and how to configure data collection. Previously updated : 06/23/2022 Last updated : 11/29/2023 # Customer intent: As a developer, I want to understand how to configure monitoring for my functions correctly, so I can collect the data that I need.
If *[host.json]* includes multiple logs that start with the same string, the mor
```json { "logging": {
- "fileLoggingMode": "always",
+ "fileLoggingMode": "debugOnly",
"logLevel": { "default": "Information", "Host": "Error",
azure-functions Functions Bindings Dapr Input Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-input-secret.md
public void run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+In the following example, the Dapr secret input binding is paired with a Dapr service invocation trigger, which is registered by the `app` object:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('RetrieveSecret', {
+ trigger: trigger.generic({
+ type: 'daprServiceInvocationTrigger',
+ name: "payload"
+ }),
+ extraInputs: [daprSecretInput],
+ handler: async (request, context) => {
+ context.log("Node function processed a RetrieveSecret request from the Dapr Runtime.");
+ const daprSecretInputValue = context.extraInputs.get(daprSecretInput);
+
+ // print the fetched secret value
+ for (var key in daprSecretInputValue) {
+ context.log(`Stored secret: Key=${key}, Value=${daprSecretInputValue[key]}`);
+ }
+ }
+});
+```
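The `daprSecretInput` object referenced above is declared earlier in the full sample. A minimal sketch of such a declaration, built from the binding properties listed later in this article; the `type` string and the key and store names are illustrative assumptions, and in a single file you'd fold `input` into the existing `require`:

```javascript
const { input } = require('@azure/functions');

// Hypothetical Dapr secret input binding; adjust the key and store names to your setup.
const daprSecretInput = input.generic({
    type: 'daprSecret',                  // assumed binding type string
    key: 'my-secret',
    secretStoreName: 'localsecretstore', // as defined in local-secret-store.yaml
    metadata: 'metadata.namespace=default'
});
```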
+
+# [Node.js v3](#tab/v3)
The following examples show Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The `DaprSecretInput` annotation allows you to have your function access a secre
::: zone-end +
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description |
+|--|-|
+|**key** | The secret key value. |
+|**secretStoreName** | Name of the secret store as defined in the _local-secret-store.yaml_ component file. |
+|**metadata** | The metadata namespace. |
+
+# [Node.js v3](#tab/v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description |
+|--|-|
+|**key** | The secret key value. |
+|**secretStoreName** | Name of the secret store as defined in the _local-secret-store.yaml_ component file. |
+|**metadata** | The metadata namespace. |
++++ The following table explains the binding configuration properties that you set in the function.json file.
azure-functions Functions Bindings Dapr Input State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-input-state.md
public String run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+In the following example, the Dapr state input binding is added as an `extraInput` and paired with an HTTP trigger, which is registered by the `app` object:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('StateInputBinding', {
+ trigger: trigger.generic({
+ type: 'httpTrigger',
+ authLevel: 'anonymous',
+ methods: ['GET'],
+ route: "state/{key}",
+ name: "req"
+ }),
+ extraInputs: [daprStateInput],
+ handler: async (request, context) => {
+ context.log("Node HTTP trigger function processed a request.");
+
+ const daprStateInputValue = context.extraInputs.get(daprStateInput);
+ // print the fetched state value
+ context.log(daprStateInputValue);
+
+ return daprStateInputValue;
+ }
+});
+```
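Here, `daprStateInput` is declared earlier in the full sample. A minimal sketch of such a declaration, built from the binding properties listed later in this article; the `type` string and store name are illustrative assumptions:

```javascript
const { input } = require('@azure/functions');

// Hypothetical Dapr state input binding; '{key}' is bound from the HTTP route parameter.
const daprStateInput = input.generic({
    type: 'daprState',              // assumed binding type string
    stateStore: '%StateStoreName%', // assumed app-setting reference
    key: '{key}'
});
```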
+
+# [Node.js v3](#tab/v3)
The following examples show Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context, req) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The `DaprStateInput` annotation allows you to read Dapr state into your function
::: zone pivot="programming-language-javascript"
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description |
+|--|-|
+|**stateStore** | The name of the state store. |
+|**key** | The name of the key to retrieve from the specified state store. |
+
+# [Node.js v3](#tab/v3)
+ The following table explains the binding configuration properties that you set in the function.json file. |function.json property | Description |
The following table explains the binding configuration properties that you set i
|**stateStore** | The name of the state store. | |**key** | The name of the key to retrieve from the specified state store. | ++ ::: zone-end ::: zone pivot="programming-language-powershell"
azure-functions Functions Bindings Dapr Output Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-invoke.md
public String run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+In the following example, the Dapr invoke output binding is paired with an HTTP trigger, which is registered by the `app` object:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('InvokeOutputBinding', {
+ trigger: trigger.generic({
+ type: 'httpTrigger',
+ authLevel: 'anonymous',
+ methods: ['POST'],
+ route: "invoke/{appId}/{methodName}",
+ name: "req"
+ }),
+ return: daprInvokeOutput,
+ handler: async (request, context) => {
+ context.log("Node HTTP trigger function processed a request.");
+
+ const payload = await request.text();
+ context.log(JSON.stringify(payload));
+
+ return { body: payload };
+ }
+});
+```
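The `daprInvokeOutput` object is declared earlier in the full sample. A minimal sketch of such a declaration, built from the binding properties listed later in this article; the `type` string is an illustrative assumption, and in a single file you'd fold `output` into the existing `require`:

```javascript
const { output } = require('@azure/functions');

// Hypothetical Dapr invoke output binding; '{appId}' is bound from the HTTP route parameter.
const daprInvokeOutput = output.generic({
    type: 'daprInvoke', // assumed binding type string
    appId: '{appId}',
    methods: 'post'
});
```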
+
+# [Node.js v3](#tab/v3)
The following examples show Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context, req) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The `DaprInvokeOutput` annotation allows you to have your function invoke and li
::: zone-end +
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description| Can be sent via Attribute | Can be sent via RequestBody |
+|--|--|:--:|:--:|
+|**appId** | The app ID of the application involved in the invoke binding. | :heavy_check_mark: | :heavy_check_mark: |
+|**methods** | Post or get. | :heavy_check_mark: | :heavy_check_mark: |
+| **body** | _Required._ The body of the request. | :x: | :heavy_check_mark: |
+
+# [Node.js v3](#tab/v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description| Can be sent via Attribute | Can be sent via RequestBody |
+|--|--|:--:|:--:|
+|**appId** | The app ID of the application involved in the invoke binding. | :heavy_check_mark: | :heavy_check_mark: |
+|**methodName** | The name of the method variable. | :heavy_check_mark: | :heavy_check_mark: |
+|**httpVerb** | Post or get. | :heavy_check_mark: | :heavy_check_mark: |
+| **body** | _Required._ The body of the request. | :x: | :heavy_check_mark: |
+++++ The following table explains the binding configuration properties that you set in the function.json file. |function.json property | Description| Can be sent via Attribute | Can be sent via RequestBody |
azure-functions Functions Bindings Dapr Output Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-publish.md
public String run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+In the following example, the Dapr publish output binding is paired with an HTTP trigger, which is registered by the `app` object:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('PublishOutputBinding', {
+ trigger: trigger.generic({
+ type: 'httpTrigger',
+ authLevel: 'anonymous',
+ methods: ['POST'],
+ route: "topic/{topicName}",
+ name: "req"
+ }),
+ return: daprPublishOutput,
+ handler: async (request, context) => {
+ context.log("Node HTTP trigger function processed a request.");
+ const payload = await request.text();
+ context.log(JSON.stringify(payload));
+
+ return { payload: payload };
+ }
+});
+```
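The `daprPublishOutput` object is declared earlier in the full sample. A minimal sketch of such a declaration, built from the binding properties listed later in this article; the `type` string and app-setting reference are illustrative assumptions:

```javascript
const { output } = require('@azure/functions');

// Hypothetical Dapr publish output binding; '{topicName}' is bound from the HTTP route parameter.
const daprPublishOutput = output.generic({
    type: 'daprPublish',        // assumed binding type string
    pubsubname: '%PubSubName%', // assumed app-setting reference
    topic: '{topicName}'
});
```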
+
+# [Node.js v3](#tab/v3)
The following examples show Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context, req) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The `DaprPublishOutput` annotation allows you to have a function access a publis
::: zone-end +
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description| Can be sent via Attribute | Can be sent via RequestBody |
+|--|--|:--:|:--:|
+|**pubsubname** | The name of the publisher component service. | :heavy_check_mark: | :heavy_check_mark: |
+|**topic** | The name/identifier of the publisher topic. | :heavy_check_mark: | :heavy_check_mark: |
+| **payload** | _Required._ The message being published. | :x: | :heavy_check_mark: |
+
+# [Node.js v3](#tab/v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description| Can be sent via Attribute | Can be sent via RequestBody |
+|--|--|:--:|:--:|
+|**pubsubname** | The name of the publisher component service. | :heavy_check_mark: | :heavy_check_mark: |
+|**topic** | The name/identifier of the publisher topic. | :heavy_check_mark: | :heavy_check_mark: |
+| **payload** | _Required._ The message being published. | :x: | :heavy_check_mark: |
++++ The following table explains the binding configuration properties that you set in the _function.json_ file.
azure-functions Functions Bindings Dapr Output State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-state.md
public String run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+In the following example, the Dapr state output binding is paired with an HTTP trigger, which is registered by the `app` object:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('StateOutputBinding', {
+ trigger: trigger.generic({
+ type: 'httpTrigger',
+ authLevel: 'anonymous',
+ methods: ['POST'],
+ route: "state/{key}",
+ name: "req"
+ }),
+ return: daprStateOutput,
+ handler: async (request, context) => {
+ context.log("Node HTTP trigger function processed a request.");
+
+ const payload = await request.text();
+ context.log(JSON.stringify(payload));
+
+ return { value : payload };
+ }
+});
+```
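The `daprStateOutput` object is declared earlier in the full sample. A minimal sketch of such a declaration, built from the binding properties listed later in this article; the `type` string and store name are illustrative assumptions:

```javascript
const { output } = require('@azure/functions');

// Hypothetical Dapr state output binding; '{key}' is bound from the HTTP route parameter.
const daprStateOutput = output.generic({
    type: 'daprState',              // assumed binding type string
    stateStore: '%StateStoreName%', // assumed app-setting reference
    key: '{key}'
});
```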
+
+# [Node.js v3](#tab/v3)
The following examples show Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context, req) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The `DaprStateOutput` annotation allows your function to access a state store.
::: zone-end +
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description| Can be sent via Attribute | Can be sent via RequestBody |
+|--|--|:--:|:--:|
+| **stateStore** | The name of the state store to save state. | :heavy_check_mark: | :x: |
+| **key** | The name of the key to save state within the state store. | :heavy_check_mark: | :heavy_check_mark: |
+| **value** | _Required._ The value being stored. | :x: | :heavy_check_mark: |
+
+
+# [Node.js v3](#tab/v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description| Can be sent via Attribute | Can be sent via RequestBody |
+|--|--|:--:|:--:|
+| **stateStore** | The name of the state store to save state. | :heavy_check_mark: | :x: |
+| **key** | The name of the key to save state within the state store. | :heavy_check_mark: | :heavy_check_mark: |
+| **value** | _Required._ The value being stored. | :x: | :heavy_check_mark: |
++++ The following table explains the binding configuration properties that you set in the _function.json_ file.
azure-functions Functions Bindings Dapr Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output.md
public String run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+In the following example, the Dapr output binding is paired with a Dapr service invocation trigger, which is registered by the `app` object:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('SendMessageToKafka', {
+ trigger: trigger.generic({
+ type: 'daprServiceInvocationTrigger',
+ name: "payload"
+ }),
+ return: daprBindingOutput,
+ handler: async (request, context) => {
+ context.log("Node function processed a SendMessageToKafka request from the Dapr Runtime.");
+ context.log(context.triggerMetadata.payload)
+
+ return { "data": context.triggerMetadata.payload };
+ }
+});
+```
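The `daprBindingOutput` object is declared earlier in the full sample. A minimal sketch of such a declaration, built from the binding properties listed later in this article; the `type` string and the `create` operation are illustrative assumptions:

```javascript
const { output } = require('@azure/functions');

// Hypothetical Dapr binding output; the binding name matches the trigger in this sample.
const daprBindingOutput = output.generic({
    type: 'daprBinding',               // assumed binding type string
    bindingName: '%KafkaBindingName%',
    operation: 'create'                // hypothetical operation name
});
```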
+
+# [Node.js v3](#tab/v3)
The following examples show Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The `DaprBindingOutput` annotation allows you to create a function that sends an
::: zone-end +
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description| Can be sent via Attribute | Can be sent via RequestBody |
+|--|--|:--:|:--:|
+|**bindingName** | The name of the binding. | :heavy_check_mark: | :heavy_check_mark: |
+|**operation** | The binding operation. | :heavy_check_mark: | :heavy_check_mark: |
+| **metadata** | The metadata namespace. | :x: | :heavy_check_mark: |
+| **data** | _Required._ The data for the binding operation. | :x: | :heavy_check_mark: |
+
+
+# [Node.js v3](#tab/v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description| Can be sent via Attribute | Can be sent via RequestBody |
+|--|--|:--:|:--:|
+|**bindingName** | The name of the binding. | :heavy_check_mark: | :heavy_check_mark: |
+|**operation** | The binding operation. | :heavy_check_mark: | :heavy_check_mark: |
+| **metadata** | The metadata namespace. | :x: | :heavy_check_mark: |
+| **data** | _Required._ The data for the binding operation. | :x: | :heavy_check_mark: |
+++++ The following table explains the binding configuration properties that you set in the function.json file. |function.json property | Description| Can be sent via Attribute | Can be sent via RequestBody |
azure-functions Functions Bindings Dapr Trigger Svc Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger-svc-invoke.md
Title: Dapr Service Invocation trigger for Azure Functions description: Learn how to run Azure Functions as Dapr service invocation data changes. Previously updated : 10/11/2023 Last updated : 11/29/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
public String run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+Use the `app` object to register the `daprInvokeOutput`:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('InvokeOutputBinding', {
+ trigger: trigger.generic({
+ type: 'httpTrigger',
+ authLevel: 'anonymous',
+ methods: ['POST'],
+ route: "invoke/{appId}/{methodName}",
+ name: "req"
+ }),
+ return: daprInvokeOutput,
+ handler: async (request, context) => {
+ context.log("Node HTTP trigger function processed a request.");
+
+ const payload = await request.text();
+ context.log(JSON.stringify(payload));
+
+ return { body: payload };
+ }
+});
+```
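As in the other Dapr output examples, `daprInvokeOutput` is declared earlier in the full sample; a minimal sketch of such a declaration (the `type` string is an illustrative assumption):

```javascript
const { output } = require('@azure/functions');

// Hypothetical Dapr invoke output binding; '{appId}' is bound from the HTTP route parameter.
const daprInvokeOutput = output.generic({
    type: 'daprInvoke', // assumed binding type string
    appId: '{appId}',
    methods: 'post'
});
```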
+
+# [Node.js v3](#tab/v3)
The following examples show Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context) {
context.log(context.bindings.data); }; ```+++ ::: zone-end ::: zone pivot="programming-language-powershell"
The `DaprServiceInvocationTrigger` annotation allows you to create a function th
::: zone-end +
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description|
+|--|--|
+|**type** | Must be set to `daprServiceInvocationTrigger`.|
+|**name** | The name of the variable that represents the Dapr data in function code. |
+
+
+# [Node.js v3](#tab/v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description|
+|--|--|
+|**type** | Must be set to `daprServiceInvocationTrigger`.|
+|**name** | The name of the variable that represents the Dapr data in function code. |
++++ The following table explains the binding configuration properties that you set in the function.json file.
azure-functions Functions Bindings Dapr Trigger Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger-topic.md
Title: Dapr Topic trigger for Azure Functions description: Learn how to run Azure Functions as Dapr topic data changes. Previously updated : 10/11/2023 Last updated : 11/29/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
public String run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+Use the `app` object to register the `daprTopicTrigger`:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('TransferEventBetweenTopics', {
+ trigger: trigger.generic({
+ type: 'daprTopicTrigger',
+ name: "subEvent",
+ pubsubname: "%PubSubName%",
+ topic: "A"
+ }),
+ return: daprPublishOutput,
+ handler: async (request, context) => {
+ context.log("Node function processed a TransferEventBetweenTopics request from the Dapr Runtime.");
+ context.log(context.triggerMetadata.subEvent.data);
+
+ return { payload: context.triggerMetadata.subEvent.data };
+ }
+});
+```
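The `daprPublishOutput` object is declared earlier in the full sample; a minimal sketch of such a declaration (the `type` string and the destination topic are illustrative assumptions):

```javascript
const { output } = require('@azure/functions');

// Hypothetical Dapr publish output binding; topic 'B' is a placeholder destination.
const daprPublishOutput = output.generic({
    type: 'daprPublish',        // assumed binding type string
    pubsubname: '%PubSubName%',
    topic: 'B'
});
```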
+
+# [Node.js v3](#tab/v3)
The following examples show Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context) {
}; ``` + ::: zone-end
The `DaprTopicTrigger` annotation allows you to create a function that runs when
::: zone-end +
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description|
+|--|--|
+|**pubsubname** | The name of the Dapr pub/sub component type. |
+|**topic** | Name of the topic. |
+
+
+# [Node.js v3](#tab/v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description|
+|--|--|
+|**pubsubname** | The name of the Dapr pub/sub component type. |
+|**topic** | Name of the topic. |
++++ The following table explains the binding configuration properties that you set in the _function.json_ file.
azure-functions Functions Bindings Dapr Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger.md
Title: Dapr Input Bindings trigger for Azure Functions description: Learn how to run Azure Functions as Dapr input binding data changes. Previously updated : 10/11/2023 Last updated : 11/29/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
public String run(
::: zone pivot="programming-language-javascript"
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
+# [Node.js v4](#tab/v4)
+
+Use the `app` object to register the `daprBindingTrigger`:
+
+```javascript
+const { app, trigger } = require('@azure/functions');
+
+app.generic('ConsumeMessageFromKafka', {
+ trigger: trigger.generic({
+ type: 'daprBindingTrigger',
+ bindingName: "%KafkaBindingName%",
+ name: "triggerData"
+ }),
+ handler: async (request, context) => {
+ context.log("Node function processed a ConsumeMessageFromKafka request from the Dapr Runtime.");
+ context.log(context.triggerMetadata.triggerData)
+ }
+});
+```
+
+# [Node.js v3](#tab/v3)
The following example shows Dapr triggers in a _function.json_ file and JavaScript code that uses those bindings.
module.exports = async function (context) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The `DaprBindingTrigger` annotation allows you to create a function that gets tr
::: zone-end +
+# [Node.js v4](#tab/v4)
+
+The following table explains the binding configuration properties that you set in the code.
+
+|Property | Description|
+|--|--|
+|**bindingName** | The name of the binding. |
+
+
+# [Node.js v3](#tab/v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description|
+|--|--|
+|**bindingName** | The name of the binding. |
+++++ The following table explains the binding configuration properties that you set in the function.json file.
azure-functions Functions Bindings Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr.md
Title: Dapr Extension for Azure Functions
description: Learn to use the Dapr triggers and bindings in Azure Functions. Previously updated : 10/11/2023 Last updated : 11/15/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Dapr --prerelease
## Install bundle
-> [!NOTE]
-> The [Node.js v4 model for Azure Functions](functions-reference-node.md?pivots=nodejs-model-v4) isn't currently available for use with the Dapr extension during the preview.
- # [Preview Bundle v4.x](#tab/preview-bundle-v4x) You can add the preview extension by adding or replacing the following code in your `host.json` file:
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
|Field |Description | ||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
- |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met for a specific time range. The time range differs based on the frequency of the alert:<br>**1 minute**: The alert condition isn't met for 10 minutes.<br>**5-15 minutes**: The alert condition isn't met for three frequency periods.<br>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.<br>**11 to 12 hours**: The alert condition isn't met for one frequency period.|
+ |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met for a specific time range. The time range differs based on the frequency of the alert:<br>**1 minute**: The alert condition isn't met for 10 minutes.<br>**5-15 minutes**: The alert condition isn't met for three frequency periods.<br>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.<br>**11 to 12 hours**: The alert condition isn't met for one frequency period.<br><br>Note that stateful log alerts have these limitations:<br>- They can trigger up to 300 alerts per evaluation.<br>- You can have a maximum of 5000 alerts with the `fired` alert condition.|
|Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.| |Check workspace linked storage|Select if logs workspace linked storage for alerts is configured. If no linked storage is configured, the rule isn't created.|
azure-monitor Alerts Metric Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md
Title: Creating Metric Alerts for Logs in Azure Monitor
+ Title: Creating Metric Alerts in Azure Monitor Logs
description: Tutorial on creating near-real time metric alerts on popular log analytics data. Last updated 11/16/2023
-# Create a metric alert on a set of Azure Monitor Logs
+# Create a metric alert in Azure Monitor Logs
## Overview [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-**Metric Alerts for Logs** allows you to leverage metric alerts capabilities on a predefined set of Log Analytics logs. The monitored logs, which can be collected from Azure or on-premises computers, are converted to metrics, and then monitored with metric alert rules just like any other metric.
+**Metric Alerts for Logs** allows you to leverage metric alerts capabilities on a predefined set of logs in Azure Monitor Logs. The monitored logs, which can be collected from Azure or on-premises computers, are converted to metrics, and then monitored with metric alert rules just like any other metric.
The supported Log Analytics logs are the following: - [Performance counters](./../agents/data-sources-performance-counters.md) for Windows & Linux machines (corresponding with the supported [Log Analytics workspace metrics](../essentials/metrics-supported.md#microsoftoperationalinsightsworkspaces))
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
The alert condition for stateful alerts is `fired`, until it is considered resol
For stateful alerts, while the alert itself is deleted after 30 days, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved.
+Stateful log alerts have these limitations:
+- They can trigger up to 300 alerts per evaluation.
+- You can have a maximum of 5000 alerts with the `fired` alert condition.
+ This table describes when a stateful alert is considered resolved: |Alert type |The alert is resolved when |
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
Log alerts can measure two different things, which can be used for different mon
- **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. - **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage.
-You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview.
+You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview.
+Note that stateful log alerts have these limitations:
+- They can trigger up to 300 alerts per evaluation.
+- You can have a maximum of 5000 alerts with the `fired` alert condition.
> [!NOTE] > Log alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md).
azure-monitor Tutorial Metric Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-metric-alert.md
Title: Tutorial - Create a metric alert for an Azure resource description: Learn how to create a metric chart with Azure metrics explorer. Previously updated : 11/08/2021 Last updated : 11/28/2023 # Tutorial: Create a metric alert for an Azure resource
From metrics explorer, click **New alert rule**. The rule will be preconfigured
## Configure alert logic The resource will already be selected. You need to modify the signal logic to specify the threshold value and any other details for the alert rule.
-Click on the **Condition name** to view these settings.
+To view these settings, select the **Condition** tab.
:::image type="content" source="./media/tutorial-metric-alert/configuration.png" lightbox="./media/tutorial-metric-alert/configuration.png" alt-text="Alert rule configuration":::
The **Alert logic** is defined by the condition and the evaluation time. The ale
:::image type="content" source="./media/tutorial-metric-alert/alert-logic.png" lightbox="./media/tutorial-metric-alert/alert-logic.png" alt-text="Alert rule alert logic":::
-You can accept the default time granularity or modify it to your requirements. **Frequency of evaluation** defines how often the alert logic is evaluated. **Aggregation granularity** defines the time interval over which the collected values are aggregated.
+You can accept the default time granularity or modify it to your requirements. **Check every** defines how often the alert rule will check if the condition is met. **Lookback period** defines the time interval over which the collected values are aggregated. For example, every 1 minute, you'll be looking at the past 5 minutes.
-Click **Done** when you're done configuring the signal logic.
+
+When you're done configuring the signal logic, click **Next: Actions >** or the **Actions** tab to configure actions.
## Configure actions [!INCLUDE [Action groups](../../../includes/azure-monitor-tutorial-action-group.md)]
Click **Done** when you're done configuring the signal logic.
:::image type="content" source="./media/tutorial-metric-alert/alert-details.png" lightbox="./media/tutorial-metric-alert/alert-details.png" alt-text="Alert rule details":::
-Click **Create alert rule** to create the alert rule.
+Click **Review + create** and then **Create** to create the alert rule.
## View the alert
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
For detailed information about instrumenting applications to enable Application
#### OpenTelemetry Distro
+* [ASP.NET Core](opentelemetry-enable.md?tabs=aspnetcore)
* [ASP.NET](opentelemetry-enable.md?tabs=net) * [Java](opentelemetry-enable.md?tabs=java) * [Node.js](opentelemetry-enable.md?tabs=nodejs) * [Python](opentelemetry-enable.md?tabs=python)
-* [ASP.NET Core](opentelemetry-enable.md?tabs=aspnetcore) (preview)
#### Application Insights SDK (Classic API)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
This article describes how to enable and configure OpenTelemetry-based data coll
OpenTelemetry offerings are available for .NET, Node.js, Python and Java applications.
-|Language |Release Status |
-|-|-|
-|.NET (Exporter) | :white_check_mark: ¹ |
-|Java | :white_check_mark: ¹ |
-|Node.js | :white_check_mark: ¹ |
-|Python | :white_check_mark: ¹ |
-|ASP.NET Core | :warning: ² |
-
-**Footnotes**
-- ¹ :white_check_mark: : OpenTelemetry is available to all customers with formal support.
-- ² :warning: : OpenTelemetry is available as a public preview. [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
-> The ASP.NET Core Distro is undergoing additional stability testing prior to GA. You can use the .NET Exporter if you need a fully supported OpenTelemetry solution for your ASP.NET Core application.
## Get started
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
Title: Get started with autoscale in Azure
description: "Learn how to scale your resource web app, cloud service, virtual machine, or Virtual Machine Scale Set in Azure." Previously updated : 04/10/2023 Last updated : 11/29/2023 # Get started with autoscale in Azure
Follow the steps below to create your first autoscale setting.
:::image type="content" source="./media/autoscale-get-started/instance-limits.png" lightbox="./media/autoscale-get-started/instance-limits.png" alt-text="A screenshot showing the configure tab of the autoscale setting page with configured rules.":::
-You have successfully created your first scale setting to autoscale your web app based on CPU usage. When CPU usage is greater than 70%, an additional instance is added, up to a maximum of 3 instances. When CPU usage is below 20%, an instance is removed up to a minimum of 1 instance. By default there will be 1 instance.
+You have successfully created your first scale setting to autoscale your web app based on CPU usage. When CPU usage is greater than 70%, an additional instance is added, up to a maximum of 3 instances. When CPU usage is below 20%, an instance is removed up to a minimum of 1 instance. By default there will be 1 instance.
## Scheduled scale conditions
You have now defined a scale condition for a specific day. When CPU usage is gre
### View the history of your resource's scale events
-Whenever your resource has any scaling event, it is logged in the activity log. You can view the history of the scale events in the **Run history** tab.
+Whenever your resource has any scaling event, it's logged in the activity log. You can view the history of the scale events in the **Run history** tab.
:::image type="content" source="./media/autoscale-get-started/run-history.png" lightbox="./media/autoscale-get-started/run-history.png" alt-text="A screenshot showing the run history tab in autoscale settings.":::
You can make changes in JSON directly, if necessary. These changes will be refle
### Cool-down period effects
-Autoscale uses a cool-down period with is the amount of time to wait after a scale operation before scaling again. For example, if the cooldown is 10 minutes, Autoscale won't attempt to scale again until 10 minutes after the previous scale action. The cooldown period allows the metrics to stabilize and avoids scaling more than once for the same condition. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation).
+Autoscale uses a cool-down period. This period is the amount of time to wait after a scale operation before scaling again. The cool-down period allows the metrics to stabilize and avoids scaling more than once for the same condition. Cool-down applies to both scale-in and scale-out events. For example, if the cooldown is set to 10 minutes and Autoscale has just scaled in, it won't attempt to scale again in either direction for another 10 minutes. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation).
### Flapping
azure-monitor Autoscale Understanding Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-understanding-settings.md
description: This article explains autoscale settings, how they work, and how th
Previously updated : 11/02/2022 Last updated : 11/29/2023
The following table describes the elements in the preceding autoscale setting's
| rule | scaleAction | Action |The action to take when the metricTrigger of the rule is triggered. | | scaleAction | direction | Operation |"Increase" to scale out, or "Decrease" to scale in.| | scaleAction | value |Instance count |How much to increase or decrease the capacity of the resource. |
-| scaleAction | cooldown | Cool down (minutes)|The amount of time to wait after a scale operation before scaling again. For example, if **cooldown = "PT10M"**, autoscale doesn't attempt to scale again for another 10 minutes. The cooldown is to allow the metrics to stabilize after the addition or removal of instances. |
+| scaleAction | cooldown | Cool down (minutes)|The amount of time to wait after a scale operation before scaling again. The cooldown period comes into effect after a scale-in or a scale-out event. For example, if **cooldown = "PT10M"**, autoscale doesn't attempt to scale again for another 10 minutes. The cooldown is to allow the metrics to stabilize after the addition or removal of instances. |
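For illustration, a scale-out action with a 10-minute cooldown would look like the following fragment of an autoscale setting. This is a sketch; the surrounding profile and rule elements are omitted, and the `ChangeCount` type changes capacity by a fixed instance count:

```json
"scaleAction": {
    "direction": "Increase",
    "type": "ChangeCount",
    "value": "1",
    "cooldown": "PT10M"
}
```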
## Autoscale profiles
There are three types of autoscale profiles:
## Autoscale evaluation
-Autoscale settings can have multiple profiles. Each profile can have multiple rules. Each time the autoscale job runs, it begins by choosing the applicable profile for that time. Autoscale then evaluates the minimum and maximum values, any metric rules in the profile, and decides if a scale action is necessary. The autoscale job runs every 30 to 60 seconds, depending on the resource type.
+Autoscale settings can have multiple profiles. Each profile can have multiple rules. Each time the autoscale job runs, it begins by choosing the applicable profile for that time. Autoscale then evaluates the minimum and maximum values, any metric rules in the profile, and decides if a scale action is necessary. The autoscale job runs every 30 to 60 seconds, depending on the resource type. After a scale action occurs, the autoscale job waits for the cooldown period before it scales again. The cooldown period applies to both scale-out and scale-in actions.
### Which profile will autoscale use?
azure-monitor Container Insights Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md
Navigate to your cluster. Open the _Workbooks_ tab for your cluster and look for
:::image type="content" source="media/container-insights-syslog/syslog-workbook-container-insights-reports-tab.gif" lightbox="media/container-insights-syslog/syslog-workbook-container-insights-reports-tab.gif" alt-text="Video of Syslog workbook being accessed from cluster workbooks tab." border="true":::
+### Access using a Grafana dashboard
+
+Customers can use our Syslog dashboard for Grafana to get an overview of their Syslog data. Customers who use Azure Managed Grafana have this dashboard available in their Grafana instance by default. Once Syslog collection is enabled, no other steps are needed. Other customers can [import the Syslog dashboard from the Grafana marketplace](https://grafana.com/grafana/dashboards/19866-azure-monitor-container-insights-syslog/).
++ ### Access using log queries Syslog data is stored in the [Syslog](/azure/azure-monitor/reference/tables/syslog) table in your Log Analytics workspace. You can create your own [log queries](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-overview.md) to analyze this data or use any of the [prebuilt queries](../logs/log-query-overview.md).
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/get-started-queries.md
The time picker is displayed next to the **Run** button and indicates that you'r
### Add a time filter to the query
-You can also define your own time range by adding a time filter to the query. It's best to place the time filter immediately after the table name:
+You can also define your own time range by adding a time filter to the query. Adding a time filter overrides the time range selected in the [time picker](#use-the-time-picker).
+
+It's best to place the time filter immediately after the table name:
```Kusto SecurityEvent
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
Whether you work with the results of your queries interactively or use them with
> [!TIP] > This article describes Log Analytics and its features. If you want to jump right into a tutorial, see [Log Analytics tutorial](./log-analytics-tutorial.md).
+Here's a video version of this tutorial:
+
+> [!VIDEO https://www.youtube.com/embed/-aMecR2Nrfc]
+ ## Start Log Analytics To start Log Analytics in the Azure portal, on the **Azure Monitor** menu select **Logs**. You'll also see this option on the menu for most Azure resources. No matter where you start Log Analytics, the tool is the same. But the menu you use to start Log Analytics determines the data that's available.
The top bar has controls for working with a query in the query window.
| Scope | Specifies the scope of data used for the query. This could be all the data in a Log Analytics workspace or data for a particular resource across multiple workspaces. See [Query scope](./scope.md). | | Run button | Run the selected query in the query window. You can also select **Shift+Enter** to run a query. | | Time picker | Select the time range for the data available to the query. This action is overridden if you include a time filter in the query. See [Log query scope and time range in Azure Monitor Log Analytics](./scope.md). |
-| Save button | Save the query to **Query Explorer** for the workspace. |
- Copy button | Copy a link to the query, the query text, or the query results to the clipboard. |
-| New alert rule button | Create a new tab with an empty query. |
+| Save button | Save the query to a [query pack](./query-packs.md). Saved queries are available from: <ul><li> The **Other** section in the **Query Explorer** for the workspace</li><li>The **Other** section in the **Queries** tab in the [left sidebar](#left-sidebar) for the workspace</ul> |
+ Share button | Copy a link to the query, the query text, or the query results to the clipboard. |
+| New alert rule button | Open the Create an alert rule page. Use this page to [create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=log) with an alert type of [log alert](../alerts/alerts-types.md#log-alerts). The page opens with the [Conditions tab](../alerts/alerts-create-new-alert-rule.md?tabs=log#set-the-alert-rule-conditions) selected, and your query is added to the **Search query** field. |
| Export button | Export the results of the query to a CSV file or the query to Power Query Formula Language format for use with Power BI. | | Pin to button | Pin the results of the query to an Azure dashboard or add them to an Azure workbook. | | Format query button | Arrange the selected text for readability. |
-| Example queries button | Open the example queries dialog that appears when you first open Log Analytics. |
-| Query Explorer button | Open **Query Explorer**, which provides access to saved queries in the workspace. |
+| Search job mode toggle | [Run search jobs](./search-jobs.md). |
+| Queries button | Open **Query Explorer**, which provides access to saved queries in the workspace. |
### Left sidebar
-The sidebar on the left lists tables in the workspace, sample queries, and filter options for the current query.
+The sidebar on the left lists tables in the workspace, sample queries, functions, and filter options for the current query.
| Tab | Description | |:|:| | Tables | Lists the tables that are part of the selected scope. Select **Group by** to change the grouping of the tables. Hover over a table name to display a dialog with a description of the table and options to view its documentation and preview its data. Expand a table to view its columns. Double-click a table or column name to add it to the query. | | Queries | List of example queries that you can open in the query window. This list is the same one that appears when you open Log Analytics. Select **Group by** to change the grouping of the queries. Double-click a query to add it to the query window or hover over it for other options. |
+| Functions | Lists the [functions](./functions.md) in the workspace. |
| Filter | Creates filter options based on the results of a query. After you run a query, columns appear with different values from the results. Select one or more values, and then select **Apply & Run** to add a **where** command to the query and run it again. | ### Query window
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* Ensure that you're using the proper mount instructions for the volume. See [Mount a volume for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
-* The NFS client should be in the same VNet or peered VNet as the Azure NetApp Files volume. Connecting from outside the VNet is supported; however, it will introduce additional latency and decrease overall performance.
+* The NFS client should be in the same virtual network or peered virtual network as the Azure NetApp Files volume. Connecting from outside the virtual network is supported; however, it will introduce additional latency and decrease overall performance.
* Ensure that the NFS client is up to date and running the latest updates for the operating system. ## Create an NFS volume
-1. Click the **Volumes** blade from the Capacity Pools blade. Click **+ Add volume** to create a volume.
+1. Select the **Volumes** blade from the Capacity Pools blade. Select **+ Add volume** to create a volume.
![Navigate to Volumes](../media/azure-netapp-files/azure-netapp-files-navigate-to-volumes.png)
-2. In the Create a Volume window, click **Create**, and provide information for the following fields under the Basics tab:
+2. In the Create a Volume window, select **Create**, and provide information for the following fields under the Basics tab:
* **Volume name** Specify the name for the volume that you are creating.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
These fields configure [standard storage with cool access in Azure NetApp Files](cool-access-introduction.md). For descriptions, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md). * **Virtual network**
- Specify the Azure virtual network (VNet) from which you want to access the volume.
+ Specify the Microsoft Azure Virtual Network from which you want to access the volume.
- The VNet you specify must have a subnet delegated to Azure NetApp Files. The Azure NetApp Files service can be accessed only from the same Vnet or from a Vnet that is in the same region as the volume through VNet peering. You can also access the volume from your on-premises network through Express Route.
+ The Virtual Network you specify must have a subnet delegated to Azure NetApp Files. The Azure NetApp Files service can be accessed only from the same Virtual Network or from a virtual network that's in the same region as the volume through virtual network peering. You can also access the volume from your on-premises network through Express Route.
* **Subnet** Specify the subnet that you want to use for the volume. The subnet you specify must be delegated to Azure NetApp Files.
- If you have not delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
+ If you have not delegated a subnet, you can select **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each Virtual Network, only one subnet can be delegated to Azure NetApp Files.
![Create subnet](../media/azure-netapp-files/azure-netapp-files-create-subnet.png)
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* **Availability zone** This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
- * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu.
+ * If you want to apply an existing snapshot policy to the volume, select **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu.
For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
If you use a new VNet, you can create a subnet and delegate the subnet to Azure
If the VNet is peered with another VNet, you can't expand the VNet address space. For that reason, the new delegated subnet needs to be created within the VNet address space. If you need to extend the address space, you must delete the VNet peering before expanding the address space. >[!IMPORTANT]
->The address space size of the Azure NetApp Files VNet should be larger than its delegated subnet. If it is not, Azure NetApp Files volume creation will fail in some scenarios.
+> Ensure the address space size of the Azure NetApp Files VNet is larger than its delegated subnet.
+>
+> For example, if the delegated subnet is /24, the VNet address space that contains the subnet must be /23 or larger. If the address space is the same size as the delegated subnet, some traffic patterns break: traffic that traverses a hub-and-spoke topology and reaches Azure NetApp Files through a network virtual appliance doesn't function properly, and creating SMB and CIFS volumes can fail if the volumes attempt to reach DNS through the hub-and-spoke topology.
> > It's also recommended that the size of the delegated subnet be at least /25 for SAP workloads and /26 for other workload scenarios.
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
The following table describes resource limits for Azure NetApp Files:
| Maximum number of quota rules per volume | 100 | Yes | | Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No | | Maximum assigned throughput for a manual QoS volume | 4,500 MiB/s | No |
-| Number of cross-region replication data protection volumes (destination volumes) | 20 | Yes |
-| Number of cross-zone replication data protection volumes (destination volumes) | 20 | Yes |
+| Number of cross-region replication data protection volumes (destination volumes) | 50 | Yes |
+| Number of cross-zone replication data protection volumes (destination volumes) | 50 | Yes |
| Maximum numbers of policy-based (scheduled) backups per volume | <ul><li> Daily retention count: 2 (minimum) to 1019 (maximum) </li> <li> Weekly retention count: 1 (minimum) to 1019 (maximum) </li> <li> Monthly retention count: 1 (minimum) to 1019 (maximum) </ol></li> <br> The maximum hourly, daily, weekly, and monthly backup retention counts *combined* is 1019. | No | | Maximum size of protected volume | 100 TiB | No | | Maximum number of volumes that can be backed up per subscription | 20 | Yes |
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references for High Performance Computing (HPC) solutions.
### Electronic design automation (EDA)
+* [Azure Modeling and Simulation Workbench](../modeling-simulation-workbench/index.yml)
* [EDA workloads on Azure NetApp Files - Performance Best Practice](https://techcommunity.microsoft.com/t5/azure-global/eda-workloads-on-azure-netapp-files-performance-best-practice/ba-p/2119979) * [Benefits of using Azure NetApp Files for electronic design automation](solutions-benefits-azure-netapp-files-electronic-design-automation.md) * [Azure CycleCloud: EDA HPC Lab with Azure NetApp Files](https://github.com/Azure/cyclecloud-hands-on-labs/blob/master/ED)
azure-netapp-files Backup Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md
na Previously updated : 07/10/2023 Last updated : 10/17/2023 # Restore a backup to a new volume
See [Requirements and considerations for Azure NetApp Files backup](backup-requi
However, if you restore a volume from the backup list at the NetApp account level, you need to specify the Protocol field. The Protocol field must match the protocol of the original volume. Otherwise, the restore operation fails with the following error: `Protocol Type value mismatch between input and source volume of backupId <backup-id of the selected backup>. Supported protocol type : <Protocol Type of the source volume>`
- * The **Quota** value must be greater than or equal to the size of the backup from which the restore is triggered (minimum 100 GiB).
+ * The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered (minimum 100 GiB). For example, restoring from a 500-GiB backup requires a quota of at least 600 GiB. Once the restore is complete, the volume can be resized depending on the size used.
* The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation fails.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) (Preview)
- We are excited to announce the addition of double encryption at rest for Azure NetApp Files volumes. This new feature provides an extra layer of protection for your critical data, ensuring maximum confidentiality and mitigating potential liabilities. Double encryption at rest is ideal for industries such as finance, military, healthcare, and government, where breaches of confidentiality can have catastrophic consequences. By combining hardware-based encryption with encrypted SSD drives and software-based encryption at the volume level, your data remains secure throughout its lifecycle. You can select **double** as the encryption type during capacity pool creation to easily enable this advanced security layer.
+ We're excited to announce the addition of double encryption at rest for Azure NetApp Files volumes. This new feature provides an extra layer of protection for your critical data, ensuring maximum confidentiality and mitigating potential liabilities. Double encryption at rest is ideal for industries such as finance, military, healthcare, and government, where breaches of confidentiality can have catastrophic consequences. By combining hardware-based encryption with encrypted SSD drives and software-based encryption at the volume level, your data remains secure throughout its lifecycle. You can select **double** as the encryption type during capacity pool creation to easily enable this advanced security layer.
* Availability zone volume placement enhancement - [Populate existing volumes](manage-availability-zone-volume-placement.md#populate-an-existing-volume-with-availability-zone-information) (Preview)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Standard network features - Edit volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes) (Preview)
- Azure NetApp Files volumes have been supported with Standard network features since [October 2021](#october-2021), but only for newly created volumes. This new *edit volumes* capability lets you change *existing* volumes that were configured with Basic network features to use Standard network features. This capability provides an enhanced, more standard, Azure Virtual Network (VNet) experience through various security and connectivity features that are available on Azure VNets to Azure services. When you edit existing volumes to use Standard network features, you can start taking advantage of networking capabilities, such as (but not limited to):
- * Increased number of client IPs in a virtual network (including immediately peered VNets) accessing Azure NetApp Files volumes - the [same as Azure VMs](azure-netapp-files-resource-limits.md#resource-limits)
+ Azure NetApp Files volumes have been supported with Standard network features since [October 2021](#october-2021), but only for newly created volumes. This new *edit volumes* capability lets you change *existing* volumes that were configured with Basic network features to use Standard network features. This capability provides an enhanced, more standard, Microsoft Azure Virtual Network experience through various security and connectivity features that are available on Virtual Networks to Azure services. When you edit existing volumes to use Standard network features, you can start taking advantage of networking capabilities, such as (but not limited to):
+ * Increased number of client IPs in a virtual network (including immediately peered Virtual Networks) accessing Azure NetApp Files volumes - the [same as Azure VMs](azure-netapp-files-resource-limits.md#resource-limits)
* Enhanced network security with support for [network security groups](../virtual-network/network-security-groups-overview.md) on Azure NetApp Files delegated subnets * Enhanced network control with support for [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) to and from Azure NetApp Files delegated subnets * Connectivity over Active/Active VPN gateway setup
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Standard network features](configure-network-features.md) are now generally available [in supported regions](azure-netapp-files-network-topologies.md#supported-regions).
- Standard network features now includes Global VNet peering.
+ Standard network features now includes Global virtual network peering.
Regular billing for Standard network features on Azure NetApp Files began November 1, 2022.
Azure NetApp Files is updated regularly. This article provides a summary about t
Azure NetApp Files now supports **Standard** network features for volumes that customers have been asking for since the inception. This capability is a result of innovative hardware and software integration. Standard network features provide an enhanced virtual networking experience through various features for a seamless and consistent experience with security posture of all their workloads including Azure NetApp Files. You can now choose *Standard* or *Basic* network features when creating a new Azure NetApp Files volume. Upon choosing Standard network features, you can take advantage of the following supported features for Azure NetApp Files volumes and delegated subnets:
- * Increased IP limits for the VNets with Azure NetApp Files volumes at par with VMs
+ * Increased IP limits for the virtual networks with Azure NetApp Files volumes at par with VMs
* Enhanced network security with support for [network security groups](../virtual-network/network-security-groups-overview.md) on the Azure NetApp Files delegated subnet * Enhanced network control with support for [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#custom-routes) to and from Azure NetApp Files delegated subnets * Connectivity over Active/Active VPN gateway setup
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Azure NetApp Files storage service add-ons](storage-service-add-ons.md)
- The new Azure NetApp Files **Storage service add-ons** menu option provides an Azure portal "launching pad" for available third-party, ecosystem add-ons to the Azure NetApp Files storage service. With this new portal menu option, you can enter a landing page by clicking an add-on tile to quickly access the add-on.
+ The new Azure NetApp Files **Storage service add-ons** menu option provides an Azure portal "launching pad" for available third-party, ecosystem add-ons to the Azure NetApp Files storage service. With this new portal menu option, you can enter a landing page by selecting an add-on tile to quickly access the add-on.
- **NetApp add-ons** is the first category of add-ons introduced under **Storage service add-ons**. It provides access to NetApp Cloud Data Sense. Clicking the **Cloud Data Sense** tile opens a new browser and directs you to the add-on installation page.
+ **NetApp add-ons** is the first category of add-ons introduced under **Storage service add-ons**. It provides access to NetApp Cloud Data Sense. Selecting the **Cloud Data Sense** tile opens a new browser and directs you to the add-on installation page.
* [Manual QoS capacity pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) now generally available (GA)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
- By default, LDAP communications between client and server applications aren't encrypted. This setting means that it's possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared VNets when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports the secure communication between an Active Directory Domain Server (AD DS) using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
+ By default, LDAP communications between client and server applications aren't encrypted. This setting means that it's possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared virtual networks when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports the secure communication between an Active Directory Domain Server (AD DS) using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
* Support for throughput [metrics](azure-netapp-files-metrics.md)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) (Preview)
- You can now enable SMB3 Protocol Encryption on Azure NetApp Files SMB and dual-protocol volumes. This feature enables encryption for in-flight SMB3 data, using the [AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1](/windows-server/storage/file-server/file-server-smb-overview#features-added-in-smb-311-with-windows-server-2016-and-windows-10-version-1607) connections. SMB clients not using SMB3 encryption can't access this volume. Data at rest is encrypted regardless of this setting. SMB encryption further enhances security. However, it might impact the client (CPU overhead for encrypting and decrypting messages). It might also impact storage resource utilization (reductions in throughput). You should test the encryption performance impact against your applications before deploying workloads into production.
+ You can now enable SMB3 Protocol Encryption on Azure NetApp Files SMB and dual-protocol volumes. This feature enables encryption for in-flight SMB3 data, using the [AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1](/windows-server/storage/file-server/file-server-smb-overview#features-added-in-smb-311-with-windows-server-2016-and-windows-10-version-1607) connections. SMB clients not using SMB3 encryption can't access this volume. Data at rest is encrypted regardless of this setting. SMB encryption further enhances security. However, it might affect the client (CPU overhead for encrypting and decrypting messages). It might also affect storage resource utilization (reductions in throughput). You should test the encryption performance impact against your applications before deploying workloads into production.
* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Dynamic volume service level change](dynamic-change-volume-service-level.MD) (Preview)
- Cloud promises flexibility in IT spending. You can now change the service level of an existing Azure NetApp Files volume by moving the volume to another capacity pool that uses the service level you want for the volume. This in-place service-level change for the volume doesn't require that you migrate data. It also doesn't impact the data plane access to the volume. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies). It's currently in preview. You can register for the feature preview by following the [dynamic volume service level change documentation](dynamic-change-volume-service-level.md).
+ Cloud promises flexibility in IT spending. You can now change the service level of an existing Azure NetApp Files volume by moving the volume to another capacity pool that uses the service level you want for the volume. This in-place service-level change for the volume doesn't require that you migrate data. It also doesn't affect the data plane access to the volume. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies). It's currently in preview. You can register for the feature preview by following the [dynamic volume service level change documentation](dynamic-change-volume-service-level.md).
* [Volume snapshot policy](snapshots-manage-policy.md) (Preview)
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.Web * apiManagementAccounts/apis
-* certificates - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Web/DisableResourcesPerRGLimitForAPIMinWebApp
+* certificates
* sites ## Next steps
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
azure-vmware Ecosystem Disaster Recovery Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-disaster-recovery-vms.md
Title: Disaster recovery solutions for Azure VMware Solution virtual machines
description: Learn about leading disaster recovery solutions for your Azure VMware Solution private cloud. Previously updated : 11/29/2021 Last updated : 11/29/2023+ # Disaster recovery solutions for Azure VMware Solution virtual machines (VMs)
-One of the most important aspects of any Azure VMware Solution deployment is disaster recovery, which can be achieved by creating disaster recovery plans between different Azure VMware Solution regions or between Azure and an on-premises vSphere environment.
+One of the most important aspects of any Azure VMware Solution deployment is disaster recovery. You can create disaster recovery plans between different Azure VMware Solution regions or between Azure and an on-premises vSphere environment.
We currently offer customers the possibility to implement their disaster recovery plans using state-of-the-art VMware solutions such as [SRM](disaster-recovery-using-vmware-site-recovery-manager.md) or [HCX](deploy-disaster-recovery-using-vmware-hcx.md).
-Following our principle of giving customers the choice to apply their investments in skills and technology we've collaborated with some of the leading partners in the industry.
+Following our principle of giving customers the choice to apply their investments in skills and technology, we collaborated with some of the leading partners in the industry.
-You can find more information about their solutions in the links below:
-- [Jetstream](https://www.jetstreamsoft.com/2020/09/28/disaster-recovery-for-avs/)-- [Zerto](https://www.zerto.com/solutions/use-cases/disaster-recovery/)
+You can find more information about their solutions in the following links:
+- [JetStream](https://www.jetstreamsoft.com/2020/09/28/disaster-recovery-for-avs/)
+- [Zerto](https://help.zerto.com/bundle/Install.AVS.HTML/page/Prerequisites_Zerto_AVS.htm)
- [RiverMeadow](https://www.rivermeadow.com/disaster-recovery-azure-blob)
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
The following list presents the set of features that are currently available in
| | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ | | | Reject an incoming call | ✔️ | ✔️ | ✔️ | ✔️ | | Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Cancel adding an endpoint to an existing call | ✔️ | ✔️ | ✔️ | ✔️ |
| | Play Audio from an audio file | ✔️ | ✔️ | ✔️ | ✔️ | | | Play Audio using Text-to-Speech | ✔️ | ✔️ | ✔️ | ✔️ | | | Recognize user input through DTMF | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | Mute participant | ✔️ | ✔️ | ✔️ | ✔️ | | | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ | | | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Blind Transfer* a participant from group call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |
| | Hang up a call (remove the call leg) | ✔️ | ✔️ | ✔️ | ✔️ | | | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | ✔️ | ✔️ | | | Cancel media operations | ✔️ | ✔️ | ✔️ | ✔️ |
The Call Automation events are sent to the web hook callback URI specified when
| CallTransferFailed | The transfer of your application's call leg failed | | AddParticipantSucceeded| Your application added a participant | | AddParticipantFailed | Your application was unable to add a participant |
+| CancelAddParticipantSucceeded| Your application canceled adding a participant |
+| CancelAddParticipantFailed | Your application was unable to cancel adding a participant |
| RemoveParticipantSucceeded| Your application has successfully removed a participant from the call. | | RemoveParticipantFailed | Your application was unable to remove a participant from the call. | | ParticipantsUpdated | The status of a participant changed while your application's call leg was connected to a call |
To understand which events are published for different actions, refer to [this g
To learn how to secure the callback event delivery, refer to [this guide](../../how-tos/call-automation/secure-webhook-endpoint.md).
+### Operation callback URI
+
+The operation callback URI is an optional parameter on some mid-call APIs that use events as their asynchronous responses. By default, all events are sent to the default callback URI that the CreateCall or AnswerCall API sets when the call is established. When you provide an operation callback URI, the events that correspond to that individual request (one time only) are sent to the new URI instead.
+
+| Supported API | Corresponding event |
+| -- | |
+| AddParticipant | AddParticipantSucceeded / AddParticipantFailed |
+| RemoveParticipant | RemoveParticipantSucceeded / RemoveParticipantFailed |
+| TransferCall | CallTransferAccepted / CallTransferFailed |
+| CancelAddParticipant | CancelAddParticipantSucceeded / CancelAddParticipantFailed |
+| Play | PlayCompleted / PlayFailed / PlayCanceled |
+| PlayToAll | PlayCompleted / PlayFailed / PlayCanceled |
+| Recognize | RecognizeCompleted / RecognizeFailed / RecognizeCanceled |
+| StopContinuousDTMFRecognition | ContinuousDtmfRecognitionStopped |
+| SendDTMF | ContinuousDtmfRecognitionToneReceived / ContinuousDtmfRecognitionToneFailed |
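+For illustration, here's a minimal C# sketch of this pattern, assuming an established `callConnection` and the `PlayToAllOptions` overload that accepts a single play source (the prompt text and endpoint URI are placeholders):
+
+```csharp
+// Play a prompt to all participants; this request's events
+// (PlayCompleted / PlayFailed / PlayCanceled) go to a dedicated callback URI
+// instead of the default one set by CreateCall / AnswerCall.
+var playSource = new TextSource("Please hold while we connect you."); // placeholder prompt
+var playToAllOptions = new PlayToAllOptions(playSource)
+{
+    OperationContext = "<Your_context>",
+    OperationCallbackUri = new Uri("<uri_endpoint>") // events for this request only
+};
+await callConnection.GetCallMedia().PlayToAllAsync(playToAllOptions);
+```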
+ ## Next steps > [!div class="nextstepaction"]
communication-services Phone Number Management For Austria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-austria.md
More details on eligible subscription types are as follows:
| :- | |Austria| |United States|
-|Canada|
|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Azure subscription billing locations where Austria alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For Belgium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-belgium.md
More details on eligible subscription types are as follows:
| Country/Region | | :- | |Belgium|
+|United States|
+|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
## Find information about other countries/regions
communication-services Phone Number Management For Denmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-denmark.md
More details on eligible subscription types are as follows:
|Italy| |Sweden| |United States|
-|Canada|
|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Azure subscription billing locations where Denmark alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For France https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-france.md
Use the below tables to find all the relevant information on number availability
| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :- | :- | :- | : |
+| Toll-Free |- | - | General Availability | General Availability\* |
| Local | - | - | General Availability | General Availability\* | |Alphanumeric Sender ID\**|General Availability |-|-|-|
More details on eligible subscription types are as follows:
|France| |Italy| |United States|
+|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Azure subscription billing locations where France alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For Germany https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-germany.md
More details on eligible subscription types are as follows:
| :- | |Germany| |United States|
-|Canada|
|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Azure subscription billing locations where Germany alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For Ireland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-ireland.md
More details on eligible subscription types are as follows:
|Italy| |Sweden| |United States|
-|Canada|
|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Azure subscription billing locations where Ireland alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For Luxembourg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-luxembourg.md
More details on eligible subscription types are as follows:
| Country/Region | | :- | |Luxembourg|
+|United States|
+|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
## Find information about other countries/regions
communication-services Phone Number Management For Netherlands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-netherlands.md
More details on eligible subscription types are as follows:
| :- | |Netherlands| |United States|
-|Canada|
|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
## Azure subscription billing locations where Netherlands alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For Norway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-norway.md
More details on eligible subscription types are as follows:
|France| |Sweden| |United States|
-|Canada|
|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Find information about other countries/regions
communication-services Phone Number Management For Portugal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-portugal.md
More details on eligible subscription types are as follows:
| Country/Region | | :- | |Portugal|
-|United States*|
+|United States|
+|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
## Azure subscription billing locations where Portugal alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For Slovakia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-slovakia.md
More details on eligible subscription types are as follows:
| Number Type | Eligible Azure Agreement Type | | :- | :-- | | Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
-| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go|
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go|
+\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
## Azure subscription billing locations where Slovakia phone numbers are available | Country/Region | | :- | |Slovakia| |United States|
-|Canada|
|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Find information about other countries/regions
communication-services Phone Number Management For Spain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-spain.md
More details on eligible subscription types are as follows:
## Azure subscription billing locations where Spain phone numbers are available | Country/Region | | :- |
-|Spain|
-|United States*|
+| Spain |
+| United States |
+| United Kingdom |
+| Canada |
+| Japan |
+| Australia |
## Azure subscription billing locations where Spain alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For Sweden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-sweden.md
More details on eligible subscription types are as follows:
## Azure subscription billing locations where Sweden phone numbers are available | Country/Region | | :- |
-|Canada|
+|Sweden|
|Denmark| |Ireland| |Italy| |Puerto Rico|
-|Sweden|
-|United Kingdom|
|United States|
+|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Azure subscription billing locations where Sweden alphanumeric sender IDs are available | Country/Region |
communication-services Phone Number Management For Switzerland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-switzerland.md
More details on eligible subscription types are as follows:
| :- | |Switzerland| |United States|
-|Canada|
|United Kingdom|
+|Canada|
+|Japan|
+|Australia|
+ ## Azure subscription billing locations where Switzerland alphanumeric sender IDs are available | Country/Region |
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
### Phone number leasing charges |Number type |Monthly fee | |--|--|
-|Geographic |USD 1.00/mo |
+|Geographic |USD 1.00/mo |
+|Toll-Free |USD 8.00/mo |
### Usage charges |Number type |To make calls* |To receive calls|
-|--|--||
-|Geographic |Starting at USD 0.0160/min |USD 0.0100/min |
+|--|--|-|
+|Geographic |Starting at USD 0.0160/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0160/min |USD 0.1100/min |
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
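+To illustrate the usage rates above, a 10-minute inbound call to a geographic number costs 10 × USD 0.0100 = USD 0.10, while the same call to a toll-free number costs 10 × USD 0.1100 = USD 1.10.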
All prices shown below are in USD.
### Usage charges |Number type |To make calls* |To receive calls| |--|--||
-|Geographic |Starting at USD 0.165/min |USD 0.0072/min |
-|Toll-free |Starting at USD 0.165/min | USD 0.2200/min |
+|Geographic |Starting at USD 0.0165/min |USD 0.0072/min |
+|Toll-free |Starting at USD 0.0165/min | USD 0.2200/min |
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
All prices shown below are in USD.
|Toll-free |N/A |USD 0.1587/min | - ## United Arab Emirates telephony offers ### Phone number leasing charges |Number type |Monthly fee |
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
call_invite = CallInvite(
-- No events are published for redirect. If the target is a Communication Services user or a phone number owned by your resource, it generates a new IncomingCall event with 'to' field set to the target you specified.
-## Transfer a 1:1 call
+## Transfer a participant in a call
When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call removes your application from the call and hence removes its ability to control the call using Call Automation. The call invite to the target displays the caller ID of the endpoint being transferred. Providing a custom caller ID isn't supported.
When your application answers a call or places an outbound call to an endpoint,
```csharp var transferDestination = new CommunicationUserIdentifier("<user_id>");
-var transferOption = new TransferToParticipantOptions(transferDestination);
+var transferOption = new TransferToParticipantOptions(transferDestination) {
+ OperationContext = "<Your_context>",
+ OperationCallbackUri = new Uri("<uri_endpoint>") // Sending event to a non-default endpoint.
+};
+// adding customCallingContext
+transferOption.CustomCallingContext.AddVoip("customVoipHeader1", "customVoipHeaderValue1");
+transferOption.CustomCallingContext.AddVoip("customVoipHeader2", "customVoipHeaderValue2");
+ TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption); ``` # [Java](#tab/java) ```java
-CommunicationIdentifier transferDestination = new CommunicationUserIdentifier("<user_id>");
-TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination);
+CommunicationIdentifier transferDestination = new CommunicationUserIdentifier("<user_id>");
+TransferCallToParticipantOptions options = new TransferCallToParticipantOptions(transferDestination)
+ .setOperationContext("<operation_context>")
+ .setOperationCallbackUrl("<url_endpoint>"); // Sending event to a non-default endpoint.
+// set customCallingContext
+options.getCustomCallingContext().addVoip("voipHeaderName", "voipHeaderValue");
+ Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block(); ```
Response<TransferCallResult> transferResponse = callConnectionAsync.transferToPa
```javascript const transferDestination = { communicationUserId: "<user_id>" };
-const result = await callConnection.transferCallToParticipant(transferDestination);
+const options = { operationContext: "<Your_context>", operationCallbackUrl: "<url_endpoint>" };
+// adding customCallingContext
+const customCallingContext: CustomCallingContext = [];
+customCallingContext.push({ kind: "voip", key: "customVoipHeader1", value: "customVoipHeaderValue1" })
+options.customCallingContext = customCallingContext;
+
+const result = await callConnection.transferCallToParticipant(transferDestination, options);
``` # [Python](#tab/python)
const result = await callConnection.transferCallToParticipant(transferDestinatio
```python transfer_destination = CommunicationUserIdentifier("<user_id>") call_connection_client = call_automation_client.get_call_connection("<call_connection_id_from_ongoing_call>")
+# set custom context
+voip_headers = {"customVoipHeader1", "customVoipHeaderValue1"}
+ result = call_connection_client.transfer_call_to_participant(
- target_participant=transfer_destination
+ target_participant=transfer_destination,
+ voip_headers=voip_headers,
+ operation_context="<Your_context>",
+ operation_callback_url="<url_endpoint>"
) ``` --
-The sequence diagram shows the expected flow when your application places an outbound 1:1 call and then transfers it to another endpoint.
+When your application answers a group call, places an outbound group call to an endpoint, or adds a participant to a 1:1 call, an endpoint can be transferred from the call to another destination endpoint (except the Call Automation endpoint itself). Transferring a participant in a group call removes the endpoint being transferred from the call. The call invite to the target displays the caller ID of the endpoint being transferred. Providing a custom caller ID isn't supported.
+
+# [csharp](#tab/csharp)
+
+```csharp
+// Transfer User
+var transferDestination = new CommunicationUserIdentifier("<user_id>");
+var transferee = new CommunicationUserIdentifier("<transferee_user_id>");
+var transferOption = new TransferToParticipantOptions(transferDestination);
+transferOption.Transferee = transferee;
+
+// adding customCallingContext
+transferOption.CustomCallingContext.AddVoip("customVoipHeader1", "customVoipHeaderValue1");
+transferOption.CustomCallingContext.AddVoip("customVoipHeader2", "customVoipHeaderValue2");
+
+transferOption.OperationContext = "<Your_context>";
+transferOption.OperationCallbackUri = new Uri("<uri_endpoint>");
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+
+// Transfer PSTN User
+var transferDestination = new PhoneNumberIdentifier("<target_phoneNumber>");
+var transferee = new PhoneNumberIdentifier("<transferee_phoneNumber>");
+var transferOption = new TransferToParticipantOptions(transferDestination);
+transferOption.Transferee = transferee;
+
+// adding customCallingContext
+transferOption.CustomCallingContext.AddSipUui("uuivalue");
+transferOption.CustomCallingContext.AddSipX("header1", "headerValue");
+
+transferOption.OperationContext = "<Your_context>";
+
+// Sending event to a non-default endpoint.
+transferOption.OperationCallbackUri = new Uri("<uri_endpoint>");
+
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+```
+
+# [Java](#tab/java)
+
+```java
+// Transfer User
+CommunicationIdentifier transferDestination = new CommunicationUserIdentifier("<user_id>");
+CommunicationIdentifier transferee = new CommunicationUserIdentifier("<transferee_user_id>");
+TransferCallToParticipantOptions options = new TransferCallToParticipantOptions(transferDestination);
+options.setTransferee(transferee);
+options.setOperationContext("<Your_context>");
+options.setOperationCallbackUrl("<url_endpoint>");
+
+// set customCallingContext
+options.getCustomCallingContext().addVoip("voipHeaderName", "voipHeaderValue");
+
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+
+// Transfer Pstn User
+CommunicationIdentifier transferDestination = new PhoneNumberIdentifier("<target_phoneNumber>");
+CommunicationIdentifier transferee = new PhoneNumberIdentifier("<transferee_phoneNumber>");
+TransferCallToParticipantOptions options = new TransferCallToParticipantOptions(transferDestination);
+options.setTransferee(transferee);
+options.setOperationContext("<Your_context>");
+options.setOperationCallbackUrl("<url_endpoint>");
+
+// set customCallingContext
+options.getCustomCallingContext().addSipUui("UUIvalue");
+options.getCustomCallingContext().addSipX("sipHeaderName", "value");
+
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+// Transfer User
+const transferDestination = { communicationUserId: "<user_id>" };
+const transferee = { communicationUserId: "<transferee_user_id>" };
+const options = { transferee: transferee, operationContext: "<Your_context>", operationCallbackUrl: "<url_endpoint>" };
+
+// adding customCallingContext
+const customCallingContext: CustomCallingContext = [];
+customContext.push({ kind: "voip", key: "customVoipHeader1", value: "customVoipHeaderValue1" })
+options.customCallingContext = customCallingContext;
+
+const result = await callConnection.transferCallToParticipant(transferDestination, options);
+
+// Transfer PSTN User
+const transferDestination = { phoneNumber: "<target_phoneNumber>" };
+const transferee = { phoneNumber: "<transferee_phoneNumber>" };
+const options = { transferee: transferee, operationContext: "<Your_context>", operationCallbackUrl: "<url_endpoint>" };
+
+// adding customCallingContext
+const customCallingContext: CustomCallingContext = [];
+customContext.push({ kind: "sipuui", key: "", value: "uuivalue" });
+customContext.push({ kind: "sipx", key: "headerName", value: "headerValue" })
+options.customCallingContext = customCallingContext;
+
+const result = await callConnection.transferCallToParticipant(transferDestination, options);
+```
+
+# [Python](#tab/python)
+
+```python
+# Transfer to user
+transfer_destination = CommunicationUserIdentifier("<user_id>")
+transferee = CommunicationUserIdentifier("<transferee_user_id>")
+call_connection_client = call_automation_client.get_call_connection("<call_connection_id_from_ongoing_call>")
+
+# create custom context
+voip_headers = {"customVoipHeader1", "customVoipHeaderValue1"}
+
+result = call_connection_client.transfer_call_to_participant(
+ target_participant=transfer_destination,
+ transferee=transferee,
+ voip_headers=voip_headers,
+ operation_context="<Your_context>",
+ operation_callback_url="<url_endpoint>"
+)
+
+# Transfer to PSTN user
+transfer_destination = PhoneNumberIdentifier("<target_phoneNumber>")
+transferee = PhoneNumberIdentifier("<transferee_phoneNumber>")
+
+# create custom context
+sip_headers = {}
+sip_headers["X-MS-Custom-headerName"] = "headerValue"
+sip_headers["User-To-User"] = "uuivalue"
+
+call_connection_client = call_automation_client.get_call_connection("<call_connection_id_from_ongoing_call>")
+result = call_connection_client.transfer_call_to_participant(
+ target_participant=transfer_destination,
+ transferee=transferee,
+ sip_headers=sip_headers,
+ operation_context="<Your_context>",
+ operation_callback_url="<url_endpoint>"
+)
+```
+--
+The sequence diagram shows the expected flow when your application places an outbound call and then transfers it to another endpoint.
++ ![Sequence diagram for placing a 1:1 call and then transferring it.](media/transfer-flow.png)
You can add a participant (Communication Services user or phone number) to an ex
# [csharp](#tab/csharp) ```csharp
+// Add user
+var addThisPerson = new CallInvite(new CommunicationUserIdentifier("<user_id>"));
+// add custom calling context
+addThisPerson.CustomCallingContext.AddVoip("myHeader", "myValue");
+AddParticipantsResult result = await callConnection.AddParticipantAsync(addThisPerson);
+
+// Add PSTN user
var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller var addThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber);
-AddParticipantsResult result = await callConnection.AddParticipantAsync(addThisPerson);
+// add custom calling context
+addThisPerson.CustomCallingContext.AddSipUui("value");
+addThisPerson.CustomCallingContext.AddSipX("header1", "customSipHeaderValue1");
+
+// Use option bag to set optional parameters
+var addParticipantOptions = new AddParticipantOptions(addThisPerson)
+{
+ InvitationTimeoutInSeconds = 60,
+ OperationContext = "operationContext",
+ OperationCallbackUri = new Uri("uri_endpoint"); // Sending event to a non-default endpoint.
+};
+
+AddParticipantsResult result = await callConnection.AddParticipantAsync(addParticipantOptions);
``` # [Java](#tab/java) ```java
+// Add user
+CallInvite callInvite = new CallInvite(new CommunicationUserIdentifier("<user_id>"));
+// add custom calling context
+callInvite.getCustomCallingContext().addVoip("voipHeaderName", "voipHeaderValue");
+AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite)
+ .setOperationContext("<operation_context>")
+ .setOperationCallbackUrl("<url_endpoint>");
+Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsync.addParticipantWithResponse(addParticipantOptions).block();
+
+// Add PSTN user
PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller
-CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber);
-AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite);
+CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber);
+// add custom calling context
+callInvite.getCustomCallingContext().addSipUui("value");
+callInvite.getCustomCallingContext().addSipX("header1", "customSipHeaderValue1");
+AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite)
+ .setOperationContext("<operation_context>")
+ .setOperationCallbackUrl("<url_endpoint>");
Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsync.addParticipantWithResponse(addParticipantOptions).block(); ``` # [JavaScript](#tab/javascript) ```javascript
+// Add user
+// add custom calling context
+const customCallingContext: CustomCallingContext = [];
+customContext.push({ kind: "voip", key: "voipHeaderName", value: "voipHeaderValue" })
+
+const addThisPerson = {
+ targetParticipant: { communicationUserId: "<acs_user_id>" },
+ customCallingContext: customCallingContext,
+};
+const addParticipantResult = await callConnection.addParticipant(addThisPerson, {
+ operationCallbackUrl: "<url_endpoint>",
+ operationContext: "<operation_context>"
+});
+
+// Add PSTN user
const callerIdNumber = { phoneNumber: "+16044561234" }; // This is the Azure Communication Services provisioned phone number for the caller
+// add custom calling context
+const customCallingContext: CustomCallingContext = [];
+customContext.push({ kind: "sipuui", key: "", value: "value" });
+customContext.push({ kind: "sipx", key: "headerName", value: "headerValue" })
const addThisPerson = { targetParticipant: { phoneNumber: "+16041234567" }, sourceCallIdNumber: callerIdNumber,
+ customCallingContext: customCallingContext,
};
-const addParticipantResult = await callConnection.addParticipant(addThisPerson);
+const addParticipantResult = await callConnection.addParticipant(addThisPerson, {
+ operationCallbackUrl: "<url_endpoint>",
+ operationContext: "<operation_context>"
+});
``` # [Python](#tab/python) ```python
+# Add user
+voip_headers = {"voipHeaderName", "voipHeaderValue"}
+target = CommunicationUserIdentifier("<acs_user_id>")
+
+call_connection_client = call_automation_client.get_call_connection(
+ "<call_connection_id_from_ongoing_call>"
+)
+result = call_connection_client.add_participant(
+ target,
+ voip_headers=voip_headers,
+    operation_context="Your context",
+    operation_callback_url="<url_endpoint>"
+)
+
+# Add PSTN user
caller_id_number = PhoneNumberIdentifier( "+18888888888" ) # This is the Azure Communication Services provisioned phone number for the caller
-call_invite = CallInvite(
- target=PhoneNumberIdentifier("+18008008800"),
- source_caller_id_number=caller_id_number,
-)
+sip_headers = {}
+sip_headers.add("User-To-User", "value")
+sip_headers.add("X-MS-Custom-headerName", "headerValue")
+target = PhoneNumberIdentifier("+18008008800"),
+ call_connection_client = call_automation_client.get_call_connection( "<call_connection_id_from_ongoing_call>" )
-result = call_connection_client.add_participant(call_invite)
+result = call_connection_client.add_participant(
+ target,
+ sip_headers=sip_headers,
+    operation_context="Your context",
+    operation_callback_url="<url_endpoint>",
+ source_caller_id_number=caller_id_number
+)
``` --
AddParticipant publishes a `AddParticipantSucceeded` or `AddParticipantFailed` e
![Sequence diagram for adding a participant to the call.](media/add-participant-flow.png)
+## Cancel an add participant request
+
+# [csharp](#tab/csharp)
+
+```csharp
+// add a participant
+var addThisPerson = new CallInvite(new CommunicationUserIdentifier("<user_id>"));
+var addParticipantResponse = await callConnection.AddParticipantAsync(addThisPerson);
+
+// cancel the request with optional parameters
+var cancelAddParticipantOperationOptions = new CancelAddParticipantOperationOptions(addParticipantResponse.Value.InvitationId)
+{
+ OperationContext = "operationContext",
+    OperationCallbackUri = new Uri("uri_endpoint") // Sending event to a non-default endpoint.
+};
+await callConnection.CancelAddParticipantOperationAsync(cancelAddParticipantOperationOptions);
+```
+
+# [Java](#tab/java)
+
+```java
+// Add user
+CallInvite callInvite = new CallInvite(new CommunicationUserIdentifier("<user_id>"));
+AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite);
+Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsync.addParticipantWithResponse(addParticipantOptions).block();
+
+// cancel the request
+CancelAddParticipantOperationOptions cancelAddParticipantOperationOptions = new CancelAddParticipantOperationOptions(addParticipantResultResponse.getValue().getInvitationId())
+ .setOperationContext("<operation_context>")
+ .setOperationCallbackUrl("<url_endpoint>");
+callConnectionAsync.cancelAddParticipantOperationWithResponse(cancelAddParticipantOperationOptions).block();
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+// Add user
+const addThisPerson = {
+ targetParticipant: { communicationUserId: "<acs_user_id>" },
+};
+const { invitationId } = await callConnection.addParticipant(addThisPerson, {
+ operationCallbackUrl: "<url_endpoint>",
+ operationContext: "<operation_context>"
+});
+
+// cancel the request
+await callConnection.cancelAddParticipantOperation(invitationId, {
+ operationCallbackUrl: "<url_endpoint>",
+ operationContext: "<operation_context>"
+});
+```
+
+# [Python](#tab/python)
+
+```python
+# Add user
+target = CommunicationUserIdentifier("<acs_user_id>")
+
+call_connection_client = call_automation_client.get_call_connection(
+ "<call_connection_id_from_ongoing_call>"
+)
+result = call_connection_client.add_participant(target)
+
+# cancel the request
+call_connection_client.cancel_add_participant_operation(result.invitation_id, operation_context="Your context", operation_callback_url="<url_endpoint>")
+```
+--
+ ## Remove a participant from a call # [csharp](#tab/csharp) ```csharp var removeThisUser = new CommunicationUserIdentifier("<user_id>");
-RemoveParticipantsResult result = await callConnection.RemoveParticipantAsync(removeThisUser);
+
+// remove a participant from the call with optional parameters
+var removeParticipantOptions = new RemoveParticipantOptions(removeThisUser)
+{
+ OperationContext = "operationContext",
+    OperationCallbackUri = new Uri("uri_endpoint") // Sending event to a non-default endpoint.
+};
+
+RemoveParticipantsResult result = await callConnection.RemoveParticipantAsync(removeParticipantOptions);
``` # [Java](#tab/java) ```java CommunicationIdentifier removeThisUser = new CommunicationUserIdentifier("<user_id>");
-RemoveParticipantOptions removeParticipantOptions = new RemoveParticipantOptions(removeThisUser);
-Response<RemoveParticipantResult> removeParticipantResultResponse = callConnectionAsync.removeParticipantWithResponse(removeThisUser).block();
+RemoveParticipantOptions removeParticipantOptions = new RemoveParticipantOptions(removeThisUser)
+ .setOperationContext("<operation_context>")
+ .setOperationCallbackUrl("<url_endpoint>");
+Response<RemoveParticipantResult> removeParticipantResultResponse = callConnectionAsync.removeParticipantWithResponse(removeParticipantOptions).block();
``` # [JavaScript](#tab/javascript) ```javascript const removeThisUser = { communicationUserId: "<user_id>" };
-const removeParticipantResult = await callConnection.removeParticipant(removeThisUser);
+const removeParticipantResult = await callConnection.removeParticipant(removeThisUser, {
+ operationCallbackUrl: "<url_endpoint>",
+ operationContext: "<operation_context>"
+});
``` # [Python](#tab/python)
remove_this_user = CommunicationUserIdentifier("<user_id>")
call_connection_client = call_automation_client.get_call_connection( "<call_connection_id_from_ongoing_call>" )
-result = call_connection_client.remove_participant(remove_this_user)
+result = call_connection_client.remove_participant(remove_this_user, operation_context="Your context", operation_callback_url="<url_endpoint>")
``` --
communication-services Connect Whatsapp Business Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/advanced-messaging/whatsapp/connect-whatsapp-business-account.md
Get started with the Azure Communication Services Advanced Messaging, which exte
## Overview
-This document provides information about registering a WhatsApp Business Account with Azure Communication Services. This [video](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=04c63978-6f27-4289-93d6-625d8569ee28) demonstrates the process.
+This document provides information about registering a WhatsApp Business Account with Azure Communication Services. The following video demonstrates this process.
+> [!VIDEO https://learn-video.azurefd.net/vod/player?id=04c63978-6f27-4289-93d6-625d8569ee28]
## Prerequisites
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 11/21/2023 Last updated : 11/29/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
In addition to providing incremental data feed from analytical store to diverse
- Supports applying filters, projections and transformations on the Change feed via source query - Multiple change feeds on the same container can be consumed simultaneously - Each change in container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you-- Changes can be synchronized "from the BeginningΓÇ¥ or ΓÇ£from a given timestampΓÇ¥ or ΓÇ£from nowΓÇ¥
+- Changes can be synchronized "from the beginningΓÇ¥ or ΓÇ£from a given timestampΓÇ¥ or ΓÇ£from nowΓÇ¥
- There's no limitation around the fixed data retention period for which changes are available ## Efficient incremental data capture with internally managed checkpoints
Change data capture in Azure Cosmos DB analytical store supports the following k
### Capturing changes from the beginning
-When the `Start from beginning` option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. This is limited by the `analytical TTL` property and documents TTL-removed from analytical store are not included in the change feed. Example: Imagine a container with `analytical TTL` set to 31536000 seconds, what is equivalent to 1 year. If you create a CDC process for this container, only documents newer than 1 year will be included in the initial load.
+When the `Start from beginning` option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. This is limited by the `analytical TTL` property and documents TTL-removed from analytical store are not included in the change feed. Example: Imagine a container with `analytical TTL` set to 31536000 seconds, which is equivalent to 1 year. If you create a CDC process for this container, only documents newer than 1 year will be included in the initial load.
### Capturing changes from a given timestamp
You can create multiple processes to consume CDC in analytical store. This appro
### Throughput isolation, lower latency and lower TCO
-Operations on Cosmos DB analytical store don't consume the provisioned RUs and so don't affect your transactional workloads. change data capture with analytical store also has lower latency and lower TCO. The lower latency is attributed to analytical store enabling better parallelism for data processing and reduces the overall TCO enabling you to drive cost efficiencies in these rapidly shifting economic conditions.
+Operations on Cosmos DB analytical store don't consume the provisioned RUs and so don't affect your transactional workloads. Change data capture with analytical store also has lower latency and lower TCO. The lower latency is attributed to analytical store enabling better parallelism for data processing and reduces the overall TCO enabling you to drive cost efficiencies in these rapidly shifting economic conditions.
## Scenarios
Change data capture capability enables an end-to-end analytical solution providi
The linked service interface for the API for MongoDB isn't available within Azure Data Factory data flows yet. You can use your API for MongoDB's account endpoint with the **Azure Cosmos DB for NoSQL** linked service interface as a work around until the Mongo linked service is directly supported.
-In the interface for a new NoSQL linked service, select **Enter Manually** to provide the Azure Cosmos DB account information. Here, use the account's NoSQL document endpoint (ex: `https://<account-name>.documents.azure.com:443/`) instead of the Mongo DB endpoint (ex: `mongodb://<account-name>.mongo.cosmos.azure.com:10255/`)
+In the interface for a new NoSQL linked service, select **Enter Manually** to provide the Azure Cosmos DB account information. Here, use the account's NoSQL document endpoint (Example: `https://<account-name>.documents.azure.com:443/`) instead of the Mongo DB endpoint (Example: `mongodb://<account-name>.mongo.cosmos.azure.com:10255/`)
## Next steps
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
tags: billing
Previously updated : 11/10/2023 Last updated : 11/29/2023
Before you transfer billing products, read [Supplemental information about trans
>[!IMPORTANT] > - When you have a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency, you can't transfer it. Instead you must use it in the original enrollment. However, you can change the scope of the savings plan so that it's used by other subscriptions. For more information, see [Change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope). You can view your billing currency in the Azure portal on the enrollment properties page. For more information, see [To view enrollment properties](direct-ea-administration.md#to-view-enrollment-properties). > - When you transfer subscriptions, cost and usage data for your Azure products aren't accessible after the transfer. We recommend that you [download your cost and usage data](../understand/download-azure-daily-usage.md) and invoices before you transfer subscriptions.-
-When there's is a currency change during or after an EA enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly, not up front, reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](ea-transfers.md#prerequisites-1).
+> - When there is a currency change during or after an EA enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly, not up front, reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](ea-transfers.md#prerequisites-1).
Before you begin, make sure that the people involved in the product transfer have the required permissions.
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 11/21/2023 Last updated : 11/29/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
data-manager-for-agri Concepts Understanding Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-understanding-throttling.md
Title: APIs throttling guidance for customers using Azure Data Manager for Agriculture.
-description: Provides information on APIs throttling limits to plan usage.
--
+ Title: APIs throttling guidance for customers using Azure Data Manager for Agriculture
+description: Provides information on APIs throttling limits to plan usage.
++ Previously updated : 04/18/2023 Last updated : 11/15/2023
+# APIs throttling guidance for Azure Data Manager for Agriculture
+REST API throttling in Azure Data Manager for Agriculture provides more consistent performance within a time span for customers calling our service APIs.
+
+- Throttling limits the number of requests to our service in a time span to prevent overuse of resources.
+- Azure Data Manager for Agriculture is designed to handle a high volume of requests. If a few customers submit an overwhelming number of requests, throttling helps maintain optimal performance and reliability for all customers.
+- Throttling limits are contingent on the selected version and the specific capabilities of the product being used. Currently, we support two distinct versions: **Standard** (recommended) and **Basic** (suitable for prototyping requirements). These limits operate within three different time windows (per 1 minute, per 5 minutes, and per one month) to safeguard against sudden surges in traffic.
+
+This article shows you how to track the number of requests that remain before reaching the limit, and how to respond when you reach the limit. These [APIs](/rest/api/data-manager-for-agri/#data-plane-rest-apis) fall under the purview of the throttling limits.
+
+## Classification of APIs
+We categorize all our APIs into three main parts for better understanding:
+- **Write operations** - Comprising APIs utilizing REST API methods like `PATCH`, `POST`, and `DELETE` for altering data.
+- **Read operations** - Encompassing APIs that use REST API method type `GET` to retrieve data, including search APIs of method type `POST`.
+- **Long running job operations** - Involving long-running asynchronous job APIs using the REST API method type `PUT`.
+
+The overall available quota units, as explained in the following table, are shared among these categories. For instance, using up the entire quota on write operations means no remaining quota for other operations. Each operation consumes a specific number of quota units, detailed in the table, helping you track the remaining quota for further use.
+
+Operation | Units cost for each request|
+-| -- |
+Write | 5 |
+Read| 1 <sup>1</sup>|
+Long running job [Solution inference](/rest/api/data-manager-for-agri/#solution-and-model-inferences) | 5 |
+Long running job [Farm operation](/rest/api/data-manager-for-agri/#farm-operation-job) | 5 |
+Long running job [Image rasterize](/rest/api/data-manager-for-agri/#image-rasterize-job) | 2 |
+Long running job (Cascade delete of an entity) | 2 |
+Long running job [Weather ingestion](/rest/api/data-manager-for-agri/#weather) | 1 |
+Long running job [Satellite ingestion](/rest/api/data-manager-for-agri/#satellite-data-ingestion-job) | 1 |
+
+<sup>1</sup>An extra unit cost is taken into account for each item returned in the response when more than one item is being retrieved.
+
+## Basic version API limits
+
+### Total available units per category
+Operation | Throttling time window | Units reset after each time window.|
+-| -- | |
+Write/Read| per 1 Minute | 25,000 |
+Write/Read| per 5 Minutes| 100,000|
+Write/Read| per one Month| 5,000,000 |
+Long running job| per 5 Minutes| 1000|
+Long running job| per one Month| 100,000 |
+
+## Standard version API limits
+The Standard version offers five times the monthly API quota of the Basic version, while all other quota limits remain unchanged.
+
+### Total available units per category
+Operation | Throttling time window | Units reset after each time window.|
+-| -- | |
+Write/Read| per 1 Minute | 25,000 |
+Write/Read| per 5 Minutes| 100,000|
+Write/Read| per one Month| 25,000,000 <sup>1</sup>|
+Long running job| per 5 Minutes| 1000|
+Long running job| per one Month| 500,000 <sup>2</sup>|
+
+<sup>1</sup>This limit is five times the Basic version limit.
+
+<sup>2</sup>This limit is five times the Basic version limit.
+
+## Error code
+When you reach the limit, you receive the HTTP status code **429 Too many requests**. The response includes a **Retry-After** value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value elapses, your request isn't processed and a new retry value is returned.
+After the specified time elapses, you can make requests to Azure Data Manager for Agriculture again. Attempting to establish a new TCP connection or using different user authentication methods doesn't bypass these limits, as they're specific to each tenant.
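For illustration, here's a minimal Python sketch of the retry pattern this section describes. The endpoint path, API version, and token are placeholders (not values from this article); only the HTTP 429 status and `Retry-After` header behavior come from this documentation.

```python
import time

import requests  # assumed HTTP client; any client exposing status codes and headers works

# Placeholder endpoint and token for illustration; substitute your instance values.
url = "https://<your-instance-uri>/parties?api-version=<api-version>"
headers = {"Authorization": "Bearer <access_token>"}

def get_with_retry(url, headers, max_attempts=5):
    """Send a GET request, honoring the Retry-After header on HTTP 429 responses."""
    for _ in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # The response specifies how many seconds to wait before the next request.
        wait_seconds = int(response.headers.get("Retry-After", "1"))
        time.sleep(wait_seconds)
    raise RuntimeError("Request was still throttled after all retry attempts")
```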
-# APIs throttling guidance for Azure Data Manager for Agriculture.
-
-The APIs throttling in Azure Data Manager for Agriculture allows more consistent performance within a time span for customers calling our service APIs. Throttling limits, the number of requests to our service in a time span to prevent overuse of resources. Azure Data Manager for Agriculture is designed to handle a high volume of requests, if an overwhelming number of requests occur by few customers, throttling helps maintain optimal performance and reliability for all customers.
-
-Throttling limits vary based on product type and capabilities being used. Currently we have two versions, standard and basic (for your POC needs).
-
-## Data Plane Service API limits
-
-Throttling category | Units available per Standard version| Units available per Basic version |
-|:|:|:|
-Per Minute | 25,000 | 25,000 |
-Per 5 Minutes| 100,000| 100,000 |
-Per Month| 25,000,000| 5,000,000|
-
-### Maximum requests allowed per type for standard version
-API Type| Per minute| Per 5 minutes| Per month|
-|:|:|:|:|
-PUT |5,000 |20,000 |5,000,000
-PATCH |5,000 |20,000 |5,000,000
-POST |5,000 |20,000 |5,000,000
-DELETE |5,000 |20,000 |5,000,000
-GET (single object) |25,000 |100,000 |25,000,000
-LIST with paginated response |25,000 results |100,000 results |25,000,000 results
-
-### Maximum requests allowed per type for basic version
-API Type| Per minute| Per 5 minutes| Per month|
-|:|:|:|:|
-PUT |5,000 |20,000 |1,000,000
-PATCH |5,000 |20,000 |1,000,000
-POST |5,000 |20,000 |1,000,000
-DELETE |5,000 |20,000 |1,000,000
-GET (single object) |25,000 |100,000 |5,000,000
-LIST with paginated response |25,000 results |100,000 results |5,000,000 results
-
-### Throttling cost by API type
-API Type| Cost per request|
-|:|::|
-PUT |5
-PATCH |5
-POST |5
-DELETE |5
-GET (single object) |1
-GET Sensor Events |1 + 0.01 per result
-LIST with paginated response |1 per request + 1 per result
-
-## Jobs create limits per instance of our service
-The maximum queue size for each job type is 10,000.
-
-### Total units available
-Throttling category| Units available per Standard version| Units available per Basic version|
-|:|:|:|
-Per 5 Minutes |1,000 |1,000
-Per Month |500,000 |100,000
--
-### Maximum create job requests allowed for standard version
-Job Type| Per 5 mins| Per month|
-|:|:|:|
-Cascade delete| 500| 250,000
-Satellite| 1,000| 500,000
-Model inference| 200| 100,000
-Farm Operation| 200| 100,000
-Rasterize| 500| 250,000
-Weather| 1,000| 250,000
--
-### Maximum create job requests allowed for basic version
-Job Type| Per 5 mins| Per month
-|:|:|:|
-Cascade delete| 500| 50,000
-Satellite| 1,000| 100,000
-Model inference| 200| 20,000
-Farm Operation| 200| 20,000
-Rasterize| 500| 50,000
-Weather| 1000| 100,000
-
-### Sensor events limits
-100,000 event ingestion per hour by our sensor job.
+## Frequently asked questions (FAQs)
-## Error code
-When you reach the limit, you receive the HTTP status code **429 Too many requests**. The response includes a **Retry-After** value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value has elapsed, your request isn't processed and a new retry value is returned.
+### 1. If I exhaust the allocated API quota entirely for write operations within a per-minute time window, can I successfully make requests for read operations within the same time window?
+No. The quota limits are shared among the listed operation categories. Using the entire quota for write operations means there's no remaining quota for other operations. The specific quota units consumed for each operation are detailed in this article.
-After waiting for specified time, you can also close and reopen your connection to Azure Data Manager for Agriculture.
+### 2. How can I calculate the total number of successful requests allowed for a particular time window?
+The total allowed number of successful API requests depends on the specific version provisioned and the time window in which requests are made. For instance, with the Standard version, you can make 25,000 (units reset after each time window) / 5 (unit cost for each request) = 5,000 write operation requests within a 1-minute time window, or a combination of 4,000 write operations and 5,000 read operations, which results in 4,000 * 5 + 5,000 * 1 = 25,000 total units consumed. Similarly, for the Basic version, you can perform 5,000,000 (units reset after each time window) / 1 (unit cost for each request) = 5,000,000 read operation requests within a one-month time window.
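As a quick sanity check, here's a small Python sketch of the arithmetic in this answer, using the unit costs and the per-window limits from the tables earlier in this article:

```python
# Unit costs per request, from the "Classification of APIs" table.
WRITE_COST = 5
READ_COST = 1

# Units available per 1-minute window (same for Standard and Basic versions).
UNITS_PER_MINUTE = 25_000

# All-write workload: 25,000 / 5 = 5,000 write requests per minute.
max_writes = UNITS_PER_MINUTE // WRITE_COST
print(max_writes)  # 5000

# Mixed workload: 4,000 writes and 5,000 reads exactly consume the window.
consumed = 4_000 * WRITE_COST + 5_000 * READ_COST
print(consumed, consumed == UNITS_PER_MINUTE)  # 25000 True
```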
+### 3. What is the maximum number of sensor events a customer can ingest?
+The system allows a maximum of 100,000 sensor event ingestions per hour. New events are continually accepted, but processing might be delayed, so these events might not be immediately available for real-time egress scenarios.
+
## Next steps * See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md). * Understand our APIs [here](/rest/api/data-manager-for-agri).
+* Also look at common API [response headers](/rest/api/data-manager-for-agri/common-rest-response-headers).
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 11/21/2023 Last updated : 11/29/2023
defender-for-cloud Concept Integration 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-integration-365.md
Title: Alerts and incidents in Microsoft 365 Defender (Preview) description: Learn about the benefits of receiving Microsoft Defender for Cloud's alerts in Microsoft 365 Defender Previously updated : 11/23/2023 Last updated : 11/29/2023 # Alerts and incidents in Microsoft 365 Defender (Preview)
-Microsoft Defender for Cloud's integration with Microsoft 365 Defender allows security teams to access Defender for Cloud alerts and incidents within the Microsoft 365 Defender portal. This integration provides richer context to investigations that span cloud resources, devices, and identities.
+Microsoft Defender for Cloud is now integrated with Microsoft 365 Defender (Preview). This integration allows security teams to access Defender for Cloud alerts and incidents within the Microsoft 365 Defender portal. This integration provides richer context to investigations that span cloud resources, devices, and identities.
-The partnership with Microsoft 365 Defender allows security teams to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment. This is achieved through immediate correlations of alerts and incidents.
+The partnership with Microsoft 365 Defender allows security teams to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment. Security teams can accomplish this goal through immediate correlations of alerts and incidents.
-Microsoft 365 Defender offers a comprehensive solution that combines protection, detection, investigation, and response capabilities to protect against attacks on device, email, collaboration, identity, and cloud apps. Our detection and investigation capabilities are now extended to cloud entities, offering security operations teams a single pane of glass to significantly improve their operational efficiency.
+Microsoft 365 Defender offers a comprehensive solution that combines protection, detection, investigation, and response capabilities. The solution protects against attacks on devices, email, collaboration, identity, and cloud apps. Our detection and investigation capabilities are now extended to cloud entities, offering security operations teams a single pane of glass to significantly improve their operational efficiency.
Incidents and alerts are now part of [Microsoft 365 Defender's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide). This integration allows exporting of security alerts data to any system using a single API. As Microsoft Defender for Cloud, we're committed to providing our users with the best possible security solutions, and this integration is a significant step towards achieving that goal.
The following table describes the detection and investigation experience in Micr
| Area | Description | |--|--| | Incidents | All Defender for Cloud incidents are integrated to Microsoft 365 Defender. <br> - Searching for cloud resource assets in the [incident queue](/microsoft-365/security/defender/incident-queue?view=o365-worldwide) is supported. <br> - The [attack story](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#attack-story) graph shows cloud resource. <br> - The [assets tab](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#assets) in an incident page shows the cloud resource. <br> - Each virtual machine has its own entity page containing all related alerts and activity. <br> <br> There are no duplications of incidents from other Defender workloads. |
-| Alerts | All Defender for Cloud alerts, including multicloud, internal and external providersΓÇÖ alerts, are integrated to Microsoft 365 Defender. Defenders for Cloud alerts show on the Microsoft 365 Defender [alert queue](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response?view=o365-worldwide). <br> <br> The `cloud resource` asset shows up in the Asset tab of an alert. Resources are clearly identified as an Azure, Amazon, or a Google Cloud resource. <br> <br> Defenders for Cloud alerts are automatically be associated with a tenant. <br> <br> There are no duplications of alerts from other Defender workloads.|
+| Alerts | All Defender for Cloud alerts, including multicloud, internal and external providers' alerts, are integrated to Microsoft 365 Defender. Defender for Cloud alerts show on the Microsoft 365 Defender [alert queue](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response?view=o365-worldwide). <br> <br> The `cloud resource` asset shows up in the Asset tab of an alert. Resources are clearly identified as an Azure, Amazon, or a Google Cloud resource. <br> <br> Defender for Cloud alerts are automatically associated with a tenant. <br> <br> There are no duplications of alerts from other Defender workloads.|
| Alert and incident correlation | Alerts and incidents are automatically correlated, providing robust context to security operations teams to understand the complete attack story in their cloud environment. | | Threat detection | Accurate matching of virtual entities to device entities to ensure precision and effective threat detection. | | Unified API | Defender for Cloud alerts and incidents are now included in [Microsoft 365 Defender's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide), allowing customers to export their security alerts data into other systems using one API. | Learn more about [handling alerts in Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud?view=o365-worldwide).
+## Sentinel customers
+
+Microsoft Sentinel customers can [benefit from the Defender for Cloud integration with Microsoft 365 Defender](../sentinel/ingest-defender-for-cloud-incidents.md) in their workspaces using the Microsoft 365 Defender incidents and alerts connector.
+
+First, you need to [enable incident integration in your Microsoft 365 Defender connector](../sentinel/connect-microsoft-365-defender.md).
+
+Then, enable the `Tenant-based Microsoft Defender for Cloud (Preview)` connector to synchronize your subscriptions so that your tenant-based Defender for Cloud incidents stream through the Microsoft 365 Defender incidents connector.
+
+The connector is available through the Microsoft Defender for Cloud solution, version 3.0.0, in the Content Hub. If you have an earlier version of this solution, you can upgrade it in the Content Hub.
+
+If you have the legacy subscription-based Microsoft Defender for Cloud alerts connector enabled (which is displayed as `Subscription-based Microsoft Defender for Cloud (Legacy)`), we recommend you disconnect the connector in order to prevent duplicating alerts in your logs.
+
+We recommend that you disable any enabled analytics rules (either scheduled or Microsoft incident creation rules) that create incidents from your Defender for Cloud alerts.
+
+You can use automation rules to close incidents immediately and prevent specific types of Defender for Cloud alerts from becoming incidents. You can also use the built-in tuning capabilities in the Microsoft 365 Defender portal to prevent alerts from becoming incidents.
+
+Customers who integrated their Microsoft 365 Defender incidents into Sentinel and want to keep their subscription-based settings and avoid tenant-based syncing can [opt out of syncing incidents and alerts](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud?view=o365-worldwide) through the Microsoft 365 Defender connector.
+
+Learn how [Defender for Cloud and Microsoft 365 Defender handle your data's privacy](data-security.md#defender-for-cloud-and-microsoft-defender-365-defender-integration).
+ ## Next steps [Security alerts - a reference guide](alerts-reference.md)
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 11/07/2023 Last updated : 11/29/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| [Four alerts are set to be deprecated](#four-alerts-are-set-to-be-deprecated) | October 23, 2023 | November 23, 2023 | | [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | | June 2023| | [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | | August 2023 |
-| [Classic connectors for multicloud will be retired](#classic-connectors-for-multicloud-will-be-retired) | | September 2023 |
+| [Classic connectors for multicloud will be retired](#classic-connectors-for-multicloud-will-be-retired) | | November 2023 |
| [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | | September 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | | November 2023 | | [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 |
The following table lists the alerts to be deprecated:
## Classic connectors for multicloud will be retired
-**Estimated date for change: September 15, 2023**
+**Estimated date for change: November 2023**
-The classic multicloud connectors will be retiring on September 15, 2023 and no data will be streamed to them after this date. These classic connectors were used to connect AWS Security Hub and GCP Security Command Center recommendations to Defender for Cloud and onboard AWS EC2s to Defender for Servers.
+The classic multicloud connectors will be retired and no data will be streamed to them after this date. These classic connectors were used to connect AWS Security Hub and GCP Security Command Center recommendations to Defender for Cloud and onboard AWS EC2s to Defender for Servers.
The full value of these connectors has been replaced with the native multicloud security connectors experience, which has been Generally Available for AWS and GCP since March 2022 at no extra cost. The new native connectors are included in your plan and offer an automated onboarding experience with options to onboard single accounts, multiple accounts (with Terraform), and organizational onboarding with auto provisioning for the following Defender plans: free foundational CSPM capabilities, Defender Cloud Security Posture Management (CSPM), Defender for Servers, Defender for SQL, and Defender for Containers.
-If you're currently using the classic multicloud connectors, we strongly recommend that you begin your migration to the native security connectors before September 15, 2023.
+If you're currently using the classic multicloud connectors, we strongly recommend that you migrate to the native security connectors as soon as possible.
How to migrate to the native security connectors:
dns Delegate Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/delegate-subdomain.md
Previously updated : 09/27/2022 Last updated : 11/28/2023 # Delegate an Azure DNS subdomain
-You can use the Azure portal to delegate a DNS subdomain. For example, if you own the contoso.com domain, you may delegate a subdomain called *engineering* to another separate zone that you can administer separately from the contoso.com zone.
+You can use the Azure portal to delegate a DNS subdomain. For example, if you own the *adatum.com* domain, you can delegate a subdomain called *engineering.adatum.com* to another separate zone that you can administer separately from the adatum.com zone.
-If you prefer, you can also delegate a subdomain using [Azure PowerShell](delegate-subdomain-ps.md).
+You can also delegate a subdomain using [Azure PowerShell](delegate-subdomain-ps.md).
## Prerequisites
-To delegate an Azure DNS subdomain, you must first delegate your public domain to Azure DNS. See [Delegate a domain to Azure DNS](./dns-delegate-domain-azure-dns.md) for instructions on how to configure your name servers for delegation. Once your domain is delegated to your Azure DNS zone, you can delegate your subdomain.
+To delegate an Azure DNS subdomain, the parent public domain must first be delegated to Azure DNS. See [Delegate a domain to Azure DNS](./dns-delegate-domain-azure-dns.md) for instructions on how to configure your name servers for delegation. Once your domain is delegated to Azure DNS, you can delegate a subdomain.
> [!NOTE]
-> Contoso.com is used as an example throughout this article. Substitute your own domain name for contoso.com.
+> The `adatum.com` zone is used as an example of a parent DNS zone and `engineering.adatum.com` is used for the subdomain. Substitute your own domain names for these domains.
-## Create a zone for your subdomain
+## Delegate a subdomain
-First, create the zone for the **engineering** subdomain.
+The **engineering.adatum.com** subdomain might already exist. If it doesn't, it's created as part of the delegation.
-1. From the Azure portal, select **+ Create a resource**.
+To delegate the **engineering** subdomain under **adatum.com**:
-1. Search for **DNS zone** and then select **Create**.
+1. From the Azure portal, search for **DNS zones** and select the **adatum.com** parent zone.
+2. Select **+ Child zone** and enter **engineering** next to **Name**. The **Create DNS zone** window opens.
-1. On the **Create DNS zone** page, select the resource group for your zone. You may want to use the same resource group as the parent zone to keep similar resources together.
+ ![A screenshot showing creation of a child DNS zone.](./media/delegate-subdomain/new-child-zone.png)
-1. Enter `engineering.contoso.com` for the **Name** and then select **Create**.
+3. If desired, change the **Subscription** and **Resource group**. In this example, we use the same subscription and resource group as the parent zone.
+4. Select **Review create**, and then select **Create**.
+5. When deployment is complete, select **Go to resource** to view the new delegated zone: **engineering.adatum.com**.
-1. After the deployment succeeds, go to the new zone.
+ [ ![A screenshot showing contents of the child zone.](./media/delegate-subdomain/child-zone-contents.png) ](./media/delegate-subdomain/child-zone-contents.png#lightbox)
-## Note the name servers
+6. Select the parent **adatum.com** zone again and notice that an **NS** record named **engineering** has been added, with the same contents as the NS records in the child zone. You might need to refresh the page. These records are the Azure DNS name servers that are authoritative for the subdomain (child zone).
-Next, note the four name servers for the engineering subdomain.
+ [ ![A screenshot showing contents of the parent zone.](./media/delegate-subdomain/parent-zone-contents.png) ](./media/delegate-subdomain/parent-zone-contents.png#lightbox)
-On the **engineering** zone overview page, note the four name servers for the zone. You'll need these name servers at a later time.
+## Manual entry of NS records (optional)
-## Create a test record
-
-Create an **A** record to use for testing. For example, create a **www** A record and configure it with a **10.10.10.10** IP address.
-
-## Create an NS record
-
-Next, create a name server (NS) record for the **engineering** zone.
+If desired, you can also create your subdomain and add the subdomain NS record manually.
-1. Navigate to the zone for the parent domain.
+To create a new subdomain zone, use **Create a resource > DNS zone** and create a zone named **engineering.adatum.com**.
-1. Select **+ Record set** at the top of the overview page.
+To create a subdomain delegation manually, add a new NS record set (**+ Record set** option) to the parent zone **adatum.com** with the name **engineering**, and specify each of the name server entries listed in the subdomain (child) zone.
-1. On the **Add record set** page, type **engineering** in the **Name** text box.
+<br><img src="./media/delegate-subdomain/add-ns-record-set.png" alt="A screenshot showing how to add an NS record set." width="50%">
-1. For **Type**, select **NS**.
+This method doesn't use the **+ Child zone** option, but both methods result in the same delegation.
-1. Under **Name server**, enter the four name servers that you noted previously from the **engineering** zone.
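If you'd rather script this manual step, the following Python sketch uses the `azure-mgmt-dns` management SDK to add the **engineering** NS record set to the parent zone. This is a minimal sketch under the assumption that you substitute your own subscription ID, resource group, and the name servers listed in your child zone; the portal steps above remain the documented path.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

# Placeholder values; substitute your subscription, resource group,
# and the four name servers listed in the engineering.adatum.com child zone.
dns_client = DnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

dns_client.record_sets.create_or_update(
    resource_group_name="<resource-group>",
    zone_name="adatum.com",                  # the parent zone
    relative_record_set_name="engineering",  # the subdomain label
    record_type="NS",
    parameters={
        "ttl": 3600,
        "ns_records": [
            {"nsdname": "<name-server-1>"},
            {"nsdname": "<name-server-2>"},
            {"nsdname": "<name-server-3>"},
            {"nsdname": "<name-server-4>"},
        ],
    },
)
```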
+## Create a test record
-1. Select **OK** to save the record.
+Next, create an **A** record in the **engineering.adatum.com** zone to use for testing. For example, create a **www** A record and configure it with a **10.10.10.10** IP address.
## Test the delegation Use nslookup to test the delegation.
-1. Open a PowerShell window.
-
-1. At command prompt, type `nslookup www.engineering.contoso.com.`
-
-1. You should receive a non-authoritative answer showing the address **10.10.10.10**.
+1. Open a command prompt.
+2. At the command prompt, type `nslookup www.engineering.adatum.com.`
+3. You should receive a non-authoritative answer showing the address **10.10.10.10**.
## Next steps
dns Dns Operations Recordsets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets-portal.md
Previously updated : 09/27/2022 Last updated : 11/27/2023
To create a record set in the Azure portal, see [Create DNS records by using the
## View a record set
-1. In the Azure portal, go to the **DNS zone** overview page.
+1. In the Azure portal, go to the **DNS zones** overview page.
-1. Search for the record set and select it will open the record set properties.
+1. Select your DNS zone. The current record sets are displayed.
:::image type="content" source="./media/dns-operations-recordsets-portal/overview.png" alt-text="Screenshot of contosotest.com zone overview page.":::
energy-data-services Concepts Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-authentication.md
In the OSDU instance,
2. The first Service Principal is used for API access. It can also manage infrastructure resources. 3. The second Service Principal is used for service-to-service (S2S) communications. -
+## Refresh Auth Token
+You can refresh the authorization token using the steps outlined in [Generate a refresh token](how-to-generate-refresh-token.md).
energy-data-services How To Convert Segy To Ovds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md
If the user isn't part of the required group, you can add the required entitleme
[![Screenshot that shows the API call to get register a user as an admin in Postman.](media/how-to-convert-segy-to-vds/postman-api-add-user-to-admins.png)](media/how-to-convert-segy-to-vds/postman-api-add-user-to-admins.png#lightbox)
-If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user-in-a-data-partition). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
+If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-osdu-groups-for-a-given-user-in-a-data-partition). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
### Prepare Subproject
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
If the user isn't part of the required group, you can add the required entitleme
[![Screenshot that shows the API call to get register a user as an admin in Postman.](media/how-to-convert-segy-to-zgy/postman-api-add-user-to-admins.png)](media/how-to-convert-segy-to-zgy/postman-api-add-user-to-admins.png#lightbox)
-If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user-in-a-data-partition). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
+If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-osdu-groups-for-a-given-user-in-a-data-partition). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
### Prepare Subproject
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
In this article, you learn how to manage users and their memberships in OSDU gro
2. Locate `tenant-id` under the basic information section in the *Overview* tab. 3. Copy the `tenant-id` and paste it into an editor to be used later. :::image type="content" source="media/how-to-manage-users/tenant-id.png" alt-text="Screenshot of finding the tenant-id.":::
A `client-secret` is a string value your app can use in place of a certificate t
:::image type="content" source="media/how-to-manage-users/data-partition-id-second-option.png" alt-text="Screenshot of finding the data-partition-id from the Azure Data Manager for Energy instance overview page."::: :::image type="content" source="media/how-to-manage-users/data-partition-id-second-option-step-2.png" alt-text="Screenshot of finding the data-partition-id from the Azure Data Manager for Energy instance overview page with the data partitions.":::
-## Generate access token
+## Generate service principal access token
1. Run the below curl command in Azure Cloud Bash after replacing the placeholder values with the corresponding values found earlier in the above steps.
curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa
1. Find the 'object-id' (OID) of the user(s) first. If you are managing an application's access, you must find and use the application ID (or client ID) instead of the OID. 2. Input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance. :::image type="content" source="media/how-to-manage-users/profile-object-id.png" alt-text="Screenshot of finding the object-id from the profile."::: ## First time addition of users in a new data partition
-In order to add entitlements to a new data partition of Azure Data Manager for Energy instance, use the SPN token of the app that was used to provision the instance. If you try to directly use user tokens for adding entitlements, it results in 401 error. The SPN token must be used to add initial users in the system and those users (with admin access) can then manage additional users.
-
-The SPN is generated using client_credentials flow
-```bash
-curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oauth2/token' \
header 'Content-Type: application/x-www-form-urlencoded' \data-urlencode 'grant_type=client_credentials' \data-urlencode 'scope=<client-id>.default' \data-urlencode 'client_id=<client-id>' \data-urlencode 'client_secret=<client-secret>' \data-urlencode 'resource=<client-id>'
-```
+1. In order to add entitlements to a new data partition of Azure Data Manager for Energy instance, use the access token of the app that was used to provision the instance.
+2. Get the service principal access token using [Generate service principal access token](how-to-manage-users.md#generate-service-principal-access-token).
+3. If you try to directly use user tokens for adding entitlements, it results in a 401 error. The service principal access token must be used to add initial users in the system, and those users (with admin access) can then manage more users.
+4. Use the service principal access token to do the following three steps, using the commands outlined in the following sections.
+5. Add the users to the `users@<data-partition-id>.<domain>` OSDU group.
+6. Identify the OSDU group, such as `service.legal.editor@<data-partition-id>.<domain>`, that you want to add the user to.
+7. Add the users to that group.
## Get the list of all available groups in a data partition
Run the below curl command in Azure Cloud Bash to get all the groups that are av
--header 'Authorization: Bearer <access_token>' ```
-## Add user(s) to an OSDU group in a data partition
+## Add users to an OSDU group in a data partition
1. Run the below curl command in Azure Cloud Bash to add the user(s) to the "Users" group using the Entitlement service. 2. The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email. ```bash
- curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/users@<data-partition-id>.dataservices.energy/members' \
+ curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.dataservices.energy/members' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' \ --header 'Content-Type: application/json' \
Run the below curl command in Azure Cloud Bash to get all the groups that are av
}' ```
-**Sample request**
+**Sample request for `users` OSDU group**
Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1"
Consider an Azure Data Manager for Energy instance named "medstest" with a data
"role": "MEMBER" } ```
-> [!IMPORTANT]
-> The app-id is the default OWNER of all the groups.
-
-## Add user(s) to an entitlements group in a data partition
-
-1. Run the below curl command in Azure Cloud Bash to add the user(s) to an entitlement group using the Entitlement service.
-2. The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email.
-
+**Sample request for `legal service editor` OSDU group**
```bash
- curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/service.search.user@<data-partition-id>.dataservices.energy/members' \
- --header 'data-partition-id: <data-partition-id>' \
- --header 'Authorization: Bearer <access_token>' \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "email": "<Object_ID>",
- "role": "MEMBER"
- }'
-```
--
-**Sample request**
-
-Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1".
-
-```bash
- curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/service.search.user@medstest-dp1.dataservices.energy/members' \
+ curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/service.legal.editor@medstest-dp1.dataservices.energy/members' \
    --header 'data-partition-id: medstest-dp1' \
    --header 'Authorization: Bearer abcdefgh123456.............' \
    --header 'Content-Type: application/json' \
    --data-raw '{
- "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
- "role": "MEMBER"
- }'
+ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "role": "MEMBER"
+ }'
```
-**Sample response**
-
-```JSON
- {
- "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
- "role": "MEMBER"
- }
-```
+> [!IMPORTANT]
+> The app-id is the default OWNER of all the groups.
-## Get entitlements groups for a given user in a data partition
+## Get OSDU groups for a given user in a data partition
1. Run the below curl command in Azure Cloud Bash to get all the groups associated with the user.
Consider an Azure Data Manager for Energy instance named "medstest" with a data
} ```
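As a hedged sketch, the underlying request follows the Entitlements `members` route (the route and the `type=none` query parameter are assumptions based on the standard OSDU Entitlements API; verify them against your instance):

```bash
curl --location --request GET 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \
  --header 'data-partition-id: <data-partition-id>' \
  --header 'Authorization: Bearer <access_token>'
```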
-## Delete entitlement groups of a given user in a data partition
+## Delete OSDU groups of a given user in a data partition
1. Run the below curl command in Azure Cloud Bash to delete a given user from a given data partition.
-2. As stated above, **DO NOT** delete the OWNER of a group unless you have another OWNER who can manage users in that group.
+2. **DO NOT** delete the OWNER of a group unless you have another OWNER who can manage users in that group.
```bash
  curl --location --request DELETE 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>' \
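  --header 'data-partition-id: <data-partition-id>' \
  --header 'Authorization: Bearer <access_token>'
# The two header lines above are an editorial sketch that completes the truncated
# command; they mirror the headers used by the other Entitlements calls in this article.
```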
event-grid Configure Firewall Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall-mqtt.md
Title: Configure IP firewall for Azure Event Grid namespaces
+ Title: Configure IP firewall for Azure Event Grid namespaces (MQTT)
description: This article describes how to configure firewall settings for Azure Event Grid namespaces that have MQTT enabled.
-# Configure IP firewall for Azure Event Grid namespaces
+# Configure IP firewall for Azure Event Grid namespaces (MQTT)
By default, Event Grid namespaces, and entities in them such as Message Queuing Telemetry Transport (MQTT) topic spaces, are accessible from the internet as long as the request comes with valid authentication (access key) and authorization. With IP firewall, you can restrict access further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Only the MQTT clients that fall into the allowed IP range can connect to publish and subscribe. Clients originating from any other IP address are rejected and receive a 403 (Forbidden) response. For more information about network security features supported by Event Grid, see [Network security for Event Grid](network-security.md).

This article describes how to configure IP firewall settings for an Event Grid namespace. For complete steps for creating a namespace, see [Create and manage namespaces](create-view-manage-namespaces.md).
event-grid Configure Firewall Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall-namespace-topics.md
+
+ Title: Configure IP firewall for Azure Event Grid namespaces
+description: This article describes how to configure firewall settings for Azure Event Grid namespaces.
+ Last updated : 11/29/2023++++
+# Configure IP firewall for Azure Event Grid namespaces
+By default, Event Grid namespaces and entities are accessible from the internet as long as the request comes with valid authentication (access key) and authorization. With IP firewall, you can restrict access further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Only the clients that fall into the allowed IP range can connect to publish and subscribe (subscribing applies to pull-based clients only). Clients originating from any other IP address are rejected and receive a 403 (Forbidden) response. For more information about network security features supported by Event Grid, see [Network security for Event Grid](network-security.md).
+
+This article describes how to configure IP firewall settings for an Event Grid namespace. For complete steps for creating a namespace, see [Create and manage namespaces](create-view-manage-namespaces.md).
+
+## Create a namespace with IP firewall settings
+
+1. On the **Networking** page, if you want to allow clients to connect to the namespace endpoint via a public IP address, select **Public access** for **Connectivity method** if it's not already selected.
+2. You can restrict access to the namespace from specific IP addresses by specifying values for the **Address range** field. Specify a single IPv4 address or a range of IP addresses in Classless inter-domain routing (CIDR) notation.
+
+ :::image type="content" source="./media/configure-firewall-namespace-topics/ip-firewall-settings.png" alt-text="Screenshot that shows IP firewall settings on the Networking page of the Create namespace wizard.":::
+
+## Update a namespace with IP firewall settings
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the **search box**, enter **Event Grid Namespaces** and select **Event Grid Namespaces** from the results.
+
+ :::image type="content" source="./media/create-view-manage-namespaces/portal-search-box-namespaces.png" alt-text="Screenshot showing Event Grid Namespaces in the search results.":::
+1. Select your Event Grid namespace in the list to open the **Event Grid Namespace** page for your namespace.
+1. On the **Event Grid Namespace** page, select **Networking** on the left menu.
+1. Specify values for the **Address range** field. Specify a single IPv4 address or a range of IP addresses in Classless inter-domain routing (CIDR) notation.
+
+ :::image type="content" source="./media/configure-firewall-namespace-topics/namespace-ip-firewall-settings.png" alt-text="Screenshot that shows IP firewall settings on the Networking page of an existing namespace.":::
+
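If you script deployments instead of using the portal, the same restriction can be expressed on the namespace resource itself. The following is a hedged sketch of the relevant fragment of a `Microsoft.EventGrid/namespaces` ARM template; the property names are an assumption based on the Event Grid resource schema, so verify them against the current API version:

```json
{
  "properties": {
    "publicNetworkAccess": "Enabled",
    "inboundIpRules": [
      { "ipMask": "10.0.0.0/24", "action": "Allow" },
      { "ipMask": "20.119.16.5", "action": "Allow" }
    ]
  }
}
```

Requests from addresses outside the listed ranges receive the 403 (Forbidden) response described earlier.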
+## Next steps
+See [Allow access via private endpoints](configure-private-endpoints-pull.md).
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
Previously updated : 06/12/2023 Last updated : 11/28/2023
See [here](./designing-for-high-availability-with-expressroute.md) for designing
You can achieve high availability by connecting up to 4 ExpressRoute circuits in the same peering location to your virtual network. You can also connect up to 16 ExpressRoute circuits in different peering locations to your virtual network. For example, Singapore and Singapore2. If one ExpressRoute circuit disconnects, connectivity fails over to another ExpressRoute circuit. By default, traffic leaving your virtual network is routed based on Equal Cost Multi-path Routing (ECMP). You can use **connection weight** to prefer one circuit to another. For more information, see [Optimizing ExpressRoute Routing](expressroute-optimize-routing.md). > [!NOTE]
-> Although it is possible to connect up to 16 circuits to your virtual network, the outgoing traffic from your virtual network will be load-balanced using Equal-Cost Multipath (ECMP) across a maximum of 4 circuits.
+> - Although it is possible to connect up to 16 circuits to your virtual network, the outgoing traffic from your virtual network will be load-balanced using Equal-Cost Multipath (ECMP) across a maximum of 4 circuits.
+> - Equal-Cost Multipath (ECMP) in ExpressRoute uses the per-flow (based on 5-tuple) load-balancing method. Accordingly, traffic flow between a given source and destination host pair is guaranteed to take the same path, even if multiple ECMP paths are available.
### How do I ensure that my traffic destined for Azure Public services like Azure Storage and Azure SQL on Microsoft peering or public peering is preferred on the ExpressRoute path?
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 11/21/2023 Last updated : 11/29/2023
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 11/21/2023 Last updated : 11/29/2023
hdinsight Apache Hadoop Linux Create Cluster Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md
keywords: hadoop getting started,hadoop linux,hadoop quickstart,hive getting sta
Previously updated : 10/20/2022 Last updated : 11/29/2023 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Azure portal and run a Hive job
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
hdinsight Apache Spark Jupyter Spark Sql Use Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-portal.md
description: This quickstart shows how to use the Azure portal to create an Apac
Previously updated : 10/13/2022 Last updated : 11/29/2023 #Customer intent: As a developer new to Apache Spark on Azure, I need to see how to create a Spark cluster and query some data.
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
industry Configure Rules Alerts In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/configure-rules-alerts-in-azure-farmbeats.md
Title: Configure rules and manage alerts description: Describes how to configure rules and manage alerts in FarmBeats-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Configure rules and manage alerts Azure FarmBeats allows you to create rules based on the business logic, in addition to the sensor data that flows from the sensors and devices deployed in your farm. The rules trigger alerts in the system whenever sensor values cross a threshold value. By viewing and analyzing the alerts created after the threshold values, you can quickly act on any issues and get required solutions.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ ## Create rule

1. On the home page, go to **Rules**.
industry Disaster Recovery For Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/disaster-recovery-for-farmbeats.md
Data recovery protects you from losing your data in an event like collapse of Azure region. In such an event, you can start failover, and recover the data stored in your FarmBeats deployment. Data recovery is not a default feature in Azure FarmBeats. You can configure this feature manually by configuring the required Azure resources that are used by FarmBeats to store data in an Azure paired region. Use Active ΓÇô Passive approach to enable recovery.-
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
The following sections provide information about how you can configure data recovery in Azure FarmBeats: - [Enable data redundancy](#enable-data-redundancy)
industry Generate Maps In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/generate-maps-in-azure-farmbeats.md
Title: Generate maps description: This article describes how to generate maps in Azure FarmBeats.-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Generate maps
Using Azure FarmBeats, you can generate the following maps by using satellite im
- **Satellite Indices map**: Shows the vegetation index and water index for a farm. - **Soil Moisture heatmap**: Shows soil moisture distribution by fusing satellite data and sensor data.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ ## Sensor Placement map A FarmBeats Sensor Placement map assists you with the placement of soil moisture sensors. The map output consists of a list of coordinates for sensor deployment. The inputs from these sensors are used along with satellite imagery to generate the Soil Moisture heatmap.
industry Generate Soil Moisture Map In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/generate-soil-moisture-map-in-azure-farmbeats.md
Title: Generate Soil Moisture Heatmap description: Describes how to generate Soil Moisture Heatmap in Azure FarmBeats-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Generate Soil Moisture Heatmap Soil moisture is the water that is held in the spaces between soil particles. Soil Moisture Heatmap helps you understand the moisture data at any depth, and at high resolution within your farms. To generate an accurate and usable soil moisture heatmap, a uniform deployment of sensors from the same provider is required. Different providers will have differences in the way soil moisture is measured along with differences in calibration. The Heatmap is generated for a particular depth using the sensors deployed at that depth.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ This article describes the process of generating a Soil Moisture Heatmap for your farm, using the Azure FarmBeats Accelerator. In this article, you will learn how to:

- [Create Farms](#create-a-farm)
industry Get Drone Imagery In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/get-drone-imagery-in-azure-farmbeats.md
Title: Get drone imagery description: This article describes how to get drone imagery from partners.-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Get drone imagery from drone partners This article describes how you can bring in orthomosaic data from your drone imagery partners to Azure FarmBeats Datahub. An orthomosaic is an aerial illustration or image that's geometrically corrected and stitched from data collected by a drone.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ Currently, the following imagery partners are supported.

![FarmBeats drone imagery partners](./media/get-drone-imagery-from-drone-partner/drone-partner-1.png)
industry Get Sensor Data From Sensor Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/get-sensor-data-from-sensor-partner.md
Title: Get sensor data from the partners description: This article describes how to get sensor data from partners.-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Get sensor data from sensor partners Azure FarmBeats helps you to bring streaming data from your IoT devices and sensors into Datahub. Currently, the following sensor device partners are supported.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
++ ![FarmBeats partners](./media/get-sensor-data-from-sensor-partner/partner-information-2.png)

Integrating device data with Azure FarmBeats helps you get ground data from the IoT sensors deployed in your farm to the data hub. The data, once available, can be visualized through the FarmBeats accelerator. The data can be used for data fusion and machine learning/artificial intelligence (ML/AI) model building by using FarmBeats.
-To start sensor data streaming, ensure the following:
+To start sensor data streaming, ensure that the following prerequisites are met:
- You installed FarmBeats in Azure Marketplace. - You decided on the sensors and devices that you want to install on your farm.
Follow the below steps to generate the above information:
a. Go to **Microsoft Entra ID** > **App Registrations**
- b. Select the **App Registration** that was created as part of your FarmBeats deployment. It will have the same name as your FarmBeats datahub.
+ b. Select the **App Registration** that was created as part of your FarmBeats deployment. It has the same name as your FarmBeats datahub.
- c. Select **Expose an API** > select **Add a client application** and enter **04b07795-8ddb-461a-bbee-02f9e1bf7b46** and check **Authorize Scope**. This will give access to the Azure CLI (Cloud Shell) to perform the below steps:
+ c. Select **Expose an API** > select **Add a client application**, enter **04b07795-8ddb-461a-bbee-02f9e1bf7b46**, and check **Authorize Scope**. This gives the Azure CLI (Cloud Shell) access to perform the following steps:
3. Open Cloud Shell. This option is available on the toolbar in the upper-right corner of the Azure portal. ![Azure portal toolbar](./media/get-drone-imagery-from-drone-partner/navigation-bar-1.png)
-4. Ensure the environment is set to **PowerShell**. By default, it's set to Bash.
+4. Ensure the environment is set to **PowerShell**. By default, it is set to Bash.
![PowerShell toolbar setting](./media/get-sensor-data-from-sensor-partner/power-shell-new-1.png)
Follow the below steps to generate the above information:
cd ```
-6. Run the following command. This connects an authenticated account to use for Microsoft Entra ID requests
+6. Run the following command to connect an authenticated account to use for Microsoft Entra ID requests.
```azurepowershell-interactive Connect-AzureAD ```
-7. Run the following command. This will download a script to your home directory.
+7. Run the following command. This downloads a script to your home directory.
```azurepowershell-interactive
Now you have the following information generated from the previous section.
- Client secret - Tenant ID
-You will need to provide this to your device partner for linking FarmBeats. Go to the device partner portal for doing the same. For example, in case you are using devices from Davis Instruments, Teralytic or Pessl Instruments (Metos.at) go to the corresponding pages as mentioned below:
+You need to provide this information to your device partner to link FarmBeats; go to the device partner portal to do so. For example, if you're using devices from Davis Instruments, Teralytic, or Pessl Instruments (Metos.at), go to the corresponding pages listed below:
1. [Davis Instruments](https://weatherlink.github.io/azure-farmbeats/setup)
Currently, FarmBeats supports the following devices:
Follow these steps: 1. On the home page, select **Devices** from the menu.
- The **Devices** page displays the device type, model, status, the farm it's placed in, and the last updated date for metadata. By default, the farm column is set to *NULL*. You can choose to assign a device to a farm. For more information, see [Assign devices](#assign-devices).
+ The **Devices** page displays the device type, model, status, the farm it is placed in, and the last updated date for metadata. By default, the farm column is set to *NULL*. You can choose to assign a device to a farm. For more information, see [Assign devices](#assign-devices).
2. Select the device to view the device properties, telemetry, and child devices connected to the device. ![Devices page](./media/get-sensor-data-from-sensor-partner/view-devices-1.png)
Follow these steps:
Follow these steps: 1. On the home page, select **Sensors** from the menu.
- The **Sensors** page displays details about the type of sensor, the farm it's connected to, parent device, port name, port type, and the last updated status.
+ The **Sensors** page displays details about the type of sensor, the farm it is connected to, parent device, port name, port type, and the last updated status.
2. Select the sensor to view sensor properties, active alerts, and telemetry from the sensor. ![Sensors page](./media/get-sensor-data-from-sensor-partner/view-sensors-1.png)
industry Get Weather Data From Weather Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/get-weather-data-from-weather-partner.md
Title: Get weather data from weather partners description: This article describes how to get weather data from partners.-+ Previously updated : 03/31/2020- Last updated : 11/29/2023+ # Get weather data from weather partners Azure FarmBeats helps you to bring weather data from your weather data providers by using a Docker-based Connector Framework. Using this framework, weather data providers implement a Docker that can be integrated with FarmBeats. Currently, the following weather data provider is supported.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
++ ![FarmBeats partners](./media/get-sensor-data-from-sensor-partner/dtn-logo.png)

[DTN](https://www.dtn.com/dtn-content-integration/)
industry Imagery Partner Integration In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/imagery-partner-integration-in-azure-farmbeats.md
Title: Imagery partner integration description: This article describes imagery partner integration.-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Imagery partner integration This article describes how to use the Azure FarmBeats Translator component to send imagery data to FarmBeats. Agricultural imagery data can be generated from various sources, such as multispectral cameras, satellites, and drones. Agricultural imagery partners can integrate with FarmBeats to provide customers with custom-generated maps for their farms.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ Data, once available, can be visualized through the FarmBeats Accelerator and potentially be used for data fusion and machine learning/artificial intelligence (ML/AI) model building by agricultural businesses or customer system integrators. FarmBeats provides the ability to:
industry Ingest Historical Telemetry Data In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/ingest-historical-telemetry-data-in-azure-farmbeats.md
Title: Ingest historical telemetry data description: This article describes how to ingest historical telemetry data.-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+
This article describes how to ingest historical sensor data into Azure FarmBeats.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ Ingesting historical data from Internet of Things (IoT) resources such as devices and sensors is a common scenario in FarmBeats. You create metadata for devices and sensors and then ingest the historical data to FarmBeats in a canonical format.

## Before you begin
Before you proceed with this article, ensure that you've installed FarmBeats and
## Enable partner access
-You need to enable partner integration to your Azure FarmBeats instance. This step creates a client that has access to your Azure FarmBeats instance as your device partner and provides you with the following values that are required in the subsequent steps:
+You need to enable partner integration for your Azure FarmBeats instance. Doing so creates a client that has access to your Azure FarmBeats instance as your device partner and provides you with the following values that are required in the subsequent steps:
- API endpoint: This is the Datahub URL, for example, https://\<datahub>.azurewebsites.net - Tenant ID
Follow these steps:
a. Go to **Microsoft Entra ID** > **App Registrations**
- b. Select the **App Registration** that was created as part of your FarmBeats deployment. It will have the same name as your FarmBeats datahub.
+ b. Select the **App Registration** that was created as part of your FarmBeats deployment. It has the same name as your FarmBeats datahub.
- c. Select **Expose an API** > select **Add a client application** and enter **04b07795-8ddb-461a-bbee-02f9e1bf7b46** and check **Authorize Scope**. This will give access to the Azure CLI (Cloud Shell) to perform the below steps:
+ c. Select **Expose an API** > select **Add a client application**, enter **04b07795-8ddb-461a-bbee-02f9e1bf7b46**, and check **Authorize Scope**. This gives the Azure CLI (Cloud Shell) access to perform the following steps:
3. Open Cloud Shell. This option is available on the toolbar in the upper-right corner of the Azure portal.
Follow these steps:
Connect-AzureAD ```
-7. Run the following command. This will download a script to your home directory.
+7. Run the following command. This downloads a script to your home directory.
```azurepowershell-interactive 
Follow these steps:
- /**DeviceModel**: DeviceModel corresponds to the metadata of the device, such as the manufacturer and the type of device, which is either a gateway or a node. - /**Device**: Device corresponds to a physical device present on the farm. - /**SensorModel**: SensorModel corresponds to the metadata of the sensor, such as the manufacturer, the type of sensor, which is either analog or digital, and the sensor measurement, such as ambient temperature and pressure.-- /**Sensor**: Sensor corresponds to a physical sensor that records values. A sensor is typically connected to a device with a device ID.
+- /**Sensor**: Sensor corresponds to a physical sensor that records values. A sensor is typically connected to a device with a device ID.
| DeviceModel | Suggestions | |--|--|
Follow these steps:
| Ports | Port name and type, which is digital or analog. | | Name | Name to identify the resource. For example, the model name or product name. | | Description | Provide a meaningful description of the model. |
-| Properties | Additional properties from the manufacturer. |
+| Properties | Other properties from the manufacturer. |
| **Device** | | | DeviceModelId | ID of the associated device model. | | HardwareId | Unique ID for the device, such as the MAC address. |
Follow these steps:
| ParentDeviceId | ID of the parent device to which this device is connected. For example, a node that's connected to a gateway. A node has parentDeviceId as the gateway. | | Name | A name to identify the resource. Device partners must send a name that's consistent with the device name on the partner side. If the partner device name is user defined, then the same user-defined name should be propagated to FarmBeats. | | Description | Provide a meaningful description. |
-| Properties | Additional properties from the manufacturer. |
+| Properties | Other properties from the manufacturer. |
| **SensorModel** | | | Type (analog, digital) | The type of sensor, whether it's analog or digital. | | Manufacturer | The manufacturer of the sensor. |
Follow these steps:
| SensorMeasures > AggregationType | Values can be none, average, maximum, minimum, or StandardDeviation. | | Name | Name to identify a resource. For example, the model name or product name. | | Description | Provide a meaningful description of the model. |
-| Properties | Additional properties from the manufacturer. |
+| Properties | Other properties from the manufacturer. |
| **Sensor** | | | HardwareId | Unique ID for the sensor set by the manufacturer. | | SensorModelId | ID of the associated sensor model. |
Follow these steps:
| DeviceID | ID of the device that the sensor is connected to. | | Name | Name to identify resource. For example, sensor name or product name and model number or product code. | | Description | Provide a meaningful description. |
-| Properties | Additional properties from the manufacturer. |
+| Properties | Other properties from the manufacturer. |
### API request to create metadata
response = requests.post(ENDPOINT + "/DeviceModel", data=payload, headers=header
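As a self-contained sketch of this call (the payload fields follow the DeviceModel table above; `ENDPOINT`, the token, and all field values are placeholders, and exact field casing should be verified against your Datahub Swagger):

```python
import json
import requests

ENDPOINT = "https://<datahub>.azurewebsites.net"  # placeholder Datahub URL
headers = {
    "Authorization": "Bearer <access_token>",  # token from the partner-access steps above
    "Content-Type": "application/json",
}

# Hypothetical DeviceModel payload; fields mirror the metadata table above.
payload = json.dumps({
    "type": "Node",                     # gateway or node
    "manufacturer": "Contoso Sensors",  # illustrative manufacturer
    "productCode": "CS-100",            # illustrative product code
    "name": "Contoso Soil Node",
    "description": "Soil moisture node model",
})

response = requests.post(ENDPOINT + "/DeviceModel", data=payload, headers=headers)
print(response.status_code, response.json())
```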
### Send telemetry
-Now that you've created the devices and sensors in FarmBeats, you can send the associated telemetry messages.
+Now that devices and sensors are created in FarmBeats, you can send the associated telemetry messages.
### Create a telemetry client
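The partner flow sends telemetry to the Event Hub provisioned by FarmBeats. Here's a minimal client sketch using the current `azure-eventhub` Python SDK; the connection string and hub name are placeholders, and the original flow used an older SDK, so treat this as an equivalent rather than the exact published snippet:

```python
import json

from azure.eventhub import EventData, EventHubProducerClient

# Placeholders: the Event Hub connection string returned when you enabled
# partner access, and the event hub assigned to your partner.
producer = EventHubProducerClient.from_connection_string(
    conn_str="<eventhub-connection-string>",
    eventhub_name="<eventhub-name>",
)

# A telemetry payload in the canonical format (see the example message below).
message = {"deviceid": "<device-id>", "timestamp": "2023-11-29T00:00:00Z", "sensors": []}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(message)))
    producer.send_batch(batch)
```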
Here's an example of a telemetry message:
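A sketch of the canonical format (field names follow the FarmBeats telemetry schema described in this article; verify them against your Datahub):

```json
{
  "deviceid": "<id of the device>",
  "timestamp": "<timestamp in ISO 8601 format>",
  "version": "1",
  "sensors": [
    {
      "id": "<id of the sensor>",
      "sensordata": [
        {
          "timestamp": "<timestamp in ISO 8601 format>",
          "<sensor measure name>": "<value>"
        }
      ]
    }
  ]
}
```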
### Can't view telemetry data after ingesting historical/streaming data from your sensors
-**Symptom**: Devices or sensors are deployed, and you've created the devices/sensors on FarmBeats and ingested telemetry to the EventHub, but you can't get or view telemetry data on FarmBeats.
+**Symptom**: Devices or sensors are deployed, and you have created the devices/sensors on FarmBeats and ingested telemetry to the EventHub, but you can't get or view telemetry data on FarmBeats.
**Corrective action**:
industry Install Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/install-azure-farmbeats.md
Title: Install Azure FarmBeats description: This article describes how to install Azure FarmBeats in your Azure subscription-+ Previously updated : 1/17/2020- Last updated : 11/29/2023+ # Install Azure FarmBeats
This article describes how to install Azure FarmBeats in your Azure subscription
Azure FarmBeats is a business-to-business offering available in Azure Marketplace. It enables aggregation of agriculture data sets across providers and generation of actionable insights. Azure FarmBeats does so by enabling you to build artificial intelligence (AI) or machine learning (ML) models based on fused data sets. The two main components of Azure FarmBeats are:
-> [!NOTE]
-> Azure FarmBeats is on path to be retired. We have built a new agriculture focused service, it's name is Azure Data Manager for Agriculture and it's now available as a preview service. For more information see public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
- **Data hub**: An API layer that enables aggregation, normalization, and contextualization of various agriculture data sets across different providers.
industry Integration Patterns In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/integration-patterns-in-azure-farmbeats.md
Title: Azure FarmBeats Architecture description: Describes the architecture of Azure FarmBeats-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Integration patterns Azure FarmBeats is a business-to-business offering, available in Azure Marketplace. FarmBeats enables aggregation of agriculture datasets across providers, and generation of actionable insights by building Artificial Intelligence (AI) or Machine Learning (ML) models by fusing the data sets.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ ![Project Farm Beats](./media/architecture-for-farmbeats/farmbeats-architecture-1.png)

The following sections describe the integration pattern for Azure FarmBeats.
industry Manage Farms In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/manage-farms-in-azure-farmbeats.md
Title: Manage Farms description: Describes how to manage farms-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Manage farms
-You can manage your farms in Azure FarmBeats. This article provides the information about how to create farms, install devices, sensors, and drones that helps you manage your farms.
+You can manage your farms in Azure FarmBeats. This article provides information about how to create farms and install the devices, sensors, and drones that help you manage your farms.
+
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
## Create farms

Use the following steps:
-1. Login to the Farm Accelerator, the **Farms** page displays.
- The **Farms** page displays the list of farms in case they have already been created in subscription.
+1. Sign in to the Farm Accelerator; the **Farms** page is displayed.
+    The **Farms** page displays the list of farms if they have already been created in the subscription.
- Here is the sample image:
+ Here's the sample image:
![Screenshot that shows the Farms page.](./media/create-farms-in-azure-farmbeats/create-farm-main-page-1.png) 2. Select **Create Farm** and provide **Name**, **Crops** and **Address**.
-3. In the **Define Farm Boundary**, (mandatory field) select either **Mark on Map** or **Paste GeoJSON code**.
+3. In the **Define Farm Boundary** (mandatory) field, select either **Mark on Map** or **Paste GeoJSON code**.
Here are the two ways to define a farm boundary:
Use the tooltips to help fill in the information.
The Farm list page displays a list of created farms. Select a farm to view the list of:
+ - **Device count**: displays the number and status of devices deployed within the farm.
+ - **Map**: map of the farm with the devices deployed in the farm.
+ - **Telemetry**: displays the telemetry from the sensors deployed in the farm.
+ - **Latest Precision Maps**: displays the latest Satellite Indices map (EVI, NDWI), Soil Moisture Heatmap, and Sensor Placement map.
## Edit farm
The **Farms** page displays a list of farms created. Use the following steps to
## Next steps
-Now that you have created your farm, learn how to [get sensor data](get-sensor-data-from-sensor-partner.md) flowing into your farm.
+Now that your farm is created, learn how to [get sensor data](get-sensor-data-from-sensor-partner.md) flowing into your farm.
industry Manage Users In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/manage-users-in-azure-farmbeats.md
Title: Manage users in Azure FarmBeats description: This article describes how to manage users in Azure FarmBeats.-+ Previously updated : 12/02/2019- Last updated : 11/29/2023+ - # Manage users- Azure FarmBeats includes user management for people who are part of your Microsoft Entra instance. You can add users to your Azure FarmBeats instance to access the APIs, view the generated maps, and access sensor telemetry from the farm.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ ## Prerequisites

- Azure FarmBeats installation is required. For more information, see [Install Azure FarmBeats](install-azure-farmbeats.md).
industry Overview Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/overview-azure-farmbeats.md
Title: What is Azure FarmBeats description: Provides an overview of Azure FarmBeats-- Previously updated : 11/04/2019-++ Last updated : 11/29/2023+
Azure FarmBeats is a business-to-business offering available in Azure Marketplace. It enables aggregation of agriculture data sets across providers. Azure FarmBeats enables you to build artificial intelligence (AI) or machine learning (ML) models based on fused data sets. By using Azure FarmBeats, agriculture businesses can focus on core value-adds instead of the undifferentiated heavy lifting of data engineering.
-> [!NOTE]
-> Azure FarmBeats is on path to be retired. We have built a new agriculture focused service, it's name is Azure Data Manager for Agriculture and it's now available as a preview service. For more information see public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
![Project Farm Beats](./media/architecture-for-farmbeats/farmbeats-architecture-1.png)
With the preview of Azure FarmBeats you can:
- Gain actionable insights by building AI/ML models on top of aggregated datasets.
- Build or augment your digital agriculture solution by providing farm health advisories.
-> [!NOTE]
-> Azure FarmBeats is currently in public preview. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Azure FarmBeats is provided without a service level agreement.
## Data hub
industry Query Telemetry Data From Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/query-telemetry-data-from-azure-farmbeats.md
Title: Query ingested telemetry data description: This article describes how to query ingested telemetry data.-+ Previously updated : 03/11/2020- Last updated : 11/29/2023+ # Query ingested telemetry data
This article describes how to query ingested sensor data from Azure FarmBeats.
Ingesting data from Internet of Things (IoT) resources such as devices and sensors is a common scenario in FarmBeats. You create metadata for devices and sensors and then ingest the historical data to FarmBeats in a canonical format. Once the sensor data is available on FarmBeats Datahub, we can query the same to generate actionable insights or build models.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ ## Before you begin
-Before you proceed with this article, ensure that you've installed FarmBeats and ingested sensor telemetry data from your IoT devices to FarmBeats.
+Before you proceed with this article, ensure that FarmBeats is installed and sensor telemetry data from your IoT devices is ingested to FarmBeats.
To ingest sensor telemetry data, visit [ingest historical telemetry data](ingest-historical-telemetry-data-in-azure-farmbeats.md)
-Before you proceed, you also need to ensure you are familiar with FarmBeats REST APIs as you will query ingested telemetry using the APIs. For more information on FarmBeats APIs, see [FarmBeats REST APIs](rest-api-in-azure-farmbeats.md). **Ensure that you are able to make API requests to your FarmBeats Datahub endpoint**.
+Before you proceed, you also need to ensure you're familiar with FarmBeats REST APIs as you query ingested telemetry using the APIs. For more information on FarmBeats APIs, see [FarmBeats REST APIs](rest-api-in-azure-farmbeats.md). **Ensure that you are able to make API requests to your FarmBeats Datahub endpoint**.
## Query ingested sensor telemetry data
There are two ways to access and query telemetry data from FarmBeats:
Follow the steps to query the ingested sensor telemetry data using FarmBeats REST APIs:
-1. Identify the sensor you are interested in. You can do this by making a GET request on /Sensor API.
+1. Identify the sensor you're interested in. You can do so by making a GET request on /Sensor API.
> [!NOTE] > The **id** and the **sensorModelId** of the interested sensor object.
Make a note of the response from the GET/{id} call for the Sensor Model.
] } ```
-4. The response from the /Telemetry API will look something like this:
+4. The response from the /Telemetry API looks something like this:
```json {
Make a note of the response from the GET/{id} call for the Sensor Model.
] } ```
-In the above example response, the queried sensor telemetry gives data for two timestamps along with the measure name ("moist_soil_last") and values of the reported telemetry in the two timestamps. You will need to refer to the associated Sensor Model (as described in step 2) to interpret the type and unit of the reported values.
+In the above example response, the queried sensor telemetry gives data for two timestamps along with the measure name ("moist_soil_last") and values of the reported telemetry in the two timestamps. You need to refer to the associated Sensor Model (as described in step 2) to interpret the type and unit of the reported values.
### Query using Azure Time Series Insights (TSI)
-FarmBeats leverages [Azure Time Series Insights (TSI)](https://azure.microsoft.com/services/time-series-insights/) to ingest, store, query, and visualize data at IoT scale--data that's highly contextualized and optimized for time series.
+FarmBeats uses [Azure Time Series Insights (TSI)](https://azure.microsoft.com/services/time-series-insights/) to ingest, store, query, and visualize data at IoT scale: data that's highly contextualized and optimized for time series.
Telemetry data is received on an EventHub and then processed and pushed to a TSI environment within FarmBeats resource group. Data can then be directly queried from the TSI. For more information, see [TSI documentation](../../time-series-insights/time-series-insights-explorer.md) Follow the steps to visualize data on TSI: 1. Go to **Azure Portal** > **FarmBeats DataHub resource group** > select **Time Series Insights** environment (tsi-xxxx) > **Data Access Policies**. Add user with Reader or Contributor access.
-2. Go to the **Overview** page of **Time Series Insights** environment (tsi-xxxx) and select the **Time Series Insights Explorer URL**. You'll now be able to visualize the ingested telemetry.
+2. Go to the **Overview** page of **Time Series Insights** environment (tsi-xxxx) and select the **Time Series Insights Explorer URL**. You can now visualize the ingested telemetry.
Apart from storing, querying and visualization of telemetry, TSI also enables integration to a Power BI dashboard. For more information, see [here](../../time-series-insights/how-to-connect-power-bi.md) ## Next steps
-You now have queried sensor data from your Azure FarmBeats instance. Now, learn how to [generate maps](generate-maps-in-azure-farmbeats.md#generate-maps) for your farms.
+After querying sensor data from your Azure FarmBeats instance, learn how to [generate maps](generate-maps-in-azure-farmbeats.md#generate-maps) for your farms.
industry Rest Api In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/rest-api-in-azure-farmbeats.md
Title: Azure FarmBeats APIs description: Learn about Azure FarmBeats APIs, which provide agricultural businesses with a standardized RESTful interface with JSON-based responses.-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Azure FarmBeats APIs This article describes the Azure FarmBeats APIs. The Azure FarmBeats APIs provide agricultural businesses with a standardized RESTful interface with JSON-based responses to help you take advantage of Azure FarmBeats capabilities, such as:
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ - APIs to get sensor, camera, drone, weather, satellite, and curated ground data.
- Normalization and contextualization of data across common data providers.
- Schematized access and query capabilities on all ingested data.
industry Sensor Partner Integration In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/sensor-partner-integration-in-azure-farmbeats.md
Title: Sensor partner integration description: This article describes sensor partner integration.-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Sensor partner integration This article provides information about the Azure FarmBeats **Translator** component, which enables sensor partner integration.
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
+ Using this component, partners can integrate with FarmBeats using FarmBeats Datahub APIs and send customer device data and telemetry to FarmBeats Datahub. Once the data is available in FarmBeats, it is visualized using the FarmBeats Accelerator and can be used for data fusion and for building machine learning/artificial intelligence models.

## Before you start
industry Troubleshoot Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/troubleshoot-azure-farmbeats.md
Title: Troubleshoot Azure FarmBeats description: This article describes how to troubleshoot Azure FarmBeats.-+ Previously updated : 11/04/2019- Last updated : 11/29/2023+ # Troubleshoot Azure FarmBeats
-This article provides solutions to common Azure FarmBeats issues. For additional help, contact our [Q&A Support Forum](/answers/topics/azure-farmbeats.html) or email us at farmbeatssupport@microsoft.com.
+This article provides solutions to common Azure FarmBeats issues. For more help, contact our [Q&A Support Forum](/answers/topics/azure-farmbeats.html).
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We have built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
-> [!NOTE]
- > If you have installed FarmBeats during April and your jobs are failing with an empty error message, your installation may not have been allocated any batch quota to prioritize support for critical health and safety organizations. See [here](https://azure.microsoft.com/blog/update-2-on-microsoft-cloud-services-continuity/) for more information. You will need to request VMs to be allocated to the Batch account to run jobs successfully.
## Install issues-
- > [!NOTE]
- > If you are restarting the install because of an error, ensure to delete the **Resource Group** or delete all resources from the Resource Group before re-triggering the installation.
+If you're restarting the install because of an error, be sure to delete the **Resource Group**, or delete all resources from the resource group, before retriggering the installation.
### Invalid Sentinel credentials The Sentinel credentials provided during install are incorrect. Restart the installation with the correct credentials.
-### The regional account quota of Batch Accounts for the specified subscription has been reached
+### The regional account quota of Batch Accounts for the specified subscription is reached
Increase the quota, or delete the unused batch accounts and restart the installation.
Contact us with the following details:
### Can't view telemetry data
-**Symptom**: Devices or sensors are deployed, and you've linked FarmBeats with your device partner, but you can't get or view telemetry data on FarmBeats.
+**Symptom**: Devices or sensors are deployed, and your device partner is linked to FarmBeats, but you can't get or view telemetry data on FarmBeats.
**Corrective action** 1. Go to your FarmBeats resource group.
-2. Select the **Event Hub** namespace ("sensor-partner-eh-namespace-xxxx"), click on "Event Hubs" and then check for the number of incoming messages in the event hub that is assigned to the partner
-3. Do either of the following:
+2. Select the **Event Hub** namespace ("sensor-partner-eh-namespace-xxxx"), select "Event Hubs", and then check the number of incoming messages in the event hub that's assigned to the partner (see the CLI sketch after these steps).
+3. Do either of the following steps:
- If there are *no incoming messages*, contact your device partner. - If there are *incoming messages*, contact us with your Datahub and Accelerator logs and captured telemetry.
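As an alternative to checking incoming messages in the portal, you can query the namespace metric from the command line. A minimal sketch, assuming the namespace and resource group names shown above (both placeholders here):

```azurecli-interactive
# Look up the resource ID of the sensor partner's Event Hubs namespace (placeholder names).
nsId=$(az eventhubs namespace show \
    --resource-group <your-farmbeats-rg> \
    --name <sensor-partner-eh-namespace-xxxx> \
    --query id --output tsv)

# Check the count of incoming messages over the last hour.
az monitor metrics list --resource "$nsId" --metric IncomingMessages --interval PT1H
```

If the metric stays at zero, the device partner isn't sending data; otherwise the data is arriving and the problem is downstream in FarmBeats.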
To understand how to download logs, go to the ["Collect logs manually"](#collect
### Can't view telemetry data after ingesting historical/streaming data from your sensors
-**Symptom**: Devices or sensors are deployed, and you've created the devices/sensors on FarmBeats and ingested telemetry to the EventHub, but you can't get or view telemetry data on FarmBeats.
+**Symptom**: Devices or sensors are deployed and created on FarmBeats, and telemetry is ingested to the Event Hub, but you can't get or view telemetry data on FarmBeats.
**Corrective action**
-1. Ensure you have done the partner registration correctly - you can check this by going to your datahub swagger, navigate to /Partner API, Do a Get and check if the partner is registered. If not, follow these [steps](get-sensor-data-from-sensor-partner.md#enable-device-integration-with-farmbeats) to add partner.
+1. Ensure that the partner registration is done correctly. To check, go to your Datahub Swagger, navigate to the /Partner API, do a GET, and check whether the partner is registered. If not, follow these [steps](get-sensor-data-from-sensor-partner.md#enable-device-integration-with-farmbeats) to add the partner.
-2. Ensure that you have used the correct Telemetry message format:
+2. Ensure that you used the correct Telemetry message format:
```json {
To understand how to download logs, go to the ["Collect logs manually"](#collect
### Device appears offline
-**Symptoms**: Devices are installed, and you've linked FarmBeats with your device partner. The devices are online and sending telemetry data, but they appear offline.
+**Symptoms**: Devices are installed, and FarmBeats is linked with your device partner. The devices are online and sending telemetry data, but they appear offline.
**Corrective action** The reporting interval isn't configured for this device. To set the reporting interval, contact your device manufacturer.
This issue might result from a temporary failure in the data pipeline. Create th
**Corrective action** Check the email ID for which you're trying to add a role assignment. The email ID must be an exact match of the ID, which is registered for that user in the Active Directory. If the error persists, contact us with the error message/logs.
-### Unable to log in to Accelerator
+### Unable to sign in to Accelerator
-**Message**: "Error: You are not authorized to call the service. Contact the administrator for authorization."
+**Message**: "Error: You aren't authorized to call the service. Contact the administrator for authorization."
**Corrective action**
-Ask the administrator to authorize you to access the FarmBeats deployment. This can be done by doing a POST of the RoleAssignment APIs or through the Access Control in the **Settings** pane in Accelerator.
+Ask the administrator to authorize you to access the FarmBeats deployment, either by doing a POST to the RoleAssignment API or through the Access Control in the **Settings** pane in Accelerator.
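If the administrator uses the API route, here's a minimal sketch with `az rest`; the Datahub URL, the token audience, and the body fields are all illustrative assumptions, so confirm the exact request schema in your Datahub Swagger under the RoleAssignment API:

```azurecli-interactive
# POST a role assignment to the Datahub API (all names and fields are placeholders;
# verify the real payload shape in your deployment's Swagger).
az rest --method post \
    --url "https://<your-datahub>.azurewebsites.net/RoleAssignment" \
    --resource "<your-datahub-app-id-uri>" \
    --body '{"objectId": "<user-object-id>", "roleId": "<role-id>"}'
```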
-If you've already been granted access and facing this error, try again by refreshing the page. If the error persists, contact us with the error message/logs.
+If you already have access but still face this error, refresh the page and try again. If the error persists, contact us with the error message/logs.
![Screenshot that shows the authorization error.](./media/troubleshoot-azure-farmbeats/accelerator-troubleshooting-1.png) ### Accelerator issues
-**Issue**: You've received an Accelerator error of undetermined cause.
+**Issue**: You get an Accelerator error of undetermined cause.
**Message**: "Error: An unknown error occurred." **Corrective action** This error occurs if you leave the page idle for too long. Refresh the page. If the error persists, contact us with the error message/logs.
-**Issue**: FarmBeats Accelerator isn't showing the latest version, even after you've upgraded FarmBeatsDeployment.
+**Issue**: FarmBeats Accelerator isn't showing the latest version, even after you upgraded FarmBeatsDeployment.
**Corrective action**
-This error occurs because of service worker persistence in the browser. Do the following:
+This error occurs because of service worker persistence in the browser. Do the following steps:
1. Close all browser tabs that have Accelerator open, and close the browser window. 2. Start a new instance of the browser, and reload the Accelerator URI. This action loads the new version of Accelerator.
This error occurs because of service worker persistence in the browser. Do the f
**Job failure message**: "Full authentication is required to access this resource."
-**Corrective action**: Do one of the following:
+**Corrective action**: Do one of the following steps:
- Update FarmBeats with the correct username/password using the below steps and retry the job.
This error occurs because of service worker persistence in the browser. Do the f
**Corrective action**: 1. Open [Sentinel](https://scihub.copernicus.eu/dhus/) in your browser to see whether the website is accessible.
-2. If the website isn't accessible, check whether any firewall, company network, or other blocking software is preventing access to the website, and then take the necessary steps to allow the Sentinel URL. 
+2. If the website isn't accessible, check whether any firewall, company network, or other blocking software is preventing access to the website. If so, take the necessary steps to allow the Sentinel URL.
3. Rerun the failed job, or run a satellite indices job for a date range of 5 to 7 days, and then check whether the job is successful. ### Sentinel server: Down for maintenance
-**Job failure message**: "The Copernicus Open Access Hub will be back soon! Sorry for the inconvenience, we're performing some maintenance at the moment. We'll be back online shortly!" 
+**Job failure message**: "The Copernicus Open Access Hub is back soon! Sorry for the inconvenience, we're performing some maintenance at the moment. We will be back online shortly!" 
**Corrective action**:
This issue can occur if any maintenance activities are being done on the Sentine
**Job failure message**: "Maximum number of two concurrent flows achieved by the user '\<username>'."
-**Meaning**: If a job fails because the maximum number of connections has been reached, the same Sentinel account is being used in multiple jobs.
+**Meaning**: If a job fails because the maximum number of connections is reached, the same Sentinel account is being used in multiple jobs.
-**Corrective action**: Try either of the following:
+**Corrective action**: Try either of the following options:
-* Wait for the other jobs to finish before re-running the failed job.
+* Wait for the other jobs to finish before rerunning the failed job.
* Create a new Sentinel account, and then update the Sentinel username and password in FarmBeats. ### Sentinel server: Refused connection
This issue can occur if any maintenance activities are being done on the Sentine
**Issue**: The **Soil Moisture map** was generated, but the map has mostly white areas.
-**Corrective action**: This issue can occur if the satellite indices generated for the time for which the map was requested has NDVI values that is less than 0.3. For more information, visit [Technical Guide from Sentinel](https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-2-msi).
+**Corrective action**: This issue can occur if the satellite indices generated for the requested time have NDVI values less than 0.3. For more information, visit [Technical Guide from Sentinel](https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-2-msi).
1. Rerun the job for a different date range and check if the NDVI values in the satellite indices are more than 0.3.
This issue can occur if any maintenance activities are being done on the Sentine
### Collect logs to troubleshoot weather data job failures 1. Go to your FarmBeats resource group in the Azure portal.
-2. Click on the Data Factory service that is part of the resource group. The service will have a tag "sku: Datahub"
+2. Select the Data Factory service that's part of the resource group. The service has the tag "sku: Datahub".
> [!NOTE] > To view the tags of the services within the resource group, click on "Edit Columns" and add "Tags" to the resource group view :::image type="content" source="./media/troubleshoot-Azure-farmbeats/weather-log-1.png" alt-text="Screenshot that highlights the sku:Datahub tag.":::
-3. On the Overview page of the Data factory, click on **Author and Monitor**. A new tab opens on your browser. Click on **Monitor**
+3. On the Overview page of the Data Factory, select **Author and Monitor**. A new tab opens in your browser. Select **Monitor**.
:::image type="content" source="./media/troubleshoot-Azure-farmbeats/weather-log-2.png" alt-text="Screenshot that highlights the Monitor menu option.":::
-4. You will see a list of pipeline runs that are part of the weather job execution. Click on the Job that you want to collect logs for
+4. You see a list of pipeline runs that are part of the weather job execution. Select the job that you want to collect logs for.
:::image type="content" source="./media/troubleshoot-Azure-farmbeats/weather-log-3.png" alt-text="Screenshot that highlights the Pipeline runs menu option and the selected job.":::
-5. On the pipeline overview page, you will see the list of activity runs. Make a note of the Run IDs of the activities that you want to collect logs for
+5. On the pipeline overview page, you see the list of activity runs. Make a note of the Run IDs of the activities that you want to collect logs for.
:::image type="content" source="./media/troubleshoot-Azure-farmbeats/weather-log-4.png" alt-text="Screenshot that shows the list of activity runs.":::
-6. Go back to your FarmBeats resource group in Azure portal and click on the Storage Account with the name **datahublogs-XXXX**
+6. Go back to your FarmBeats resource group in the Azure portal and select the storage account with the name **datahublogs-XXXX**.
:::image type="content" source="./media/troubleshoot-Azure-farmbeats/weather-log-5.png" alt-text="Screenshot that highlights the Storage Account with the name datahublogs-XXXX.":::
-7. Click on **Containers** -> **adfjobs**. In the Search box, enter the job Run ID that you noted in step 5 above.
+7. Select **Containers** > **adfjobs**. In the search box, enter the job run ID that you noted in step 5.
:::image type="content" source="./media/troubleshoot-Azure-farmbeats/weather-log-6.png" alt-text="Project FarmBeats":::
-8. The search result will contain the folder which has the logs pertaining to the job. Download the logs and send it to farmbeatssupport@microsoft.com for assistance in debugging the issue.
+8. The search result contains the folder that has the logs pertaining to the job. Download the logs and send them to farmbeatssupport@microsoft.com for assistance in debugging the issue.
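If you prefer the command line for steps 6 through 8, here's a sketch with placeholder names; it downloads into the current directory:

```azurecli-interactive
# Download every blob in the adfjobs container whose name contains the activity run ID
# (the storage account name and run ID are placeholders).
az storage blob download-batch \
    --account-name <datahublogs-xxxx> \
    --source adfjobs \
    --pattern "*<run-id>*" \
    --destination . \
    --auth-mode login
```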
industry Weather Partner Integration In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/weather-partner-integration-in-azure-farmbeats.md
Title: Weather partner integration description: Learn about how a weather data provider can integrate with FarmBeats.-+ Previously updated : 07/09/2020- Last updated : 11/29/2023+ # Weather partner integration with FarmBeats
This article provides information about the Azure FarmBeats Connector Docker component. As a weather data provider, you can use the Connector Docker to integrate with FarmBeats. Use its APIs to send weather data to FarmBeats. In FarmBeats, the data can be used for data fusion and for building machine learning models or artificial intelligence models.
- > [!NOTE]
- > In this article, we use a [reference implementation](https://github.com/azurefarmbeats/noaa_docker) that was built by using Azure Open Datasets and weather data from National Oceanic and Atmospheric Administration (NOAA). We also use the corresponding [Docker image](https://hub.docker.com/r/azurefarmbeats/farmbeats-noaa).
+> [!IMPORTANT]
+> Azure FarmBeats is retired. You can see the public announcement [**here**](https://azure.microsoft.com/updates/project-azure-farmbeats-will-be-retired-on-30-sep-2023-transition-to-azure-data-manager-for-agriculture/).
+>
+> We've built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
You must provide a [suitable Docker image or program](#docker-specifications) and host the docker image in a container registry that customers can access. Provide the following information to your customers:
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
lighthouse Recommended Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/recommended-security-practices.md
Title: Recommended security practices description: When using Azure Lighthouse, it's important to consider security and access control. Previously updated : 11/28/2022 Last updated : 11/28/2023
When using [Azure Lighthouse](../overview.md), it's important to consider securi
## Require Microsoft Entra multifactor authentication
-[Microsoft Entra multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) (also known as two-step verification) helps prevent attackers from gaining access to an account by requiring multiple authentication steps. You should require Microsoft Entra multifactor authentication for all users in your managing tenant, including users who will have access to delegated customer resources.
+[Microsoft Entra multifactor authentication](/entra/identity/authentication/concept-mfa-howitworks) (also known as two-step verification) helps prevent attackers from gaining access to an account by requiring multiple authentication steps. You should require Microsoft Entra multifactor authentication for all users in your managing tenant, including users who will have access to delegated customer resources.
We recommend that you ask your customers to implement Microsoft Entra multifactor authentication in their tenants as well.
+> [!IMPORTANT]
+> Conditional access policies that are set on a customer's tenant don't apply to users who access that customer's resources through Azure Lighthouse. Only policies set on the managing tenant apply to those users. We strongly recommend requiring Microsoft Entra multifactor authentication for both the managing tenant and the managed (customer) tenant.
+ ## Assign permissions to groups, using the principle of least privilege To make management easier, use Microsoft Entra groups for each role required to manage your customers' resources. This lets you add or remove individual users to the group as needed, rather than assigning permissions directly to each user. > [!IMPORTANT]
-> In order to add permissions for a Microsoft Entra group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Create a basic group and add members using Microsoft Entra ID](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+> In order to add permissions for a Microsoft Entra group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Create a basic group and add members](/entra/fundamentals/how-to-manage-groups#create-a-basic-group-and-add-members).
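Such a group can also be created from the command line. A minimal sketch; the Azure CLI creates a **Security** group by default, and the names here are placeholders:

```azurecli-interactive
# Create a security group for a customer-management role (placeholder names).
az ad group create \
    --display-name "Customer VM Operators" \
    --mail-nickname "customer-vm-operators"
```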
When creating your permission structure, be sure to follow the principle of least privilege so that users only have the permissions needed to complete their job, helping to reduce the chance of inadvertent errors.
Keep in mind that when you [onboard customers through a public managed service
## Next steps - Review the [security baseline information](/security/benchmark/azure/baselines/lighthouse-security-baseline) to understand how guidance from the Microsoft cloud security benchmark applies to Azure Lighthouse.-- [Deploy Microsoft Entra multifactor authentication](../../active-directory/authentication/howto-mfa-getstarted.md).
+- [Deploy Microsoft Entra multifactor authentication](/entra/identity/authentication/howto-mfa-getstarted).
- Learn about [cross-tenant management experiences](cross-tenant-management-experience.md).
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
load-balancer Load Balancer Multiple Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip.md
Previously updated : 12/12/2022 Last updated : 11/29/2023
-# Tutorial: Load balance multiple IP configurations using the Azure portal
+# Tutorial: Load balance multiple IP configurations using the Azure portal
+
+> [!div class="op_single_selector"]
+> * [Portal](load-balancer-multiple-ip.md)
+> * [CLI](load-balancer-multiple-ip-cli.md)
+> * [PowerShell](load-balancer-multiple-ip-powershell.md)
To host multiple websites, you can use another network interface associated with a virtual machine. Azure Load Balancer supports deployment of load-balancing to support the high availability of the websites.
In this tutorial, you learn how to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create virtual network
-
-In this section, you'll create a virtual network for the load balancer and virtual machines.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-
-3. In **Virtual networks**, select **+ Create**.
-
-4. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> In **Name** enter **TutorialLBIP-rg**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **West Europe** |
-
-5. Select the **IP Addresses** tab or select **Next: IP Addresses**.
-
-6. In the **IP Addresses** tab, enter the following information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-7. Select **+ Add subnet**.
-
-8. In **Add subnet**, enter the following information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
-
-9. Select **Add**.
-
-10. Select the **Security** tab.
-
-11. Under **BastionHost**, select **Enable**. Enter the following information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
-
-12. Select the **Review + create** tab or select the blue **Review + create** button at the bottom of the page.
-13. Select **Create**.
> [!IMPORTANT]
In this section, you'll create a virtual network for the load balancer and virtu
>
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-
-1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-
-2. In **NAT gateways**, select **+ Create**.
-
-3. In **Create network address translation (NAT) gateway**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **TutorialLBIP-rg**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Availability zone | Select **None**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-4. Select the **Outbound IP** tab select the **Next: Outbound IP**.
-
-5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-
-6. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**.
-
-7. Select **OK**.
-
-8. Select the **Subnet** tab or **Next: Subnet**.
-
-9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
-
-10. Select **myBackendSubnet** under **Subnet name**.
-
-11. Select **Review + create**.
-
-12. Select **Create**.
- ## Create virtual machines
-In this section, you'll create two virtual machines to host the IIS websites.
+In this section, you create two virtual machines to host the IIS websites.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
In this section, you'll create two virtual machines to host the IIS websites.
|--|-| | **Project Details** | | | Subscription | Select your Azure subscription |
- | Resource Group | Select **TutorialLBIP-rg** |
+ | Resource Group | Select **load-balancer-rg** |
| **Instance details** | | | Virtual machine name | Enter **myVM1** |
- | Region | Select **(Europe) West Europe** |
+ | Region | Select **(US) East US** |
| Availability Options | Select **Availability zones** | | Availability zone | Select **1** | | Security type | Leave the default of **Standard**. |
In this section, you'll create two virtual machines to host the IIS websites.
|-|-| | **Network interface** | | | Virtual network | Select **myVNet**. |
- | Subnet | Select **myBackendSubnet(10.1.0.0/24)** |
+ | Subnet | Select **backend-subnet(10.1.0.0/24)** |
| Public IP | Select **None**. | | NIC network security group | Select **Advanced**| | Configure network security group | Select **Create new**. </br> In **Create network security group**, enter **myNSG** in **Name**. </br> In **Inbound rules**, select **+Add an inbound rule**. </br> In **Service**, select **HTTP**. </br> In **Priority**, enter **100**. </br> In **Name**, enter **myNSGrule** </br> Select **Add** </br> Select **OK** |
In this section, you'll create two virtual machines to host the IIS websites.
## Create secondary network configurations
-In this section, you'll change the private IP address of the existing NIC of each virtual machine to **Static**. Next, you'll add a new NIC resource to each virtual machine with a **Static** private IP address configuration.
+In this section, you change the private IP address of the existing NIC of each virtual machine to **Static**. Next, you add a new NIC resource to each virtual machine with a **Static** private IP address configuration.
For more information on configuring floating IP in the virtual machine configuration, see [Floating IP Guest OS configuration](load-balancer-floating-ip.md#floating-ip-guest-os-configuration).
For more information on configuring floating IP in the virtual machine configura
4. Select **Networking** in **Settings**.
-5. In **Networking**, select the name of the network interface next to **Network interface**. The network interface will begin with the name of the VM and have a random number assigned. In this example, **myVM1266**.
+5. In **Networking**, select the name of the network interface next to **Network interface**. The network interface begins with the name of the VM and has a random number assigned. In this example, **myVM1266**.
:::image type="content" source="./media/load-balancer-multiple-ip/myvm1-nic.png" alt-text="Screenshot of myVM1 networking configuration in Azure portal.":::
For more information on configuring floating IP in the virtual machine configura
| Setting | Value | | - | -- | | **Project details** | |
- | Resource group | Select **TutorialLBIP-rg**. |
+ | Resource group | Select **load-balancer-rg**. |
| **Network interface** | | | Name | Enter **myVM1NIC2** |
- | Subnet | Select **myBackendSubnet (10.1.0.0/24)**. |
+ | Subnet | Select **backend-subnet (10.1.0.0/24)**. |
| NIC network security group | Select **Advanced**. | | Configure network security group | Select **myNSG**. | | Private IP address assignment | Select **Static**. |
For more information on configuring floating IP in the virtual machine configura
## Configure virtual machines
-You'll connect to **myVM1** and **myVM2** with Azure Bastion and configure the secondary network configuration in this section. You'll add a route for the gateway for the secondary network configuration. You'll then install IIS on each virtual machine and customize the websites to display the hostname of the virtual machine.
+You connect to **myVM1** and **myVM2** with Azure Bastion and configure the secondary network configuration in this section. You add a route for the gateway for the secondary network configuration. Then you install IIS on each virtual machine and customize the websites to display the hostname of the virtual machine.
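The tutorial performs these steps interactively through Bastion; as a non-interactive alternative, IIS can also be installed with `az vm run-command`. A sketch, assuming the VM and resource group names used in this tutorial:

```azurecli-interactive
# Install IIS on myVM1 without opening a Bastion session.
az vm run-command invoke \
    --resource-group load-balancer-rg \
    --name myVM1 \
    --command-id RunPowerShellScript \
    --scripts "Install-WindowsFeature -Name Web-Server -IncludeManagementTools"
```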
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
You'll connect to **myVM1** and **myVM2** with Azure Bastion and configure the s
## Create load balancer
-You'll create a zone redundant load balancer that load balances virtual machines in this section.
+You create a zone redundant load balancer that load balances virtual machines in this section.
With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
-During the creation of the load balancer, you'll configure:
+During the creation of the load balancer, you configure:
* Two frontend IP addresses, one for each website. * Backend pools
During the creation of the load balancer, you'll configure:
| | | | **Project details** | | | Subscription | Select your subscription. |
- | Resource group | Select **TutorialLBIP-rg**. |
+ | Resource group | Select **load-balancer-rg**. |
| **Instance details** | | | Name | Enter **myLoadBalancer** |
- | Region | Select **West Europe**. |
+ | Region | Select **East US**. |
| SKU | Leave the default **Standard**. | | Type | Select **Public**. | | Tier | Leave the default **Regional**. |
During the creation of the load balancer, you'll configure:
## Test load balancer
-In this section, you'll discover the public IP address for each website. You'll enter the IP into a browser to test the websites you created earlier.
+In this section, you discover the public IP address for each website. You enter the IP into a browser to test the websites you created earlier.
1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results.
If you're not going to continue to use this application, delete the virtual mach
1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
-2. Select **TutorialLBIP-rg** in **Resource groups**.
+2. Select **load-balancer-rg** in **Resource groups**.
3. Select **Delete resource group**.
-4. Enter **TutorialLBIP-rg** in **TYPE THE RESOURCE GROUP NAME:**. Select **Delete**.
+4. Enter **load-balancer-rg** in **TYPE THE RESOURCE GROUP NAME:**. Select **Delete**.
## Next steps
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
For more information about SAP services and ports, review the [TCP/IP Ports of A
### SAP NCo client library prerequisites
-To use the SAP connector, based on whether you have a Consumption or Standard workflow, you'll need install the SAP Connector NCo client library for Microsoft .NET 3.0 or 3.1, respectively. The following list describes the prerequisites for the SAP NCo client library, based on which workflow where you're using with the SAP connector:
+To use the SAP connector, you must install the SAP Connector (NCo) client library for Microsoft .NET 3.1. The following list describes the prerequisites for the SAP NCo client library, based on the workflow where you're using the SAP connector:
* Version:
- * For Consumption logic app workflows that use the on-premises data gateway, make sure that you install the latest 64-bit version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). SAP Connector (NCo 3.1) isn't currently supported as dual-version capability is unavailable. The data gateway runs only on 64-bit systems. Installing the unsupported 32-bit version results in a **"bad image"** error.
-
- Earlier versions of SAP NCo might experience the following issues:
-
- * When more than one IDoc message is sent at the same time, this condition blocks all later messages that are sent to the SAP destination, causing messages to time out.
-
- * Session activation might fail due to a leaked session. This condition might block calls sent by SAP to the logic app workflow trigger.
-
- * The on-premises data gateway (June 2021 release and newer releases) depends on the `SAP.Middleware.Connector.RfcConfigParameters.Dispose()` method in SAP NCo to free up resources.
-
- * After you upgrade the SAP server environment, you get the following exception message: **"The only destination &lt;some-GUID&gt; available failed when retrieving metadata from &lt;SAP-system-ID&gt; -- see log for details"**.
-
- * For Standard logic app workflows, you can install the latest 64-bit or 32-bit version for [SAP Connector (NCo 3.1) for Microsoft .NET 3.1.2.0 compiled with .NET Framework 4.6.2](https://support.sap.com/en/product/connectors/msnet.html). However, make sure that you install the version that matches the configuration in your Standard logic app resource. To check the version used by your logic app, follow these steps:
+ * For Consumption logic app workflows that use the on-premises data gateway, make sure that you install the latest 64-bit version, [SAP Connector for Microsoft .NET 3.1.3.0 for Windows 64bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). The data gateway runs only on 64-bit systems. Installing the unsupported 32-bit version results in a **"bad image"** error.
+
+ * For Standard logic app workflows, you can install the latest 64-bit or 32-bit version for [SAP Connector (NCo 3.1) for Microsoft .NET 3.1.3.0 compiled with .NET Framework 4.6.2](https://support.sap.com/en/product/connectors/msnet.html). However, make sure that you install the version that matches the configuration in your Standard logic app resource. To check the version used by your logic app, follow these steps:
1. In the [Azure portal](https://portal.azure.com), open your Standard logic app.
To use the SAP connector, based on whether you have a Consumption or Standard wo
1. On the **Configuration** pane, under **Platform settings**, check whether the **Platform** value is set to 64-bit or 32-bit.
- 1. Make sure to install the version of the [SAP Connector (NCo 3.1) for Microsoft .NET 3.1.2.0 compiled with .NET Framework 4.6.2](https://support.sap.com/en/product/connectors/msnet.html) that matches your platform configuration.
+ 1. Make sure to install the version of the [SAP Connector (NCo 3.1) for Microsoft .NET 3.1.3.0 compiled with .NET Framework 4.6.2](https://support.sap.com/en/product/connectors/msnet.html) that matches your platform configuration.
* From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows. Or, optionally, if you're using only the SAP managed connector, when you install the SAP NCo client library, select **Global Assembly Cache registration**. The ISE zip archive and SAP built-in connector currently don't support GAC registration.
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023 ms.suite: integration
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Use the following steps to deploy an MLflow model with a custom scoring script.
```python environment = Environment( conda_file="sklearn-diabetes/environment/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04:latest",
) ```
Use the following steps to deploy an MLflow model with a custom scoring script.
1. Select the tab __Custom environments__ > __Create__. 1. Enter the name of the environment, in this case `sklearn-mlflow-online-py37`. 1. On __Select environment type__ select __Use existing docker image with conda__.
- 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04`.
1. On __Customize__ section copy the content of the file `sklearn-diabetes/environment/conda.yml` we introduced before. 1. Click on __Next__ and then on __Create__. 1. The environment is ready to be used.
Use the following steps to deploy an MLflow model with a custom scoring script.
endpoint_name: my-endpoint model: azureml:sklearn-diabetes@latest environment:
- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04
+ image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04
conda_file: sklearn-diabetes/environment/conda.yml code_configuration: code: sklearn-diabetes/src
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
When you create a private endpoint for Azure Machine Learning dependency resourc
The Azure Machine Learning managed VNet feature is free. However, you're charged for the following resources that are used by the managed VNet: * Azure Private Link - Private endpoints used to secure communications between the managed VNet and Azure resources relies on Azure Private Link. For more information on pricing, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-* FQDN outbound rules - FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing.
+* FQDN outbound rules - FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing. The Azure Firewall (standard SKU) is provisioned by Azure Machine Learning.
> [!IMPORTANT]
- > The firewall isn't created until you add an outbound FQDN rule. If you don't use FQDN rules, you will not be charged for Azure Firewall. For more information on pricing, see [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/).
+ > The firewall isn't created until you add an outbound FQDN rule. If you don't use FQDN rules, you will not be charged for Azure Firewall. For more information on pricing, see [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/) and view prices for the _standard_ version.
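For context, the firewall is provisioned the first time you add an FQDN rule. Here's a sketch of adding one with the Azure CLI `ml` extension, using placeholder workspace and rule names:

```azurecli-interactive
# Add an FQDN outbound rule to a managed VNet workspace (placeholder names).
# Adding the first FQDN rule is what triggers Azure Firewall provisioning and billing.
az ml workspace outbound-rule set \
    --resource-group <your-rg> \
    --workspace-name <your-workspace> \
    --rule allow-pypi \
    --type fqdn \
    --destination "pypi.org"
```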
## Limitations
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Microsoft Entra Conditional Access can be used to further control or restrict ac
## Prerequisites * Create an [Azure Machine Learning workspace](how-to-manage-workspace.md).
-* [Configure your development environment](how-to-configure-environment.md) or use a [Azure Machine Learning compute instance](how-to-create-compute-instance.md) and install the [Azure Machine Learning SDK v2](https://aka.ms/sdk-v2-install).
+* [Configure your development environment](how-to-configure-environment.md) or use an [Azure Machine Learning compute instance](how-to-create-compute-instance.md) and install the [Azure Machine Learning SDK v2](https://aka.ms/sdk-v2-install).
* Install the [Azure CLI](/cli/azure/install-azure-cli).
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 11/21/2023 Last updated : 11/29/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
mysql Concepts Accelerated Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-accelerated-logs.md
Database servers with mission-critical workloads demand robust performance, requ
- [High Availability](./concepts-high-availability.md) (HA) servers. - Servers enabled with [Customer Managed Keys](./concepts-customer-managed-key.md) (CMK). - Servers enabled with [Microsoft Entra ID](./concepts-azure-ad-authentication.md) authentication.
- - [Read-replicas](concepts-read-replicas.md) servers.
- Performing a [major version upgrade](./how-to-upgrade.md) on your Azure Database for MySQL flexible server with the accelerated logs feature enabled is **not supported**. Suppose you wish to proceed with a major version upgrade. In that case, you should temporarily [disable](#disable-accelerated-logs-feature-preview) the accelerated logs feature, carry out the upgrade, and re-enable the accelerated logs feature once the upgrade is complete.
mysql How To Configure Server Parameters Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-server-parameters-cli.md
To update the **slow\_query\_log** server parameter of server **mydemoserver.mys
```azurecli-interactive az mysql flexible-server parameter set --name slow_query_log --resource-group myresourcegroup --server-name mydemoserver --value ON ```+
+To update multiple server parameters, like **slow\_query\_log** and **audit\_log\_enabled**, of server **mydemoserver.mysql.database.azure.com** under resource group **myresourcegroup**, use the `set-batch` command:
+```azurecli-interactive
+az mysql flexible-server parameter set-batch --resource-group myresourcegroup --server-name mydemoserver --source "user-override" --args slow_query_log="ON" audit_log_enabled="ON"
+```
++ If you want to reset the value of a parameter, omit the optional `--value` parameter, and the service applies the default value. For the example above, it would look like: ```azurecli-interactive az mysql flexible-server parameter set --name slow_query_log --resource-group myresourcegroup --server-name mydemoserver
mysql Tutorial Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-configure-audit.md
Audit logs are integrated with Azure Monitor diagnostics settings to allow you t
1. After you've configured the data sinks to pipe the audit logs to, select **Save**.
- :::image type="content" source="./media/tutorial-configure-audit/save-diagnostic-setting.png" alt-text="Screenshot of the 'Save' button at the top of the 'Diagnostics settings' pane.":::
## View audit logs by using Log Analytics
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
## November 2023
+- **Modify multiple server parameters using Azure CLI**
+
+ You can now conveniently update multiple server parameters for your Azure Database for MySQL - Flexible Server using Azure CLI. [Learn more](./how-to-configure-server-parameters-cli.md#modify-a-server-parameter-value)
+ - **Accelerated logs in Azure Database for MySQL - Flexible Server (Preview)**
-We're excited to announce preview of the accelerated logs feature for Azure Database for MySQL – Flexible Server. This feature is available within the Business Critical service tier. Accelerated logs significantly enhances the performance of MySQL flexible servers, offering a dynamic solution that is designed for high throughput needs that also reduces latency and optimizes cost efficiency.[Learn more](./concepts-accelerated-logs.md).
+ We're excited to announce the preview of the accelerated logs feature for Azure Database for MySQL – Flexible Server. This feature is available within the Business Critical service tier. Accelerated logs significantly enhance the performance of MySQL flexible servers, offering a dynamic solution designed for high-throughput needs that also reduces latency and optimizes cost efficiency. [Learn more](./concepts-accelerated-logs.md).
- **Universal Geo Restore in Azure Database for MySQL - Flexible Server (General Availability)**
-Universal Geo Restore feature will allow you to restore a source server instance to an alternate region from the list of Azure supported regions where flexible server is [available](./overview.md#azure-regions). If a large-scale incident in a region results in unavailability of database application, then you can use this feature as a disaster recovery option to restore the server to an Azure supported target region, which is different than the source server region. [Learn more](concepts-backup-restore.md#restore)
+
+ The Universal Geo Restore feature lets you restore a source server instance to an alternate region from the list of Azure supported regions where flexible server is [available](./overview.md#azure-regions). If a large-scale incident in a region makes the database application unavailable, you can use this feature as a disaster recovery option to restore the server to an Azure supported target region that's different from the source server region. [Learn more](concepts-backup-restore.md#restore)
## October 2023 - **Addition of New vCore Options in Azure Database for MySQL - Flexible Server**
-We are excited to inform you that we have introduced new 20 vCores options under the Business Critical Service tier for our Azure Database for MySQL - Flexible Server. Please find more information under [Compute Option for Azure Database for MySQL - Flexible Server](./concepts-service-tiers-storage.md#service-tiers-size-and-server-types).
+ We're excited to inform you that we've introduced new 20 vCore options under the Business Critical service tier for Azure Database for MySQL - Flexible Server. For more information, see [Compute Option for Azure Database for MySQL - Flexible Server](./concepts-service-tiers-storage.md#service-tiers-size-and-server-types).
- **Metrics computation for Azure Database for MySQL - Flexible Server**
-"Host Memory Percent" metric will provide more accurate calculations of memory usage. It will now reflect the actual memory consumed by the server, excluding re-usable memory from the calculation. This improvement ensures that you have a more precise understanding of your server's memory utilization. After the completion of the [scheduled maintenance window](./concepts-maintenance.md), existing servers will benefit from this enhancement.
+ "Host Memory Percent" metric will provide more accurate calculations of memory usage. It will now reflect the actual memory consumed by the server, excluding re-usable memory from the calculation. This improvement ensures that you have a more precise understanding of your server's memory utilization. After the completion of the [scheduled maintenance window](./concepts-maintenance.md), existing servers will benefit from this enhancement.
+ - **Known Issues** - When you attempt to modify the user-assigned managed identity and the key identifier in a single request while changing the CMK settings, the operation gets stuck. We're working on a permanent fix in an upcoming deployment. In the meantime, make sure that you perform the two operations of updating the user-assigned managed identity and the key identifier in separate requests. The sequence of these operations isn't critical, as long as the user-assigned identities have the necessary access to both Key Vault
We are excited to inform you that we have introduced new 20 vCores options under
## September 2023 - **Flexible Maintenance for Azure Database for MySQL - Flexible server(Public Preview)**
-Flexible Maintenance for Azure Database for MySQL - Flexible Server enables a tailored maintenance schedule to suit your operational rhythm. This feature allows you to reschedule maintenance tasks within a maximum 14-day window and initiate on-demand maintenance, granting you unprecedented control over server upkeep timing. Stay tuned for more customizable experiences in the future. [Learn more](concepts-maintenance.md).
+
+ Flexible Maintenance for Azure Database for MySQL - Flexible Server enables a tailored maintenance schedule to suit your operational rhythm. This feature allows you to reschedule maintenance tasks within a maximum 14-day window and initiate on-demand maintenance, granting you unprecedented control over server upkeep timing. Stay tuned for more customizable experiences in the future. [Learn more](concepts-maintenance.md).
- **Universal Cross Region Read Replica on Azure Database for MySQL- Flexible Server (General Availability)**
-Azure Database for MySQL - Flexible server now supports Universal Read Replicas in Public regions. The feature allows you to replicate your data from an instance of Azure Database for MySQL Flexible Server to a read-only server in Universal region which could be any region from the list of Azure supported region where flexible server is available. [Learn more](concepts-read-replicas.md)
+
+ Azure Database for MySQL - Flexible server now supports Universal Read Replicas in Public regions. The feature allows you to replicate your data from an instance of Azure Database for MySQL Flexible Server to a read-only server in Universal region which could be any region from the list of Azure supported region where flexible server is available. [Learn more](concepts-read-replicas.md)
- **Private Link for Azure Database for MySQL - Flexible Server (General Availability)**
-You can now enable private endpoints to provide a secure means to access Azure Database for MySQL Flexible Server via a Private Link, allowing both public and private access simultaneously. If necessary, you have the choice to restrict public access, ensuring that connections are exclusively routed through private endpoints for heightened network security. It's also possible to configure or update Private Link settings either during or after the creation of the server. [Learn more](./concepts-networking-private-link.md).
+
+ You can now enable private endpoints to provide a secure means to access Azure Database for MySQL Flexible Server via a Private Link, allowing both public and private access simultaneously. If necessary, you have the choice to restrict public access, ensuring that connections are exclusively routed through private endpoints for heightened network security. It's also possible to configure or update Private Link settings either during or after the creation of the server. [Learn more](./concepts-networking-private-link.md).
- **Azure MySQL Import Smart Defaults for Azure Database for MySQL - Single to Flexible Server migration (Public Preview)**
-You can now migrate an Azure Database for MySQL Single Server to an Azure Database for MySQL Flexible Server by running a single CLI command with minimal inputs as the command leverages smart defaults for target Flexible Server provisioning based on the source server SKU and properties! [Learn more](../migrate/migrate-single-flexible-mysql-import-cli.md)
+
+ You can now migrate an Azure Database for MySQL Single Server to an Azure Database for MySQL Flexible Server by running a single CLI command with minimal inputs as the command leverages smart defaults for target Flexible Server provisioning based on the source server SKU and properties! [Learn more](../migrate/migrate-single-flexible-mysql-import-cli.md)
- **Nominate eligible Azure DB for MySQL Single Server instance for in-place automigration to Flexible Server**
-If you own a Azure DB for MySQL Single Server workload with Basic or GP SKU, data storage used < 10 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for in-place automigration to Flexible Server by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u)
+
+ If you own an Azure DB for MySQL Single Server workload with Basic or GP SKU, data storage used < 10 GiB and no complex features (CMK, Microsoft Entra ID, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for in-place automigration to Flexible Server by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u)
## August 2023 - **Universal Geo Restore in Azure Database for MySQL - Flexible Server (Public Preview)**
-Universal Geo Restore feature will allow you to restore a source server instance to an alternate region from the list of Azure supported regions where flexible server is [available](./overview.md#azure-regions). If a large-scale incident in a region results in unavailability of database application, then you can use this feature as a disaster recovery option to restore the server to an Azure supported target region, which is different than the source server region. [Learn more](concepts-backup-restore.md#restore)
+
+ The Universal Geo Restore feature lets you restore a source server instance to an alternate region from the list of Azure supported regions where flexible server is [available](./overview.md#azure-regions). If a large-scale incident in a region makes the database application unavailable, you can use this feature as a disaster recovery option to restore the server to an Azure supported target region that's different from the source server region. [Learn more](concepts-backup-restore.md#restore)
- **Generated Invisible Primary Key in Azure Database for MySQL - Flexible Server**
-Azure Database for MySQL Flexible Server now supports [generated invisible primary key (GIPK)](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) for MySQL version 8.0. With this change, by default, the value of the server system variable "[sql_generate_invisible_primary_key](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_generate_invisible_primary_key) " is ON for all MySQL - Flexible Server on MySQL 8.0. With GIPK mode ON, MySQL generates an invisible primary key to any InnoDB table which is new created without an explicit primary key. Learn more about the GIPK mode:
-[Generated Invisible Primary Keys](./concepts-limitations.md#generated-invisible-primary-keys)
+
+ Azure Database for MySQL Flexible Server now supports [generated invisible primary key (GIPK)](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) for MySQL version 8.0. With this change, by default, the value of the server system variable "[sql_generate_invisible_primary_key](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_generate_invisible_primary_key)" is ON for all MySQL - Flexible Server instances on MySQL 8.0. With GIPK mode ON, MySQL generates an invisible primary key for any InnoDB table that's newly created without an explicit primary key. Learn more about the GIPK mode: [Generated Invisible Primary Keys](./concepts-limitations.md#generated-invisible-primary-keys)
[Invisible Column Metadata](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html#invisible-column-metadata)
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 11/21/2023 Last updated : 11/29/2023 # Azure Policy built-in definitions for Azure Database for MySQL
network-watcher Diagnose Communication Problem Between Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md
Previously updated : 09/28/2023 Last updated : 11/29/2023 #CustomerIntent: As a network administrator, I want to determine why resources in a virtual network can't communicate with resources in a different virtual network over a VPN connection.
After creating **VNet1GW** and **VNet2GW** virtual network gateways, you can cre
1. Select **+ Add** to create a connection from **VNet1** to **VNet2**.
-1. In **Add connection**, enter or select the following values:
+1. In **Create connection**, enter or select the following values in the **Basics** tab:
| Setting | Value | | | |
- | Name | Enter ***to-VNet2***. |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **myResourceGroup**. |
+ | **Instance details** | |
| Connection type | Select **VNet-to-VNet**. |
+ | Name | Enter ***to-VNet2***. |
+ | Region | Select **East US**. |
+
+1. Select the **Settings** tab or select **Next: Settings** button.
+
+1. In **Settings** tab, enter or select the following values:
+
+ | Setting | Value |
+ | | |
+ | **Virtual network gateway** | |
+ | First virtual network gateway | Select **VNet1GW**. |
| Second virtual network gateway | Select **VNet2GW**. | | Shared key (PSK) | Enter ***123***. |
-1. Select **OK**.
+1. Select **Review + create**.
+
+1. Review the settings, and then select **Create**.
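If you prefer to script this step instead of using the portal, the same VNet-to-VNet connection can be created through the Azure Resource Manager REST API. The following is a hedged sketch, assuming the `requests` and `azure-identity` packages; the subscription ID and the `api-version` value are placeholders that might differ in your environment.

```python
# Sketch only: creates the to-VNet2 connection via the ARM REST API.
# Subscription ID and api-version are placeholders.
import requests
from azure.identity import DefaultAzureCredential

sub = "<subscription-id>"
base = (f"https://management.azure.com/subscriptions/{sub}"
        "/resourceGroups/myResourceGroup/providers/Microsoft.Network")

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

body = {
    "location": "eastus",
    "properties": {
        "connectionType": "Vnet2Vnet",
        "virtualNetworkGateway1": {"id": f"{base}/virtualNetworkGateways/VNet1GW"},
        "virtualNetworkGateway2": {"id": f"{base}/virtualNetworkGateways/VNet2GW"},
        "sharedKey": "123",
    },
}

resp = requests.put(
    f"{base}/connections/to-VNet2?api-version=2023-05-01",  # api-version may change
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
print(resp.json()["properties"]["provisioningState"])
```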
### Create second connection
After creating **VNet1GW** and **VNet2GW** virtual network gateways, you can cre
| Setting | Value |
| --- | --- |
| Name | **to-VNet1** |
+ | First virtual network gateway | **VNet2GW** |
| Second virtual network gateway | **VNet1GW** |
| Shared key (PSK) | **000** |
Fix the problem by correcting the key on **to-VNet1** connection to match the ke
1. Go to **to-VNet1** connection.
-1. Under **Settings**, select **Shared key**.
+1. Under **Settings**, select **Authentication Type**.
1. In **Shared key (PSK)**, enter ***123*** and then select **Save**.
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Previously updated : 11/16/2023 Last updated : 11/28/2023
-#CustomerIntent: As an Azure administrator, I want to learn about NSG flow logs so that I can better monitor and optimize my network.
+#CustomerIntent: As an Azure administrator, I want to learn about NSG flow logs so that I can monitor my network and optimize its performance.
# Flow logging for network security groups
When you delete a network security group, the associated flow log resource is de
### Storage account

-- **Location**: The storage account used must be in the same region as the network security group.
-- **Performance tier**: Currently, only standard-tier storage accounts are supported.
+- **Location**: The storage account must be in the same region as the network security group.
+- **Subscription**: The storage account must be in a subscription associated with the same Microsoft Entra tenant as the network security group's subscription.
+- **Performance tier**: The storage account must be standard. Premium storage accounts aren't supported.
- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, NSG flow logs stop working. To fix this problem, you must disable and then re-enable NSG flow logs.

### Cost
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
Previously updated : 08/16/2023 Last updated : 11/29/2023+
+#CustomerIntent: As an Azure administrator, I want to learn about VNet flow logs so that I can log my network traffic to analyze and optimize the network performance.
# VNet flow logs (preview)
For continuation (`C`) and end (`E`) flow states, byte and packet counts are agg
### Storage account

-- **Location**: The storage account used must be in the same region as the virtual network.
-- **Performance tier**: Currently, only standard-tier storage accounts are supported.
+- **Location**: The storage account must be in the same region as the virtual network.
+- **Subscription**: The storage account must be in a subscription associated with the same Microsoft Entra tenant as the virtual network's subscription.
+- **Performance tier**: The storage account must be standard. Premium storage accounts aren't supported.
- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, VNet flow logs stop working. To fix this problem, you must disable and then re-enable VNet flow logs.

### Cost
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
networking Secure Application Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/secure-application-delivery.md
description: Learn how you can use a decision tree to help choose a secure appli
Previously updated : 11/15/2023 Last updated : 11/29/2023
Choosing a topology for web application ingress has a few different options, so this decision tree helps identify the initial pattern to start with when considering a web application flow for your workload. The key consideration is whether you're using a globally distributed web-based pattern with Web Application Firewall (WAF). Patterns in this classification are better served at the Azure edge versus within your specific virtual network.
-Azure Front Door, for example, sits at the edge, supports WAF, and additionally includes application acceleration capabilities. Azure Front Door can be used in combination with Application Gateway for more layers of protection and more granular rules per application. If you aren't distributed, then an Application Gateway also works with WAF and can be used to manage web based traffic with TLS inspection. Finally, if you have media based workloads then the Verizon Media Streaming service delivered via Azure is the best option you.
+[Azure Front Door](../frontdoor/front-door-overview.md), for example, sits at the edge, supports WAF, and additionally includes application acceleration capabilities. Azure Front Door can be used in combination with [Application Gateway](../application-gateway/overview.md) for more layers of protection and more granular rules per application. If you aren't globally distributed, an Application Gateway also works with WAF and can be used to manage web-based traffic with TLS inspection. Finally, if you have media-based workloads, the Verizon Media Streaming service delivered via Azure is the best option for you.
## Decision tree
Treat this decision tree as a starting point. Every deployment has unique requir
## Next steps

-- [Choose a secure network topology](secure-network-topology.md)
+- [Choose a secure network topology](secure-network-topology.md)
+- [Learn more about Azure network security](security/index.yml)
networking Secure Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/secure-network-topology.md
description: Learn how you can use a decision tree to help choose the best topol
Previously updated : 11/15/2023 Last updated : 11/29/2023
A network topology defines the basic routing and traffic flow architecture for your workload. However, you must consider security with the network topology. To simplify the initial decision, there are some simple paths that can be used to help define the secure topology. These include whether the workload is globally distributed or based in a single region. You must also consider plans to use third-party network virtual appliances (NVAs) to handle both routing and security.
+[Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) is a networking service that brings many networking, security, and routing functionalities together to provide a single operational interface.
+
+[Azure Virtual Network Manager](../virtual-network-manager/overview.md) is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. [Security admin rules](../virtual-network-manager/concept-security-admins.md) can be applied to the virtual network to control access to the network and the resources within the network.
+
## Decision tree

The following decision tree helps you to choose a network topology for your security requirements. The decision tree guides you through a set of key decision criteria to reach a recommendation.
Treat this decision tree as a starting point. Every deployment has unique requir
## Next steps

-- [Choose a secure application delivery service](secure-application-delivery.md)
+- [Choose a secure application delivery service](secure-application-delivery.md)
+- [Learn more about Azure network security](security/index.yml)
openshift Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/troubleshoot.md
Currently, the `RedHatOpenShift/OpenShiftClusters` resource that's automatically
If creating a cluster results in an error that `No registered resource provider found for location '<location>' and API version '2019-04-30' for type 'openShiftManagedClusters'. The supported api-versions are '2018-09-30-preview'.`, then you were part of the preview and now need to [purchase Azure virtual machine reserved instances](https://aka.ms/openshift/buy) to use the generally available product. A reservation reduces your spend by pre-paying for fully managed Azure services. For more information about reservations and how they save you money, see [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
+## Exceeding Azure storage limits
+
+If requests are being throttled because Azure storage limits are exceeded, it might be due to one of the following reasons:
+
+- There's a maximum of approximately 50 clusters per subscription ID + region. Create fewer than 50 clusters per subscription + region. For example: 25 clusters in subscription + eastus and 25 clusters in subscription + eastus2.
+- Avoid creating multiple clusters within a single subscription + region at the same time. If you need to create multiple clusters in a short period of time, federate over multiple subscriptions or regions.
+
+If the issue persists, create a support ticket for investigation.
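To check how close a subscription is to the per-region ceiling before creating more clusters, you can count existing clusters by location. The following is a small sketch that shells out to the Azure CLI (`az aro list`); the output shape is an assumption, and the threshold comes from the guidance above.

```python
# Counts Azure Red Hat OpenShift clusters per region in the current
# subscription by calling the Azure CLI; assumes `az` is installed and
# logged in.
import json
import subprocess
from collections import Counter

out = subprocess.run(
    ["az", "aro", "list", "--output", "json"],
    check=True, capture_output=True, text=True,
).stdout

counts = Counter(cluster["location"] for cluster in json.loads(out))
for region, n in sorted(counts.items()):
    # Stay well below ~50 clusters per subscription + region.
    print(f"{region}: {n} cluster(s)")
```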
+
## Next steps

- Visit the [OpenShift documentation](https://docs.openshift.com/container-platform)
operator-nexus Howto Use Mde Runtime Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-mde-runtime-protection.md
az networkcloud cluster update \
--subscription ${SUBSCRIPTION_ID} \
--resource-group ${RESOURCE_GROUP} \
--cluster-name ${CLUSTER_NAME} \
---runtime-protection-configuration enforcement-level="Disabled"
+--runtime-protection enforcement-level="Disabled"
```

Upon execution, inspect the output for the following:
Running this command will make the Cluster aware of the MDE runtime protection s
to a value other than `Disabled` in the next section

> [!NOTE]
->As you have noted, the argument `--runtime-protection-configuration enforcement-level="<enforcement level>"` serves two purposes: enabling/disabling MDE service and updating the enforcement level.
+>As noted, the argument `--runtime-protection enforcement-level="<enforcement level>"` serves two purposes: enabling or disabling the MDE service and updating the enforcement level.
If you want to disable the MDE service across your Cluster, use an `<enforcement level>` of `Disabled`.

## Configuring enforcement level
-The `az networkcloud cluster update` allows you to update of the settings for Cluster runtime protection *enforcement level* by using the argument `--runtime-protection-configuration enforcement-level="<enforcement level>"`.
+The `az networkcloud cluster update` command allows you to update the Cluster runtime protection *enforcement level* setting by using the argument `--runtime-protection enforcement-level="<enforcement level>"`.
The following command configures the `enforcement level` for your Cluster.
az networkcloud cluster update \
--subscription ${SUBSCRIPTION_ID} \
--resource-group ${RESOURCE_GROUP} \
--cluster-name ${CLUSTER_NAME} \
---runtime-protection-configuration enforcement-level="<enforcement level>"
+--runtime-protection enforcement-level="<enforcement level>"
```

Allowed values for `<enforcement level>`: `Audit`, `Disabled`, `OnDemand`, `Passive`, `RealTime`.
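For example, to apply the same enforcement level across several Clusters, you could wrap the command above in a small script. The following is a sketch only; the subscription, resource group, and cluster names are placeholders.

```python
# Sketch: applies one enforcement level to several Clusters by invoking the
# `az networkcloud cluster update` command shown above.
import subprocess

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ENFORCEMENT_LEVEL = "RealTime"  # Audit, Disabled, OnDemand, Passive, or RealTime

for cluster_name in ("cluster-01", "cluster-02"):  # placeholder names
    subprocess.run(
        [
            "az", "networkcloud", "cluster", "update",
            "--subscription", SUBSCRIPTION_ID,
            "--resource-group", RESOURCE_GROUP,
            "--cluster-name", cluster_name,
            "--runtime-protection", f"enforcement-level={ENFORCEMENT_LEVEL}",
        ],
        check=True,
    )
```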
operator-service-manager Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/glossary.md
Last updated 08/15/2023
+content_well_notification:
+ - AI-contribution
# Glossary: Azure Operator Service Manager
orbital About Ground Stations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/about-ground-stations.md
In addition, we support public satellites for downlink-only operations that util
## Partner ground stations
-Azure Orbital Ground Station offers a common data plane and API to access all antenna in the global network. An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station is required to onboard with a partner.
+Azure Orbital Ground Station offers a common data plane and API to access all antennas in the global network. An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station is required to onboard with a partner. Once you have the proper contract(s) and regulatory approval(s) in place, your subscription is approved by the Azure Orbital Ground Station team to access partner ground station sites. Learn how to [request authorization of a spacecraft](register-spacecraft.md#request-authorization-of-the-new-spacecraft-resource) and [configure a contact profile](concepts-contact-profile.md#configuring-a-contact-profile-for-applicable-partner-ground-stations) for partner ground stations.
## Next steps
orbital License Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/license-spacecraft.md
- Title: License your spacecraft - Azure Orbital
-description: Learn how to license your spacecraft with Azure Orbital Ground Station.
---- Previously updated : 07/12/2022---
-# License your spacecraft
-
-This page provides an overview on how to register or license your spacecraft with Azure Orbital.
-
- > [!NOTE]
- > This process is for the ground station license only. Microsoft manages the ground station licenses in our network and ensures customer satellites are added and authorized.
- > The customer is responsible for acquiring a spacecraft license for their spacecraft. Microsoft can provide technical information needed to complete the federal regulator and ITU processes as needed.
-
-## Prerequisites
-
-To initiate the spacecraft licensing process, you'll need:
--- A spacecraft object that corresponds to the spacecraft in orbit or slated for launch. The links in this object must match all current and planned filings.-- A list of ground stations that you wish to use to communicate with your satellite.-
-## Step 1: Initiate the request
-
-The process starts by initiating the licensing request via the Azure portal.
-
-1. Navigate to the spacecraft object and select New Support Request under the Support + troubleshooting category to the left.
-1. Complete the following fields:
-
- | **Field** | **Value** |
- | | |
- | Summary | Provide a relevant ticket title. |
- | Issue type | Technical |
- | Subscription | Choose your current subscription. |
- | Service | My Service |
- | Service Type | Azure Orbital |
- | Problem type | Spacecraft Management and Setup |
- | Problem subtype | Spacecraft Registration |
-
-1. Click next to Solutions.
-1. Click next to Details.
-1. Enter the desired ground stations in the Description field.
-1. Enable advanced diagnostic information.
-1. Click next to Review + Create.
-1. Click Create.
-
-## Step 2: Provide more details
-
-When the request is generated, our regulatory team will investigate the request and determine if more detail is required. If so, a customer support representative will reach out to you with a regulatory intake form. You'll need to input information regarding relevant filings, call signs, orbital parameters, link details, antenna details, point of contacts, etc.
-
-Fill out all relevant fields in this form as it helps speeds up the process. When you're done entering information, email this form back to the customer support representative.
-
-## Step 3: Await feedback from our regulatory team
-
-Based on the details provided in the steps above, our regulatory team will make an assessment on time and cost to onboard your spacecraft to all requested ground stations. This step will take a few weeks to execute.
-
-Once the determination is made, we'll confirm the cost with you and ask you to authorize before proceeding.
-
-## Step 4: Azure Orbital requests the relevant licensing
-
-Upon authorization, you will be billed the fees associated with each relevant ground station. Our regulatory team will seek the relevant licenses to enable your spacecraft to communicate with the desired ground stations. Refer to the following table for an estimated timeline for execution:
-
-| **Station** | **Qunicy** | **Chile** | **Sweden** | **South Africa** | **Singapore** |
-| -- | - | | - | - | - |
-| Onboarding Timeframe | 3-6 months | 3-6 months | 3-6 months | <1 month | 3-6 months |
-
-## Step 5: Spacecraft is authorized
-
-Once the licenses are in place, the spacecraft object will be updated by Azure Orbital to represent the licenses held at the specified ground stations. To understand how the authorizations are applied, see [Spacecraft Object](./spacecraft-object.md).
-
-## FAQ
-
-**Q.** Are third party ground stations such as KSAT included in this process?
-**A.** No, the process on this page applies to Microsoft sites only. For more information, see [Integrate partner network ground stations](./partner-network-integration.md).
-
-**Q.** Do public satellites requite licensing?
-**A.** The Azure Orbital Ground Station service supports several public satellites that do not require licensing. These include Aqua, Suomi NPP, JPSS-1/NOAA-20, and Terra.
--
-## Next steps
-- [Integrate partner network ground stations](./partner-network-integration.md)-- [Receive real-time telemetry](receive-real-time-telemetry.md)
orbital Partner Network Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/partner-network-integration.md
- Title: Integrate partner network ground stations into your Azure Orbital Ground Station solution
-description: Leverage partner network ground station locations through Azure Orbital.
---- Previously updated : 01/05/2023---
-# Integrate partner network ground stations into your Azure Orbital Ground Station solution
-
-This article describes how to integrate partner network ground stations for customers with partner network contracts. In order to use Azure Orbital Ground Station to make contacts with partner network ground station sites, your spacecraft must be authorized in the portal.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Contributor permissions](/azure/role-based-access-control/rbac-and-directory-admin-roles#azure-roles) at the subscription level.-- A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.-- A spacecraft license is required for private spacecraft.-- An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station:
- - [KSAT Lite](https://azuremarketplace.microsoft.com/marketplace/apps/kongsbergsatelliteservicesas1657024593438.ksatlite?exp=ubp8&tab=Overview)
- - [Viasat RTE](https://azuremarketplace.microsoft.com/marketplace/apps/viasatinc1628707641775.viasat-real-time-earth?tab=overview)
-- A ground station license for each of the partner network sites you wish to contact is required for private spacecraft.-- A registered spacecraft object. Learn more on how to [register a spacecraft](register-spacecraft.md).-
-## Obtain licencses
-
-Obtain the proper **spacecraft license(s)** for a private spacecraft. Additionally, work with the partner network to obtain a **ground station license** for each partner network site you intend to use with your spacecraft.
-
- > [!NOTE]
- > Public spacecraft do not require licensing for authorization. The Azure Orbital Ground Station service supports several public satellites including Aqua, Suomi NPP, JPSS-1/NOAA-20, and Terra.
-
-## Create spacecraft resource
-
-Create a registered spacecraft object on the Orbital portal by following the [spacecraft registration](register-spacecraft.md) instructions.
-
-## Request authorization of the new spacecraft resource
-
-1. Navigate to the newly created spacecraft resource's overview page.
-2. Select **New support request** in the Support + troubleshooting section of the left-hand blade.
-3. In the **New support request** page, enter or select this information in the Basics tab:
-
-| **Field** | **Value** |
-| | |
-| Summary | Request Authorization for [Spacecraft Name] |
-| Issue type | Select **Technical** |
-| Subscription | Select the subscription in which the spacecraft resource was created |
-| Service | Select **My services** |
-| Service type | Search for and select **Azure Orbital** |
-| Problem type | Select **Spacecraft Management and Setup** |
-| Problem subtype | Select **Spacecraft Registration** |
-
-4. Select the Details tab at the top of the page
-5. In the Details tab, enter this information in the Problem details section:
-
-| **Field** | **Value** |
-| | |
-| When did the problem start? | Select the current date & time |
-| Description | List your spacecraft's **frequency bands** and **desired partner network ground stations**. |
-| File upload | Upload all pertinent **spacecraft licensing material**, **ground station licensing material**, **partner network contract details**, or **partner POCs**, if applicable. |
-
-6. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab.
-7. Select the **Review + create** tab, or select the **Review + create** button.
-8. Select **Create**.
-
- > [!NOTE]
- > A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.
-
-After the authorization request is generated, our regulatory team will investigate the request and validate the material. The partner network must inform Microsoft of the ground station license approval(s) to complete the spacecraft authorization. Once verified, we will enable your spacecraft to communicate with the partner network ground stations outlined in the request.
-
-## Confirm spacecraft is authorized
-
-1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
-2. In the Spacecraft page, select the **newly registered spacecraft**.
-3. In the new spacecraft's overview page, check that the **Authorization status** shows **Allowed**.
-
-## Next steps
--- [Configure a contact profile](./contact-profile.md)-- [Learn more about the contact profile object](./concepts-contact-profile.md)-
postgresql Generative Ai Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-openai.md
Invoke [Azure OpenAI embeddings](../../ai-services/openai/reference.md#embedding
1. Create an Open AI account and [request access to Azure OpenAI Service](https://aka.ms/oai/access).
1. Grant access to Azure OpenAI in the desired subscription.
1. Grant permissions to [create Azure OpenAI resources and to deploy models](../../ai-services/openai/how-to/role-based-access-control.md).
-[Create and deploy an Azure OpenAI service resource and a model](../../ai-services/openai/how-to/create-resource.md), for example deploy the embeddings model [text-embedding-ada-002](../../ai-services/openai/concepts/models.md#embeddings-models). Copy the deployment name as it is needed to create embeddings.
--
+1. [Create and deploy an Azure OpenAI service resource and a model](../../ai-services/openai/how-to/create-resource.md). For example, deploy the embeddings model [text-embedding-ada-002](../../ai-services/openai/concepts/models.md#embeddings-models). Copy the deployment name, as it's needed to create embeddings.
## Configure OpenAI endpoint and key

In the Azure OpenAI resource, under **Resource Management** > **Keys and Endpoints**, you can find the endpoint and the keys for your Azure OpenAI resource. Use the endpoint and one of the keys to enable the `azure_ai` extension to invoke the model deployment.
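As an illustration, the following sketch configures those settings and requests an embedding from Python with the `psycopg2` package. The `azure_ai.set_setting` and `azure_openai.create_embeddings` calls follow the extension's documented surface; the server, credentials, endpoint, key, and deployment name are placeholders.

```python
# Sketch of configuring the azure_ai extension and creating an embedding;
# connection details, endpoint, key, and deployment name are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="YOUR-SERVER-NAME.postgres.database.azure.com",
    dbname="postgres",
    user="YOUR-ADMIN-USER",
    password="YOUR-PASSWORD",
)
cur = conn.cursor()

# Point the extension at your Azure OpenAI resource.
cur.execute("SELECT azure_ai.set_setting('azure_openai.endpoint', %s)",
            ("https://YOUR-RESOURCE-NAME.openai.azure.com",))
cur.execute("SELECT azure_ai.set_setting('azure_openai.subscription_key', %s)",
            ("YOUR-API-KEY",))
conn.commit()

# Invoke the embeddings model deployment created earlier.
cur.execute("SELECT azure_openai.create_embeddings('text-embedding-ada-002', %s)",
            ("The quick brown fox",))
print(cur.fetchone()[0])

cur.close()
conn.close()
```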
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 11/21/2023 Last updated : 11/29/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
private-link Private Endpoint Dns Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns-integration.md
DNS is a critical component to make the application work correctly by successful
Based on your preferences, the following scenarios are available with DNS resolution integrated:
- - [Virtual network workloads without custom DNS server](#virtual-network-workloads-without-custom-dns-server)
+- [Virtual network workloads without Azure Private Resolver](#virtual-network-workloads-without-azure-private-resolver)
- - [On-premises workloads using a DNS forwarder](#on-premises-workloads-using-a-dns-forwarder)
+- [Peered virtual network workloads without Azure Private Resolver](#virtual-network-workloads-without-custom-dns-server)
+
+- [Azure Private Resolver for on-premises workloads](#azure-private-resolver-for-on-premises-workloads)
- - [Virtual network and on-premises workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder)
-
-> [!NOTE]
-> [Azure Firewall DNS proxy](../firewall/dns-settings.md#dns-proxy) can be used as DNS forwarder for [On-premises workloads](#on-premises-workloads-using-a-dns-forwarder) and [Virtual network workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
+- [Azure Private Resolver with on-premises DNS forwarder](#on-premises-workloads-using-a-dns-forwarder)
+
+- [Azure Private Resolver for virtual network and on-premises workloads](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder)
-## Virtual network workloads without custom DNS server
+## Virtual network workloads without Azure Private Resolver
This configuration is appropriate for virtual network workloads without a custom DNS server. In this scenario, the client queries for the private endpoint IP address to the Azure-provided DNS service [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). Azure DNS is responsible for DNS resolution of the private DNS zones.
To configure properly, you need the following resources:
- Client virtual network

-- Private DNS zone [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
+- Private DNS zone [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
- Private endpoint information (FQDN record name and private IP address)
The following screenshot illustrates the DNS resolution sequence from virtual ne
:::image type="content" source="media/private-endpoint-dns/single-vnet-azure-dns.png" alt-text="Diagram of single virtual network and Azure-provided DNS.":::
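As a quick way to verify this scenario, from a VM inside the linked virtual network you can confirm that the service FQDN resolves to the private endpoint's private IP instead of a public address. A minimal sketch follows; the FQDN is a placeholder.

```python
# Run from a VM in the linked virtual network; the FQDN is a placeholder.
import socket

fqdn = "YOUR-SERVER-NAME.database.windows.net"

# The VM's resolver (Azure-provided DNS, 168.63.129.16) should return the
# private endpoint IP, for example 10.x.x.x, when the zone link is in place.
for info in socket.getaddrinfo(fqdn, None):
    print(info[4][0])
```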
-You can extend this model to peered virtual networks associated to the same private endpoint. [Add new virtual network links](../dns/private-dns-virtual-network-links.md) to the private DNS zone for all peered virtual networks.
+## <a name="virtual-network-workloads-without-custom-dns-server"></a> Peered virtual network workloads without Azure Private Resolver
-> [!IMPORTANT]
-> A single private DNS zone is required for this configuration. Creating multiple zones with the same name for different virtual networks would need manual operations to merge the DNS records.
+You can extend this model to peered virtual networks associated to the same private endpoint. [Add new virtual network links](../dns/private-dns-virtual-network-links.md) to the private DNS zone for all peered virtual networks.
> [!IMPORTANT]
-> If you're using a private endpoint in a hub-and-spoke model from a different subscription or even within the same subscription, link the same private DNS zones to all spokes and hub virtual networks that contain clients that need DNS resolution from the zones.
+> - A single private DNS zone is required for this configuration. Creating multiple zones with the same name for different virtual networks would need manual operations to merge the DNS records.
+>
+> - If you're using a private endpoint in a hub-and-spoke model from a different subscription or even within the same subscription, link the same private DNS zones to all spokes and hub virtual networks that contain clients that need DNS resolution from the zones.
In this scenario, there's a [hub and spoke](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) networking topology. The spoke networks share a private endpoint. The spoke virtual networks are linked to the same private DNS zone.

:::image type="content" source="media/private-endpoint-dns/hub-and-spoke-azure-dns.png" alt-text="Diagram of hub and spoke with Azure-provided DNS.":::
-## On-premises workloads using a DNS forwarder
+## Azure Private Resolver for on-premises workloads
-For on-premises workloads to resolve the FQDN of a private endpoint, use a DNS forwarder to resolve the Azure service [public DNS zone](private-endpoint-dns.md) in Azure. A [DNS forwarder](/windows-server/identity/ad-ds/plan/reviewing-dns-concepts#resolving-names-by-using-forwarding) is a Virtual Machine running on the Virtual Network linked to the Private DNS Zone that can proxy DNS queries coming from other Virtual Networks or from on-premises. This is required as the query must be originated from the Virtual Network to Azure DNS. A few options for DNS proxies are: Windows running DNS services, Linux running DNS services, [Azure Firewall](../firewall/dns-settings.md).
+For on-premises workloads to resolve the FQDN of a private endpoint, use Azure Private Resolver to resolve the Azure service public DNS zone in Azure. Azure Private Resolver is an Azure managed service that can resolve DNS queries without the need for a virtual machine acting as a DNS forwarder.
-The following scenario is for an on-premises network that has a DNS forwarder in Azure. This forwarder resolves DNS queries via a server-level forwarder to the Azure provided DNS [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
+The following scenario is for an on-premises network configured to use an Azure Private Resolver. The private resolver forwards the request for the private endpoint to Azure DNS.
> [!NOTE]
-> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](private-endpoint-dns.md).
+> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone values](private-endpoint-dns.md).
-To configure properly, you need the following resources:
+The following resources are required for a proper configuration:
-- On-premises network
-- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
-- DNS forwarder deployed in Azure
-- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
-- Private endpoint information (FQDN record name and private IP address)
+- On-premises network
+
+- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
+
+- [Azure Private Resolver](/azure/dns/dns-private-resolver-overview)
+
+- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
+- Private endpoint information (FQDN record name and private IP address)
-The following diagram illustrates the DNS resolution sequence from an on-premises network. The configuration uses a DNS forwarder deployed in Azure. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md):
+The following diagram illustrates the DNS resolution sequence from an on-premises network. The configuration uses a Private Resolver deployed in Azure. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md):
:::image type="content" source="media/private-endpoint-dns/on-premises-using-azure-dns.png" alt-text="Diagram of on-premises using Azure DNS.":::
-This configuration can be extended for an on-premises network that already has a DNS solution in place. 
-The on-premises DNS solution is configured to forward DNS traffic to Azure DNS via a [conditional forwarder](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). The conditional forwarder references the DNS forwarder deployed in Azure.
+## <a name="on-premises-workloads-using-a-dns-forwarder"></a> Azure Private Resolver with on-premises DNS forwarder
+
+This configuration can be extended for an on-premises network that already has a DNS solution in place.
+
+The on-premises DNS solution is configured to forward DNS traffic to Azure DNS via a [conditional forwarder](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). The conditional forwarder references the Private Resolver deployed in Azure.
> [!NOTE]
-> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](private-endpoint-dns.md)
+> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone values](private-endpoint-dns.md)
-To configure properly, you need the following resources:
+To configure properly, you need the following resources:
-- On-premises network with a custom DNS solution in place
-- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
-- DNS forwarder deployed in Azure
-- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
-- Private endpoint information (FQDN record name and private IP address)
+- On-premises network with a custom DNS solution in place
+
+- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
+
+- [Azure Private Resolver](/azure/dns/dns-private-resolver-overview)
-The following diagram illustrates the DNS resolution from an on-premises network. DNS resolution is conditionally forwarded to Azure. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md).
+- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
+
+- Private endpoint information (FQDN record name and private IP address)
+
+The following diagram illustrates the DNS resolution from an on-premises network. DNS resolution is conditionally forwarded to Azure. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md).
> [!IMPORTANT]
-> The conditional forwarding must be made to the recommended [public DNS zone forwarder](private-endpoint-dns.md). For example: `database.windows.net` instead of **privatelink**.database.windows.net.
+> The conditional forwarding must be made to the recommended [public DNS zone forwarder](private-endpoint-dns.md). For example: `database.windows.net` instead of **privatelink**.database.windows.net.
:::image type="content" source="media/private-endpoint-dns/on-premises-forwarding-to-azure.png" alt-text="Diagram of on-premises forwarding to Azure DNS.":::
-## Virtual network and on-premises workloads using a DNS forwarder
+## <a name="virtual-network-and-on-premises-workloads-using-a-dns-forwarder"></a> Azure Private Resolver for virtual network and on-premises workloads
-For workloads accessing a private endpoint from virtual and on-premises networks, use a DNS forwarder to resolve the Azure service [public DNS zone](private-endpoint-dns.md) deployed in Azure.
+For workloads accessing a private endpoint from virtual and on-premises networks, use Azure Private Resolver to resolve the Azure service [public DNS zone](private-endpoint-dns.md) deployed in Azure.
The following scenario is for an on-premises network with virtual networks in Azure. Both networks access the private endpoint located in a shared hub network.
-This DNS forwarder is responsible for resolving all the DNS queries via a server-level forwarder to the Azure-provided DNS service [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
+The private resolver is responsible for resolving all the DNS queries via the Azure-provided DNS service [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
> [!IMPORTANT]
-> A single private DNS zone is required for this configuration. All client connections made from on-premises and [peered virtual networks](../virtual-network/virtual-network-peering-overview.md) must  also use the same private DNS zone.
+> A single private DNS zone is required for this configuration. All client connections made from on-premises and [peered virtual networks](../virtual-network/virtual-network-peering-overview.md) must also use the same private DNS zone.
> [!NOTE]
> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](private-endpoint-dns.md).
This DNS forwarder is responsible for resolving all the DNS queries via a server
To configure properly, you need the following resources:

- On-premises network
+
- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
+
- [Peered virtual network](../virtual-network/virtual-network-peering-overview.md)
-- DNS forwarder deployed in Azure
+
+- Azure Private Resolver
- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
+
- Private endpoint information (FQDN record name and private IP address)
-The following diagram shows the DNS resolution for both networks, on-premises and virtual networks. The resolution is using a DNS forwarder. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md):
+The following diagram shows the DNS resolution for both networks, on-premises and virtual networks. The resolution uses Azure Private Resolver.
+
+The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md):
:::image type="content" source="media/private-endpoint-dns/hybrid-scenario.png" alt-text="Diagram of hybrid scenario.":::
The following diagram shows the DNS resolution for both networks, on-prem
If you choose to integrate your private endpoint with a private DNS zone, a private DNS zone group is also created. The DNS zone group has a strong association between the private DNS zone and the private endpoint. It helps with managing the private DNS zone records when there's an update on the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated with the correct number of records.
-Previously, the DNS records for the private endpoint were created via scripting (retrieving certain information about the private endpoint and then adding it on the DNS zone). With the DNS zone group, there is no need to write any additional CLI/PowerShell lines for every DNS zone. Also, when you delete the private endpoint, all the DNS records within the DNS zone group will be deleted as well.
-
-A common scenario for DNS zone group is in a hub-and-spoke topology, where it allows the private DNS zones to be created only once in the hub and allows the spokes to register to it, rather than creating different zones in each spoke.
+Previously, the DNS records for the private endpoint were created via scripting (retrieving certain information about the private endpoint and then adding it on the DNS zone). With the DNS zone group, there's no need to write any extra CLI/PowerShell lines for every DNS zone. Also, when you delete the private endpoint, all the DNS records within the DNS zone group are deleted.
-> [!NOTE]
-> Each DNS zone group can support up to 5 DNS zones.
+In a hub-and-spoke topology, a common scenario allows the creation of private DNS zones only once in the hub. This setup permits the spokes to register to it, instead of creating different zones in each spoke.
-> [!NOTE]
-> Adding multiple DNS zone groups to a single Private Endpoint is not supported.
> [!NOTE]
-> Delete and update operations for DNS records can be seen performed by "Azure Traffic Manager and DNS." This is a normal platform operation necessary for managing your DNS Records.
+> - Each DNS zone group can support up to 5 DNS zones.
+> - Adding multiple DNS zone groups to a single Private Endpoint is not supported.
+> - Delete and update operations for DNS records can be seen performed by **Azure Traffic Manager and DNS.** This is a normal platform operation necessary for managing your DNS Records.
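If you manage the zone group programmatically rather than through the portal, the private endpoint exposes a `privateDnsZoneGroups` child resource in Azure Resource Manager. The following is a hedged sketch over the REST API, assuming the `requests` and `azure-identity` packages; all names and the `api-version` are placeholders.

```python
# Hypothetical sketch that attaches a private DNS zone group to an existing
# private endpoint via the ARM REST API; names and api-version are placeholders.
import requests
from azure.identity import DefaultAzureCredential

sub, rg = "<subscription-id>", "<resource-group>"
endpoint_name, group_name = "<private-endpoint-name>", "default"
zone_id = (f"/subscriptions/{sub}/resourceGroups/{rg}/providers/"
           "Microsoft.Network/privateDnsZones/privatelink.database.windows.net")

url = (f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
       f"/providers/Microsoft.Network/privateEndpoints/{endpoint_name}"
       f"/privateDnsZoneGroups/{group_name}?api-version=2023-05-01")  # may change

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
body = {"properties": {"privateDnsZoneConfigs": [
    {"name": "database", "properties": {"privateDnsZoneId": zone_id}}
]}}

resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["properties"]["provisioningState"])
```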
## Next steps

- [Learn about private endpoints](private-endpoint-overview.md)
quotas How To Guide Monitoring Alerting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/how-to-guide-monitoring-alerting.md
Title: Monitoring and alerting - how to guide
+ Title: Create alerts for quotas
description: Learn how to create alerts for quotas Previously updated : 10/11/2023 Last updated : 11/29/2023
-# Monitoring & Alerting: How-To Guide
+# Create alerts for quotas
-## Create an alert rule
+You can create alerts for quotas and manage them.
-#### Prerequisite
+## Create an alert rule
-| Requirement | Description |
-|:--|:--|
-| Access to Create Alerts | Users who are creating Alert should have [Access to Create Alert](../azure-monitor/alerts/alerts-overview.md#azure-role-based-access-control-for-alerts) |
-| [Managed Identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) | When utilizing an existing Managed Identity, ensure it has **Subscription Reader** access for accessing usage data. In cases where a new Managed Identity is generated, the Subscription **Owner** is responsible for **granting** Subscription **Reader** access to this newly created Managed Identity. |
+### Prerequisites
+Users must have the necessary [permissions to create alerts](../azure-monitor/alerts/alerts-overview.md#azure-role-based-access-control-for-alerts).
-### Create Alerts from Portal
+The [managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) must have the **Reader** role (or another role that includes read access) on the subscription.
-Step-by-Step instructions to create an alert rule for your quota in the Azure portal.
+### Create alerts in the Azure portal
-1. Sign in to the [Azure portal](https://portal.azure.com) and enter **"quotas"** in the search box, then select **Quotas**. In Quotas page, Click **My quotas** and choose **Compute** Resource Provider. Upon page load, you can choose `Quota Name` for creating new alert rule.
+The simplest way to create a quota alert is to use the Azure portal. Follow these steps to create an alert rule for your quota.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and enter **"quotas"** in the search box, then select **Quotas**. On the **Quotas** page, select **My quotas** and choose the **Compute** resource provider. Once the page loads, select **Quota Name** to create a new alert rule.
:::image type="content" source="media/monitoring-alerting/my-quotas-create-rule-navigation-inline.png" alt-text="Screenshot showing how to select Quotas to navigate to create Alert rule screen." lightbox="media/monitoring-alerting/my-quotas-create-rule-navigation-expanded.png":::
-2. When the Create usage alert rule page appears, **populate the fields** with data as shown in the table. Make sure you have the **right access** to the subscriptions and Quotas to **create alerts**.
+1. When the **Create usage alert rule** page appears, populate the fields with data as shown in the table. Make sure you have the [permissions to create alerts](../azure-monitor/alerts/alerts-overview.md#azure-role-based-access-control-for-alerts).
:::image type="content" source="media/monitoring-alerting/quota-details-create-rule-inline.png" alt-text="Screenshot showing create Alert rule screen with required fields." lightbox="media/monitoring-alerting/quota-details-create-rule-expanded.png":::

| **Fields** | **Description** |
|:--|:--|
- | Alert Rule Name | Alert rule name must be **distinct** and can't be duplicated, even across different resource groups |
- | Alert me when the usage % reaches | **Adjust** the slider to select your desired usage percentage for **triggering** alerts. For example, at the default 80%, you receive an alert when your quota reaches 80% capacity.|
- | Severity | Select the **severity** of the alert when the **ruleΓÇÖs condition** is met.|
- | [Frequency of evaluation](../azure-monitor/alerts/alerts-overview.md#stateful-alerts) | Choose how **often** the alert rule should **run**, by selecting 5, 10, or 15 minutes. If the frequency is smaller than the aggregation granularity, frequency of evaluation results in sliding window evaluation. |
- | [Resource Group](../azure-resource-manager/management/manage-resource-groups-portal.md) | Resource Group is a collection of resources that share the same lifecycles, permissions, and policies. Select a resource group similar to other quotas in your subscription, or create a new resource group. |
- | [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?tabs=azure-portal) | A workspace within the subscription that is being **monitored** and is used as the **scope for rule execution**. Select from the dropdown or create a new workspace. If you create a new workspace, use it for all alerts in your subscription. |
- | [Managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) | Select from the dropdown, or **Create New**. Managed Identity should have **read permissions** to the Subscription (to read Usage data from ARG) and Log Analytics workspace that is chosen(to read the log alerts). |
- | Notify me by | There are three notifications methods and you can check one or all three check boxes, depending on your notification preference. |
- | [Use an existing action group](../azure-monitor/alerts/action-groups.md) | Check the box to use an existing action group. An action group **invokes** a defined set of **notifications** and actions when an alert is triggered. You can create Action Group to automatically Increase the Quota whenever possible. |
- | [Dimensions](../azure-monitor/alerts/alerts-types.md#dimensions-in-log-alert-rules) | Here are the options for selecting **multiple Quotas** and **regions** within a single alert rule. Adding dimensions is a cost-effective approach compared to creating a new alert for each quota or region.|
- | [Estimated cost](https://azure.microsoft.com/pricing/details/monitor/) |Estimated cost is automatically calculated cost associated with running this **new alert rule** against your quota. Each alert creation costs $0.50 USD, and each additional dimension adds $0.05 USD to the cost. |
-
- > [!TIP]
- > We advise using the **same Resource Group, Log Analytics Workspace,** and **Managed Identity** data that were initially employed when creating your first alert rule for quotas within the same subscription.
-
-3. After completing entering the fields, click the **Create Alert** button
-
- - If **Successful**, you receive the following notification: 'We successfully created 'alert rule name' and 'Action Group 'name' was successfully created.'
-
- - If the **Alert fails**, you receive an 'Alert rule failed to create' notification. Ensure that you verify the necessary access **permissions** given for the Log Analytics or Managed Identity. Refer to the prerequisites."
+ | Alert rule name | The alert rule name must be distinct and can't be duplicated, even across different resource groups. |
+ | Alert me when the usage % reaches | Adjust the slider to select your desired usage percentage for triggering alerts. For example, at the default 80%, you receive an alert when your quota reaches 80% capacity.|
+ | Severity | Select the severity of the alert when the ruleΓÇÖs condition is met.|
+ | [Frequency of evaluation](../azure-monitor/alerts/alerts-overview.md#stateful-alerts) | Choose how **often** the alert rule should **run** by selecting 5, 10, or 15 minutes. If the frequency is smaller than the aggregation granularity, the frequency of evaluation results in sliding window evaluation. |
+ | [Resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) | Select a resource group similar to other quotas in your subscription, or create a new resource group. |
+ | [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?tabs=azure-portal) | A workspace within the subscription that is being monitored and is used as the scope for rule execution. Select from the dropdown or create a new workspace. If you create a new workspace, use it for all alerts in your subscription. |
+ | [Managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) | Select from the dropdown, or create a new managed identity. This managed identity must have **Reader** access to the subscription (to read usage data) and to the selected Log Analytics workspace (to read the log alerts). |
+ | Notify me by | Select one or more of the three check boxes, depending on your notification preferences. |
+ | [Use an existing action group](../azure-monitor/alerts/action-groups.md) | Check the box to use an existing action group. An action group **invokes** a defined set of **notifications** and actions when an alert is triggered. You can create an action group to automatically increase the quota whenever possible. |
+ | [Dimensions](../azure-monitor/alerts/alerts-types.md#dimensions-in-log-alert-rules) | Options for selecting **multiple Quotas** and **regions** within a single alert rule. Adding dimensions is a cost-effective approach compared to creating a new alert for each quota or region.|
+ | [Estimated cost](https://azure.microsoft.com/pricing/details/monitor/) |The estimated cost is automatically calculated, based on running this **new alert rule** against your quota. Each alert creation costs $0.50 USD, and each additional dimension adds $0.05 USD to the cost. |
+ > [!TIP]
+ > Within the same subscription, we advise using the same **Resource group**, **Log Analytics workspace,** and **Managed identity** values for all alert rules.
-### Create Alerts using API
+1. After you've made your selections, select **Create Alert**. You'll see a confirmation if the rule was successfully created, or a message if any problems occurred.
-Alerts can be created programmatically using existing [**Monitoring API**]
-(https://learn.microsoft.com/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update?tabs=HTTP).
+### Create alerts using API
-Monitoring API helps to **create or update log search rule**.
+Alerts can be created programmatically using the [**Monitoring API**](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update?tabs=HTTP). This API can be used to create or update a log search rule.
`PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Insights/scheduledQueryRules/{ruleName}?api-version=2018-04-16`
-#### Sample Request body
-
-```json
-{
- "location": "westus2",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/<SubscriptionId>/resourcegroups/<ResourceGroupName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ManagedIdentityName>": {}
- }
- },
- "properties": {
- "severity": 4,
- "enabled": true,
- "evaluationFrequency": "PT15M",
- "scopes": ["/subscriptions/<SubscriptionID>/resourcegroups/<rg>/providers/microsoft.operationalinsights/workspaces/<LogAnalyticsWorkspace>"],
- "windowSize": "PT15M",
- "criteria": {
- "allOf": [{
- "query": "arg(\"\").QuotaResources \n| where subscriptionId =~ '<SubscriptionId'\n| where type =~ 'microsoft.compute/locations/usages'\n| where isnotempty(properties)\n| mv-expand propertyJson = properties.value limit 400\n| extend\n usage = propertyJson.currentValue,\n quota = propertyJson.['limit'],\n quotaName = tostring(propertyJson.['name'].value)\n| extend usagePercent = toint(usage)*100 / toint(quota)| project-away properties| where location in~ ('westus2')| where quotaName in~ ('cores')",
- "timeAggregation": "Maximum",
- "metricMeasureColumn": "usagePercent",
- "operator": "GreaterThanOrEqual",
- "threshold": 3,
- "dimensions": [{
- "name": "type",
- "operator": "Include",
- "values": ["microsoft.compute/locations/usages"]
- }, {
- "name": "location",
- "operator": "Include",
- "values": ["westus2"]
- }, {
- "name": "quotaName",
- "operator": "Include",
- "values": ["cores"]
- }],
- "failingPeriods": {
- "numberOfEvaluationPeriods": 1,
- "minFailingPeriodsToAlert": 1
- }
- }]
- },
- "actions": {
- "actionGroups": ["/subscriptions/<SubscriptionId>/resourcegroups/argintrg/providers/microsoft.insights/actiongroups/<ActionGroupName>"]
- }
- }
-}
-```
+For a sample request body, see the [API documentation](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update?tabs=HTTP).
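As a hedged sketch, the PUT above can be issued from Python with the `requests` and `azure-identity` packages. The body below mirrors the shape of the sample request that previously appeared here; the IDs, names, threshold, and query are placeholders for your own environment, and a user-assigned identity block can be added as in that sample.

```python
# Sketch: creates a quota alert rule through the scheduledQueryRules PUT
# shown above; IDs, names, and thresholds are placeholders.
import requests
from azure.identity import DefaultAzureCredential

sub, rg, rule = "<subscription-id>", "<resource-group>", "<rule-name>"
workspace_id = (f"/subscriptions/{sub}/resourcegroups/{rg}/providers/"
                "microsoft.operationalinsights/workspaces/<workspace>")

url = (f"https://management.azure.com/subscriptions/{sub}/resourcegroups/{rg}"
       f"/providers/Microsoft.Insights/scheduledQueryRules/{rule}"
       "?api-version=2018-04-16")

body = {
    "location": "westus2",
    "properties": {
        "severity": 4,
        "enabled": True,
        "evaluationFrequency": "PT15M",
        "windowSize": "PT15M",
        "scopes": [workspace_id],
        "criteria": {"allOf": [{
            "query": "<usage query>",  # see the Kusto sample later in this article
            "timeAggregation": "Maximum",
            "metricMeasureColumn": "usagePercent",
            "operator": "GreaterThanOrEqual",
            "threshold": 80,
            "failingPeriods": {"numberOfEvaluationPeriods": 1,
                               "minFailingPeriodsToAlert": 1},
        }]},
    },
}

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```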
+### Create alerts using Azure Resource Graph query
-### Create Alerts using ARG Query
+You can use the **Azure Monitor Alerts** blade to [create alerts using a query](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=log). Resource Graph Explorer lets you run and test queries before using them to create an alert. To learn more, see the [Configure Azure alerts](/training/modules/configure-azure-alerts/) training module.
-Use existing **Azure Monitor Alerts** blade to [create alerts using query](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=log). **Resource Graph Explorer** allows you to run and test queries before using them to create an alert. To learn on how to create Alerts using Alerts page visit this [Tutorial](/training/modules/configure-azure-alerts/?source=recommendations).
+For quota alerts, make sure the **Scope** is your Log Analytics workspace and the **Signal type** is the customer query log. Add a sample query for quota usage. Follow the remaining steps as described in [Create or edit an alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=log).
-For Quota alerts, make sure Scope is selected as the Log analytics workspace that is created and the signal type is Customer Query log. Add **Sample Query** for Quota usages. Follow the remaining steps as mentioned in the [create alerts](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=log).
+The following example shows a query that you can use for quota alerts.
->[!Note]
->Our **recommendation** for creating alerts in the Portal is to use the **Quota Alerts page**, as it offers the simplest and most user-friendly approach.
-
-#### Sample Query to create Alerts
```kusto
arg("").QuotaResources
| where subscriptionId =~ '<SubscriptionId>'
| extend usagePercent = toint(usage)*100 / toint(quota)
| project-away properties
| where location in~ ('westus2')
| where quotaName in~ ('cores')
```
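To test-run the query outside the portal before wiring it into an alert rule, one option is the `azure-monitor-query` package. The following is a sketch under assumptions: the workspace ID is a placeholder, the full query comes from the sample body shown earlier, and cross-service `arg("")` queries require read access to the subscription.

```python
# Sketch: test-runs the usage query against a Log Analytics workspace,
# assuming the azure-identity and azure-monitor-query packages.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
arg("").QuotaResources
| where subscriptionId =~ '<SubscriptionId>'
| where type =~ 'microsoft.compute/locations/usages'
| where isnotempty(properties)
| mv-expand propertyJson = properties.value limit 400
| extend usage = propertyJson.currentValue,
         quota = propertyJson['limit'],
         quotaName = tostring(propertyJson['name'].value)
| extend usagePercent = toint(usage) * 100 / toint(quota)
| project-away properties
| where location in~ ('westus2') and quotaName in~ ('cores')
"""

response = client.query_workspace("<workspace-id>", query,
                                  timespan=timedelta(minutes=15))
for table in response.tables:
    for row in table.rows:
        print(row)
```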
-## Manage Quota Alerts
+## Manage quota alerts
+
+Once you've created your alert rule, you can view and edit the alerts.
-### View Alert Rules
+### View alert rules
-Select **Quotas** | **Alert Rules** to see all the rules create for a given subscription. Here, you have the option to edit, enable, or disable them as needed.
+Select **Quotas > Alert rules** to see all quota alert rules that have been created for a given subscription. You can edit, enable, or disable rules from this page.
- :::image type="content" source="media/monitoring-alerting/view-alert-rules-inline.png" alt-text="Screenshot showing how to navigate to Alert rule screen." lightbox="media/monitoring-alerting/view-alert-rules-expanded.png":::
+ :::image type="content" source="media/monitoring-alerting/view-alert-rules-inline.png" alt-text="Screenshot showing the quota alert rule screen in the Azure portal." lightbox="media/monitoring-alerting/view-alert-rules-expanded.png":::
### View Fired Alerts
-Select **Quotas** | **Fired Alert Rules** to see all the alerts that have been fired create for a given subscription. This page displays an overview of all the alert rules that have been triggered. You can click on each alert to view its details, including the history of how many times it was triggered and the status of each occurrence.
+Select **Quotas > Fired Alert Rules** to see all the alerts that have been triggered for a given subscription. Select an alert to view its details, including the history of how many times it was triggered and the status of each occurrence.
- :::image type="content" source="media/monitoring-alerting/view-fired-alerts-inline.png" alt-text="Screenshot showing how to navigate to Fired Alert screen." lightbox="media/monitoring-alerting/view-fired-alerts-expanded.png":::
+ :::image type="content" source="media/monitoring-alerting/view-fired-alerts-inline.png" alt-text="Screenshot showing the Fired Alert screen in the Azure portal." lightbox="media/monitoring-alerting/view-fired-alerts-expanded.png":::
-### Edit, Update, Enable, Disable Alerts
+### Edit, update, enable, or disable alerts
-Multiple ways we can manage the create alerts
-1. Expand the options below the dots and select appropriate action.
+You can make changes from within an alert rule by expanding the options under the ellipsis (**...**) menu and selecting the appropriate action.
- :::image type="content" source="media/monitoring-alerting/edit-enable-disable-delete-inline.png" alt-text="Screenshot showing how to edit , enable, disable or delete alert rules." lightbox="media/monitoring-alerting/edit-enable-disable-delete-expanded.png":::
- By using the 'Edit' action, users can also add multiple quotas or locations for the same alert rule.
+When you select **Edit**, you can add multiple quotas or locations for the same alert rule.
- :::image type="content" source="media/monitoring-alerting/edit-dimension-inline.png" alt-text="Screenshot showing how to add dimensions while editing a quota rule." lightbox="media/monitoring-alerting/edit-dimension-expanded.png":::
+ :::image type="content" source="media/monitoring-alerting/edit-dimension-inline.png" alt-text="Screenshot showing how to add dimensions while editing a quota rule in the Azure portal." lightbox="media/monitoring-alerting/edit-dimension-expanded.png":::
-2. Go to **Alert Rules**, then click on the specific alert rule you want to change.
+You can also make changes by navigating to the **Alert rules** page, then selecting the specific alert rule you want to change.
- :::image type="content" source="media/monitoring-alerting/alert-rule-edit-inline.png" alt-text="Screenshot showing how to edit rules from Alert Rule screen." lightbox="media/monitoring-alerting/alert-rule-edit-expanded.png":::
+ :::image type="content" source="media/monitoring-alerting/alert-rule-edit-inline.png" alt-text="Screenshot showing how to edit rules from the Alert rule screen in the Azure portal." lightbox="media/monitoring-alerting/alert-rule-edit-expanded.png":::
+## Respond to alerts
-## Respond to Alerts
-
-For the created alerts, an action group can be established to automate quota increases. By utilizing existing action groups, users can invoke the Quota API to automatically increase quotas wherever possible, eliminating the need for manual intervention.
+For the alerts you create, you can establish an action group to automate quota increases. By using an existing action group, you can invoke the Quota API to automatically increase quotas wherever possible, eliminating the need for manual intervention.
-Refer the following link for detailed instructions on how to utilize functions to call the Quota API and request for more quota
-
-GitHub link to call [Quota API](https://github.com/allison-inman/azure-sdk-for-net/blob/main/sdk/quota/Microsoft.Azure.Management.Quota/tests/ScenarioTests/QuotaTests.cs)
-
-Use `Test_SetQuota()` code to write an Azure function to set the Quota.
+You can use functions to call the Quota API and request more quota. Use the `Test_SetQuota()` code to write an Azure function that sets the quota. For more information, see this [example on GitHub](https://github.com/allison-inman/azure-sdk-for-net/blob/main/sdk/quota/Microsoft.Azure.Management.Quota/tests/ScenarioTests/QuotaTests.cs).
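As a rough illustration of what such a function could do, here's a minimal Python sketch (not the `Test_SetQuota()` sample itself) that sends a PUT request to the Microsoft.Quota provider using the `requests` and `azure-identity` packages. The API version, quota name, and limit value are illustrative assumptions, not values from the article.

```python
# Minimal sketch: request a new quota limit through the Microsoft.Quota
# provider. API version, quota name, and limit are assumptions for
# illustration only.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<SubscriptionId>"
LOCATION = "westus2"
QUOTA_NAME = "standardDv4Family"  # assumed VM-family quota name
NEW_LIMIT = 50                    # assumed target vCPU limit
API_VERSION = "2023-02-01"        # assumed Microsoft.Quota API version

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

scope = (
    f"subscriptions/{SUBSCRIPTION_ID}/providers/"
    f"Microsoft.Compute/locations/{LOCATION}"
)
url = (
    f"https://management.azure.com/{scope}/providers/"
    f"Microsoft.Quota/quotas/{QUOTA_NAME}?api-version={API_VERSION}"
)
body = {
    "properties": {
        "limit": {"limitObjectType": "LimitValue", "value": NEW_LIMIT},
        "name": {"value": QUOTA_NAME},
    }
}

# PUT creates or updates the quota; a 202 response means the request is
# still being processed and can be polled.
response = requests.put(
    url, json=body, headers={"Authorization": f"Bearer {token}"}
)
response.raise_for_status()
print(response.status_code)
```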
## Query using Resource Graph Explorer
-Using [Azure Resource Graph](../governance/resource-graph/overview.md), Alerts can be [Managed programatically](../azure-monitor/alerts/alerts-manage-alert-instances.md#manage-your-alerts-programmatically) where you can query your alerts instances and analyze your alerts to identify patterns and trends.
-For Usages, the **QuotaResources** table in [Azure Resource Graph](../governance/resource-graph/overview.md) explorer provides **usage and limit/quota data** for a given resource x region x subscription. Customers can query usage and quota data across multiple subscriptions with Azure Resource Graph queries.
+Using [Azure Resource Graph](../governance/resource-graph/overview.md), alerts can be [managed programmatically](../azure-monitor/alerts/alerts-manage-alert-instances.md#manage-your-alerts-programmatically). This allows you to query your alert instances and analyze your alerts to identify patterns and trends.
+
+The **QuotaResources** table in [Azure Resource Graph](../governance/resource-graph/overview.md) explorer provides usage and limit/quota data for a given resource, region, and/or subscription. You can also query usage and quota data across multiple subscriptions with Azure Resource Graph queries.
-As a **prerequisite**, users must have at least a **Subscription Reader** role for the subscription.
+You must have at least the **Reader** role for the subscription to query this data using Resource Graph Explorer.
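As a rough illustration, the same data can also be queried programmatically. The following sketch assumes the `azure-mgmt-resourcegraph` and `azure-identity` Python packages; the subscription IDs and projected columns are placeholders.

```python
# Minimal sketch: run a QuotaResources query across multiple subscriptions
# with Azure Resource Graph. Subscription IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

query = """
QuotaResources
| where type =~ 'microsoft.compute/locations/usages'
| project subscriptionId, location, properties
"""

request = QueryRequest(
    subscriptions=["<SubscriptionId1>", "<SubscriptionId2>"],
    query=query,
)
result = client.resources(request)

# result.data is a list of dictionaries, one per result row.
for row in result.data:
    print(row["subscriptionId"], row["location"])
```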
-#### Sample Query
+### Sample queries
-1. Query Compute resources current usages, quota/limit, and usage percentage for a subscription(s) x region x VM family
+Query to view current usages, quota/limit, and usage percentage for a subscription, region, and VM family:
```kusto
QuotaResources
| where type =~ 'microsoft.compute/locations/usages'
| where isnotempty(properties)
| mv-expand propertyJson = properties.value limit 400
| extend
    usage = propertyJson.currentValue,
    quota = propertyJson.['limit'],
    quotaName = tostring(propertyJson.['name'].value)
| extend usagePercent = toint(usage)*100 / toint(quota)
| project-away properties
| order by ['usagePercent'] desc
```
-2. Query to Summarize total vCPUs (On-demand, Low Priority/Spot) per subscription per region
+Query to summarize total vCPUs (On-demand, Low Priority/Spot) per subscription per region:
```kusto
QuotaResources
| where type =~ 'microsoft.compute/locations/usages'
| where isnotempty(properties)
| mv-expand propertyJson = properties.value limit 400
| extend
    quota = propertyJson.['limit'],
    quotaName = tostring(propertyJson.['name'].value)
| where quotaName in~ ('cores', 'lowPriorityCores')
| summarize totalVCPUs = sum(toint(quota)) by subscriptionId, location
```
-## Provide Feedback
+## Provide feedback
-User can find **Feedback** button on every Quota page and can use to share thoughts, questions, or concerns with our team. Additionally, Users can submit a support ticket if they encounter any problem while creating alert rules for quotas.
+We encourage you to use the **Feedback** button on every Azure Quotas page to share your thoughts, questions, or concerns with our team.
 :::image type="content" source="media/monitoring-alerting/alert-feedback-inline.png" alt-text="Screenshot showing how users can provide feedback." lightbox="media/monitoring-alerting/alert-feedback-expanded.png":::
+If you encounter problems while creating alert rules for quotas, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
## Next steps

-- Learn about [Monitoring and Alerting](monitoring-alerting.md)
-- Learn more about [Quota overview](quotas-overview.md) and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
+- Learn about [quota monitoring and alerting](monitoring-alerting.md)
+- Learn more about [quotas](quotas-overview.md) and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
- Learn how to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), [spot vCPU quotas](spot-quota.md), and [storage accounts](storage-account-quota-requests.md).
quotas Monitoring Alerting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/monitoring-alerting.md
Title: Quota monitoring & alerting
-description: Monitoring and Alerting for Quota Usages.
Previously updated : 10/11/2023
+description: Learn about monitoring and alerting for quota usage.
Last updated : 11/29/2023
-# Quota Monitoring and Alerting
+# Quota monitoring and alerting
-**Monitoring and Alerting** in Azure provides real-time insights into resource utilization, enabling proactive issue resolution and resource optimization.It helps detect anomalies and potential issues before they impact services, ensuring uninterrupted operations.
+Monitoring and alerting in Azure provides real-time insights into resource utilization, enabling proactive issue resolution and resource optimization. Use monitoring and alerting to help detect anomalies and potential issues before they impact services.
To view the features on the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**.

> [!NOTE]
-> When Monitoring & Alerting is enabled for your account, the Quotas in **MyQuotas** will be highlighted and clickable.
+> When monitoring and alerting is enabled for your account, the Quotas in **MyQuotas** will be highlighted and clickable.
-## Monitoring
+## Monitoring
-**Monitoring for quotas** empowers users to proactively manage their resources in Azure. Azure sets predefined limits, or quotas, for various resources like **Compute**, **Azure Machine Learning**, and **HPC Cache**. This monitoring involves continuous tracking of resource usage to ensure it remains within allocated limits, with users receiving notifications when these limits are approached or reached.
+Monitoring for quotas lets you proactively manage your Azure resources. Azure sets predefined limits, or quotas, for various resources like **Compute**, **Azure Machine Learning**, and **HPC Cache**. This monitoring involves continuous tracking of resource usage to ensure it remains within allocated limits, including notifications when these limits are approached or reached.
-## Alerting
+## Alerting
-**Quota alerts** in Azure are notifications triggered when the usage of a specific Azure resource nears the **predefined quota limit**. These alerts are crucial for informing Azure users and administrators about resource consumption, facilitating proactive resource management. AzureΓÇÖs alert rule capabilities allow you to create multiple alert rules for a given quota or across quotas in your subscription.
+Quota alerts in Azure are notifications triggered when the usage of a specific Azure resource nears the **predefined quota limit**. These alerts are crucial for informing Azure users and administrators about resource consumption, facilitating proactive resource management. Azure's alert rule capabilities allow you to create multiple alert rules for a given quota or across quotas in your subscription.
+
+For more information, see [Create alerts for quotas](how-to-guide-monitoring-alerting.md).
> [!NOTE]
> [General Role based access control](../azure-monitor/alerts/alerts-overview.md#azure-role-based-access-control-for-alerts) applies while creating alerts.

## Next steps

-- Learn [how to Create Quota alert](how-to-guide-monitoring-alerting.md).
-- Learn more about [Alerts](../azure-monitor/alerts/alerts-overview.md)
+- Learn [how to create quota alerts](how-to-guide-monitoring-alerting.md).
+- Learn more about [alerts](../azure-monitor/alerts/alerts-overview.md)
- Learn about [Azure Resource Graph](../governance/resource-graph/overview.md)
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure NetApp Files](../azure-netapp-files/use-availability-zones.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| Azure Red Hat OpenShift | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Spring Apps](reliability-spring-apps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| Azure Storage: Ultra Disk | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |

### ![An icon that signifies this service is non-regional.](media/icon-always-available.svg) Non-regional services (always-available services)
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Private 5G Core](../private-5g-core/reliability-private-5g-core.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Private Link](../private-link/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Route Server](../route-server/route-server-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Spring Apps](reliability-spring-apps.md) |
[Azure Virtual WAN](../virtual-wan/virtual-wan-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-are-availability-zones-and-resiliency-handled-in-virtual-wan)|
[Azure Web Application Firewall](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+### ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic services
+
+| **Products** |
+|--|
+| [Azure Spring Apps](reliability-spring-apps.md) |
+
## Azure Service Manager Retirement

Azure Service Manager (ASM) is the old control plane of Azure, responsible for creating, managing, and deleting VMs and performing other control plane operations. It has been in use since 2011. ASM is retiring in August 2024, and customers can now migrate to [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview).
role-based-access-control Delegate Role Assignments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-overview.md
Previously updated : 09/20/2023
Last updated : 11/28/2023

#Customer intent: As a dev, devops, or it admin, I want to delegate the Azure role assignment task to other users who are closer to the decision, but want to limit the scope of the role assignments.
Instead of assigning the Owner or User Access Administrator roles, a more secure
Delegating role assignments with conditions is a way to restrict the role assignments a user can create. In the preceding example, Alice can allow Dara to create some role assignments on her behalf, but not all role assignments. For example, Alice can constrain the roles that Dara can assign and constrain the principals that Dara can assign roles to. This delegation with conditions is sometimes referred to as *constrained delegation* and is implemented with [Azure attribute-based access control (Azure ABAC) conditions](conditions-overview.md).
-To watch an overview video, see [Delegate Azure role assignments with conditions](https://youtu.be/3eDf2thqeO4?si=rBPW9BxRNtISkAGG).
+This video provides an overview of delegating role assignments with conditions.
+
+>[!VIDEO https://www.youtube.com/embed/3eDf2thqeO4]
## Why delegate role assignments with conditions?
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC
description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 11/21/2023
Last updated : 11/29/2023
sap Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md
Record the URL of the project.
### Import the repository
-Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos.
+Start by importing the SAP Deployment Automation Framework Bootstrap GitHub repository into Azure Repos.
Go to the **Repositories** section and select **Import a repository**. Import the `https://github.com/Azure/sap-automation-bootstrap.git` repository into Azure DevOps. For more information, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true).
-If you're unable to import a repository, you can create the repository manually. Then you can import the content from the SAP Deployment Automation Framework GitHub repository to it.
+If you're unable to import a repository, you can create the repository manually. Then you can import the content from the SAP Deployment Automation Framework GitHub Bootstrap repository to it.
### Create the repository for manual import
Copy the content from the .zip file to the root folder of your local clone.
Open the local folder in Visual Studio Code. You should see that changes need to be synchronized by the indicator by the source control icon shown here. Select the source control icon and provide a message about the change. For example, enter **Import from GitHub** and select Ctrl+Enter to commit the changes. Next, select **Sync Changes** to synchronize the changes back to the repository.
You can either run the SAP Deployment Automation Framework code directly from Gi
If you want to run the SAP Deployment Automation Framework code from the local Azure DevOps project, you need to create a separate code repository and a configuration repository in the Azure DevOps project:
+- **Name of configuration repository**: `Same as the DevOps Project name`. Source is `https://github.com/Azure/sap-automation-bootstrap.git`.
- **Name of code repository**: `sap-automation`. Source is `https://github.com/Azure/sap-automation.git`.
- **Name of sample and template repository**: `sap-samples`. Source is `https://github.com/Azure/sap-automation-samples.git`.
del manifest.json
Save the app registration ID and password values for later use.
-## Create Azure pipelines
+## Create Azure Pipelines
-Azure pipelines are implemented as YAML files. They're stored in the *deploy/pipelines* folder in the repository.
+Azure Pipelines are implemented as YAML files. They're stored in the *deploy/pipelines* folder in the repository.
## Control plane deployment pipeline
Create the control plane deployment pipeline. Under the **Pipelines** section, s
| Setting | Value |
| - | -- |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
-| Path | `deploy/pipelines/01-deploy-control-plane.yml` |
+| Path | `pipelines/01-deploy-control-plane.yml` |
| Name | Control plane deployment |

Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Control plane deployment**.
Create the SAP workload zone pipeline. Under the **Pipelines** section, select *
| Setting | Value |
| - | -- |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
-| Path | `deploy/pipelines/02-sap-workload-zone.yml` |
+| Path | `pipelines/02-sap-workload-zone.yml` |
| Name | SAP workload zone deployment |

Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP workload zone deployment**.
Create the SAP system deployment pipeline. Under the **Pipelines** section, sele
| Setting | Value |
| - | |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
-| Path | `deploy/pipelines/03-sap-system-deployment.yml` |
+| Path | `pipelines/03-sap-system-deployment.yml` |
| Name | SAP system deployment (infrastructure) |

Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP system deployment (infrastructure)**.
Create the SAP software acquisition pipeline. Under the **Pipelines** section, s
| Setting | Value |
| - | |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
| Path | `deploy/pipelines/04-sap-software-download.yml` |
| Name | SAP software acquisition |
Create the SAP configuration and software installation pipeline. Under the **Pip
| Setting | Value |
| - | -- |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
-| Path | `deploy/pipelines/05-DB-and-SAP-installation.yml` |
+| Path | `pipelines/05-DB-and-SAP-installation.yml` |
| Name | Configuration and SAP installation |

Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP configuration and software installation**.
Create the deployment removal pipeline. Under the **Pipelines** section, select
| Setting | Value |
| - | -- |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
-| Path | `deploy/pipelines/10-remover-terraform.yml` |
+| Path | `pipelines/10-remover-terraform.yml` |
| Name | Deployment removal |

Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Deployment removal**.
Create the control plane deployment removal pipeline. Under the **Pipelines** se
| Setting | Value |
| - | -- |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
-| Path | `deploy/pipelines/12-remove-control-plane.yml` |
+| Path | `pipelines/12-remove-control-plane.yml` |
| Name | Control plane removal |

Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Control plane removal**.
Create the deployment removal Azure Resource Manager pipeline. Under the **Pipel
| Setting | Value |
| - | -- |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
-| Path | `deploy/pipelines/11-remover-arm-fallback.yml` |
-| Name | Deployment removal using ARM processor |
+| Path | `pipelines/11-remover-arm-fallback.yml` |
+| Name | Deployment removal using Azure Resource Manager |
Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Deployment removal using ARM processor**.
Create the repository updater pipeline. Under the **Pipelines** section, select
| Setting | Value |
| - | -- |
+| Repo | "Root repo" (same as project name) |
| Branch | main |
-| Path | `deploy/pipelines/20-update-ado-repository.yml` |
+| Path | `pipelines/20-update-ado-repository.yml` |
| Name | Repository updater |

Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Repository updater**. This pipeline should be used when there's an update in the sap-automation repository that you want to use.
-## Import the Ansible task from Visual Studio Marketplace
-
-The pipelines use a custom task to run Ansible. You can install the custom task from [Ansible](https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.vss-services-ansible). Install it to your Azure DevOps organization before you run the **Configuration and SAP installation** or **SAP software acquisition** pipelines.
-
## Import the cleanup task from Visual Studio Marketplace

The pipelines use a custom task to perform cleanup activities post deployment. You can install the custom task from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before you run the pipelines.
Create a new variable group named `SDAF-General` by using the **Library** page i
| Branch | main | |
| S-Username | `<SAP Support user account name>` | |
| S-Password | `<SAP Support user password>` | Change the variable type to secret by selecting the lock icon. |
-| `tf_version` | 1.3.0 | The Terraform version to use. See [Terraform download](https://www.terraform.io/downloads). |
+| `tf_version` | 1.6.0 | The Terraform version to use. See [Terraform download](https://www.terraform.io/downloads). |
Save the variables.
Create a new variable group named `SDAF-MGMT` for the control plane environment
| Variable | Value | Notes |
| - | - | -- |
| Agent | `Azure Pipelines` or the name of the agent pool | This pool is created in a later step. |
-| CP_ARM_CLIENT_ID | `Service principal application ID` | |
-| CP_ARM_OBJECT_ID | `Service principal object ID` | |
-| CP_ARM_CLIENT_SECRET | `Service principal password` | Change the variable type to secret by selecting the lock icon. |
-| CP_ARM_SUBSCRIPTION_ID | `Target subscription ID` | |
-| CP_ARM_TENANT_ID | `Tenant ID` for the service principal | |
+| CP_ARM_CLIENT_ID | `Service principal application ID` | |
+| CP_ARM_OBJECT_ID | `Service principal object ID` | |
+| CP_ARM_CLIENT_SECRET | `Service principal password` | Change the variable type to secret by selecting the lock icon. |
+| CP_ARM_SUBSCRIPTION_ID | `Target subscription ID` | |
+| CP_ARM_TENANT_ID | `Tenant ID` for the service principal | |
| AZURE_CONNECTION_NAME | Previously created connection name | |
| sap_fqdn | SAP fully qualified domain name, for example, `sap.contoso.net` | Only needed if Private DNS isn't used. |
| FENCING_SPN_ID | `Service principal application ID` for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. |
Enter a **Service connection name**, for instance, use `Connection to MGMT subsc
## Permissions
-Most of the pipelines add files to the Azure repos and therefore require pull permissions. On **Project Settings**, under the **Repositories** section, select the **Security** tab of the source code repository and assign Contribute permissions to the `Build Service`.
+Most of the pipelines add files to the Azure Repos and therefore require pull permissions. On **Project Settings**, under the **Repositories** section, select the **Security** tab of the source code repository and assign Contribute permissions to the `Build Service`.
:::image type="content" source="./media/devops/automation-repo-permissions.png" alt-text="Screenshot that shows repository permissions.":::
Selecting the `deploy the web app infrastructure` parameter when you run the con
Wait for the deployment to finish. Select the **Extensions** tab and follow the instructions to finalize the configuration. Update the `reply-url` values for the app registration.
-As a result of running the control plane pipeline, part of the web app URL that's needed is stored in a variable named `WEBAPP_URL_BASE` in your environment-specific variable group. At any time, you can update the URLs of the registered application web app by using the following command.
+As a result of running the control plane pipeline, part of the web app URL that is needed is stored in a variable named `WEBAPP_URL_BASE` in your environment-specific variable group. At any time, you can update the URLs of the registered application web app by using the following command.
# [Linux](#tab/linux)
$webapp_url_base="<WEBAPP_URL_BASE>"
az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback ```
-You also need to grant reader permissions to the app service system-assigned managed identity. Go to the app service resource. On the left side, select **Identity**. On the **System assigned** tab, select **Azure role assignments** > **Add role assignment**. Select **Subscription** as the scope and **Reader** as the role. Then select **Save**. Without this step, the web app dropdown functionality won't work.
+You also need to grant reader permissions to the app service system-assigned managed identity. Go to the app service resource. On the left side, select **Identity**. On the **System assigned** tab, select **Azure role assignments** > **Add role assignment**. Select **Subscription** as the scope and **Reader** as the role. Then select **Save**. Without this step, the web app dropdown functionality will not work.
You should now be able to visit the web app and use it to deploy SAP workload zones and SAP system infrastructure.
sap Provider Ha Pacemaker Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ha-pacemaker-cluster.md
When the provider settings validation operation fails with the code `PrometheusU
1. Restart the HA cluster exporter agent. ```bash
- sstemctl start pmproxy
+ systemctl start pmproxy
```
1. Reenable the HA cluster exporter agent.
search Index Add Custom Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-custom-analyzers.md
- ignite-2023
Previously updated : 07/19/2023
Last updated : 11/28/2023

# Add custom analyzers to string fields in an Azure AI Search index
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search
description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 11/21/2023
Last updated : 11/29/2023
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
api-key: {{admin-api-key}}
"count": true, "select": "HotelName, Tags, Description", "filter": "Tags/any(tag: tag eq 'free wifi')",
- "vectorFilterMode": "PreFilter",
+ "vectorFilterMode": "preFilter",
"vectorQueries": [ { "vector": [ VECTOR OMITTED ],
api-key: {{admin-api-key}}
"kind": "vector", "exhaustive": true },
- ]
+ ]
}
```
search Tutorial Create Custom Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-create-custom-analyzer.md
In some cases, like with a free text field, simply selecting the correct [langua
This tutorial uses Postman and Azure AI Search's [REST APIs](/rest/api/searchservice/) to:

> [!div class="checklist"]
-> * Explain how analyzers work
+> * Show how analyzers work
> * Define a custom analyzer for searching phone numbers
> * Test how the custom analyzer tokenizes text
> * Create separate analyzers for indexing and searching to further improve results
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
In this section, you deploy the data connector agent. After you deploy the agent
:::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment.png" alt-text="Screenshot of the final stage of the agent deployment."::: 1. Under **Just one step before we finish**, select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to **Agent command**.+
+   The script updates the OS components, installs the Azure CLI, Docker, and other required utilities (jq, netcat, curl). You can supply additional parameters to the script to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+
1. In your target VM (the VM where you plan to install the agent), open a terminal and run the command you copied in the previous step. The relevant agent information is deployed into Azure Key Vault, and the new agent is visible in the table under **Add an API based collector agent**.
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging
description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 11/21/2023
Last updated : 11/29/2023
service-bus-messaging Service Bus Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-quickstart-cli.md
ms.devlang: azurecli
# Use the Azure CLI to create a Service Bus namespace and a queue

This quickstart shows you how to create a Service Bus namespace and a queue using the Azure CLI. It also shows you how to get authorization credentials that a client application can use to send/receive messages to/from the queue.

## Prerequisites

If you don't have an Azure subscription, you can create a [free account][free account] before you begin.
service-bus-messaging Service Bus Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-quickstart-portal.md
Title: Use the Azure portal to create a Service Bus queue
description: In this quickstart, you learn how to create a Service Bus namespace and a queue in the namespace by using the Azure portal.
Previously updated : 10/20/2022
Last updated : 11/28/2023
This quickstart shows you how to create a Service Bus namespace and a queue using the [Azure portal]. It also shows you how to get authorization credentials that a client application can use to send/receive messages to/from the queue.

## Prerequisites
service-bus-messaging Service Bus Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-quickstart-powershell.md
# Use Azure PowerShell to create a Service Bus namespace and a queue

This quickstart shows you how to create a Service Bus namespace and a queue using Azure PowerShell. It also shows you how to get authorization credentials that a client application can use to send/receive messages to/from the queue.

## Prerequisites
service-bus-messaging Service Bus Quickstart Topics Subscriptions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md
Title: Use the Azure portal to create Service Bus topics and subscriptions
description: 'Quickstart: In this quickstart, you learn how to create a Service Bus topic and subscriptions to that topic by using the Azure portal.'
Previously updated : 10/28/2022
Last updated : 11/28/2023

#Customer intent: In a retail scenario, how do I update inventory assortment and send a set of messages from the back office to the stores?
In this quickstart, you use the Azure portal to create a Service Bus topic and then create subscriptions to that topic.

## What are Service Bus topics and subscriptions?
-Service Bus topics and subscriptions support a *publish/subscribe* messaging communication model. When using topics and subscriptions, components of a distributed application do not communicate directly with
-each other; instead they exchange messages via a topic, which acts as an intermediary.
+Service Bus topics and subscriptions support a *publish/subscribe* messaging communication model. When you use topics and subscriptions, components of a distributed application don't communicate directly with each other; instead they exchange messages via a topic, which acts as an intermediary.
:::image type="content" source="./media/service-bus-java-how-to-use-topics-subscriptions/sb-topics-01.png" alt-text="Image showing how topics and subscriptions work.":::
-In contrast with Service Bus queues, in which each message is processed by a single consumer, topics and subscriptions provide a one-to-many form of communication, using a publish/subscribe pattern. It is possible to
-register multiple subscriptions to a topic. When a message is sent to a topic, it is then made available to each subscription to handle/process independently. A subscription to a topic resembles a virtual queue that receives copies of the messages that were sent to the topic. You can optionally register filter rules for a topic on a per-subscription basis, which allows you to filter or restrict which messages to a topic are received by which topic subscriptions.
+In contrast with Service Bus queues, in which each message is processed by a single consumer, topics and subscriptions provide a one-to-many form of communication, using a publish/subscribe pattern. It's possible to register multiple subscriptions to a topic. When a message is sent to a topic, it's then made available to each subscription to handle/process independently. A subscription to a topic resembles a virtual queue that receives copies of the messages that were sent to the topic. You can optionally register filter rules for a topic on subscriptions, which allows you to filter or restrict which messages sent to the topic are received by which subscriptions.
Service Bus topics and subscriptions enable you to scale to process a large number of messages across a large number of users and applications.
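To make the model concrete, here's a minimal sketch that assumes the `azure-servicebus` Python package; the connection string, topic name, and subscription name are placeholders.

```python
# Minimal sketch: publish one message to a topic, then receive the copy
# delivered to one of its subscriptions. All names are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<ServiceBusConnectionString>"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Publish once to the topic...
    with client.get_topic_sender(topic_name="orders") as sender:
        sender.send_messages(ServiceBusMessage("inventory update"))

    # ...each subscription independently receives its own copy.
    with client.get_subscription_receiver(
        topic_name="orders", subscription_name="store1"
    ) as receiver:
        for message in receiver.receive_messages(max_wait_time=5):
            print(str(message))
            receiver.complete_message(message)
```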
Service Bus topics and subscriptions enable you to scale to process a large numb
[!INCLUDE [service-bus-create-topics-three-subscriptions-portal](./includes/service-bus-create-topics-three-subscriptions-portal.md)]
-> [!NOTE]
-> You can manage Service Bus resources with [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/). The Service Bus Explorer allows users to connect to a Service Bus namespace and administer messaging entities in an easy manner. The tool provides advanced features like import/export functionality or the ability to test topic, queues, subscriptions, relay services, notification hubs and events hubs.
-
## Next steps

In this article, you created a Service Bus namespace, a topic in the namespace, and three subscriptions to the topic. To learn how to publish messages to the topic and subscribe for messages from a subscription, see one of the following quickstarts in the **Publish and subscribe for messages** section.
service-bus-messaging Service Bus Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-samples.md
description: Azure Service Bus messaging samples or examples that demonstrate ke
Previously updated : 10/19/2022
Last updated : 11/28/2023
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md
Title: Azure Service Bus access control with Shared Access Signatures
description: Overview of Service Bus access control using Shared Access Signatures, with details about SAS authorization in Azure Service Bus.
Previously updated : 11/01/2022
Last updated : 11/28/2023
ms.devlang: csharp

# Service Bus access control with Shared Access Signatures
-This article discusses *Shared Access Signatures* (SAS), how they work, and how to use them in a platform-agnostic way.
+This article discusses *Shared Access Signatures* (SAS), how they work, and how to use them in a platform-agnostic way with Azure Service Bus.
SAS guards access to Service Bus based on authorization rules that are configured either on a namespace, or a messaging entity (queue, or topic). An authorization rule has a name, is associated with specific rights, and carries a pair of cryptographic keys. You use the rule's name and key via the Service Bus SDK or in your own code to generate a SAS token. A client can then pass the token to Service Bus to prove authorization for the requested operation.
SAS guards access to Service Bus based on authorization rules that are configure
## Overview of SAS
-Shared Access Signatures are a claims-based authorization mechanism using simple tokens. Using SAS, keys are never passed on the wire. Keys are used to cryptographically sign information that can later be verified by the service. SAS can be used similar to a username and password scheme where the client is in immediate possession of an authorization rule name and a matching key. SAS can also be used similar to a federated security model, where the client receives a time-limited and signed access token from a security token service without ever coming into possession of the signing key.
+Shared Access Signatures are a claims-based authorization mechanism using simple tokens. When you use SAS, keys are never passed on the wire. Keys are used to cryptographically sign information that can later be verified by the service. SAS can be used similar to a username and password scheme where the client is in immediate possession of an authorization rule name and a matching key. SAS can also be used similar to a federated security model, where the client receives a time-limited and signed access token from a security token service without ever coming into possession of the signing key.
-SAS authentication in Service Bus is configured with named [Shared Access Authorization Policies](#shared-access-authorization-policies) having associated access rights, and a pair of primary and secondary cryptographic keys. The keys are 256-bit values in Base64 representation. You can configure rules at the namespace level, on Service Bus [queues](service-bus-messaging-overview.md#queues) and [topics](service-bus-messaging-overview.md#topics).
+SAS authentication in Service Bus is configured with named [shared access authorization policies](#shared-access-authorization-policies) having associated access rights, and a pair of primary and secondary cryptographic keys. The keys are 256-bit values in Base 64 representation. You can configure rules at the namespace level, on Service Bus [queues](service-bus-messaging-overview.md#queues) and [topics](service-bus-messaging-overview.md#topics).
> [!NOTE]
-> These keys are plain text strings using a Base64 representation, and must not be decoded before they are used.
+> These keys are plain text strings using a Base 64 representation, and must not be decoded before they are used.
The Shared Access Signature token contains the name of the chosen authorization policy, the URI of the resource that shall be accessed, an expiry instant, and an HMAC-SHA256 cryptographic signature computed over these fields using either the primary or the secondary cryptographic key of the chosen authorization rule.
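To illustrate that structure, here's a minimal sketch in Python (not taken from the SDKs) that signs a resource URI with a policy key; the URI, policy name, and key below are placeholders.

```python
# Minimal sketch: build a Service Bus SAS token by signing
# '<url-encoded-resource-uri>\n<expiry>' with HMAC-SHA256.
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri: str, policy_name: str, key: str,
                       ttl_seconds: int = 3600) -> str:
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    expiry = int(time.time()) + ttl_seconds
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote(signature)}"
        f"&se={expiry}&skn={policy_name}"
    )

# Placeholders only; never hard-code real keys.
print(generate_sas_token(
    "https://contoso.servicebus.windows.net/myqueue",
    "RootManageSharedAccessKey",
    "<base64-policy-key>",
))
```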
-## Shared Access Authorization Policies
+## Shared access authorization policies
-Each Service Bus namespace and each Service Bus entity has a Shared Access Authorization policy made up of rules. The policy at the namespace level applies to all entities inside the namespace, irrespective of their individual policy configuration.
+Each Service Bus namespace and each Service Bus entity has a shared access authorization policy made up of rules. The policy at the namespace level applies to all entities in the namespace, irrespective of their individual policy configuration.
For each authorization policy rule, you decide on three pieces of information: **name**, **scope**, and **rights**. The **name** is just that: a unique name within that scope. The scope is easy enough: it's the URI of the resource in question. For a Service Bus namespace, the scope is the fully qualified namespace, such as `https://<yournamespace>.servicebus.windows.net/`. The rights conferred by the policy rule can be a combination of:
-* 'Send' - Confers the right to send messages to the entity
-* 'Listen' - Confers the right to receive (queue, subscriptions) and all related message handling
-* 'Manage' - Confers the right to manage the topology of the namespace, including creating and deleting entities
+* Send - Grants the right to send messages to the entity
+* Listen - Grants the right to receive (queue, subscriptions) and all related message handling
+* Manage - Grants the right to manage the topology of the namespace, including creating and deleting entities
-The 'Manage' right includes the 'Send' and 'Listen' rights.
+The **Manage** right includes the Send and Listen rights.
-A namespace or entity policy can hold up to 12 Shared Access Authorization rules, providing room for three sets of rules, each covering the basic rights and the combination of Send and Listen. This limit is per entity, meaning the namespace and each entity can have up to 12 Shared Access Authorization rules. This limit underlines that the SAS policy store isn't intended to be a user or service account store. If your application needs to grant access to Service Bus based on user or service identities, it should implement a security token service that issues SAS tokens after an authentication and access check.
+A namespace or entity policy can hold up to 12 Shared Access Authorization rules, providing room for three sets of rules, each covering the basic rights, and the combination of Send and Listen. This limit is per entity, meaning the namespace and each entity can have up to 12 Shared Access Authorization rules. This limit underlines that the SAS policy store isn't intended to be a user or service account store. If your application needs to grant access to Service Bus based on user or service identities, it should implement a security token service that issues SAS tokens after an authentication and access check.
-An authorization rule is assigned a *Primary Key* and a *Secondary Key*. These keys are cryptographically strong keys. Don't lose them or leak them - they'll always be available in the [Azure portal]. You can use either of the generated keys, and you can regenerate them at any time. If you regenerate or change a key in the policy, all previously issued tokens based on that key become instantly invalid. However, ongoing connections created based on such tokens will continue to work until the token expires.
+An authorization rule is assigned a *Primary Key* and a *Secondary Key*. These keys are cryptographically strong keys. Don't lose them or leak them - they'll always be available in the [Azure portal]. You can use either of the generated keys, and you can regenerate them at any time. If you regenerate or change a key in the policy, all previously issued tokens based on that key become instantly invalid. However, ongoing connections created based on such tokens continue to work until the token expires.
-When you create a Service Bus namespace, a policy rule named **RootManageSharedAccessKey** is automatically created for the namespace. This policy has Manage permissions for the entire namespace. It's recommended that you treat this rule like an administrative **root** account and don't use it in your application. You can create more policy rules in the **Configure** tab for the namespace in the portal, via PowerShell or Azure CLI.
+When you create a Service Bus namespace, a policy rule named **RootManageSharedAccessKey** is automatically created for the namespace. This policy has Manage permissions for the entire namespace. It's recommended that you treat this rule like an administrative **root** account and don't use it in your application. You can create more policy rules in the **Shared access policies** tab for the namespace in the portal, via PowerShell or Azure CLI.
-It is recommended that you periodically regenerate the keys used in the [SharedAccessAuthorizationRule](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule) object. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
+We recommend that you periodically regenerate the keys used in the [SharedAccessAuthorizationRule](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule) object. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
If you know or suspect that a key is compromised and you have to revoke the keys, you can regenerate both the [PrimaryKey](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule.primarykey) and the [SecondaryKey](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule.secondarykey) of a [SharedAccessAuthorizationRule](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule), replacing them with new keys. This procedure invalidates all tokens signed with the old keys.
If you know or suspect that a key is compromised and you have to revoke the keys
When you use shared access signatures in your applications, you need to be aware of two potential risks:

- If a SAS is leaked, it can be used by anyone who obtains it, which can potentially compromise your Service Bus resources.
-- If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from your service, then application's functionality may be hindered.
+- If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from your service, then the application's functionality might be hindered.
The following recommendations for using shared access signatures can help mitigate these risks:

-- **Have clients automatically renew the SAS if necessary**: Clients should renew the SAS well before expiration, to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a few immediate, short-lived operations that are expected to be completed within the expiration period, then it may be unnecessary as the SAS isn't expected to be renewed. However, if you have client that is routinely making requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that client is requesting renewal early enough (to avoid disruption due to the SAS expiring prior to a successful renewal).
-- **Be careful with the SAS start time**: If you set the start time for SAS to **now**, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which will make it valid immediately in all cases. The same generally applies to the expiry time as well. Remember that you may observe up to 15 minutes of clock skew in either direction on any request.
+- **Have clients automatically renew the SAS if necessary**: Clients should renew the SAS well before expiration, to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a few immediate, short-lived operations that are expected to be completed within the expiration period, then it might be unnecessary as the SAS isn't expected to be renewed. However, if you have a client that is routinely making requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that the client is requesting renewal early enough (to avoid disruption due to the SAS expiring prior to a successful renewal).
+- **Be careful with the SAS start time**: If you set the start time for SAS to **now**, then due to clock skew (differences in current time according to different machines), you might see failures intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which will make it valid immediately in all cases. The same generally applies to the expiry time as well. Remember that you might observe up to 15 minutes of clock skew in either direction on any request.
- **Be specific with the resource to be accessed**: A security best practice is to provide the user with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. It also helps lessen the damage if a SAS is compromised because the SAS has less power in the hands of an attacker.
- **Don't always use SAS**: Sometimes the risks associated with a particular operation against your Service Bus outweigh the benefits of SAS. For such operations, create a middle-tier service that writes to your Service Bus after business rule validation, authentication, and auditing.
- **Always use HTTPS**: Always use HTTPS to create or distribute a SAS. If a SAS is passed over HTTP and intercepted, an attacker performing a man-in-the-middle attack is able to read the SAS and then use it just as the intended user could have, potentially compromising sensitive data or allowing for data corruption by the malicious user.
The following recommendations for using shared access signatures can help mitiga
You can configure the Shared Access Authorization Policy on Service Bus namespaces, queues, or topics. Configuring it on a Service Bus subscription is currently not supported, but you can use rules configured on a namespace or topic to secure access to subscriptions.
-![SAS](./media/service-bus-sas/service-bus-namespace.png)
In this figure, the *manageRuleNS*, *sendRuleNS*, and *listenRuleNS* authorization rules apply to both queue Q1 and topic T1, while *listenRuleQ* and *sendRuleQ* apply only to queue Q1 and *sendRuleT* applies only to topic T1.
A SAS token is valid for all resources prefixed with the `<resourceURI>` used in
## Regenerating keys
-It's recommended that you periodically regenerate the keys used in the Shared Access Authorization Policy. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
+We recommend that you periodically regenerate the keys used in the Shared Access Authorization Policy. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
If you know or suspect that a key is compromised and you have to revoke the keys, you can regenerate both the primary key and the secondary key of a Shared Access Authorization Policy, replacing them with new keys. This procedure invalidates all tokens signed with the old keys.

To regenerate primary and secondary keys in the **Azure portal**, follow these steps:

1. Navigate to the Service Bus namespace in the [Azure portal](https://portal.azure.com).
-2. Select **Shared Access Policies** on the left menu.
-3. Select the policy from the list. In the following example, **RootManageSharedAccessKey** is selected.
-4. On the **SAS Policy: RootManageSharedAccessKey** page, select **...** from the command bar, and then select **Regenerate Primary Keys** or **Regenerate Secondary Keys**.
+1. Select **Shared Access Policies** on the left menu.
+1. Select the policy from the list. In the following example, **RootManageSharedAccessKey** is selected.
+1. To regenerate the primary key, on the **SAS Policy: RootManageSharedAccessKey** page, select **Regenerate primary key** on the command bar.
+
+ :::image type="content" source="./media/service-bus-sas/regenerate-primary-key.png" alt-text="Screenshot that shows how to regenerate a primary key.":::
+1. To regenerate the secondary key, on the **SAS Policy: RootManageSharedAccessKey** page, select **...** from the command bar, and then select **Regenerate secondary key**.
:::image type="content" source="./media/service-bus-sas/regenerate-keys.png" alt-text="Screenshot of SAS Policy page with Regenerate options selected.":::
-If you are using **Azure PowerShell**, use the [`New-AzServiceBusKey`](/powershell/module/az.servicebus/new-azservicebuskey) cmdlet to regenerate primary and secondary keys for a Service Bus namespace. You can also specify values for primary and secondary keys that are being generated, by using the `-KeyValue` parameter.
+If you're using **Azure PowerShell**, use the [`New-AzServiceBusKey`](/powershell/module/az.servicebus/new-azservicebuskey) cmdlet to regenerate primary and secondary keys for a Service Bus namespace. You can also specify values for primary and secondary keys that are being generated, by using the `-KeyValue` parameter.
-If you are using **Azure CLI**, use the [`az servicebus namespace authorization-rule keys renew`](/cli/azure/servicebus/namespace/authorization-rule/keys#az-servicebus-namespace-authorization-rule-keys-renew) command to regenerate primary and secondary keys for a Service Bus namespace. You can also specify values for primary and secondary keys that are being generated, by using the `--key-value` parameter.
+If you're using **Azure CLI**, use the [`az servicebus namespace authorization-rule keys renew`](/cli/azure/servicebus/namespace/authorization-rule/keys#az-servicebus-namespace-authorization-rule-keys-renew) command to regenerate primary and secondary keys for a Service Bus namespace. You can also specify values for primary and secondary keys that are being generated, by using the `--key-value` parameter.
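If you rotate keys from code instead of the portal, the Python management SDK exposes an equivalent operation. The following is a hedged sketch that assumes the `azure-mgmt-servicebus` and `azure-identity` packages and uses hypothetical resource names; check the model and method names against the SDK version you install.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.servicebus import ServiceBusManagementClient
from azure.mgmt.servicebus.models import RegenerateAccessKeyParameters

# Hypothetical subscription ID and resource names
client = ServiceBusManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Regenerate the primary key of an authorization rule; use "SecondaryKey" for the other slot
keys = client.namespaces.regenerate_keys(
    resource_group_name="contoso-rg",
    namespace_name="contoso-sb",
    authorization_rule_name="RootManageSharedAccessKey",
    parameters=RegenerateAccessKeyParameters(key_type="PrimaryKey"),
)
print(keys.primary_key)  # distribute the new key to client applications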
## Shared Access Signature authentication with Service Bus
Use the get/update operation on queues or topics in one of the [management libraries
## Use Shared Access Signature authorization
-Applications using any of the Service Bus SDK in any of the officially supported languages like .NET, Java, JavaScript and Python can make use of SAS authorization through the connection strings passed to the client constructor.
+Applications using the Service Bus SDK in any of the officially supported languages, such as .NET, Java, JavaScript, and Python, can use SAS authorization through the connection strings passed to the client constructor.
Connection strings can include a rule name (*SharedAccessKeyName*) and rule key (*SharedAccessKey*) or a previously issued token (*SharedAccessSignature*). When those are present in the connection string passed to any constructor or factory method accepting a connection string, the SAS token provider is automatically created and populated.
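For example, with the Python SDK (`azure-servicebus`), passing a connection string that carries *SharedAccessKeyName* and *SharedAccessKey* is sufficient. The namespace, rule, key, and queue name below are placeholders.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholder connection string; a SharedAccessSignature=<token> value works here as well
conn_str = (
    "Endpoint=sb://contoso.servicebus.windows.net/;"
    "SharedAccessKeyName=sendRuleQ;SharedAccessKey=<key>"
)

with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_sender("q1") as sender:
        sender.send_messages(ServiceBusMessage("Hello, Service Bus!"))
```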
In the previous section, you saw how to use the SAS token with an HTTP POST requ
Before starting to send data to Service Bus, the publisher must send the SAS token inside an AMQP message to a well-defined AMQP node named **$cbs** (you can think of it as a "special" queue used by the service to acquire and validate all the SAS tokens). The publisher must specify the **ReplyTo** field inside the AMQP message; it's the node in which the service replies to the publisher with the result of the token validation (a simple request/reply pattern between publisher and service). This reply node is created "on the fly," using the "dynamic creation of remote node" mechanism described by the AMQP 1.0 specification. After checking that the SAS token is valid, the publisher can start to send data to the service.
-The following steps show how to send the SAS token with AMQP protocol using the [AMQP.NET Lite](https://github.com/Azure/amqpnetlite) library. It's useful if you can't use the official Service Bus SDK (for example, on WinRT, .NET Compact Framework, .NET Micro Framework and Mono) developing in C#. This library is useful to help understand how claims-based security works at the AMQP level, as you saw how it works at the HTTP level (with an HTTP POST request and the SAS token sent inside the "Authorization" header). If you don't need such deep knowledge about AMQP, you can use the official Service Bus SDK in any of the supported languages like .NET, Java, JavaScript, Python and Go, which will do it for you.
+The following steps show how to send the SAS token with AMQP protocol using the [AMQP.NET Lite](https://github.com/Azure/amqpnetlite) library. It's useful if you can't use the official Service Bus SDK (for example, on WinRT, .NET Compact Framework, .NET Micro Framework, and Mono) developing in C#. This library is useful to help understand how claims-based security works at the AMQP level, as you saw how it works at the HTTP level (with an HTTP POST request and the SAS token sent inside the "Authorization" header). If you don't need such deep knowledge about AMQP, you can use the official Service Bus SDK in any of the supported languages like .NET, Java, JavaScript, Python, and Go, which will do it for you.
### C&#35;
private bool PutCbsToken(Connection connection, string sasToken)
} }
- // the sender/receiver may be kept open for refreshing tokens
+ // the sender/receiver might be kept open for refreshing tokens
cbsSender.Close(); cbsReceiver.Close(); session.Close();
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 11/21/2023 Last updated : 11/29/2023 # Azure Policy built-in definitions for Azure Service Fabric
site-recovery Vmware Azure Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-failback.md
This article describes how to fail back Azure VMs to an on-premises site, follow
1. Make sure that Azure VMs are reprotected and replicating to the on-premises site. - A VM needs at least one recovery point in order to fail back. - If you fail back a recovery plan, then all machines in the plan should have at least one recovery point.
-2. In the vault > **Replicated items**, select the VM. Right-click the VM > **Unplanned Failover**.
+2. In the vault > **Replicated items**, select the VM. Right-click the VM > **Failover**.
3. In **Confirm Failover**, verify the failover direction (from Azure). 4. Select the recovery point that you want to use for the failover. - We recommend that you use the **Latest** recovery point. The app-consistent point is behind the latest point in time, and causes some data loss.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
See information about [upgrading the mobility services](upgrade-mobility-service
- Ensure that all server configurations meet the criteria in the [Support matrix for disaster recovery of VMware VMs and physical servers to Azure](vmware-physical-azure-support-matrix.md). - [Locate the installer](#locate-installer-files) for the server's operating system.
+- Copy the installer corresponding to the source machine's operating system and place it on your source machine in a local folder, such as C:\Program Files (x86)\Microsoft Azure Site Recovery.
>[!IMPORTANT] > Don't use the UI installation method if you're replicating an Azure Infrastructure as a Service (IaaS) VM from one Azure region to another. Use the [command prompt](#install-the-mobility-service-using-command-prompt-classic) installation.
spring-apps Concept Manage Monitor App Spring Boot Actuator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-manage-monitor-app-spring-boot-actuator.md
This article assumes that you have a Spring Boot 2.x application that can be suc
>[!TIP] > * If the app returns a front-end page and references other files through relative path, confirm that your test endpoint ends with a slash (/). This will ensure that the CSS file is loaded correctly.
-> * If you view your API from a brower and your browser requires you to enter login credentials to view the page, use [URL decode](https://www.urldecoder.org/) to decode your test endpoint. URL decode returns a URL in the form "https://\<username>:\<password>@\<cluster-name>.test.azureapps.io/\<app-name>/\<deployment-name>". Use this form to access your endpoint.
+> * If you view your API from a browser and your browser requires you to enter login credentials to view the page, use [URL decode](https://www.urldecoder.org/) to decode your test endpoint. URL decode returns a URL in the form "https://\<username>:\<password>@\<cluster-name>.test.azuremicroservices.io/\<app-name>/\<deployment-name>". Use this form to access your endpoint.
## Add actuator dependency
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
Tanzu Build Service allows at most one pool-sized build task to build and twice
When you create a new Azure Spring Apps Enterprise service instance using the Azure portal, you can use the **VMware Tanzu settings** tab to configure the number of resources given to the build agent pool.
-The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size here after you've created the service instance.
+The following image shows the resources given to the Tanzu Build Service Agent Pool after you successfully provision the service instance. You can also update the configured agent pool size here after you create the service instance.
## Build service on demand
Use the following steps to enable Tanzu Build Service when provisioning an Azure
1. Select **Next: VMware Tanzu settings**. 1. On the **VMware Tanzu settings** tab, select **Enable Build Service**. For **Container registry**, the default setting is **Use a managed Azure Container Registry to store built images**.
- :::image type="content" source="media/how-to-enterprise-build-service/enable-build-service-with-default-acr.png" alt-text="Screenshot of the Azure portal showing V M ware Tanzu Settings for the Azure Spring Apps Create page with default Build Service settings highlighted." lightbox="media/how-to-enterprise-build-service/enable-build-service-with-default-acr.png":::
+ :::image type="content" source="media/how-to-enterprise-build-service/enable-build-service-with-default-acr.png" alt-text="Screenshot of the Azure portal that shows the V M ware Tanzu Settings for the Azure Spring Apps Create page with default Build Service settings highlighted." lightbox="media/how-to-enterprise-build-service/enable-build-service-with-default-acr.png":::
1. If you select **Use your own container registry to store built images (preview)** for **Container registry**, provide your container registry's server, username, and password.
- :::image type="content" source="media/how-to-enterprise-build-service/enable-build-service-with-user-acr.png" alt-text="Screenshot of the Azure portal showing V M ware Tanzu Settings for the Azure Spring Apps Create page with use your own container registry highlighted." lightbox="media/how-to-enterprise-build-service/enable-build-service-with-user-acr.png":::
+ :::image type="content" source="media/how-to-enterprise-build-service/enable-build-service-with-user-acr.png" alt-text="Screenshot of the Azure portal that shows V M ware Tanzu Settings for the Azure Spring Apps Create page with use your own container registry highlighted." lightbox="media/how-to-enterprise-build-service/enable-build-service-with-user-acr.png":::
1. If you disable **Enable Build Service**, the container registry options aren't provided but you can deploy applications with container images.
- :::image type="content" source="media/how-to-enterprise-build-service/disable-build-service.png" alt-text="Screenshot of the Azure portal showing V M ware Tanzu Settings for the Azure Spring Apps Create page with the Enable Build Service not selected." lightbox="media/how-to-enterprise-build-service/disable-build-service.png":::
+ :::image type="content" source="media/how-to-enterprise-build-service/disable-build-service.png" alt-text="Screenshot of the Azure portal that shows V M ware Tanzu Settings for the Azure Spring Apps Create page with the Enable Build Service not selected." lightbox="media/how-to-enterprise-build-service/disable-build-service.png":::
1. Select **Review and create**.
Use the following steps to enable Tanzu Build Service when provisioning an Azure
az provider register --namespace Microsoft.SaaS ```
-1. Use the following command to accept the legal terms and privacy statements for the Azure Spring Apps Enterprise plan. This step is necessary only if your subscription has never been used to create an Enterprise plan instance.
+1. Use the following command to accept the legal terms and privacy statements for the Azure Spring Apps Enterprise plan. This step is necessary only if you've never used your subscription to create an Enterprise plan instance.
```azurecli az term accept \
By using Tanzu Partner Buildpacks and CA Certificates Buildpack, the Azure Sprin
A build task is triggered when an application is deployed from an Azure CLI command. Build logs are streamed in real time as part of the CLI command output. For information about using build logs to diagnose problems, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
+## Build history
+
+You can see all the build resources in the **Builds** section of the Azure Spring Apps Build Service page.
++
+The table in the **Builds** section contains the following columns:
+
+- **Build Name**: The name of the build.
+- **Provisioning State**: The provisioning state of the build. The values are `Succeeded`, `Failed`, `Updating`, and `Creating`. Provisioning states `Updating` and `Creating` mean the build can't be updated until the current build finishes. Provisioning state `Failed` means your latest source code build has failed to generate a new build result.
+- **Resource Quota**: The resource quota of the build pod for the build.
+- **Builder**: The builder used in the build.
+- **Latest Build Result**: The latest build result image tag of the build.
+- **Latest Build Result Provisioning State**: The latest build result provisioning state of the build. The values are `Queuing`, `Building`, `Succeeded`, and `Failed`.
+- **Latest Build Result Last Transition Time**: The last transition time for the latest build result of the build.
+- **Latest Build Result Last Transition Reason**: The last transition reason for the latest build result of the build. The values are `CONFIG`, `STACK`, and `BUILDPACK`. `CONFIG` means the build result is changed by builder updates or by a new source code deploy operation. `STACK` means the build result is changed by a stack upgrade. `BUILDPACK` means the build result is changed by a buildpack upgrade.
+- **Latest Build Result Last Transition Status**: The last transition status for the latest build result of the build. The values are `True` and `False`.
+
+For **Provisioning State**, when the value is `Failed`, deploy the source code again. If the error persists, create a support ticket.
+
+For **Latest Build Result Provisioning State**, when the value is `Failed`, check the build logs. For more information, see [Troubleshoot common build issues in Azure Spring Apps](troubleshoot-build-exit-code.md).
+
+For **Latest Build Result Last Transition Status**, when the value is `Failed`, see the **Latest Build Result Last Transition Reason** column. If the reason is `BUILDPACK` or `STACK`, no action is necessary. If the reason is `CONFIG`, deploy the source code again. If the error persists, create a support ticket.
+ ## Next steps - [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-integration-and-ca-certificates.md)
spring-apps How To Enterprise Deploy Polyglot Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md
Your application must listen on port 8080. Spring Boot applications override the
The following table indicates the features supported for each language.
-| Feature | Java | Python | Node | .NET Core | Go | [Static Files](how-to-enterprise-deploy-static-file.md) | Java Native Image |
-|--||--||--|-||-|
-| App lifecycle management | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Assign endpoint | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Azure Monitor | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | |
-| Out of box APM integration | ✔️ | | | | | | |
-| Blue/green deployment | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Custom domain | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Scaling - auto scaling | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | |
-| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Managed Identity | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ️ |
-| API portal for VMware Tanzu | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Spring Cloud Gateway for VMware Tanzu | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Application Configuration Service for VMware Tanzu | ✔️ | | | | | | ✔️ |
-| VMware Tanzu Service Registry | ✔️ | | | | | | ✔️ |
-| App Live View for VMware Tanzu | ✔️ | | | | | | ✔️ |
-| Virtual network | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Outgoing IP Address | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| E2E TLS | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | | | | | | |
-| Bring your own storage | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Integrate service binding with Resource Connector | ✔️ | | | | | | ✔️ |
-| Availability Zone | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| App Lifecycle events | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Automate app deployments with Terraform and Azure Pipeline Task | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Soft Deletion | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| SLA | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Customize health probes | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Web shell connect for troubleshooting | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ️ ✔️ |
-| Remote debugging | ✔️ | | | | ️ | ️ | ️ |
+| Feature | Java | Python | Node | .NET Core | Go | [Static Files](how-to-enterprise-deploy-static-file.md) | Java Native Image | PHP |
+|---|---|---|---|---|---|---|---|---|
+| App lifecycle management | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Assign endpoint | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Azure Monitor | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | ✔️ |
+| Out of box APM integration | ✔️ | | | | | | | |
+| Blue/green deployment | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Custom domain | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Scaling - auto scaling | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | ✔️ |
+| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Managed Identity | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | ✔️ |
+| API portal for VMware Tanzu | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Spring Cloud Gateway for VMware Tanzu | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Application Configuration Service for VMware Tanzu | ✔️ | | | | | | ✔️ | |
+| VMware Tanzu Service Registry | ✔️ | | | | | | ✔️ | |
+| App Live View for VMware Tanzu | ✔️ | | | | | | ✔️ | |
+| Virtual network | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Outgoing IP Address | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| E2E TLS | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | | | | | | | |
+| Bring your own storage | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Integrate service binding with Resource Connector | ✔️ | | | | | | ✔️ | |
+| Availability Zone | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| App Lifecycle events | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Automate app deployments with Terraform and Azure Pipeline Task | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Soft Deletion | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| SLA | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Customize health probes | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Web shell connect for troubleshooting | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Remote debugging | ✔️ | | | | | | | |
For more information about the supported configurations for different language apps, see the corresponding section later in this article.
The following table lists the features supported in Azure Spring Apps:
| | Indicates whether to autoconfigure Spring Boot environment properties from bindings at runtime. This feature requires Spring Cloud Bindings to have already been installed at build time or it does nothing. The default value is *false*. | `BPL_SPRING_CLOUD_BINDINGS_DISABLED` | `--env BPL_SPRING_CLOUD_BINDINGS_DISABLED=false` | | Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` | | Support building Gradle-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_GRADLE_BUILT_MODULE` | `--build-env BP_GRADLE_BUILT_MODULE=./gateway` |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> see more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
| Integrate JProfiler agent. | Indicates whether to integrate JProfiler support. The default value is *false*. | `BP_JPROFILER_ENABLED` | build phase: <br>`--build-env BP_JPROFILER_ENABLED=true` <br> runtime phase: <br> `--env BPL_JPROFILER_ENABLED=true` <br> `BPL_JPROFILER_PORT=<port>` (optional, defaults to *8849*) <br> `BPL_JPROFILER_NOWAIT=true` (optional. Indicates whether the JVM executes before JProfiler gets attached. The default value is *true*.) | | | Indicates whether to enable JProfiler support at runtime. The default value is *false*. | `BPL_JPROFILER_ENABLED` | `--env BPL_JPROFILER_ENABLED=false` | | | Indicates which port the JProfiler agent listens on. The default value is *8849*. | `BPL_JPROFILER_PORT` | `--env BPL_JPROFILER_PORT=8849` |
spring-apps How To Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-staging-environment.md
Use the following steps to view deployed apps.
:::image type="content" source="media/how-to-staging-environment/running-staging-app.png" lightbox="media/how-to-staging-environment/running-staging-app.png" alt-text="Screenshot that shows the URL of the staging app."::: >[!TIP]
-> Confirm that your test endpoint ends with a slash (/) to ensure that the CSS file is loaded correctly. If your browser requires you to enter login credentials to view the page, use [URL decode](https://www.urldecoder.org/) to decode your test endpoint. URL decode returns a URL in the format `https://\<username>:\<password>@\<cluster-name>.test.azureapps.io/demo/green`. Use this format to access your endpoint.
+> Confirm that your test endpoint ends with a slash (/) to ensure that the CSS file is loaded correctly. If your browser requires you to enter login credentials to view the page, use [URL decode](https://www.urldecoder.org/) to decode your test endpoint. URL decode returns a URL in the format `https://\<username>:\<password>@\<cluster-name>.test.azuremicroservices.io/demo/green`. Use this format to access your endpoint.
>[!NOTE]
-> Configuration server settings apply to both your staging environment and your production environment. For example, if you set the context path (*server.servlet.context-path*) for your app demo in the configuration server as *somepath*, the path to your green deployment changes to `https://\<username>:\<password>@\<cluster-name>.test.azureapps.io/demo/green/somepath/...`.
+> Configuration server settings apply to both your staging environment and your production environment. For example, if you set the context path (*server.servlet.context-path*) for your app demo in the configuration server as *somepath*, the path to your green deployment changes to `https://\<username>:\<password>@\<cluster-name>.test.azuremicroservices.io/demo/green/somepath/...`.
If you visit your public-facing app demo at this point, you should see the old page without your new change.
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
spring-apps Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot.md
But if you try to set up the Azure Spring Apps service instance by using the [Az
If you want to set up the Azure Spring Apps service instance by using the Resource Manager template, first refer to [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
-The name of the Azure Spring Apps service instance is used for requesting a subdomain name under `azureapps.io`, so the setup fails if the name conflicts with an existing one. You might find more details in the activity logs.
+The name of the Azure Spring Apps service instance is used for requesting a subdomain name under `azuremicroservices.io`, so the setup fails if the name conflicts with an existing one. You might find more details in the activity logs.
### I can't deploy a .NET Core app
storage Storage Blob Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md
Previously updated : 08/02/2023 Last updated : 11/29/2023 ms.devlang: python
In addition to the data they contain, blobs support system properties and user-defined metadata. This article shows how to manage system properties and user-defined metadata using the [Azure Storage client library for Python](/python/api/overview/azure/storage).
+To learn about managing properties and metadata using asynchronous APIs, see [Set blob metadata asynchronously](#set-blob-metadata-asynchronously).
+ ## Prerequisites - This article assumes you already have a project set up to work with the Azure Blob Storage client library for Python. To learn about setting up your project, including package installation, adding `import` statements, and creating an authorized client object, see [Get started with Azure Blob Storage and Python](storage-blob-python-get-started.md).
Any properties not explicitly set are cleared. To preserve any existing properti
The following code example sets the `content_type` and `content_language` system properties on a blob, while preserving the existing properties:
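Because the referenced sample isn't shown inline here, the following sketch illustrates the pattern, assuming a blob named `sample-blob.txt`: read the current `content_settings` first so any value you don't set isn't cleared.

```python
from azure.storage.blob import ContentSettings

def set_properties(blob_service_client, container_name):
    blob_client = blob_service_client.get_blob_client(container=container_name, blob="sample-blob.txt")

    # Fetch the existing settings so properties we don't change aren't cleared
    existing = blob_client.get_blob_properties().content_settings

    blob_client.set_http_headers(content_settings=ContentSettings(
        content_type="text/plain",
        content_language="en-US",
        content_encoding=existing.content_encoding,
        content_disposition=existing.content_disposition,
        cache_control=existing.cache_control,
        content_md5=existing.content_md5,
    ))
```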
To retrieve properties on a blob, use the following method:
The following code example gets a blob's system properties and displays some of the values: ## Set and retrieve metadata
You can specify metadata as one or more name-value pairs on a blob or container
The following code example sets metadata on a blob: To retrieve metadata, call the [get_blob_properties](/python/api/azure-storage-blob/azure.storage.blob.blobclient#azure-storage-blob-blobclient-get-blob-properties) method on your blob to populate the metadata collection, then read the values, as shown in the example below. The `get_blob_properties` method retrieves blob properties and metadata by calling both the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation and the [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) operation. The following code example reads metadata on a blob and prints each key/value pair: +
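As a sketch of that read pattern (the blob name is a placeholder):

```python
def read_blob_metadata(blob_service_client, container_name):
    blob_client = blob_service_client.get_blob_client(container=container_name, blob="sample-blob.txt")

    # get_blob_properties populates both system properties and user-defined metadata
    properties = blob_client.get_blob_properties()
    for key, value in properties.metadata.items():
        print(f"{key}: {value}")
```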
+## Set blob metadata asynchronously
+
+The Azure Blob Storage client library for Python supports managing blob properties and metadata asynchronously. To learn more about project setup requirements, see [Asynchronous programming](storage-blob-python-get-started.md#asynchronous-programming).
+
+Follow these steps to set blob metadata using asynchronous APIs:
+
+1. Add the following import statements:
+
+ ```python
+ import asyncio
+
+ from azure.identity.aio import DefaultAzureCredential
+ from azure.storage.blob.aio import BlobServiceClient
+ ```
+
+1. Add code to run the program using `asyncio.run`. This function runs the passed coroutine, `main()` in our example, and manages the `asyncio` event loop. Coroutines are declared with the async/await syntax. In this example, the `main()` coroutine first creates the top level `BlobServiceClient` using `async with`, then calls the method that sets the blob metadata. Note that only the top level client needs to use `async with`, as other clients created from it share the same connection pool.
+
+ ```python
+ async def main():
+ sample = BlobSamples()
+
+ # TODO: Replace <storage-account-name> with your actual storage account name
+ account_url = "https://<storage-account-name>.blob.core.windows.net"
+ credential = DefaultAzureCredential()
+
+ async with BlobServiceClient(account_url, credential=credential) as blob_service_client:
+ await sample.set_metadata(blob_service_client, "sample-container")
+
+ if __name__ == '__main__':
+ asyncio.run(main())
+ ```
+
+1. Add code to set the blob metadata. The code is the same as the synchronous example, except that the method is declared with the `async` keyword and the `await` keyword is used when calling the `get_blob_properties` and `set_blob_metadata` methods.
+
+ :::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-blobs-properties-metadata-tags-async.py" id="Snippet_set_blob_metadata":::
+
+With this basic setup in place, you can implement other examples in this article as coroutines using async/await syntax.
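For reference, the `set_metadata` coroutine called in the `main()` example might look like the following sketch, assuming the container holds a blob named `sample-blob.txt`:

```python
class BlobSamples:
    async def set_metadata(self, blob_service_client, container_name):
        blob_client = blob_service_client.get_blob_client(container=container_name, blob="sample-blob.txt")

        # Merge new pairs into the existing metadata so no pairs are lost
        properties = await blob_client.get_blob_properties()
        metadata = dict(properties.metadata)
        metadata.update({"docType": "text", "docCategory": "reference"})

        await blob_client.set_blob_metadata(metadata=metadata)
```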
## Resources
The Azure SDK for Python contains libraries that build on top of the Azure REST
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py)
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-blobs-properties-metadata-tags.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-blobs-properties-metadata-tags-async.py) code samples from this article (GitHub)
[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
Previously updated : 08/02/2023 Last updated : 11/29/2023 ms.devlang: python
This article shows how to use blob index tags to manage and find data using the [Azure Storage client library for Python](/python/api/overview/azure/storage).
+To learn about setting blob index tags using asynchronous APIs, see [Set blob index tags asynchronously](#set-blob-index-tags-asynchronously).
+ ## Prerequisites - This article assumes you already have a project set up to work with the Azure Blob Storage client library for Python. To learn about setting up your project, including package installation, adding `import` statements, and creating an authorized client object, see [Get started with Azure Blob Storage and Python](storage-blob-python-get-started.md).
You can set tags by using the following method:
The specified tags in this method will replace existing tags. If old values must be preserved, they must be downloaded and included in the call to this method. The following example shows how to set tags: You can delete all tags by passing an empty `dict` object into the `set_blob_tags` method: ## Get tags
You can get tags by using the following method:
The following example shows how to retrieve and iterate over the blob's tags: ## Filter and find data with blob index tags
You can find data by using the following method:
The following example finds and lists all blobs tagged as an image: +
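As a sketch of that query (in the tag filter syntax, tag names are wrapped in double quotes and values in single quotes; the `Content` tag is a placeholder):

```python
def find_blobs_by_image_tag(blob_service_client):
    # Find blobs across the storage account whose index tag Content equals image
    results = blob_service_client.find_blobs_by_tags("\"Content\"='image'")
    for blob in results:
        print(f"{blob.container_name}/{blob.name}")
```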
+## Set blob index tags asynchronously
+
+The Azure Blob Storage client library for Python supports working with blob index tags asynchronously. To learn more about project setup requirements, see [Asynchronous programming](storage-blob-python-get-started.md#asynchronous-programming).
+
+Follow these steps to set blob index tags using asynchronous APIs:
+
+1. Add the following import statements:
+
+ ```python
+ import asyncio
+
+ from azure.identity.aio import DefaultAzureCredential
+ from azure.storage.blob.aio import BlobServiceClient
+ ```
+
+1. Add code to run the program using `asyncio.run`. This function runs the passed coroutine, `main()` in our example, and manages the `asyncio` event loop. Coroutines are declared with the async/await syntax. In this example, the `main()` coroutine first creates the top level `BlobServiceClient` using `async with`, then calls the method that sets the blob index tags. Note that only the top level client needs to use `async with`, as other clients created from it share the same connection pool.
+
+ ```python
+ async def main():
+ sample = BlobSamples()
+
+ # TODO: Replace <storage-account-name> with your actual storage account name
+ account_url = "https://<storage-account-name>.blob.core.windows.net"
+ credential = DefaultAzureCredential()
+
+ async with BlobServiceClient(account_url, credential=credential) as blob_service_client:
+ await sample.set_blob_tags(blob_service_client, "sample-container")
+
+ if __name__ == '__main__':
+ asyncio.run(main())
+ ```
+
+1. Add code to set the blob index tags. The code is the same as the synchronous example, except that the method is declared with the `async` keyword and the `await` keyword is used when calling the `get_blob_tags` and `set_blob_tags` methods.
+
+ :::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-blobs-properties-metadata-tags-async.py" id="Snippet_set_blob_tags":::
+
+With this basic setup in place, you can implement other examples in this article as coroutines using async/await syntax.
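For reference, the `set_blob_tags` coroutine called in the `main()` example might look like the following sketch (the blob name and tag values are placeholders):

```python
class BlobSamples:
    async def set_blob_tags(self, blob_service_client, container_name):
        blob_client = blob_service_client.get_blob_client(container=container_name, blob="sample-blob.txt")

        # These tags replace any existing index tags on the blob
        tags = {"Sealed": "false", "Content": "image", "Date": "2023-11-29"}
        await blob_client.set_blob_tags(tags=tags)
```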
## Resources
The Azure SDK for Python contains libraries that build on top of the Azure REST
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py)
+- View [synchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-blobs-properties-metadata-tags.py) or [asynchronous](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-blobs-properties-metadata-tags-async.py) code samples from this article (GitHub)
[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
description: An overview of Azure Elastic SAN Preview networking options, includ
Previously updated : 11/06/2023 Last updated : 11/29/2023
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
description: Learn how to configure access to an Azure Elastic SAN Preview.
Previously updated : 11/06/2023 Last updated : 11/29/2023
There are no extra registration steps required.
You enable public Internet access to your Elastic SAN endpoints at the SAN level. Enabling public network access for an Elastic SAN allows you to configure public access to individual volume groups over storage service endpoints. By default, public access to individual volume groups is denied even if you allow it at the SAN level. You must explicitly configure your volume groups to permit access from specific IP address ranges and virtual network subnets.
-You can enable public network access when you create an elastic SAN, or enable it for an existing SAN using the Azure portal, PowerShell, or the Azure CLI.
+You can enable public network access when you create an elastic SAN, or enable it for an existing SAN using the Azure PowerShell module or the Azure CLI.
# [Portal](#tab/azure-portal)
-To enable public network access when you create a new Elastic SAN, proceed through the deployment. On the **Networking** tab, select **Enable from virtual networks** as shown in this image:
--
-To enable it for an existing Elastic SAN, navigate to **Networking** under **Settings** for the Elastic SAN then select **Enable public access from selected virtual networks** as shown in this image:
-
+Use the Azure PowerShell module or the Azure CLI to enable public network access.
# [PowerShell](#tab/azure-powershell)
$NewEsanArguments = @{
ExtendedCapacitySizeTiB = $ExtendedSize Location = $Location SkuName = $SkuName
- PublicNetworkAccess = Enabled
+ PublicNetworkAccess = "Enabled"
} # Create the Elastic San. New-AzElasticSan @NewEsanArguments
storage Authorize Oauth Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/authorize-oauth-rest.md
namespace FilesOAuthSample
string aadEndpoint = ""; string accountUri = ""; string connectionString = "";
- string shareName = "testShare";
+ string shareName = "test-share";
string directoryName = "testDirectory"; string fileName = "testFile";
namespace FilesOAuthSample
AuthorityHost = new Uri(aadEndpoint) });
- ShareClientOptions clientOptions = new ShareClientOptions(ShareClientOptions.ServiceVersion.V2021_12_02);
+ ShareClientOptions clientOptions = new ShareClientOptions(ShareClientOptions.ServiceVersion.V2023_05_03);
// Set Allow Trailing Dot and Source Allow Trailing Dot. clientOptions.AllowTrailingDot = true;
- clientOptions.SourceAllowTrailingDot = true;
+ clientOptions.AllowSourceTrailingDot = true;
// x-ms-file-intent=backup will automatically be applied to all APIs // where it is required in derived clients.
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
update-manager Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/scheduled-patching.md
We recommend the following limits for the indicators.
| Total number of resource associations to a schedule | 3,000 | | Resource associations on each dynamic scope | 1,000 | | Number of dynamic scopes per resource group or subscription per region | 250 |
+| Number of dynamic scopes per schedule | 30 |
+| Total number of subscriptions attached to all dynamic scopes per schedule | 30 |
For more information, see the [service limits for Dynamic scope](dynamic-scope-overview.md#service-limits).
virtual-machine-scale-sets Alert Rules Automatic Repairs Service State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/alert-rules-automatic-repairs-service-state.md
+
+ Title: Use Azure Alert Rules to monitor changes in Automatic Instance Repairs ServiceState
+description: Learn how to use Azure Alert Rules to get notified of changes to Automatic Instance Repairs ServiceState.
++++ Last updated : 11/14/2023++
+# Use Azure Alert Rules to monitor changes in Automatic Instance Repairs ServiceState
+
+This article shows you how to use [Alert Rules from Azure Monitor](../azure-monitor/alerts/alerts-overview.md) to receive custom notifications every time the ServiceState for Automatic Repairs is updated on your scale set. These notifications help you track whether automatic repairs become _Suspended_ because VM instances remain unhealthy after multiple repair operations. To learn more about Azure Monitor alerts, see the [alerts overview](../azure-monitor/alerts/alerts-overview.md).
+
+To follow this tutorial, ensure that you have a virtual machine scale set with [Automatic Repairs](./virtual-machine-scale-sets-automatic-instance-repairs.md) enabled.
+
+## Azure portal
+1. In the [portal](https://portal.azure.com/), navigate to your VM scale set resource
+2. Select **Alerts** from the left pane, and then select **+ Create > Alert rule**. :::image type="content" source="media/alert-rules-automatic-repairs-service-state/picture-1.png" alt-text="Create monitoring alert in the Azure portal":::
+3. Under the **Condition** tab, select **See all signals** and choose the signal named "Sets the state of an orchestration service in a Virtual Machine Scale set". Select **Apply**. :::image type="content" source="media/alert-rules-automatic-repairs-service-state/picture-2.png" alt-text="Select alert signal to monitor scale set orchestration service state":::
+4. Set **Event Level** to "Informational" and **Status** to "Succeeded". :::image type="content" source="media/alert-rules-automatic-repairs-service-state/picture-3.png" alt-text="Configure event level and status for alert rule":::
+5. Under the **Actions** tab, select an existing action group or see [Create action group](#creating-an-action-group)
+6. Under the **Details** tab > **Alert rule name**, set a name for your alert. Then select **Review + create** > **Create** to create your alert.
+
+Once the alert is created and enabled on your scale set, you'll receive a notification every time a change to the ServiceState is detected on your scale set.
+
+### Sample email notification from alert rule
+Below is an example of an email notification created from a configured alert rule.
+
+## Creating an action group
+1. Under the **Actions** tab, select **Create action group**.
+2. In the **Basics** tab, provide an **Action group name** and **Display name**.
+3. Under the **Notifications** tab **> Notification type**, select "Email/SMS message/Push/Voice". Select the **edit** button to configure how you'd like to be notified.
+4. Select **Review + Create > Create**
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 11/21/2023 Last updated : 11/29/2023 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
Automatic repairs currently do not support scenarios where a VM instance is mark
Automatic instance repair feature relies on health monitoring of individual instances in a scale set. VM instances in a scale set can be configured to emit application health status using either the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). If an instance is found to be unhealthy, the scale set will perform a preconfigured repair action on the unhealthy instance. Automatic instance repairs can be enabled in the Virtual Machine Scale Set model by using the `automaticRepairsPolicy` object.
+The automatic instance repairs process goes as follows:
+
+1. [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) ping the application endpoint inside each virtual machine in the scale set to get application health status for each instance.
+2. If the endpoint responds with a status 200 (OK), then the instance is marked as "Healthy". In all the other cases (including if the endpoint is unreachable), the instance is marked "Unhealthy".
+3. When an instance is found to be unhealthy, the scale set applies the configured repair action (default is *Replace*) to the unhealthy instance.
+4. Instance repairs are performed in batches. At any given time, no more than 5% of the total instances in the scale set are repaired. If a scale set has fewer than 20 instances, the repairs are done for one unhealthy instance at a time.
+5. The above process continues until all unhealthy instances in the scale set are repaired.
+ ### Available repair actions > [!CAUTION]
Virtual Machine Scale Sets provide the capability to temporarily suspend automat
If newly created instances for replacing the unhealthy ones in a scale set continue to remain unhealthy even after repeatedly performing repair operations, then as a safety measure the platform updates the *serviceState* for automatic repairs to *Suspended*. You can resume the automatic repairs again by setting the value of *serviceState* for automatic repairs to *Running*. Detailed instructions are provided in the section on [viewing and updating the service state of automatic repairs policy](#viewing-and-updating-the-service-state-of-automatic-instance-repairs-policy) for your scale set.
-The automatic instance repairs process works as follows:
-
-1. [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) ping the application endpoint inside each virtual machine in the scale set to get application health status for each instance.
-2. If the endpoint responds with a status 200 (OK), then the instance is marked as "Healthy". In all the other cases (including if the endpoint is unreachable), the instance is marked "Unhealthy".
-3. When an instance is found to be unhealthy, the scale set applies the configured repair action (default is *Replace*) to the unhealthy instance.
-4. Instance repairs are performed in batches. At any given time, no more than 5% of the total instances in the scale set are repaired. If a scale set has fewer than 20 instances, the repairs are done for one unhealthy instance at a time.
-5. The above process continues until all unhealthy instance in the scale set are repaired.
+You can also set up Azure Alert Rules to monitor *serviceState* changes and get notified if automatic repairs becomes suspended on your scale set. For details, see [Use Azure alert rules to monitor changes in automatic instance repairs service state](./alert-rules-automatic-repairs-service-state.md).
## Instance protection and automatic repairs
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
From this example accumulation of Minutes Not Available, here's the calculation
- At VM deployment, Fault Domain (FD) count of up to 3 may be set as desired using Virtual Machine Scale Sets. A deployment with more than 3 FDs will fail to deploy against a Capacity Reservation. - Support for the following VM series for Capacity Reservation is in public preview: - Lsv2
+ - NC-series, v3 and newer
+ - NV-series, v2 and newer
- At VM deployment, Fault Domain (FD) count of 1 can be set using Virtual Machine Scale Sets. A deployment with more than 1 FD will fail to deploy against a Capacity Reservation. - Support for other VM Series isn't currently available: - M series, any version
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
description: This article helps you troubleshoot common problems and errors you
Previously updated : 11/10/2023 Last updated : 11/27/2023
VM Image Builder failures can happen in two areas:
- During image template submission - During image building
+> [!NOTE]
+> CIS-hardened images (Linux or Windows) on Azure Marketplace, which are managed by CIS, can cause build failures with the Azure Image Builder service because of their configurations. For example:
+> - CIS-hardened Windows images might disrupt WinRM connectivity, a prerequisite for an AIB build.
+> - CIS-hardened Linux images can fail because of `chmod +x` permission issues.
+ ## Troubleshoot image template submission errors Image template submission errors are returned at submission only. There isn't an error log for image template submission errors. If there's an error during submission, you can return the error by checking the status of the template, specifically by reviewing `ProvisioningState` and `ProvisioningErrorMessage`/`provisioningError`.
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
The following are the recommended limits for the mentioned indicators
| Total number of Resource associations to a schedule | 3000 | | Resource associations on each dynamic scope | 1000 | | Number of dynamic scopes per Resource Group or Subscription per Region | 250 |
-| Number of dynamic scopes per Maintenance Configuration | 50 |
+| Number of dynamic scopes per schedule | 30 |
+| Total number of subscriptions attached to all dynamic scopes per schedule | 30 |
The following are the Dynamic Scope recommended limits for **each dynamic scope**
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
virtual-machines Reserved Vm Instance Size Flexibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/reserved-vm-instance-size-flexibility.md
You buy a reserved VM instance with the size Standard_DS4_v2 where the ratio or
- Scenario 3: Run one Standard_DS5_v2 with a ratio of 16. Your reservation discount applies to half that VM's compute cost. - Scenario 4: Run one Standard_DS5_v2 with a ratio of 16 and purchase an additional Standard_DS4_v2 reservation with a ratio of 8. Both reservations combine and apply the discount to entire VM.
-The following sections show what sizes are in the same size series group when you buy a reserved VM instance optimized for instance size flexibility.
The following sections show what sizes are in the same size series group when you buy a reserved VM instance optimized for instance size flexibility. If SKUs have the same ratio and belong to the same size series group, purchasing a reservation for any one of them provides the same discount for the same number of running VMs, at no additional cost.
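The ratio arithmetic can be sanity-checked with a toy calculation; this is only an illustration of the rule above, not an Azure API.

```python
# One Standard_DS4_v2 reservation (ratio 8) applied to one running
# Standard_DS5_v2 (ratio 16), matching scenario 3 above
reserved_units = 1 * 8   # reservation quantity x reservation ratio
vm_ratio = 16

covered_fraction = min(reserved_units, vm_ratio) / vm_ratio
print(f"{covered_fraction:.0%} of the VM's compute cost is discounted")  # prints 50%
```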
## Instance size flexibility ratio for VMs
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
Extended Update Support (EUS) repositories are available to customers who might
> [!NOTE] > EUS is not supported on RHEL Extras. This means that if you install a package that is usually available from the RHEL Extras channel, you can't install while on EUS. For more information, see [Red Hat Enterprise Linux Extras Product Life Cycle](https://access.redhat.com/support/policy/updates/extras/).
-Currently, EUS support has ended for RHEL <= 7.7. For more information, see [Red Hat Enterprise Linux Extended Maintenance](https://access.redhat.com/support/policy/updates/errata/#Long_Support).
+Support for RHEL 7 EUS ended on August 30, 2021. For more information, see [Red Hat Enterprise Linux Extended Maintenance](https://access.redhat.com/support/policy/updates/errata/#Long_Support).
-- RHEL 7.4 EUS support ends August 31, 2019-- RHEL 7.5 EUS support ends April 30, 2020-- RHEL 7.6 EUS support ends May 31, 2021-- RHEL 7.7 EUS support ends August 30, 2021-- RHEL 8.4 EUS support ends May 31, 2023
+- RHEL 7.4 EUS support ended August 31, 2019
+- RHEL 7.5 EUS support ended April 30, 2020
+- RHEL 7.6 EUS support ended May 31, 2021
+- RHEL 7.7 EUS support ended August 30, 2021
+- RHEL 8.4 EUS support ended May 31, 2023
- RHEL 8.6 EUS support ends May 31, 2024 - RHEL 9.0 EUS support ends May 31, 2024
-### Switch a RHEL VM 8.x to EUS
+
+### Switch a RHEL server to EUS repositories
+
+#### [Switching to EUS repositories on RHEL7](#tab/rhel7)
+
+>[!NOTE]
+>Support for RHEL 7 EUS ended on August 30, 2021. Switching to EUS repositories on RHEL 7 is no longer recommended.
+
+#### [Switching to EUS repositories on RHEL8](#tab/rhel8)
Use the following procedure to lock a RHEL 8.x VM to a particular minor release. Run the commands as `root`: >[!NOTE]
-> This procedure only applies for RHEL 8.x versions for which EUS is available. Currently, this includes RHEL 8.1, 8.2, 8.4, 8.6, and 8.8. For more information, see [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata).
+> This procedure only applies for RHEL 8.x versions for which EUS is available. This includes RHEL 8.1, 8.2, 8.4, 8.6, and 8.8. For more information, see [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata).
1. Disable non-EUS repositories. ```bash
- sudo yum --disablerepo='*' remove 'rhui-azure-rhel8'
+ sudo dnf --disablerepo='*' remove 'rhui-azure-rhel8'
```
-1. Get the EUS repository `config` file.
+1. Add EUS repositories.
```bash
- curl -O https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config
+ sudo dnf --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config' install rhui-azure-rhel8-eus
+ ```
+
+
+1. Lock the `releasever` level. It must be one of 8.1, 8.2, 8.4, 8.6, or 8.8.
++
+ ```bash
+ sudo sh -c 'echo 8.8 > /etc/dnf/vars/releasever'
+ ```
+
+ If you have permission issues accessing the `releasever` file, you can edit it using a text editor, add the image version details, and save the file.
+
+ > [!NOTE]
+ > This instruction locks the RHEL minor release to the current minor release. Enter a specific minor release if you want to upgrade and lock to a later minor release that isn't the latest. For example, `echo 8.1 > /etc/dnf/vars/releasever` locks your RHEL version to RHEL 8.1.
+
+1. Update your RHEL VM.
+
+ ```bash
+ sudo dnf update
+ ```
+
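+Optionally, verify that the lock took effect. This check isn't part of the original procedure; it's a minimal sketch that assumes the lock file path used above.
+
+```bash
+# Show the pinned minor release; it should print the version you set (for example, 8.8).
+cat /etc/dnf/vars/releasever
+
+# Confirm that the enabled repositories are the EUS variants.
+dnf repolist | grep -i eus
+```
+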
+#### [Switching to EUS repositories on RHEL9](#tab/rhel9)
+
+Use the following procedure to lock a RHEL 9.x VM to a particular minor release. Run the commands as `root`:
+
+>[!NOTE]
+> This procedure only applies for RHEL 9.x versions for which EUS is available. Currently, this includes RHEL 9.0 and 9.2. For more information, see [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata).
+
+1. Disable non-EUS repositories.
+
+ ```bash
+ sudo dnf --disablerepo='*' remove 'rhui-azure-rhel9'
   ```

1. Add EUS repositories.

   ```bash
- sudo yum --config=rhui-microsoft-azure-rhel8-eus.config install rhui-azure-rhel8-eus
+ sudo dnf --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel9-eus.config' install rhui-azure-rhel9-eus
```
+
+
+1. Lock the `releasever` level. Currently, it must be either 9.0 or 9.2.
-1. Lock the `releasever` variable. Be sure to run the command as `root`.
```bash
- sudo sh -c 'echo $(. /etc/os-release && echo $VERSION_ID) > /etc/yum/vars/releasever'
+ sudo sh -c 'echo 9.2 > /etc/dnf/vars/releasever'
   ```

   If you have permission issues accessing the `releasever` file, edit it with a text editor, add the image version details, and save the file.

   > [!NOTE]
- > This instruction locks the RHEL minor release to the current minor release. Enter a specific minor release if you are looking to upgrade and lock to a later minor release that is not the latest. For example, `echo 8.1 > /etc/yum/vars/releasever` locks your RHEL version to RHEL 8.1.
+ > This instruction locks the RHEL minor release to the current minor release. Enter a specific minor release if you want to upgrade and lock to a later minor release that isn't the latest. For example, `echo 9.2 > /etc/dnf/vars/releasever` locks your RHEL version to RHEL 9.2.
1. Update your RHEL VM.

   ```bash
- sudo yum update
+ sudo dnf update
   ```
+
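+To confirm that the VM reports the minor release you locked to (an optional check, not part of the original steps):
+
+```bash
+# The minor version in the output should match your releasever lock.
+cat /etc/redhat-release
+```
+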
+### Switch a RHEL server to non-EUS repositories
-### Switch a RHEL 8.x VM back to non-EUS
+#### [Switching to non-EUS repositories on RHEL7](#tab/rhel7)
To remove the version lock, use the following commands. Run the commands as `root`.
   ```bash
   sudo rm /etc/yum/vars/releasever
- ```
+ ```
1. Disable EUS repositories.

   ```bash
- sudo yum --disablerepo='*' remove 'rhui-azure-rhel8-eus'
+ sudo yum --disablerepo='*' remove 'rhui-azure-rhel7-eus'
+ ```
+
+1. Add non-EUS repository.
+
+ ```bash
+ sudo yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install rhui-azure-rhel7
+ ```
+
+1. Update your RHEL VM.
+
+ ```bash
+ sudo yum update
```
-1. Get the regular repositories `config` file.
+#### [Switching to non-EUS repositories on RHEL8](#tab/rhel8)
- ```bash
- curl -O https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8.config
- ```
+To remove the version lock, use the following commands. Run the commands as `root`.
+
+1. Remove the `releasever` file.
+
+ ```bash
+ sudo rm /etc/dnf/vars/releasever
+ ```
+
+1. Disable EUS repositories.
+
+ ```bash
+ sudo dnf --disablerepo='*' remove 'rhui-azure-rhel8-eus'
+ ```
1. Add non-EUS repository.

   ```bash
- sudo yum --config=rhui-microsoft-azure-rhel8.config install rhui-azure-rhel8
+ sudo dnf --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8.config' install rhui-azure-rhel8
   ```

1. Update your RHEL VM.

   ```bash
- sudo yum update
+ sudo dnf update
```
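+As an optional sanity check (not part of the original procedure), confirm that the EUS repositories and the version lock are gone:
+
+```bash
+# No EUS repositories should remain; the fallback message prints when none are found.
+dnf repolist | grep -i eus || echo "No EUS repositories enabled."
+
+# The releasever lock file should no longer exist.
+test -f /etc/dnf/vars/releasever || echo "releasever lock removed."
+```
+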
-### Switch a RHEL 7.x VM back to non-EUS (remove a version lock)
-Run the following commands as root:
-1. Remove the `releasever` file:
- ```bash
- rm /etc/yum/vars/releasever
- ```
-1. Disable EUS repos:
- ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel7-eus'
+#### [Switching to non-EUS repositories on RHEL9](#tab/rhel9)
+
+To remove the version lock, use the following commands. Run the commands as `root`.
+
+1. Remove the `releasever` file.
+
+ ```bash
+ sudo rm /etc/dnf/vars/releasever
+ ```
+
+1. Disable EUS repositories.
+
+ ```bash
+ sudo dnf --disablerepo='*' remove 'rhui-azure-rhel9-eus'
+ ```
+
+1. Add non-EUS repository.
+
+ ```bash
+ sudo dnf --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel9.config' install rhui-azure-rhel9
+ ```
+
+1. Update your RHEL VM.
+
+ ```bash
+ sudo dnf update
```
-1. Configure RHEL VM
- ```bash
- yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install 'rhui-azure-rhel7'
- ```
-1. Update your RHEL VM
- ```bash
- sudo yum update
- ```
+
## The IPs for the RHUI content delivery servers

RHUI is available in all regions where RHEL on-demand images are available. Availability currently includes all public regions listed in the [Azure status dashboard](https://azure.microsoft.com/status/), Azure US Government, and Microsoft Azure Germany regions.
southeastasia - 20.24.186.80
> - As of October 12, 2023, all pay-as-you-go (PAYG) clients are directed to the Red Hat Update Infrastructure (RHUI) 4 IPs in phases over the next two months. During this time, the RHUI3 IPs remain available for continued updates but will be removed at a future time. Existing routes and rules allowing access to RHUI3 IPs must be updated to also include the RHUI4 IP addresses for uninterrupted access to packages and updates. Don't remove the RHUI3 IPs during the transition period, so that you continue to receive updates.
>
> - Also, the new Azure US Government images, as of January 2020, use the public IPs mentioned previously under the Azure Global header.
- >
-> Also, Azure Germany is deprecated in favor of public Germany regions. We recommend for Azure Germany customers to start pointing to public RHUI by using the steps in [Manual update procedure to use the Azure RHUI servers](#manual-update-procedure-to-use-the-azure-rhui-servers).
+> - Also, Azure Germany is deprecated in favor of public Germany regions. We recommend that Azure Germany customers start pointing to the public RHUI by using the steps in [Manual update procedure to use the Azure RHUI servers](#manual-update-procedure-to-use-the-azure-rhui-servers).
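+If you need to confirm that a VM can reach the RHUI content delivery servers through your routes and firewall rules, a quick reachability probe is one option. This is a sketch, not part of the original article; it assumes the `rhui-[1-4].microsoft.com` hostnames referenced in the repository file, so check your own `/etc/yum.repos.d/rh-cloud.repo` for the exact `baseurl`.
+
+```bash
+# Probe each RHUI endpoint over HTTPS. Any HTTP status code (even 403 or 404)
+# proves network reachability; 000 indicates a blocked or failed connection.
+for i in 1 2 3 4; do
+    curl -s -o /dev/null -w "rhui-$i.microsoft.com: HTTP %{http_code}\n" \
+        --connect-timeout 5 "https://rhui-$i.microsoft.com"
+done
+```
+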
+
## Azure RHUI Infrastructure

### Update expired RHUI client certificate on a VM
If you experience problems connecting to Azure RHUI from your Azure RHEL PAYG VM
1. Inspect the VM configuration for the Azure RHUI endpoint:
- - Check whether the `/etc/yum.repos.d/rh-cloud.repo` file contains a reference to `rhui-[1-3].microsoft.com` in the `baseurl` of the `[rhui-microsoft-azure-rhel*]` section of the file. If it does, you're using the new Azure RHUI.
+ - Check whether the `/etc/yum.repos.d/rh-cloud.repo` file contains a reference to `rhui-[1-4].microsoft.com` in the `baseurl` of the `[rhui-microsoft-azure-rhel*]` section of the file. If it does, you're using the new Azure RHUI.
- If the reference points to a location with the following pattern, `mirrorlist.*cds[1-4].cloudapp.net`, a configuration update is required. You're using the old VM snapshot, and you need to update it to point to the new Azure RHUI.
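A quick way to run both checks from the shell (a sketch; adjust the repo file name if your image differs):

```bash
# New Azure RHUI: baseurl points at rhui-[1-4].microsoft.com.
grep -E 'baseurl.*rhui-[1-4]\.microsoft\.com' /etc/yum.repos.d/rh-cloud.repo

# Old VM snapshot that needs updating: baseurl points at the legacy mirror list.
grep -E 'mirrorlist.*cds[1-4]\.cloudapp\.net' /etc/yum.repos.d/rh-cloud.repo
```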
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
In this quickstart, you deploy three virtual networks and use Azure Virtual Netw
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- To modify dynamic network groups, you must be [granted access via Azure RBAC role](concept-network-groups.md#network-groups-and-azure-policy) assignment only. Classic Admin/legacy authorization is not supported.
-## Create a Virtual Network Manager instance
-
-Deploy a Virtual Network Manager instance with the defined scope and access that you need:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Select **+ Create a resource** and search for **Network Manager**. Then select **Network Manager** > **Create** to begin setting up Virtual Network Manager.
-
-1. On the **Basics** tab, enter or select the following information, and then select **Review + create**.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-manager-basics-thumbnail.png" alt-text="Screenshot of basic information for creating a network manager." lightbox="./media/create-virtual-network-manager-portal/network-manager-basics-thumbnail.png":::
-
- | Setting | Value |
- | - | -- |
- | **Subscription** | Select the subscription where you want to deploy Virtual Network Manager. |
- | **Resource group** | Select **Create new** and enter **rg-learn-eastus-001**.
- | **Name** | Enter **vnm-learn-eastus-001**. |
- | **Region** | Enter **eastus** or a region of your choosing. Virtual Network Manager can manage virtual networks in any region. The selected region is where the Virtual Network Manager instance will be deployed. |
- | **Description** | *(Optional)* Provide a description about this Virtual Network Manager instance and the task it's managing. |
- | [Scope](concept-network-manager-scope.md#scope) | Choose **Select scopes** and then select your subscription.</br> Select **Add to selected scope** > **Select**. </br> Scope information defines the resources that Virtual Network Manager can manage. You can choose subscriptions and management groups.
- | [Features](concept-network-manager-scope.md#features) | Select **Connectivity** and **Security Admin** from the dropdown list. </br> **Connectivity** enables the creation of a full mesh or hub-and-spoke network topology between virtual networks within the scope. </br> **Security Admin** enables the creation of global network security rules. |
-
-1. Select **Create** after your configuration passes validation.
## Create virtual networks
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
TX packets 9103233 bytes 2183731687 (2.1 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
-The synthetic interface always has a name in the form `eth\<n\>`. Depending on the Linux distribution, the VF interface might have a name in the form `eth\<n\>`. Or it might have a name in a different form because of a udev rule that does renaming.
+The synthetic interface always has a name in the form `eth\<n\>`. Depending on the Linux distribution, the VF interface might have a name in the form `eth\<n\>`. Or it might have a name in the form `enP\<n\>` because of a udev rule that renames it.
You can determine whether a particular interface is the synthetic interface or the VF interface by using the shell command line that shows the device driver that the interface uses:
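For example, `ethtool -i` reports the driver bound to an interface. This is a sketch, not necessarily the article's own command, and the interface names are examples: the synthetic interface uses the `hv_netvsc` driver, while the VF typically reports a Mellanox driver such as `mlx4_core` or `mlx5_core`.

```bash
# Synthetic interface: expect "driver: hv_netvsc".
ethtool -i eth0 | grep '^driver'

# VF interface (name is an example): expect a Mellanox driver such as mlx5_core.
ethtool -i enP1s1 | grep '^driver'
```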
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/21/2023 Last updated : 11/29/2023
virtual-wan Scenario Secured Hub App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-secured-hub-app-gateway.md
Currently, routes that are advertised from the Virtual WAN route table to spoke
* To ensure the application gateway is able to send traffic directly to the Internet, specify the following UDR:
- * **Address Prefix:** 0.0.0.0.0/0
+ * **Address Prefix:** 0.0.0.0/0
  * **Next Hop:** Internet
* To ensure the application gateway is able to send traffic to the backend pool via Azure Firewall in the Virtual WAN hub, specify the following UDR:
vpn-gateway Gateway Sku Resize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-resize.md
description: Learn how to resize a gateway SKU.
Previously updated : 10/25/2023 Last updated : 11/29/2023 # Resize a gateway SKU
-This article helps you resize a VPN Gateway virtual network gateway SKU. Resizing a gateway SKU is a relatively fast process. You don't need to delete and recreate your existing VPN gateway to resize. However, there are certain limitations and restrictions for resizing and not all SKUs are available when resizing.
+This article helps you resize a VPN Gateway virtual network gateway SKU. Resizing a gateway SKU is a relatively fast process. You don't need to delete and recreate your existing VPN gateway to resize. However, there are certain limitations and restrictions for resizing and not all SKUs are available to resize.
[!INCLUDE [changing vs. resizing](../../includes/vpn-gateway-sku-about-change-resize.md)]

When using the portal to resize your SKU, notice that the dropdown list of available SKUs is based on the SKU you currently have. If you don't see the SKU you want to resize to, you have to change to a new SKU instead of resizing. For more information, see [About VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md).
-> [!NOTE]
-> The steps in this article apply to current Resource Manager deployments and not to legacy classic (service management) deployments.
-
## Considerations

There are a number of things to consider when moving to a new gateway SKU. This section outlines the main items and also provides a table that helps you select the best method to use.
The following table helps you understand the required method to move from one SK
## Resize a SKU
-Resizing a SKU takes about 45 minutes to complete.
+The following steps apply to current Resource Manager deployments and not to legacy classic (service management) deployments. Resizing a SKU takes about 45 minutes to complete.
1. Go to the **Configuration** page for your virtual network gateway.
1. On the right side of the page, click the dropdown arrow to show a list of available SKUs. The options listed are based on the starting SKU and SKU Generation.
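If you prefer scripting to the portal, the Azure CLI offers an equivalent update. This is a sketch with placeholder resource names; the same SKU-family restrictions apply, and only SKUs valid for your gateway's generation succeed.

```bash
# Resize an existing VPN gateway to a different SKU within the same family.
az network vnet-gateway update \
    --resource-group MyResourceGroup \
    --name MyVpnGateway \
    --sku VpnGw2
```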
vpn-gateway Vpn Gateway About Skus Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-skus-legacy.md
Title: Legacy Azure virtual network VPN gateway SKUs
+ Title: VPN Gateway legacy SKUs
description: How to work with the old virtual network gateway SKUs; Basic, Standard, and High Performance. Previously updated : 11/28/2023 Last updated : 11/29/2023
-# Working with virtual network gateway SKUs (legacy SKUs)
+# Working with VPN Gateway legacy SKUs
This article contains information about the legacy (old) virtual network gateway SKUs. The legacy SKUs still work in both deployment models for VPN gateways that have already been created. Classic VPN gateways continue to use the legacy SKUs, both for existing gateways and for new gateways. When you create new Resource Manager VPN gateways, use the new gateway SKUs. For information about the new SKUs, see [About VPN Gateway](vpn-gateway-about-vpngateways.md).
-## <a name="gwsku"></a>Gateway SKUs
+## <a name="gwsku"></a>Legacy gateway SKUs
[!INCLUDE [Legacy gateway SKUs](../../includes/vpn-gateway-gwsku-legacy-include.md)]
-You can view legacy gateway pricing in the **Virtual Network Gateways** section, which is located in on the [ExpressRoute pricing page](https://azure.microsoft.com/pricing/details/expressroute).
+You can view legacy gateway pricing in the **Virtual Network Gateways** section, which is located on the [ExpressRoute pricing page](https://azure.microsoft.com/pricing/details/expressroute).
## SKU deprecation

The Standard and High Performance SKUs will be deprecated on September 30, 2025. The product team will make a migration path available for these SKUs by November 30, 2024. **At this time, there's no action that you need to take.**
+When the migration path becomes available, you can migrate your legacy SKUs to the following SKUs:
+
+* **Standard SKU:** -> **VpnGw1**
+* **High Performance SKU:** -> **VpnGw2**
+ There are no [price](https://azure.microsoft.com/pricing/details/vpn-gateway/) changes if you migrate to Standard (VpnGw1) and High Performance (VpnGw2) gateways. As a benefit, there's a performance improvement after migrating:
-* **Standard** 6.5x
-* **High Performance** 5x
+* **Standard SKU:** 6.5x
+* **High Performance SKU:** 5x
-If you don't migrate your gateway by September 30, 2025, your gateway will be automatically upgraded to AZ gateways: VpnGw1AZ (Standard) or VpnGw2AZ (High Performance).
+If you don't migrate your gateway SKUs by September 30, 2025, your gateway will be automatically migrated and upgraded to an AZ gateway SKU:
+
+* **Standard SKU:** -> **VpnGw1AZ**
+* **High Performance SKU:** -> **VpnGw2AZ**
Important Dates:
Important Dates:
[!INCLUDE [Table requirements for old SKUs](../../includes/vpn-gateway-table-requirements-legacy-sku-include.md)]
-## <a name="resize"></a>Resize a gateway
+## Resize, migrate, and change SKUs
+
+### <a name="resize"></a>Resize a gateway SKU
-Except for the Basic SKU, you can resize your gateway to a gateway SKU within the same SKU family. For example, if you have a Standard SKU, you can resize to a High Performance SKU. However, you can't resize your VPN gateway between the old SKUs and the new SKU families. For example, you can't go from a Standard SKU to a VpnGw2 SKU, or a Basic SKU to VpnGw1.
+Resizing a gateway SKU incurs less downtime and fewer configuration changes than the process to change to a new SKU. However, there are limitations. You can only resize your gateway to a gateway SKU within the same SKU family (except for the Basic SKU).
-### Resource Manager
+For example, if you have a Standard SKU, you can resize to a High Performance SKU. However, you can't resize your VPN gateway between the old SKUs and the new SKU families. You can't go from a Standard SKU to a VpnGw2 SKU, or from a Basic SKU to VpnGw1 by resizing. For more information, see [Resize a gateway SKU](gateway-sku-resize.md).
+
+**Resource Manager**
You can resize a gateway for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) using the Azure portal or PowerShell. For PowerShell, use the following command:
$gw = Get-AzVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName testrg
Resize-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewaySku HighPerformance
```
-### <a name="classicresize"></a>Classic
+**Classic**
To resize a gateway for the [classic deployment model](../azure-resource-manager/management/deployment-models.md), you must use the Service Management PowerShell cmdlets. Use the following command:
Resize-AzureVirtualNetworkGateway -GatewayId <Gateway ID> -GatewaySKU HighPerformance
```
-## <a name="change"></a>Change to the new gateway SKUs
+### <a name="migrate"></a>Migrate a gateway SKU
+
+A gateway SKU migration is similar to a resize. It requires fewer steps and configuration changes than changing to a new gateway SKU. At this time, gateway SKU migration isn't available. You can migrate a deprecated legacy gateway SKU from December 2024 through September 30, 2025. We'll make a migration path available along with detailed documentation.
+
+### <a name="change"></a>Change to the new gateway SKUs
-> [!NOTE]
-> Standard and High Performance SKUs will be deprecated September 30, 2025. While you can choose to change to the new gateway SKUs at any point, there is no requirement to do so at this time. The product team will make a migration path available for these SKUs by November 30, 2024. See [Legacy SKU deprecation](#sku-deprecation) for more information.
+Standard and High Performance SKUs will be deprecated on September 30, 2025. The product team will make a migration path available for legacy SKUs. See the [Legacy SKU deprecation](#sku-deprecation) section for more information. You can choose to change from a legacy SKU to one of the new SKUs at any point. However, changing to a new SKU requires more steps than migrating and incurs more downtime.
[!INCLUDE [Change to the new SKUs](../../includes/vpn-gateway-gwsku-change-legacy-sku-include.md)]
For more information about the new Gateway SKUs, see [Gateway SKUs](vpn-gateway-about-vpngateways.md#gwsku).
-For more information about configuration settings, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
+For more information about configuration settings, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
vpn-gateway Vpn Gateway Howto Openvpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-openvpn.md
This article helps you set up **OpenVPN® Protocol** on Azure VPN Gateway. This
* [PowerShell - Create point-to-site](vpn-gateway-howto-point-to-site-rm-ps.md)
-* If you already have a VPN gateway, verify that it doesn't use the Basic SKU. The Basic SKU isn't supported for OpenVPN. For more information about SKUs, see [VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md). To resize a Basic SKU, see [Resize a legacy gateway](vpn-gateway-about-skus-legacy.md#resource-manager).
+* If you already have a VPN gateway, verify that it doesn't use the Basic SKU. The Basic SKU isn't supported for OpenVPN. For more information about SKUs, see [VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md). To resize a Basic SKU, see [Resize a legacy gateway](vpn-gateway-about-skus-legacy.md).
## Portal
vpn-gateway Vpn Gateway Howto Site To Site Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md
If you're having trouble connecting, see the **Troubleshoot** section of the tab
Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connectivity on one or more Site-to-Site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but aren't able to establish IPsec tunnels with the Azure VPN gateways. For steps, see [Reset a VPN gateway](./reset-gateway.md#resetclassic).
-## <a name="changesku"></a>How to change a gateway SKU
+## <a name="changesku"></a>How to resize a gateway SKU
-For steps to change a gateway SKU, see [Resize a gateway SKU](vpn-gateway-about-SKUS-legacy.md#classicresize).
+To resize a gateway for the [classic deployment model](../azure-resource-manager/management/deployment-models.md), you must use the Service Management PowerShell cmdlets. Use the following command:
+
+```powershell
+Resize-AzureVirtualNetworkGateway -GatewayId <Gateway ID> -GatewaySKU HighPerformance
+```
## Next steps