Updates from: 04/03/2024 01:08:39
Service Microsoft Docs article Related commit history on GitHub Change details
advisor Advisor Resiliency Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-resiliency-reviews.md
You can find resiliency reviews created by your account team in the left navigat
If there's a new review available to you, you see a notification banner on top of the Advisor pages. A **New** review is one with all recommendations in the *Pending* state.
-1. Open the Azure portal and navigate to [Advisor](https://aka.ms/Advisor_Reviews).
+1. Open the Azure portal and navigate to [Advisor](https://aka.ms/azureadvisordashboard).
Select **Manage** > **Reviews (Preview)** in the left navigation pane. A list of reviews opens. At the top of the page, you see the number of **Total Reviews** and review **Recommendations**, and a graph of **Reviews by status**.
1. Use search, filters, and sorting to find the review you need. You can filter reviews by one of the **Status equals** states shown next, or choose *All* (the default) to see all reviews. If you don't see a review for your subscription, make sure the review subscription is included in the global portal filter. You might need to update the filter to see the reviews for a subscription.
Select **Manage** > **Reviews (Preview)** in the left navigation pane. A list of
At the top of the reviews page, use **Feedback** to tell us about your experience. Use the **Refresh** button to refresh the page as needed.

> [!NOTE]
-> If you have no reviews, the **Reviews** menu item in the left navigation is greyed out.
+> If you have no reviews, the **Reviews** menu item in the left navigation is hidden.
### Review recommendations
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
The following Embeddings models are available with [Azure Government](/azure/azu
| --- | --- | :---: | :---: |
| `babbage-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 |
| `davinci-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 |
-| `gpt-35-turbo` (0613) | North Central US <br> Sweden Central | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (1106) | North Central US <br> Sweden Central | Input: 16,385<br> Output: 4,096 | Sep 2021|
-| `gpt-35-turbo` (0125) | North Central US <br> Sweden Central | 16,385 | Sep 2021 |
+| `gpt-35-turbo` (0613) | East US2 <br> North Central US <br> Sweden Central | 4,096 | Sep 2021 |
+| `gpt-35-turbo` (1106) | East US2 <br> North Central US <br> Sweden Central | Input: 16,385<br> Output: 4,096 | Sep 2021|
+| `gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central | 16,385 | Sep 2021 |
### Whisper models
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
- ignite-2023 - references_regions Previously updated : 03/12/2024 Last updated : 04/02/2024 recommendations: false # What's new in Azure OpenAI Service
+## April 2024
+
+### Fine-tuning is now supported in East US 2
+
+Fine-tuning is now available in East US 2 with support for:
+
+- `gpt-35-turbo` (0613)
+- `gpt-35-turbo` (1106)
+- `gpt-35-turbo` (0125)
+
+Check the [models page](concepts/models.md#fine-tuning-models) for the latest information on model availability and fine-tuning support in each region.
+ ## March 2024 ### Risks & Safety monitoring in Azure OpenAI Studio
ai-services Get Started Stt Diarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-stt-diarization.md
keywords: speech to text, speech to text software
#customer intent: As a developer, I want to create speech to text applications that use diarization to improve readability of multiple person conversations.
-# Quickstart: Create real-time diarization (Preview)
+# Quickstart: Create real-time diarization
::: zone pivot="programming-language-csharp" [!INCLUDE [C# include](includes/quickstarts/stt-diarization/csharp.md)]
ai-studio Deploy Copilot Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md
Now that you have your evaluation dataset, you can evaluate your flow by followi
> [!NOTE] > Evaluation with AI-assisted metrics needs to call another GPT model to do the calculation. For best performance, use a GPT-4 or gpt-35-turbo-16k model. If you didn't previously deploy a GPT-4 or gpt-35-turbo-16k model, you can deploy another model by following the steps in [Deploy a chat model](#deploy-a-chat-model). Then return to this step and select the model you deployed.
+    > The evaluation process can consume a large number of tokens, so it's recommended to use a model that supports at least 16k tokens.
1. Select **Add new dataset**. Then select **Next**.
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
description: Deploy a Java application with Open Liberty/WebSphere Liberty on an
Previously updated : 01/16/2024 Last updated : 04/02/2024 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
The Open Liberty Operator simplifies the deployment and management of applicatio
For more information on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more information on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
-This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to AKS. The offer automatically provisions a number of Azure resources including an Azure Container Registry (ACR) instance, an AKS cluster, an Azure App Gateway Ingress Controller (AGIC) instance, the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aks). If you prefer manual step-by-step guidance for running Liberty on AKS that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/howto-deploy-java-liberty-app-manual).
+This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to AKS. The offer automatically provisions a number of Azure resources including an Azure Container Registry (ACR) instance, an AKS cluster, an Azure App Gateway Ingress Controller (AGIC) instance, the Liberty Operators, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aks). If you prefer manual step-by-step guidance for running Liberty on AKS that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/howto-deploy-java-liberty-app-manual).
This article is intended to help you quickly get to deployment. Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-* You can use Azure Cloud Shell or a local terminal.
- [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
-* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
- > [!NOTE] > You can also execute this guidance from the [Azure Cloud Shell](/azure/cloud-shell/quickstart). This approach has all the prerequisite tools pre-installed, with the exception of Docker. > > :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
-* If running the commands in this guide locally (instead of Azure Cloud Shell):
- * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, Windows Subsystem for Linux).
- * Install a Java SE implementation, version 17 or later. (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
- * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
- * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
+* Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, macOS, Windows Subsystem for Linux).
+* This article requires at least version 2.31.0 of Azure CLI.
+* Install a Java SE implementation, version 17 or later (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
+* Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
+* Install [Docker](https://docs.docker.com/get-docker/) for your OS.
* Make sure you're assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group). ## Create a Liberty on AKS deployment using the portal
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
Previously updated : 12/05/2023 Last updated : 04/02/2024 #Customer intent: As a cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
For information on how to override Azure's default system routes or add addition
This section covers three network rules and an application rule you can use to configure on your firewall. You may need to adapt these rules based on your deployment. * The first network rule allows access to port 9000 via TCP.
-* The second network rule allows access to port 1194 and 123 via UDP. If you're deploying to Microsoft Azure operated by 21Vianet, see the [Azure operated by 21Vianet required network rules](./outbound-rules-control-egress.md#microsoft-azure-operated-by-21vianet-required-network-rules). Both these rules will only allow traffic destined to the Azure Region CIDR in this article, which is East US.
-* The third network rule opens port 123 to `ntp.ubuntu.com` FQDN via UDP. Adding an FQDN as a network rule is one of the specific features of Azure Firewall, so you'll need to adapt it when using your own options.
+* The second network rule allows access to port 1194 via UDP. If you're deploying to Microsoft Azure operated by 21Vianet, see the [Azure operated by 21Vianet required network rules](./outbound-rules-control-egress.md#microsoft-azure-operated-by-21vianet-required-network-rules). Both these rules will only allow traffic destined to the Azure Region CIDR in this article, which is East US.
* The fourth and fifth network rules allow access to pull containers from GitHub Container Registry (ghcr.io) and Docker Hub (docker.io). 1. Create the network rules using the [`az network firewall network-rule create`][az-network-firewall-network-rule-create] command.
This section covers three network rules and an application rule you can use to c
az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apitcp' --protocols 'TCP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 9000
- az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'time' --protocols 'UDP' --source-addresses '*' --destination-fqdns 'ntp.ubuntu.com' --destination-ports 123
- az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'ghcr' --protocols 'TCP' --source-addresses '*' --destination-fqdns ghcr.io pkg-containers.githubusercontent.com --destination-ports '443' az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'docker' --protocols 'TCP' --source-addresses '*' --destination-fqdns docker.io registry-1.docker.io production.cloudflare.docker.com --destination-ports '443'
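The section also mentions an application rule. A hedged sketch of what it could look like (the collection name `aksfwar`, rule name, and priority value are illustrative; `AzureKubernetesService` is the FQDN tag that covers AKS-required HTTP/S traffic):

```azurecli
# Allow the outbound HTTP/S traffic that AKS requires by using the AzureKubernetesService FQDN tag.
az network firewall application-rule create -g $RG -f $FWNAME \
    --collection-name 'aksfwar' -n 'fqdn' \
    --source-addresses '*' \
    --protocols 'http=80' 'https=443' \
    --fqdn-tags "AzureKubernetesService" \
    --action allow --priority 100
```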
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
To run the application locally:
pip install -r requirements.txt ```
+1. Integrate a database:
+
+ ```Python
+
+    # Use the synchronous client for the setup helpers below and the async client in the async functions.
+    from azure.cosmos import CosmosClient, PartitionKey, exceptions
+    from azure.cosmos.aio import CosmosClient as AsyncCosmosClient
+
+ from configs.credential import HOST, MASTER_KEY, DATABASE_ID
++
+ def get_database_client():
+ # Initialize the Cosmos client
+ client = CosmosClient(HOST, MASTER_KEY)
+
+ # Create or get a reference to a database
+ try:
+ database = client.create_database_if_not_exists(id=DATABASE_ID)
+ print(f'Database "{DATABASE_ID}" created or retrieved successfully.')
+
+ except exceptions.CosmosResourceExistsError:
+ database = client.get_database_client(DATABASE_ID)
+ print('Database with id \'{0}\' was found'.format(DATABASE_ID))
+
+ return database
++
+ def get_container_client(container_id):
+ database = get_database_client()
+ # Create or get a reference to a container
+ try:
+ container = database.create_container(id=container_id, partition_key=PartitionKey(path='/partitionKey'))
+ print('Container with id \'{0}\' created'.format(container_id))
+
+ except exceptions.CosmosResourceExistsError:
+ container = database.get_container_client(container_id)
+ print('Container with id \'{0}\' was found'.format(container_id))
+
+ return container
+
+ async def create_item(container_id, item):
+        async with AsyncCosmosClient(HOST, credential=MASTER_KEY) as client:
+ database = client.get_database_client(DATABASE_ID)
+ container = database.get_container_client(container_id)
+ await container.upsert_item(body=item)
+
+ async def get_items(container_id):
+ items = []
+ try:
+            async with AsyncCosmosClient(HOST, credential=MASTER_KEY) as client:
+ database = client.get_database_client(DATABASE_ID)
+ container = database.get_container_client(container_id)
+ async for item in container.read_all_items():
+ items.append(item)
+ except Exception as e:
+ print(f"An error occurred: {e}")
+
+ return items
+ ```
+ 1. Run the app: ```Console
azure-app-configuration Enable Dynamic Configuration Dotnet Background Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-background-service.md
You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a
1. Create a new folder for your project.
-2. In the new folder, run the following command to create a new .NET background service project:
+1. In the new folder, run the following command to create a new .NET background service project:
```dotnetcli dotnet new worker
You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a
## Reload data from App Configuration
-1. Add references to the `Microsoft.Extensions.Configuration.AzureAppConfiguration` NuGet package by running the following commands:
+1. Add references to the `Microsoft.Extensions.Configuration.AzureAppConfiguration` NuGet package by running the following command:
```dotnetcli dotnet add package Microsoft.Extensions.Configuration.AzureAppConfiguration
You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a
-1. Run the following command to build the console app.
+1. Run the following command to build the app.
```dotnetcli dotnet build
You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a
|-|--| | *TestApp:Settings:Message* | *Data from Azure App Configuration - Updated* |
-1. Wait for about 30 seconds. You should see the console outputs changed.
+1. Wait a few moments for the refresh interval time window to pass. You should see the console output change.
![Screenshot of the refreshed background service.](./media/dotnet-background-service-refresh.png)
azure-app-configuration Enable Dynamic Configuration Dotnet Core Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh.md
The `ProcessPushNotification` method takes in a `PushNotification` object contai
||| | TestApp:Settings:Message | Data from Azure App Configuration - Updated |
-1. Wait for 30 seconds to allow the event to be processed and configuration to be updated.
+1. Wait a few moments to allow the event to be processed. You should see the updated configuration.
![Push refresh run after updated](./media/dotnet-core-app-pushrefresh-final.png)
azure-app-configuration Quickstart Dotnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-core-app.md
You can use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to crea
1. Create a new folder for your project.
-2. In the new folder, run the following command to create a new .NET console app project:
+1. In the new folder, run the following command to create a new .NET console app project:
```dotnetcli dotnet new console
You can use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to crea
dotnet add package Microsoft.Extensions.Configuration.AzureAppConfiguration ```
-2. Run the following command to restore packages for your project:
+1. Run the following command to restore packages for your project:
```dotnetcli dotnet restore ```
-3. Open *Program.cs*, and add a reference to the .NET App Configuration provider.
+1. Open *Program.cs*, and add the following statements:
```csharp using Microsoft.Extensions.Configuration; using Microsoft.Extensions.Configuration.AzureAppConfiguration; ```
-4. Use App Configuration by calling the `AddAzureAppConfiguration` method in the `Program.cs` file.
+1. Use App Configuration by calling the `AddAzureAppConfiguration` method in the `Program.cs` file.
```csharp var builder = new ConfigurationBuilder();
azure-app-configuration Quickstart Feature Flag Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
The feature management support extends the dynamic configuration feature in App
## Prerequisites Follow the documents to create an ASP.NET Core app with dynamic configuration.+ - [Quickstart: Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) - [Tutorial: Use dynamic configuration in an ASP.NET Core app](./enable-dynamic-configuration-aspnet-core.md)
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
``` > [!TIP]
- > When no parameter is passed to the `UseFeatureFlags` method, it loads *all* feature flags with *no label* in your App Configuration store. The default refresh expiration of feature flags is 30 seconds. You can customize this behavior via the `FeatureFlagOptions` parameter. For example, the following code snippet loads only feature flags that start with *TestApp:* in their *key name* and have the label *dev*. The code also changes the refresh expiration time to 5 minutes. Note that this refresh expiration time is separate from that for regular key-values.
+ > When no parameter is passed to the `UseFeatureFlags` method, it loads *all* feature flags with *no label* in your App Configuration store. The default refresh interval of feature flags is 30 seconds. You can customize this behavior via the `FeatureFlagOptions` parameter. For example, the following code snippet loads only feature flags that start with *TestApp:* in their *key name* and have the label *dev*. The code also changes the refresh interval time to 5 minutes. Note that this refresh interval time is separate from that for regular key-values.
> > ```csharp > options.UseFeatureFlags(featureFlagOptions =>
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
1. Select **Feature manager** and locate the **Beta** feature flag. Enable the flag by selecting the checkbox under **Enabled**.
-1. Refresh the browser a few times. When the cache expires after 30 seconds, the page shows with updated content.
+1. Refresh the browser a few times. When the refresh interval time window passes, the page shows the updated content.
![Feature flag after enabled](./media/quickstarts/aspnet-core-feature-flag-local-after.png)
azure-app-configuration Quickstart Feature Flag Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md
This project will use [dependency injection in .NET Azure Functions](../azure-fu
- [Microsoft.FeatureManagement](https://www.nuget.org/packages/Microsoft.FeatureManagement/) version 2.2.0 or later - [Microsoft.Azure.Functions.Extensions](https://www.nuget.org/packages/Microsoft.Azure.Functions.Extensions/) version 1.1.0 or later
-2. Add a new file, *Startup.cs*, with the following code. It defines a class named `Startup` that implements the `FunctionsStartup` abstract class. An assembly attribute is used to specify the type name used during Azure Functions startup.
+1. Add a new file, *Startup.cs*, with the following code. It defines a class named `Startup` that implements the `FunctionsStartup` abstract class. An assembly attribute is used to specify the type name used during Azure Functions startup.
```csharp using System;
This project will use [dependency injection in .NET Azure Functions](../azure-fu
```
-3. Update the `ConfigureAppConfiguration` method, and add Azure App Configuration provider as an extra configuration source by calling `AddAzureAppConfiguration()`.
+1. Update the `ConfigureAppConfiguration` method, and add Azure App Configuration provider as an extra configuration source by calling `AddAzureAppConfiguration()`.
The `UseFeatureFlags()` method tells the provider to load feature flags. All feature flags have a default cache expiration of 30 seconds before rechecking for changes. The expiration interval can be updated by setting the `FeatureFlagOptions.CacheExpirationInterval` property passed to the `UseFeatureFlags` method.
This project will use [dependency injection in .NET Azure Functions](../azure-fu
> [!TIP] > If you don't want any configuration other than feature flags to be loaded to your application, you can call `Select("_")` to only load a nonexisting dummy key `"_"`. By default, all configuration key-values in your App Configuration store will be loaded if no `Select` method is called.
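To see how these pieces fit together, here's a minimal sketch of the `ConfigureAppConfiguration` body (it assumes the connection string is stored in the `ConnectionString` environment variable, and the 5-minute interval is only an illustration):

```csharp
builder.ConfigurationBuilder.AddAzureAppConfiguration(options =>
{
    options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
           // Load only the nonexisting dummy key "_" so no regular key-values are loaded.
           .Select("_")
           .UseFeatureFlags(featureFlagOptions =>
           {
               // Recheck feature flags at most every 5 minutes instead of the 30-second default.
               featureFlagOptions.CacheExpirationInterval = TimeSpan.FromMinutes(5);
           });
});
```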
-4. Update the `Configure` method to make Azure App Configuration services and feature manager available through dependency injection.
+1. Update the `Configure` method to make Azure App Configuration services and feature manager available through dependency injection.
```csharp public override void Configure(IFunctionsHostBuilder builder)
This project will use [dependency injection in .NET Azure Functions](../azure-fu
} ```
-5. Open *Function1.cs*, and add the following namespaces.
+1. Open *Function1.cs*, and add the following namespaces.
```csharp using System.Linq;
This project will use [dependency injection in .NET Azure Functions](../azure-fu
} ```
-6. Update the `Run` method to change the value of the displayed message depending on the state of the feature flag.
+1. Update the `Run` method to change the value of the displayed message depending on the state of the feature flag.
The `TryRefreshAsync` method is called at the beginning of the Functions call to refresh feature flags. It will be a no-op if the cache expiration time window isn't reached. Remove the `await` operator if you prefer the feature flags to be refreshed without blocking the current Functions call. In that case, later Functions calls will get the updated value.
This project will use [dependency injection in .NET Azure Functions](../azure-fu
1. Select **Feature manager**, and change the state of the **Beta** key to **On**.
-1. Refresh the browser a few times. When the cached feature flag expires after 30 seconds, the page should have changed to indicate the feature flag `Beta` is turned on, as shown in the image below.
+1. Refresh the browser a few times. When the refresh interval time window passes, the page will change to indicate the feature flag `Beta` is turned on, as shown in the image below.
![Quickstart Function feature flag enabled](./media/quickstarts/functions-launch-ff-enabled.png)
azure-app-configuration Quickstart Feature Flag Dotnet Background Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet-background-service.md
+
+ Title: Quickstart for adding feature flags to .NET background service
+
+description: A quickstart for adding feature flags to .NET background services and managing them in Azure App Configuration
+++
+ms.devlang: csharp
++
+ .NET
Last updated : 2/19/2024+
+#Customer intent: As a .NET background service developer, I want to use feature flags to control feature availability quickly and confidently.
+
+# Quickstart: Add feature flags to a .NET background service
+
+In this quickstart, you incorporate the feature management capability from Azure App Configuration into a .NET background service. You use App Configuration to centrally store and manage your feature flags.
+
+## Prerequisites
+
+Feature management support extends the dynamic configuration feature in App Configuration. The example in this quickstart builds on the .NET background service app introduced in the dynamic configuration tutorial. Before you continue, finish the following tutorial to create a .NET background service app with dynamic configuration.
+
+- [Tutorial: Use dynamic configuration in a .NET background service](./enable-dynamic-configuration-dotnet-background-service.md)
+
+## Add a feature flag
+
+Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./manage-feature-flags.md).
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing fields to enable a feature flag named Beta.](media/add-beta-feature-flag.png)
+
+## Use the feature flag
+
+1. Add references to the `Microsoft.FeatureManagement` NuGet package by running the following command:
+
+ ```dotnetcli
+ dotnet add package Microsoft.FeatureManagement
+ ```
+
+1. Run the following command to restore packages for your project:
+
+ ```dotnetcli
+ dotnet restore
+ ```
+
+1. Open *Program.cs* and add the following statement:
+
+ ```csharp
+ using Microsoft.FeatureManagement;
+ ```
+
+1. Add a call to the `UseFeatureFlags` method inside the `AddAzureAppConfiguration` call and register feature management services.
+
+ ```csharp
+ // Existing code in Program.cs
+ // ... ...
+
+ builder.Configuration.AddAzureAppConfiguration(options =>
+ {
+ options.Connect(Environment.GetEnvironmentVariable("ConnectionString"));
+
+ // Use feature flags
+ options.UseFeatureFlags();
+
+ // Register the refresher so that the Worker service can consume it through dependency injection
+ builder.Services.AddSingleton(options.GetRefresher());
+ });
+
+ // Register feature management services
+ builder.Services.AddFeatureManagement();
+
+ // The rest of existing code in Program.cs
+ // ... ...
+ ```
+
+ > [!TIP]
+ > When no parameter is passed to the `UseFeatureFlags` method, it loads *all* feature flags with *no label* in your App Configuration store. The default refresh interval of feature flags is 30 seconds. You can customize this behavior via the `FeatureFlagOptions` parameter. For example, the following code snippet loads only feature flags that start with *TestApp:* in their *key name* and have the label *dev*. The code also changes the refresh interval time to 5 minutes. Note that this refresh interval time is separate from that for regular key-values.
+ >
+ > ```csharp
+ > options.UseFeatureFlags(featureFlagOptions =>
+ > {
+ > featureFlagOptions.Select("TestApp:*", "dev");
+ > featureFlagOptions.CacheExpirationInterval = TimeSpan.FromMinutes(5);
+ > });
+ > ```
+
+1. Open *Worker.cs* and add the following statement:
+
+ ```csharp
+ using Microsoft.FeatureManagement;
+ ```
+
+1. Update the constructor of the `Worker` service to obtain instances of `IConfigurationRefresher` and `IFeatureManager` through dependency injection.
+
+ ```csharp
+ public class Worker : BackgroundService
+ {
+ private readonly ILogger<Worker> _logger;
+ private readonly IConfigurationRefresher _refresher;
+ private readonly IFeatureManager _featureManager;
+
+ public Worker(ILogger<Worker> logger, IConfigurationRefresher refresher, IFeatureManager featureManager)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ _refresher = refresher ?? throw new ArgumentNullException(nameof(refresher));
+ _featureManager = featureManager ?? throw new ArgumentNullException(nameof(featureManager));
+ }
+
+ // ... ...
+ }
+ ```
+
+1. Update the `ExecuteAsync` method to log a message depending on the state of the feature flag.
+
+    The `TryRefreshAsync` method is called at the beginning of every iteration of the task execution to refresh the feature flag. It will be a no-op if the refresh interval time window isn't reached. The `await` operator is not used so that the feature flags are refreshed without blocking the current iteration of the task execution. In that case, later iterations of the task execution will get the updated value.
+
+ ```csharp
+ protected override async Task ExecuteAsync(CancellationToken stoppingToken)
+ {
+ while (!stoppingToken.IsCancellationRequested)
+ {
+ // Intentionally not await TryRefreshAsync to avoid blocking the execution.
+ _refresher.TryRefreshAsync(stoppingToken);
+
+ if (_logger.IsEnabled(LogLevel.Information))
+ {
+ if (await _featureManager.IsEnabledAsync("Beta"))
+ {
+ _logger.LogInformation("[{time}]: Worker is running with Beta feature.", DateTimeOffset.Now);
+ }
+ else
+ {
+ _logger.LogInformation("[{time}]: Worker is running.", DateTimeOffset.Now);
+ }
+ }
+
+ await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
+ }
+ }
+ ```
+
+## Build and run the app locally
+
+1. Run the following command to build the app:
+
+ ```dotnetcli
+ dotnet build
+ ```
+
+1. After the build successfully completes, run the following command to run the app locally:
++
+ ```dotnetcli
+ dotnet run
+ ```
+
+1. You should see the following output in the console.
+
+ ![Screenshot of the console with background service running with feature flag disabled.](./media/quickstarts/dotnet-background-service-feature-flag-disabled.png)
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store that you created previously.
+
+1. Select **Feature manager** and locate the **Beta** feature flag. Enable the flag by selecting the checkbox under **Enabled**.
+
+1. Wait a few moments for the refresh interval time window to pass. You should see the updated log message.
+
+ ![Screenshot of the console with background service running with feature flag enabled.](./media/quickstarts/dotnet-background-service-feature-flag.png)
+
+## Clean up resources
++
+## Next steps
+
+To enable feature management capability for other types of apps, continue to the following tutorials.
+
+> [!div class="nextstepaction"]
+> [Use feature flags in .NET console apps](./quickstart-feature-flag-dotnet.md)
+
+> [!div class="nextstepaction"]
+> [Use feature flags in ASP.NET Core apps](./quickstart-feature-flag-aspnet-core.md)
+
+To learn more about managing feature flags in Azure App Configuration, continue to the following tutorial.
+
+> [!div class="nextstepaction"]
+> [Manage feature flags in Azure App Configuration](./manage-feature-flags.md)
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
# Quickstart: Add feature flags to a .NET/.NET Framework console app
-In this quickstart, you incorporate Azure App Configuration into a .NET console app to create an end-to-end implementation of feature management. You can use the App Configuration service to centrally store all your feature flags and control their states.
+In this quickstart, you incorporate Azure App Configuration into a .NET console app to create an end-to-end implementation of feature management. You can use App Configuration to centrally store all your feature flags and control their states.
The .NET Feature Management libraries extend the framework with feature flag support. These libraries are built on top of the .NET configuration system. They integrate with App Configuration through its .NET configuration provider.
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
# Quickstart: Add feature flags to a Spring Boot app
-In this quickstart, you incorporate Azure App Configuration into a Spring Boot web app to create an end-to-end implementation of feature management. You can use the App Configuration service to centrally store all your feature flags and control their states.
+In this quickstart, you incorporate Azure App Configuration into a Spring Boot web app to create an end-to-end implementation of feature management. You can use App Configuration to centrally store all your feature flags and control their states.
The Spring Boot Feature Management libraries extend the framework with comprehensive feature flag support. These libraries do **not** have a dependency on any Azure libraries. They seamlessly integrate with App Configuration through its Spring Boot configuration provider.
azure-functions Quickstart Js Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md
After you've verified that the function runs correctly on your local computer, i
::: zone-end ::: zone pivot="nodejs-model-v3"
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
::: zone-end ::: zone pivot="nodejs-model-v4"
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator`
::: zone-end 2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app.
azure-functions Quickstart Powershell Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-powershell-vscode.md
After you've verified that the function runs correctly on your local computer, i
## Test your function in Azure
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app.
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
After you've verified that the function runs correctly on your local computer, i
## Test your function in Azure ::: zone pivot="python-mode-configuration"
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
::: zone-end ::: zone pivot="python-mode-decorators"
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/hello_orchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/hello_orchestrator`
::: zone-end
azure-functions Quickstart Ts Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md
After you've verified that the function runs correctly on your local computer, i
::: zone-end ::: zone pivot="nodejs-model-v3"
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
::: zone-end ::: zone pivot="nodejs-model-v4"
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator`
::: zone-end 2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
By default, the version settings for function apps are specific to each slot. Th
## WEBSITE\_RUN\_FROM\_PACKAGE
-Enables your function app to run from a mounted package file.
+Enables your function app to run from a package file, which can be mounted locally or hosted at an external URL.
|Key|Sample value| ||| |WEBSITE\_RUN\_FROM\_PACKAGE|`1`|
-Valid values are either a URL that resolves to the location of a deployment package file, or `1`. When set to `1`, the package must be in the `d:\home\data\SitePackages` folder. When you use zip deployment with `WEBSITE_RUN_FROM_PACKAGE` enabled, the package is automatically uploaded to this location. In preview, this setting was named `WEBSITE_RUN_FROM_ZIP`. For more information, see [Run your functions from a package file](run-functions-from-deployment-package.md).
+Valid values are either a URL that resolves to the location of an external deployment package file, or `1`. When set to `1`, the package must be in the `d:\home\data\SitePackages` folder. When you use zip deployment with `WEBSITE_RUN_FROM_PACKAGE` enabled, the package is automatically uploaded to this location. In preview, this setting was named `WEBSITE_RUN_FROM_ZIP`. For more information, see [Run your functions from a package file](run-functions-from-deployment-package.md).
+
+When you deploy from an external package URL, you must also manually sync triggers. For more information, see [Trigger syncing](functions-deployment-technologies.md#trigger-syncing).
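As a hedged illustration (the app, resource group, and storage names are placeholders), you can set this app setting with the Azure CLI, and sync triggers after deploying from an external URL:

```azurecli
# Run from the package uploaded by zip deployment.
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
    --settings WEBSITE_RUN_FROM_PACKAGE=1

# Or run from an external package URL, then manually sync triggers.
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
    --settings WEBSITE_RUN_FROM_PACKAGE="https://<STORAGE_ACCOUNT>.blob.core.windows.net/<CONTAINER>/<PACKAGE>.zip"
az resource invoke-action --resource-group <RESOURCE_GROUP> --name <APP_NAME> \
    --resource-type Microsoft.Web/sites --action syncfunctiontriggers
```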
## WEBSITE\_SKIP\_CONTENTSHARE\_VALIDATION
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
The trigger input type is declared as either `HttpRequest` or a custom type. If
By default when you create a function for an HTTP trigger, the function is addressable with a route of the form: ```http
-http://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>
+https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>
``` You can customize this route using the optional `route` property on the HTTP trigger's input binding. You can use any [Web API Route Constraint](https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2#constraints) with your parameters.
def main(req: func.HttpRequest) -> func.HttpResponse:
Using this configuration, the function is now addressable with the following route instead of the original route. ```
-http://<APP_NAME>.azurewebsites.net/api/products/electronics/357
+https://<APP_NAME>.azurewebsites.net/api/products/electronics/357
```
-This configuration allows the function code to support two parameters in the address, _category_ and _id_. For more information on how route parameters are tokenized in a URL, see [Routing in ASP.NET Core](/aspnet/core/fundamentals/routing#route-constraint-reference).
+This configuration allows the function code to support two parameters in the address, _category_ and _ID_. For more information on how route parameters are tokenized in a URL, see [Routing in ASP.NET Core](/aspnet/core/fundamentals/routing#route-constraint-reference).
By default, all function routes are prefixed with *api*. You can also customize or remove the prefix using the `extensions.http.routePrefix` property in your [host.json](functions-host-json.md) file. The following example removes the *api* route prefix by using an empty string for the prefix in the *host.json* file.
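A minimal sketch of a *host.json* file that removes the *api* prefix (assuming the Functions v2+ schema) looks like this:

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": ""
    }
  }
}
```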
azure-functions Functions Bindings Mobile Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-mobile-apps.md
The following table explains the binding configuration properties that you set i
| **name**| n/a | Name of input parameter in function signature.| |**tableName** |**TableName**|Name of the mobile app's data table| | **id**| **Id** | The identifier of the record to retrieve. Can be static or based on the trigger that invokes the function. For example, if you use a queue trigger for your function, then `"id": "{queueTrigger}"` uses the string value of the queue message as the record ID to retrieve.|
-|**connection**|**Connection**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `http://<appname>.azurewebsites.net`.
+|**connection**|**Connection**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `https://<appname>.azurewebsites.net`.
|**apiKey**|**ApiKey**|The name of an app setting that has your mobile app's API key. Provide the API key if you implement an API key in your Node.js mobile app, or implement an API key in your .NET mobile app. To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. | [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
The following table explains the binding configuration properties that you set i
| **direction**| n/a |Must be set to "out"| | **name**| n/a | Name of output parameter in function signature.| |**tableName** |**TableName**|Name of the mobile app's data table|
-|**connection**|**MobileAppUriSetting**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `http://<appname>.azurewebsites.net`.
+|**connection**|**MobileAppUriSetting**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `https://<appname>.azurewebsites.net`.
|**apiKey**|**ApiKeySetting**|The name of an app setting that has your mobile app's API key. Provide the API key if you implement an API key in your Node.js mobile app backend, or implement an API key in your .NET mobile app backend. To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. | [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
ms.devlang: csharp # ms.devlang: csharp, javascript, python Previously updated : 03/12/2024 Last updated : 04/02/2024 zone_pivot_groups: programming-languages-set-functions-lang-workers
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" - [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] [!INCLUDE [functions-in-process-model-retirement-note](../../includes/functions-in-process-model-retirement-note.md)]
The following sample shows a C# function that receives a message event from clie
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/SignalR/SignalRTriggerFunctions.cs" id="snippet_on_message":::
+> [!IMPORTANT]
+> Because of limitations of the C# worker model, the class-based model of SignalR Service bindings in the C# isolated worker doesn't optimize how you write SignalR triggers. For more information about the class-based model, see [Class based model](../azure-signalr/signalr-concept-serverless-development-config.md#class-based-model).
# [In-process model](#tab/in-process)
-SignalR Service trigger binding for C# has two programming models. Class based model and traditional model. Class based model provides a consistent SignalR server-side programming experience. Traditional model provides more flexibility and is similar to other function bindings.
+The SignalR Service trigger binding for the C# in-process model has two programming models: the class-based model and the traditional model. The class-based model provides a consistent SignalR server-side programming experience. The traditional model provides more flexibility and is similar to other function bindings.
### With class-based model
See the [Example section](#example) for complete examples.
### Payloads
-The trigger input type is declared as either `InvocationContext` or a custom type. If you choose `InvocationContext` you get full access to the request content. For a custom type, the runtime tries to parse the JSON request body to set the object properties.
+The trigger input type is declared as either `InvocationContext` or a custom type. If you choose `InvocationContext`, you get full access to the request content. For a custom type, the runtime tries to parse the JSON request body to set the object properties.
### InvocationContext
The trigger input type is declared as either `InvocationContext` or a custom typ
||| |Arguments| Available for *messages* category. Contains *arguments* in [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding)| |Error| Available for *disconnected* event. It can be Empty if the connection closed with no error, or it contains the error messages.|
-|Hub| The hub name which the message belongs to.|
+|Hub| The hub name that the message belongs to.|
|Category| The category of the message.| |Event| The event of the message.|
-|ConnectionId| The connection ID of the client which sends the message.|
-|UserId| The user identity of the client which sends the message.|
+|ConnectionId| The connection ID of the client that sends the message.|
+|UserId| The user identity of the client that sends the message.|
|Headers| The headers of the request.| |Query| The query of the request when clients connect to the service.| |Claims| The claims of the client.|
After you set `parameterNames`, the names you defined correspond to the argument
[SignalRTrigger(parameterNames: new string[] {"arg1", "arg2"})] ```
-Then, the `arg1` will contain the content of `message1`, and `arg2` will contain the content of `message2`.
+Then, the `arg1` contains the content of `message1`, and `arg2` contains the content of `message2`.
### `ParameterNames` considerations For the parameter binding, the order matters. If you're using `ParameterNames`, the order in `ParameterNames` matches the order of the arguments you invoke in the client. If you're using attribute `[SignalRParameter]` in C#, the order of arguments in Azure Function methods matches the order of arguments in clients.
-`ParameterNames` and attribute `[SignalRParameter]` **cannot** be used at the same time, or you will get an exception.
+`ParameterNames` and attribute `[SignalRParameter]` **cannot** be used at the same time, or you'll get an exception.
### SignalR Service integration
SignalR Service needs a URL to access Function App when you're using SignalR Ser
:::image type="content" source="../azure-signalr/media/concept-upstream/upstream-portal.png" alt-text="Upstream Portal":::
-When using SignalR Service trigger, the URL can be simple and formatted as shown below:
+When using SignalR Service trigger, the URL can be simple and formatted as follows:
```http <Function_App_URL>/runtime/webhooks/signalr?code=<API_KEY> ```
-The `Function_App_URL` can be found on Function App's Overview page and The `API_KEY` is generated by Azure Function. You can get the `API_KEY` from `signalr_extension` in the **App keys** blade of Function App.
+The `Function_App_URL` can be found on the Function App's Overview page, and the `API_KEY` is generated by Azure Functions. You can get the `API_KEY` from `signalr_extension` in the **App keys** blade of the Function App.
:::image type="content" source="media/functions-bindings-signalr-service/signalr-keys.png" alt-text="API key"::: If you want to use more than one Function App together with one SignalR Service, upstream can also support complex routing rules. Find more details at [Upstream settings](../azure-signalr/concept-upstream.md).
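If you prefer the CLI over the portal blade, a hedged alternative (placeholder names; the `signalr_extension` key appears under the system keys once the SignalR extension is installed) is:

```azurecli
# List the function app's keys; the signalr_extension system key is the API_KEY used in the upstream URL.
az functionapp keys list --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP>
```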
azure-functions Functions Bindings Signalr Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service.md
Title: Azure Functions SignalR Service bindings
description: Understand how to use SignalR Service bindings with Azure Functions. Previously updated : 03/04/2022 Last updated : 04/02/2024 zone_pivot_groups: programming-languages-set-functions-lang-workers
This set of articles explains how to authenticate and send real-time messages to
||| | Handle messages from SignalR Service | [Trigger binding](./functions-bindings-signalr-service-trigger.md) | | Return the service endpoint URL and access token | [Input binding](./functions-bindings-signalr-service-input.md) |
-| Send SignalR Service messages |[Output binding](./functions-bindings-signalr-service-output.md) |
+| Send SignalR Service messages and manage groups |[Output binding](./functions-bindings-signalr-service-output.md) |
::: zone pivot="programming-language-csharp" ## Install extension
-The extension NuGet package you install depends on the C# mode you're using in your function app:
+The extension NuGet package you install depends on the C# mode you're using in your function app:
# [Isolated worker model](#tab/isolated-process)
Add the extension to your project by installing this [NuGet package].
-## Install bundle
+## Install bundle
The SignalR Service extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 3.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
-## Add dependency
+## Add dependency
To use the SignalR Service annotations in Java functions, you need to add a dependency to the *azure-functions-java-library-signalr* artifact (version 1.0 or higher) to your *pom.xml* file.
To use the SignalR Service annotations in Java functions, you need to add a depe
<version>1.0.0</version> </dependency> ``` ## Connection string settings
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
# [Isolated process](#tab/isolated-process)
-isolated worker process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
+Isolated worker process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
|Parameter | Description| ||-|
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
-Binding to `string`, or `Byte[]` is only recommended when the blob size is small. This is recommended because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
+Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Memory usage and concurrency](./functions-bindings-storage-blob-trigger.md#memory-usage-and-concurrency).
If you get an error message when trying to bind to one of the Storage SDK types, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-blob.md#tabpanel_2_functionsv1_in-process).
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
-Binding to `string`, or `Byte[]` is only recommended when the blob size is small. This is recommended because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
+Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Memory usage and concurrency](./functions-bindings-storage-blob-trigger.md#memory-usage-and-concurrency).
If you get an error message when trying to bind to one of the Storage SDK types, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-blob.md#tabpanel_2_functionsv1_in-process).
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
-Binding to `string`, or `Byte[]` is only recommended when the blob size is small. This is recommended because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
+Binding to `string` or `Byte[]` is recommended only when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#memory-usage-and-concurrency).
If you get an error message when trying to bind to one of the Storage SDK types, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-blob.md#tabpanel_2_functionsv1_in-process).
If all 5 tries fail, Azure Functions adds a message to a Storage queue named *we
- BlobName - ETag (a blob version identifier, for example: `0x8D1DC6E70A277EF`)
-## Concurrency and memory usage
+## Memory usage and concurrency
-The blob trigger uses a queue internally, so the maximum number of concurrent function invocations is controlled by the [queues configuration in host.json](functions-host-json.md#queues). The default settings limit concurrency to 24 invocations. This limit applies separately to each function that uses a blob trigger.
+When you bind to an [output type](#usage) that doesn't support streaming, such as `string` or `Byte[]`, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs. When possible, use a stream-supporting type. Type support depends on the C# mode and extension version. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types).
+At this time, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs.
+Memory usage can be further impacted when multiple function instances are concurrently processing blob data. If you're having memory issues using a Blob trigger, consider reducing the number of concurrent executions permitted. Of course, reducing the concurrency can have the side effect of increasing the backlog of blobs waiting to be processed. The memory limits of your function app depend on the plan. For more information, see [Service limits](functions-scale.md#service-limits).
+
+The way that you can control the number of concurrent executions depends on the version of the Storage extension you are using.
+
+### [Extension 5.x and higher](#tab/extensionv5)
-> [!NOTE]
-> For apps using the 5.0.0 or higher version of the Storage extension, the queues configuration in host.json only applies to queue triggers. The blob trigger concurrency is instead controlled by [blobs configuration in host.json](functions-host-json.md#blobs).
+When using version 5.0.0 of the Storage extension or a later version, you control trigger concurrency by using the `maxDegreeOfParallelism` setting in the [blobs configuration in host.json](functions-bindings-storage-blob.md#hostjson-settings).
-[The Consumption plan](event-driven-scaling.md) limits a function app on one virtual machine (VM) to 1.5 GB of memory. Memory is used by each concurrently executing function instance and by the Functions runtime itself. If a blob-triggered function loads the entire blob into memory, the maximum memory used by that function just for blobs is 24 * maximum blob size. For example, a function app with three blob-triggered functions and the default settings would have a maximum per-VM concurrency of 3*24 = 72 function invocations.
+### [Pre-extension 5.x](#tab/extensionv4)
-JavaScript and Java functions load the entire blob into memory, and C# functions do that if you bind to `string`, or `Byte[]`.
+Because the blob trigger uses a queue internally, the maximum number of concurrent function invocations is controlled by the [queues configuration in host.json](functions-bindings-storage-queue.md#host-json).
++
-Due to the existing architecture, we load the blob into memory several times so you should expect the memory usage to be two to three times the size of the blob.
+Limits apply separately to each function that uses a blob trigger.
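To make the memory guidance above concrete, here's a small C# isolated worker sketch of a blob trigger bound to `Stream` rather than `string` or `Byte[]`; the container name and connection setting are assumptions for illustration. Concurrency itself is still governed by the host.json settings described above, not by anything in the function code.

```csharp
using System.IO;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ProcessUploadedBlob
{
    private readonly ILogger<ProcessUploadedBlob> _logger;

    public ProcessUploadedBlob(ILogger<ProcessUploadedBlob> logger) => _logger = logger;

    [Function("ProcessUploadedBlob")]
    public void Run(
        // Binding to Stream reads the blob incrementally instead of buffering it all in memory.
        [BlobTrigger("samples-uploads/{name}", Connection = "AzureWebJobsStorage")] Stream blobStream,
        string name)
    {
        using var reader = new StreamReader(blobStream);
        string firstLine = reader.ReadLine() ?? "<empty>";
        _logger.LogInformation("Blob {Name} starts with: {FirstLine}", name, firstLine);
    }
}
```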
## host.json properties
azure-functions Functions Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-continuous-deployment.md
Title: Continuous deployment for Azure Functions
-description: Use the continuous deployment features of Azure App Service to publish your functions.
+description: Use the continuous deployment features of Azure App Service when publishing to Azure Functions.
ms.assetid: 361daf37-598c-4703-8d78-c77dbef91643 Previously updated : 09/25/2019 Last updated : 04/01/2024 #Customer intent: As a developer, I want to learn how to set up a continuous integration environment so that function app updates are deployed automatically when I check in my code changes. # Continuous deployment for Azure Functions
-You can use Azure Functions to deploy your code continuously by using [source control integration](functions-deployment-technologies.md#source-control). Source control integration enables a workflow in which a code update triggers deployment to Azure. If you're new to Azure Functions, get started by reviewing the [Azure Functions overview](functions-overview.md).
+You can use Azure Functions to deploy your code continuously by using [source control integration](functions-deployment-technologies.md#source-control). Source control integration enables a workflow in which a code update triggers build, packaging, and deployment from your project to Azure.
-Continuous deployment is a good option for projects where you integrate multiple and frequent contributions. When you use continuous deployment, you maintain a single source of truth for your code, which allows teams to easily collaborate. You can configure continuous deployment in Azure Functions from the following source code locations:
+Continuous deployment is a good option for projects where you integrate multiple and frequent contributions. When you use continuous deployment, you maintain a single source of truth for your code, which allows teams to easily collaborate.
-* [Azure Repos](https://azure.microsoft.com/services/devops/repos/)
-* [GitHub](https://github.com)
-* [Bitbucket](https://bitbucket.org/)
+Steps in this article show you how to configure continuous code deployments to your function app in Azure by using the Deployment Center in the Azure portal. You can also configure continuous integration using the Azure CLI.
-The unit of deployment for functions in Azure is the function app. All functions in a function app are deployed at the same time. After you enable continuous deployment, access to function code in the Azure portal is configured as *read-only* because the source of truth is set to be elsewhere.
+Functions supports these sources for continuous deployment to your app:
-## Requirements for continuous deployment
+### [Azure Repos](#tab/azure-repos)
+
+Maintain your project code in [Azure Repos](https://azure.microsoft.com/services/devops/repos/), one of the services in Azure DevOps. Supports both Git and Team Foundation Version Control. Used with the [Azure Pipelines build provider](functions-continuous-deployment.md?tabs=azure-repos%2Cazure-pipelines#build-providers). For more information, see [What is Azure Repos?](/azure/devops/repos/get-started/what-is-repos).
+
+### [GitHub](#tab/github)
+
+Maintain your project code in [GitHub](https://github.com). Supported by all [build providers](functions-continuous-deployment.md?tabs=github%2Cgithub-actions#build-providers). For more information, see [GitHub docs](https://docs.github.com/en/get-started).
+
+GitHub is the only continuous deployment source supported for apps running on Linux in a [Consumption plan](./consumption-plan.md), which includes serverless Python apps.
+
+### [Bitbucket](#tab/bitbucket)
+
+Maintain your project code in [Bitbucket](https://bitbucket.org/). Requires the [App Service build provider](functions-continuous-deployment.md?tabs=bitbucket%2Capp-service#build-providers).
+
+### [Local Git](#tab/local-git)
+
+Maintain your project code in a dedicated Git server hosted in the same App Service plan with your function app. Requires the [App Service build provider](functions-continuous-deployment.md?tabs=local-git%2Capp-service#build-providers). For more information, see [Local Git deployment to Azure App Service](../app-service/deploy-local-git.md).
+
+
+
+You can also connect your function app to an external Git repository, but this requires a manual synchronization. For more information about deployment options, see [Deployment technologies in Azure Functions](functions-deployment-technologies.md).
+
+>[!NOTE]
+>Continuous deployment options covered in this article are specific to code-only deployments. For containerized function app deployments, see [Enable continuous deployment of containers to Azure](functions-how-to-custom-container.md#enable-continuous-deployment-to-azure).
+
+## Requirements
For continuous deployment to succeed, your directory structure must be compatible with the basic folder structure that Azure Functions expects. [!INCLUDE [functions-folder-structure](../../includes/functions-folder-structure.md)]
->[!NOTE]
-> Continuous deployment is not yet supported for Linux apps running on a Consumption plan.
+## Build providers
+
+Building your code project is part of the deployment process. The build process depends on your language stack, operating system, and hosting plan. Builds can be done locally or remotely, again depending on your hosting. For more information, see [Remote build](functions-deployment-technologies.md#remote-build).
+
+Functions supports these build providers:
+
+### [Azure Pipelines](#tab/azure-pipelines)
+
+Azure Pipelines is one of the services in Azure DevOps and the default build provider for Azure Repos projects. You can also use Pipelines to build projects from GitHub. In Pipelines, there's an `AzureFunctionApp` task designed specifically for deploying to Azure Functions. This task provides you with control over how the project gets built, packaged, and deployed.
+
+### [GitHub Actions](#tab/github-actions)
+
+GitHub Actions is the default build provider for GitHub projects. GitHub Actions provides you with control over how the project gets built, packaged, and deployed.
+
+### [App Service (Kudu) service](#tab/app-service)
+
+The App Service platform maintains a native deployment service ([Project Kudu](https://github.com/projectkudu/kudu/wiki)) to support local Git deployment, some container deployments, and other deployment sources not supported by either Pipelines or GitHub Actions. Remote builds, packaging, and other maintenance tasks are performed in a subdomain of `scm.azurewebsites.net` dedicated to your app, such as `https://myfunctionapp.scm.azurewebsites.net`. This build service can only be used when the `scm` site is accessible to your app. For more information, see [Secure the scm endpoint](security-concepts.md#secure-the-scm-endpoint).
+++
+Your options for which of these build providers you can use depend on the specific code deployment source.
+
+## <a name="credentials"></a>Deployment center
+
+The [Azure portal](https://portal.azure.com) provides a **Deployment center** for your function apps, which makes it easier to configure continuous deployment. The way that you configure continuous deployment depends both on the specific source control in which your code resides and the [build provider](#build-providers) you choose.
+
+In the [Azure portal](https://portal.azure.com), browse to your function app page and select **Deployment Center** under **Deployment** in the left pane.
++
+Select the **Source** repository type where your project code is being maintained from one of these supported options:
+
+### [Azure Repos](#tab/azure-repos/azure-pipelines)
+
+Deployments from Azure Repos that use Azure Pipelines are defined in the [Azure DevOps portal](https://go.microsoft.com/fwlink/?linkid=2245703) and not from your function app. For a step-by-step guide for creating a Pipelines-based deployment from Azure Repos, see [Continuous delivery with Azure Pipelines](functions-how-to-azure-devops.md).
+
+### [GitHub](#tab/github/azure-pipelines)
+
+Deployments from GitHub that use Azure Pipelines are defined in the [Azure DevOps portal](https://go.microsoft.com/fwlink/?linkid=2245703) and not from your function app. For a step-by-step guide for creating a Pipelines-based deployment from GitHub, see [Continuous delivery with Azure Pipelines](functions-how-to-azure-devops.md).
-## <a name="credentials"></a>Set up continuous deployment
+### [Bitbucket](#tab/bitbucket/azure-pipelines)
-To configure continuous deployment for an existing function app, complete these steps. The steps demonstrate integration with a GitHub repository, but similar steps apply for Azure Repos or other source code repositories.
+You can't deploy from Bitbucket using Azure Pipelines. Instead, choose the [App Service build provider](functions-continuous-deployment.md?tabs=bitbucket%2Capp-service#build-providers).
-1. In your function app in the [Azure portal](https://portal.azure.com), select **Deployment Center**, select **GitHub**, and then select **Authorize**. If you've already authorized GitHub, select **Continue** and skip the next step.
+### [Local Git](#tab/local-git/azure-pipelines)
- :::image type="content" source="./media/functions-continuous-deployment/github.png" alt-text="Azure App Service Deployment Center":::
+You can't deploy from local Git using Azure Pipelines. Instead, choose the [App Service build provider](functions-continuous-deployment.md?tabs=local-git%2Capp-service#build-providers).
-3. In GitHub, select **Authorize AzureAppService**.
+### [Azure Repos](#tab/azure-repos/github-actions)
- :::image type="content" source="./media/functions-continuous-deployment/authorize.png" alt-text="Authorize Azure App Service":::
+You can't deploy from Azure Repos using GitHub Actions. Choose a different [build provider](#build-providers).
- Enter your GitHub password and then select **Continue**.
+### [GitHub](#tab/github/github-actions)
++
+To learn more about GitHub Actions deployments, including other ways to generate the workflow configuration file, see [Continuous delivery by using GitHub Actions](functions-how-to-github-actions.md).
+
+### [Bitbucket](#tab/bitbucket/github-actions)
+
+You can't deploy from Bitbucket using GitHub Actions. Instead, choose the [App Service build provider](functions-continuous-deployment.md?tabs=bitbucket%2Capp-service#build-providers).
+
+### [Local Git](#tab/local-git/github-actions)
+
+You can't deploy from local Git using GitHub Actions. Instead, choose the [App Service build provider](functions-continuous-deployment.md?tabs=local-git%2Capp-service#build-providers).
+
+### [Azure Repos](#tab/azure-repos/app-service)
+
+1. Navigate to your function app in the [Azure portal](https://portal.azure.com) and select **Deployment Center**.
+
+1. For **Source**, select **Azure Repos**. If the **App Service build service** provider isn't the default, select **Change provider**, choose **App Service build service**, and then select **OK**.
+
+1. Select values for **Organization**, **Project**, **Repository**, and **Branch**. Only organizations that belong to your Azure account are displayed.
+
+1. Select **Save** to create the webhook in your repository.
+
+### [GitHub](#tab/github/app-service)
+
+1. Navigate to your function app in the [Azure portal](https://portal.azure.com) and select **Deployment Center**.
+
+1. For **Source**, select **GitHub**. If the **App Service build service** provider isn't the default, select **Change provider**, choose **App Service build service**, and then select **OK**.
+
+1. If you haven't already authorized GitHub access, select **Authorize**. Provide your GitHub credentials and select **Sign in**. If you need to authorize a different GitHub account, select **Change Account** and sign in with another account.
+
+1. Select values for **Organization**, **Repository**, and **Branch**. The values are based on the location of your code.
+
+1. Review all details and select **Save**. A webhook is placed in your chosen repository.
+
+When a new commit is pushed to the selected branch, the service pulls your code, builds your application, and deploys it to your function app.
+
+### [Bitbucket](#tab/bitbucket/app-service)
+
+1. Navigate to your function app in the [Azure portal](https://portal.azure.com) and select **Deployment Center**.
+
+1. For **Source** select **Bitbucket**.
+
+1. If you haven't already authorized Bitbucket access, select **Authorize** and then **Grant access**. If requested, provide your Bitbucket credentials and select **Sign in**. If you need to authorize a different Bitbucket account, select **Change Account** and sign in with another account.
+
+1. Select values for **Organization**, **Repository**, and **Branch**. The values are based on the location of your code.
+
+1. Review all details and select **Save**. A webhook is placed in your chosen repository.
+
+When a new commit is pushed to the selected branch, the service pulls your code, builds your application, and deploys it to your function app.
+
+### [Local Git](#tab/local-git/app-service)
+
+1. Navigate to your function app in the [Azure portal](https://portal.azure.com) and select **Deployment Center**.
+
+1. For **Source** select **Local Git** and select **Save**.
+
+1. A local repository is created in your existing App Service plan, which is accessed from the `scm` domain. Copy the **Git clone URI** and use it to create a clone of this new repository on your local computer.
+
+When a new commit is pushed to the local git repository, the service pulls your code, builds your application, and deploys it to your function app.
++
-4. Select one of the following build providers:
+After deployment completes, all code from the specified source is deployed to your app. At that point, changes in the deployment source trigger a deployment of those changes to your function app in Azure.
- * **App Service build service**: Best when you don't need a build or if you need a generic build.
- * **Azure Pipelines (Preview)**: Best when you need more control over the build. This provider currently is in preview.
+## Considerations
- Select **Continue**.
+You should keep these considerations in mind when planning for a continuous deployment strategy:
-5. Configure information specific to the source control option you specified. For GitHub, you must enter or select values for **Organization**, **Repository**, and **Branch**. The values are based on the location of your code. Then, select **Continue**.
++ GitHub is the only source that currently supports continuous deployment for Linux apps running on a Consumption plan, which is a popular hosting option for Python apps.
- :::image type="content" source="./media/functions-continuous-deployment/github-specifics.png" alt-text="Configure GitHub":::
++ The unit of deployment for functions in Azure is the function app. All functions in a function app are deployed at the same time and in the same package.
-6. Review all details, and then select **Finish** to complete your deployment configuration.
++ After you enable continuous deployment, access to function code in the Azure portal is configured as *read-only* because the _source of truth_ is known to reside elsewhere.
-When the process is finished, all code from the specified source is deployed to your app. At that point, changes in the deployment source trigger a deployment of those changes to your function app in Azure.
++ You should always configure continuous deployment for a staging slot and not for the production slot. When you use the production slot, code updates are pushed directly to production without being verified in Azure. Instead, enable continuous deployment to a staging slot, verify updates in the staging slot, and after everything runs correctly you can [swap the staging slot code into production](./functions-deployment-slots.md#swap-slots).
-> [!NOTE]
-> After you configure continuous integration, you can no longer edit your source files in the Functions portal. If you originally published your code from your local computer, you may need to change the `WEBSITE_RUN_FROM_PACKAGE` setting in your function app to a value of `0`.
++ The Deployment Center doesn't support enabling continuous deployment for a function app with inbound network restrictions. You instead need to configure the build provider workflow directly in GitHub or Azure Pipelines. These workflows also require you to use a virtual machine in the same virtual network as the function app as either a [self-hosted agent (Pipelines)](/azure/devops/pipelines/agents/agents#self-hosted-agents) or a [self-hosted runner (GitHub)](https://docs.github.com/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners). ## Next steps
azure-functions Functions Create Serverless Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-serverless-api.md
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
## Customize your HTTP function
-By default, your HTTP trigger function is configured to accept any HTTP method. You can also use the default URL, `http://<yourapp>.azurewebsites.net/api/<funcname>?code=<functionkey>`. In this section, you modify the function to respond only to GET requests with `/api/hello`.
+By default, your HTTP trigger function is configured to accept any HTTP method. You can also use the default URL, `https://<yourapp>.azurewebsites.net/api/<funcname>?code=<functionkey>`. In this section, you modify the function to respond only to GET requests with `/api/hello`.
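The steps that follow use the portal's **Integration** page. Purely as an illustrative sketch, the same restriction expressed in code for a C# isolated worker function might look like the following; the class and function names are assumptions, not part of this article.

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class HelloFunction
{
    // Only the GET method is allowed, and Route = "hello" maps the endpoint to /api/hello.
    [Function("Hello")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "hello")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Hello from /api/hello");
        return response;
    }
}
```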
1. Navigate to your function in the Azure portal. Select **Integration** in the left menu, and then select **HTTP (req)** under **Trigger**.
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Visual Studio can publish your local project to Azure. Before you can publish yo
The URL that calls your HTTP trigger function is in the following format:
- `http://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions`
+ `https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions`
1. Go to this URL and you see a response in the browser to the remote GET request returned by the function, which looks like the following example:
azure-functions Functions Deploy Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deploy-container-apps.md
Title: Create your first containerized Azure Functions on Azure Container Apps description: Get started with Azure Functions on Azure Container Apps by deploying your first function app from a Linux image in a container registry. Previously updated : 03/07/2024 Last updated : 03/28/2024 zone_pivot_groups: programming-languages-set-functions
zone_pivot_groups: programming-languages-set-functions
# Create your first containerized functions on Azure Container Apps
-In this article, you create a function app running in a Linux container and deploy it to an Azure Container Apps environment from a container registry. By deploying to Container Apps, you are able to integrate your function apps into cloud-native microservices. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
+In this article, you create a function app running in a Linux container and deploy it to an Azure Container Apps environment from a container registry. By deploying to Container Apps, you're able to integrate your function apps into cloud-native microservices. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
-This article shows you how to use Functions tools to create your first function running in a Linux container, verify the functions locally, and then deploy the container to a Container Apps environment.
+This article shows you how to create functions running in a Linux container and deploy the container to a Container Apps environment.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account, which you can minimize by [cleaning-up resources](#clean-up-resources) when you're done.
Before you can deploy your container to Azure, you need to create three resource
Use the following commands to create these items.
-1. If you haven't done already, sign in to Azure.
+1. If you haven't done so already, sign in to Azure.
The [`az login`](/cli/azure/reference-index#az-login) command signs you into your Azure account. Use `az account set` when you have more than one subscription associated with your account.
az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --envir
``` ::: zone-end
-In the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command, the `--environment` parameter specifies the Container Apps environment and the `--image` parameter specifies the image to use for the function app. In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also, replace `<APP_NAME>` with a globally unique name appropriate to you, `<LOGIN_SERVER>` with your fully qualified Container Registry server, `<REGISTRY_NAME>` with your registry name for the login, and `<ADMIN_PASSWORD>` with the password to your admin account.
+In the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command, the `--environment` parameter specifies the Container Apps environment and the `--image` parameter specifies the image to use for the function app. In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also, replace `<APP_NAME>` with a globally unique name appropriate to you, `<LOGIN_SERVER>` with your fully qualified Container Registry server, `<REGISTRY_NAME>` with your registry name for the account, and `<ADMIN_PASSWORD>` with the password to your admin account.
> [!IMPORTANT] > The admin account username and password are important credentials. Make sure to store them securely and never in an accessible location like a public repository.
az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --envir
``` ::: zone-end
-In the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command, the `--environment` parameter specifies the Container Apps environment and the `--image` parameter specifies the image to use for the function app. In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also, replace `<APP_NAME>` with a globally unique name appropriate to you and `<DOCKER_ID>` with your Docker Hub account ID. If you are using a private registry, you also need to supply `--registry-username`, `--registry-password`, and `--registry-server`.
+In the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command, the `--environment` parameter specifies the Container Apps environment and the `--image` parameter specifies the image to use for the function app. In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also, replace `<APP_NAME>` with a globally unique name appropriate to you and `<DOCKER_ID>` with your Docker Hub account ID. If you're using a private registry, you also need to supply `--registry-username`, `--registry-password`, and `--registry-server`.
Specifying `--workload-profile-name "Consumption"` creates your app in an enviro
<! CI/CD isn't yet supported: You can also [Enable continuous deployment](./functions-how-to-custom-container.md#enable-continuous-deployment-to-azure) to Azure from Docker Hub.-->
-At this point, your functions are running in a Container Apps environment, with the required application settings already added. When you need to add other settings in your functions app, you can do this in the standard way for Functions. For more information, see [Use application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+At this point, your functions are running in a Container Apps environment, with the required application settings already added. When needed, you can add other settings in your functions app in the standard way for Functions. For more information, see [Use application settings](functions-how-to-use-azure-function-app-settings.md#settings).
>[!TIP] > When you make subsequent changes to your function code, you need to rebuild the container, republish the image to the registry, and update the function app with the new image version. For more information, see [Update an image in the registry](functions-how-to-custom-container.md#update-an-image-in-the-registry)
At this point, your functions are running in a Container Apps environment, with
The request URL should look something like this: ::: zone pivot="programming-language-java,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python"
-`http://myacafunctionapp.kindtree-796af82b.eastus.azurecontainerapps.io/api/httpexample?name=functions`
+`https://myacafunctionapp.kindtree-796af82b.eastus.azurecontainerapps.io/api/httpexample?name=functions`
::: zone-end ::: zone pivot="programming-language-csharp"
-`http://myacafunctionapp.kindtree-796af82b.eastus.azurecontainerapps.io/api/httpexample`
+`https://myacafunctionapp.kindtree-796af82b.eastus.azurecontainerapps.io/api/httpexample`
::: zone-end ## Clean up resources
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md
Title: Deployment technologies in Azure Functions
description: Learn the different ways you can deploy code to Azure Functions. Previously updated : 06/22/2023 Last updated : 03/29/2024 # Deployment technologies in Azure Functions
Each plan has different behaviors. Not all deployment technologies are available
| [External package URL](#external-package-url)<sup>1</sup> |✔|✔|✔|✔|✔|✔|
| [Zip deploy](#zip-deploy) |✔|✔|✔|✔|✔|✔|
| [Docker container](#docker-container) | | | | |✔|✔|
-| [Web Deploy](#web-deploy-msdeploy) |Γ£ö|Γ£ö|Γ£ö| | | |
| [Source control](#source-control) |✔|✔|✔| |✔|✔|
| [Local Git](#local-git)<sup>1</sup> |✔|✔|✔| |✔|✔|
| [FTPS](#ftps)<sup>1</sup> |✔|✔|✔| |✔|✔|
-| [In-portal editing](#portal-editing)<sup>2</sup> |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö<sup>3</sup>|Γ£ö<sup>3</sup>|
+| [In-portal editing](#portal-editing)<sup>2</sup> |✔|✔|✔|✔|✔|✔|
<sup>1</sup> Deployment technologies that require you to [manually sync triggers](#trigger-syncing) aren't recommended.
-<sup>2</sup> In-portal editing is disabled when code is deployed to your function app from outside the portal. For more information, including language support details for in-portal editing, see [Language support details](supported-languages.md#language-support-details).
-<sup>3</sup> In-portal editing is enabled only for HTTP and Timer triggered functions running on Linux in Premium and Dedicated plans.
+<sup>2</sup> In-portal editing is disabled when code is deployed to your function app from outside the portal. For more information, including language support details for in-portal editing, see [Language support details](supported-languages.md#language-support-details).
## Key concepts
Some key concepts are critical to understanding how deployments work in Azure Fu
### Trigger syncing
-When you change any of your triggers, the Functions infrastructure must be aware of the changes. Synchronization happens automatically for many deployment technologies. However, in some cases, you must manually sync your triggers. When you deploy your updates by referencing an external package URL, local Git, cloud sync, or FTP, you must manually sync your triggers. You can sync triggers in one of three ways:
+When you change any of your triggers, the Functions infrastructure must be aware of the changes. Synchronization happens automatically for many deployment technologies. However, in some cases, you must manually sync your triggers.
+
+You must manually sync triggers when using these deployment options:
+
++ [External package URL](#external-package-url)
++ [Local Git](#local-git)
++ [FTPS](#ftps)
+
+You can sync triggers in one of three ways:
+ Restart your function app in the Azure portal.
+ Send an HTTP POST request to `https://{functionappname}.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>` using the [master key](functions-bindings-http-webhook-trigger.md#authorization-keys).
+ Send an HTTP POST request to `https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01`. Replace the placeholders with your subscription ID, resource group name, and the name of your function app. This request requires an [access token](/rest/api/azure/#acquire-an-access-token) in the [`Authorization` request header](/rest/api/azure/#request-header).
-When you deploy using an external package URL, you need to manually restart your function app to fully sync your updates when the package changes without changing the URL.
+When you deploy using an external package URL, you need to manually restart your function app to fully sync your deployment when the package changes without changing the URL, which includes initial deployment.
+
+When your function app is secured by inbound network restrictions, the sync triggers endpoint can only be called from a client inside the virtual network.
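As a rough sketch only, the admin sync-triggers endpoint mentioned above can be called from any HTTP client. This C# console example assumes placeholder values for the function app name and master key, which you would replace with your own.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SyncTriggers
{
    static async Task Main()
    {
        // Replace the placeholders with your function app name and its master key.
        var url = "https://<FUNCTION_APP_NAME>.azurewebsites.net/admin/host/synctriggers?code=<MASTER_KEY>";

        using var client = new HttpClient();
        var response = await client.PostAsync(url, new StringContent(string.Empty));

        Console.WriteLine($"Sync triggers returned HTTP {(int)response.StatusCode}");
    }
}
```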
### Remote build
-Azure Functions can automatically perform builds on the code it receives after zip deployments. These builds behave slightly differently depending on whether your app is running on Windows or Linux.
+Azure Functions can automatically perform builds on the code it receives after zip deployments. These builds differ depending on whether your app is running on Windows or Linux.
#### [Windows](#tab/windows)
-All function apps running on Windows have a small management app, the SCM site provided by [Kudu](https://github.com/projectkudu/kudu). This site handles much of the deployment and build logic for Azure Functions.
+All function apps running on Windows have a small management app, the `scm` site provided by [Kudu](https://github.com/projectkudu/kudu). This site handles much of the deployment and build logic for Azure Functions.
When an app is deployed to Windows, language-specific commands, like `dotnet restore` (C#) or `npm install` (JavaScript) are run.
When apps are built remotely on Linux, they [run from the deployment package](ru
The following considerations apply when using remote builds during deployment:
-+ Remote builds are supported for function apps running on Linux in the Consumption plan, however they don't have an SCM/Kudu site, which limits deployment options.
-+ Function apps running on Linux a [Premium plan](functions-premium-plan.md) or in a [Dedicated (App Service) plan](dedicated-plan.md) do have an SCM/Kudu site, but it's limited compared to Windows.
-+ Remote builds aren't performed when an app has previously been set to run in [run-from-package](run-functions-from-deployment-package.md) mode. To learn how to use remote build in these cases, see [Zip deploy](#zip-deploy).
++ Remote builds are supported for function apps running on Linux in the Consumption plan. However, deployment options are limited for these apps because they don't have an `scm` (Kudu) site.
++ Function apps running on Linux in a [Premium plan](functions-premium-plan.md) or in a [Dedicated (App Service) plan](dedicated-plan.md) do have an `scm` (Kudu) site, but it's limited compared to Windows.
++ Remote builds aren't performed when an app is using [run-from-package](run-functions-from-deployment-package.md). To learn how to use remote build in these cases, see [Zip deploy](#zip-deploy).
+ You may have issues with remote build when your app was created before the feature was made available (August 1, 2019). For older apps, either create a new function app or run `az functionapp update --resource-group <RESOURCE_GROUP_NAME> --name <APP_NAME>` to update your function app. This command might take two tries to succeed.
### App content storage
You can use an external package URL to reference a remote package (.zip) file th
> >If you use Azure Blob storage, use a private container with a [shared access signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) to give Functions access to the package. Any time the application restarts, it fetches a copy of the content. Your reference must be valid for the lifetime of the application.
->__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan, if the user doesn't want a [remote build](#remote-build) to occur. When you update the package file that a function app references, you must [manually sync triggers](#trigger-syncing) to tell Azure that your application has changed. When you change the contents of the package file and not the URL itself, you must also restart your function app manually.
+>__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan, if the user doesn't want a [remote build](#remote-build) to occur. Whenever you deploy the package file that a function app references, you must [manually sync triggers](#trigger-syncing), including the initial deployment. When you change the contents of the package file and not the URL itself, you must also restart your function app to sync triggers.
>__Where app content is stored:__ App content is stored at the URL specified. This could be on Azure Blobs, possibly in the storage account specified by the `AzureWebJobsStorage` connection. Some client tools may default to deploying to a blob in this account. For example, for Linux Consumption apps, the Azure CLI will attempt to deploy through a package stored in a blob on the account specified by `AzureWebJobsStorage`.
You can deploy a function app running in a Linux container.
>__Where app content is stored:__ App content is stored in the specified container registry as a part of the image.
-### Web Deploy (MSDeploy)
-
-Web Deploy packages and deploys your Windows applications to any IIS server, including your function apps running on Windows in Azure.
-
->__How to use it:__ Use [Visual Studio tools for Azure Functions](functions-create-your-first-function-visual-studio.md). Clear the **Run from package file (recommended)** check box.
->
->You can also download [Web Deploy 3.6](https://www.iis.net/downloads/microsoft/web-deploy) and call `MSDeploy.exe` directly.
-
->__When to use it:__ Web Deploy is supported and has no issues, but the preferred mechanism is [zip deploy with Run From Package enabled](#zip-deploy). To learn more, see the [Visual Studio development guide](functions-develop-vs.md#publish-to-azure).
-
->__Where app content is stored:__ App content is stored on the file system, which may be backed by Azure Files from the storage account specified when the function app was created.
- ### Source control
-Use source control to connect your function app to a Git repository. An update to code in that repository triggers deployment. For more information, see the [Kudu Wiki](https://github.com/projectkudu/kudu/wiki/VSTS-vs-Kudu-deployments).
+You can enable continuous integration between your function app and a source code repository. With source control enabled, an update to code in the connected source repository triggers deployment of the latest code from the repository. For more information, see the [Continuous deployment for Azure Functions](functions-continuous-deployment.md).
->__How to use it:__ Use Deployment Center in the Functions area of the portal to set up publishing from source control. For more information, see [Continuous deployment for Azure Functions](functions-continuous-deployment.md).
+>__How to use it:__ The easiest way to set up publishing from source control is from the Deployment Center in the Functions area of the portal. For more information, see [Continuous deployment for Azure Functions](functions-continuous-deployment.md).
->__When to use it:__ Using source control is the best practice for teams that collaborate on their function apps. Source control is a good deployment option that enables more sophisticated deployment pipelines.
+>__When to use it:__ Using source control is the best practice for teams that collaborate on their function apps. Source control is a good deployment option that enables more sophisticated deployment pipelines. Source control is usually enabled on a staging slot, which can be swapped into production after validation of updates from the repository. For more information, see [Azure Functions deployment slots](functions-deployment-slots.md).
>__Where app content is stored:__ The app content is in the source control system, but locally cloned and built app content is stored on the app file system, which may be backed by Azure Files from the storage account specified when the function app was created.
The following table shows the operating systems and languages that support in-po
| Language | Windows Consumption | Windows Premium | Windows Dedicated | Linux Consumption | Linux Premium | Linux Dedicated |
|-|:--:|:--:|:--:|:--:|:--:|:--:|
-| C# | | | | | |
-| C# Script |Γ£ö|Γ£ö|Γ£ö| |Γ£ö<sup>\*</sup> |Γ£ö<sup>\*</sup>|
-| F# | | | | | | |
+| C#<sup>1</sup> | | | | | |
| Java | | | | | | |
-| JavaScript (Node.js) |Γ£ö|Γ£ö|Γ£ö| |Γ£ö<sup>1</sup>|Γ£ö<sup>1</sup>|
-| Python<sup>2</sup> | | | |Γ£ö |Γ£ö<sup>1</sup> |Γ£ö<sup>1</sup> |
+| JavaScript (Node.js) |✔|✔|✔| |✔|✔|
+| Python<sup>2</sup> | | | |✔ |✔ |✔ |
| PowerShell |✔|✔|✔| | | |
| TypeScript (Node.js) | | | | | | |
-<sup>1</sup> In-portal editing is enabled only for HTTP and Timer triggers for Functions on Linux using Premium and Dedicated plans.
+<sup>1</sup> In-portal editing is only supported for C# script files, which run in-process with the host. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md).
<sup>2</sup> In-portal editing is only supported for the [v1 Python programming model](functions-reference-python.md?pivots=python-mode-configuration). ## Deployment behaviors
If you need more control over this transition, you should use deployment slots.
## Deployment slots
-When you deploy your function app to Azure, you can deploy to a separate deployment slot instead of directly to production. For more information on deployment slots, see the [Azure Functions Deployment Slots](functions-deployment-slots.md) documentation for details.
+When you deploy your function app to Azure, you can deploy to a separate deployment slot instead of directly to production. Deploying to a deployment slot and then swapping into production after verification is the recommended way to configure [continuous deployment](./functions-continuous-deployment.md).
+
+The way that you deploy to a slot depends on the specific deployment tool you use. For example, when using Azure Functions Core Tools, you include the `--slot` option to indicate the name of a specific slot for the [`func azure functionapp publish`](./functions-core-tools-reference.md#func-azure-functionapp-publish) command.
+
+For more information on deployment slots, see the [Azure Functions Deployment Slots](functions-deployment-slots.md) documentation for details.
## Next steps
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
- devx-track-js - devx-track-python - ignite-2023 Previously updated : 03/06/2024 Last updated : 03/14/2024 zone_pivot_groups: programming-languages-set-functions #Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
The Azure Functions extension provides these benefits:
* Write your functions in various languages while taking advantage of the benefits of Visual Studio Code. ::: zone pivot="programming-language-csharp"
->You're viewing the C# version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+>You're viewing the C# version of this article. Make sure to select your preferred Functions programming language at the start of the article.
-If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-csharp.md).
+If you're new to Functions, you might want to first complete the [Visual Studio Code quickstart article](create-first-function-vs-code-csharp.md).
::: zone-end ::: zone pivot="programming-language-java"
->You're viewing the Java version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+>You're viewing the Java version of this article. Make sure to select your preferred Functions programming language at the start of the article.
-If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-java.md).
+If you're new to Functions, you might want to first complete the [Visual Studio Code quickstart article](create-first-function-vs-code-java.md).
::: zone-end ::: zone pivot="programming-language-javascript"
->You're viewing the JavaScript version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+>You're viewing the JavaScript version of this article. Make sure to select your preferred Functions programming language at the start of the article.
-If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-node.md).
+If you're new to Functions, you might want to first complete the [Visual Studio Code quickstart article](create-first-function-vs-code-node.md).
::: zone-end ::: zone pivot="programming-language-powershell"
->You're viewing the PowerShell version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+>You're viewing the PowerShell version of this article. Make sure to select your preferred Functions programming language at the start of the article.
-If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-powershell.md).
+If you're new to Functions, you might want to first complete the [Visual Studio Code quickstart article](create-first-function-vs-code-powershell.md).
::: zone-end ::: zone pivot="programming-language-python"
->You're viewing the Python version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+>You're viewing the Python version of this article. Make sure to select your preferred Functions programming language at the start of the article.
-If you want to get started right away, complete the [Visual Studio Code quickstart article](create-first-function-vs-code-python.md).
+If you're new to Functions, you might want to first complete the [Visual Studio Code quickstart article](create-first-function-vs-code-python.md).
::: zone-end ::: zone pivot="programming-language-typescript"
->You're viewing the TypeScript version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+>You're viewing the TypeScript version of this article. Make sure to select your preferred Functions programming language at the start of the article.
-If you want to get started right away, complete the [Visual Studio Code quickstart article](./create-first-function-vs-code-typescript.md).
+If you're new to Functions, you might want to first complete the [Visual Studio Code quickstart article](./create-first-function-vs-code-typescript.md).
::: zone-end > [!IMPORTANT]
If you want to get started right away, complete the [Visual Studio Code quicksta
* An active [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't yet have an account, you can create one from the extension in Visual Studio Code.
-### Run local requirements
+You also need these prerequisites to [run and debug your functions locally](#run-functions-locally). They aren't required to just create or publish projects to Azure Functions.
-These prerequisites are only required to [run and debug your functions locally](#run-functions-locally). They aren't required to create or publish projects to Azure Functions.
-
-+ The [Azure Functions Core Tools](functions-run-local.md), which enables an integrated local debugging experience. When you have the Azure Functions extension installed, the easiest way to install or update Core Tools is by running the `Azure Functions: Install or Update Azure Functions Core Tools` command from the command pallet.
++ The [Azure Functions Core Tools](functions-run-local.md), which enables an integrated local debugging experience. When you have the Azure Functions extension installed, the easiest way to install or update Core Tools is by running the `Azure Functions: Install or Update Azure Functions Core Tools` command from the command palette. ::: zone pivot="programming-language-csharp" + The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
These prerequisites are only required to [run and debug your functions locally](
## Create an Azure Functions project
-The Functions extension lets you create a function app project, along with your first function. The following steps show how to create an HTTP-triggered function in a new Functions project. [HTTP trigger](functions-bindings-http-webhook.md) is the simplest function trigger template to demonstrate.
+The Functions extension lets you create the required function app project at the same time you create your first function. Use these steps to create an HTTP-triggered function in a new project. An [HTTP trigger](functions-bindings-http-webhook.md) is the simplest function trigger template to demonstrate.
+
+1. In the Activity bar, select the Azure icon. In the **Workspace (Local)** area, open the **+** list, and select **Create Function**.
-1. choose the Azure icon in the Activity bar, then in the **Workspace (local)** area, select the **+** button, choose **Create Function** in the dropdown. When prompted, choose **Create new project**.
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::
- :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::
+1. When prompted, select **Create new project**. Select the directory location for your project workspace, and then choose **Select**.
-1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
+ You can either create a new folder or choose an empty folder for the project workspace, but don't choose a project folder that's already part of a workspace.
-1. When prompted, **Select a language** for your project, and if necessary choose a specific language version.
+1. When prompted, **Select a language** for your project. If necessary, choose a specific language version.
1. Select the **HTTP trigger** function template, or you can select **Skip for now** to create a project without a function. You can always [add a function to your project](#add-a-function-to-your-project) later.
- :::image type="content" source="./media/functions-develop-vs-code/select-http-trigger.png" alt-text="Screenshot for selecting H T T P trigger.":::
+ :::image type="content" source="./media/functions-develop-vs-code/select-http-trigger.png" alt-text="Screenshot for selecting HTTP trigger.":::
- > [!TIP]
- > You can view additional templates by selecting the `Change template filter` option and setting it to "Core" or "All".
+ > [!TIP]
+ > You can view additional templates by selecting the **Change template filter** option and setting the value to **Core** or **All**.
-1. Type **HttpExample** for the function name and select Enter, and then select **Function** authorization. This authorization level requires you to provide a [function key](functions-bindings-http-webhook-trigger.md#authorization-keys) when you call the function endpoint.
+1. For the function name, enter **HttpExample**, select Enter, and then select **Function** authorization.
- :::image type="content" source="./media/functions-develop-vs-code/create-function-auth.png" alt-text="Screenshot for creating function authorization.":::
+ This authorization level requires that you provide a [function key](functions-bindings-http-webhook-trigger.md#authorization-keys) when you call the function endpoint.
+
+ :::image type="content" source="./media/functions-develop-vs-code/create-function-auth.png" alt-text="Screenshot for creating function authorization.":::
1. From the dropdown list, select **Add to workspace**.
- :::image type="content" source="./media/functions-develop-vs-code/add-to-workplace.png" alt-text=" Screenshot for selectIng Add to workplace.":::
+ :::image type="content" source="./media/functions-develop-vs-code/add-to-workplace.png" alt-text=" Screenshot for selectIng Add to workplace.":::
-1. In **Do you trust the authors of the files in this folder?** window, select **Yes**.
+1. In the **Do you trust the authors of the files in this folder?** window, select **Yes**.
- :::image type="content" source="./media/functions-develop-vs-code/select-author-file.png" alt-text="Screenshot to confirm trust in authors of the files.":::
+ :::image type="content" source="./media/functions-develop-vs-code/select-author-file.png" alt-text="Screenshot to confirm trust in authors of the files.":::
-1. A function is created in your chosen language and in the template for an HTTP-triggered function.
+Visual Studio Code creates a function in your chosen language and in the template for an HTTP-triggered function.
### Generated project files
-The project template creates a project in your chosen language and installs required dependencies. For any language, the new project has these files:
+The project template creates a project in your chosen language and installs the required dependencies. For any language, the new project has these files:
* **host.json**: Lets you configure the Functions host. These settings apply when you're running functions locally and when you're running them in Azure. For more information, see [host.json reference](functions-host-json.md).
-* **local.settings.json**: Maintains settings used when you're running functions locally. These settings are used only when you're running functions locally. For more information, see [Local settings file](#local-settings).
+* **local.settings.json**: Maintains settings used when you're locally running functions. These settings are used only when you're running functions locally. For more information, see [Local settings file](#local-settings).
- >[!IMPORTANT]
- >Because the local.settings.json file can contain secrets, you need to exclude it from your project source control.
+ > [!IMPORTANT]
+ > Because the **local.settings.json** file can contain secrets, make sure to exclude the file from your project source control.
Depending on your language, these other files are created:
Files generated depend on the chosen Python programming model for Functions:
::: zone-end
-At this point, you can do one of these tasks:
-
-+ [Add input or output bindings to an existing function](#add-input-and-output-bindings).
-+ [Add a new function to your project](#add-a-function-to-your-project).
-+ [Run your functions locally](#run-functions-locally).
-+ [Publish your project to Azure](#publish-to-azure).
+At this point, you're able to [run your HTTP trigger function locally](#run-functions-locally).
## Add a function to your project
-You can add a new function to an existing project based on one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
+You can add a new function to an existing project based on one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then find and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get that item ready before you create the function trigger.
-The results of this action are that a new C# class library (.cs) file is added to your project.
+This action adds a new C# class library (.cs) file to your project.
::: zone-end ::: zone pivot="programming-language-java"
-The results of this action are that a new Java (.java) file is added to your project.
+This action adds a new Java (.java) file to your project.
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-typescript"
-The results of this action depend on the Node.js model version.
+This action's results depend on the Node.js model version.
### [v4](#tab/node-v4)
The results of this action depend on the Node.js model version.
### [v3](#tab/node-v3)
-A new folder is created in the project. The folder contains a new function.json file and the new JavaScript code file.
+Visual Studio Code creates a new folder in the project. The folder contains a new **function.json** file and the new JavaScript code file.
::: zone-end ::: zone pivot="programming-language-powershell"
-The results of this action are that a new folder is created in the project. The folder contains a new function.json file and the new PowerShell code file.
+This action creates a new folder in the project. The folder contains a new **function.json** file and the new PowerShell code file.
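As an illustration only (not the literal generated output, which can vary by template and runtime version), the **function.json** file for an HTTP-triggered PowerShell function might look like the following sketch:

```json
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "Response"
    }
  ]
}
```

The binding `name` values shown here (`Request` and `Response`) are the names the PowerShell script typically uses to read the trigger input and to write the HTTP response with `Push-OutputBinding`.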
::: zone-end ::: zone pivot="programming-language-python"
-The results of this action depend on the Python model version.
+This action's results depend on the Python model version.
### [v2](#tab/python-v2)
-New function code is added either to the function_app.py file (the default behavior) or to another Python file you selected.
+Visual Studio Code adds new function code either to the **function_app.py** file (default behavior) or to another Python file that you selected.
### [v1](#tab/python-v1)
-A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
+Visual Studio Code creates a new folder in the project. The folder contains a new **function.json** file and the new Python code file.
::: zone-end
A new folder is created in the project. The folder contains a new function.json
You can connect your function to other Azure services by adding input and output bindings. Bindings connect your function to other services without you having to write the connection code.
-For example, the way you define an output binding that writes data to a storage queue depends on your process model:
+For example, the way that you define an output binding that writes data to a storage queue depends on your process model:
### [Isolated process](#tab/isolated-process)
-Update the function method to add a binding parameter defined by using the `QueueOutput` attribute. You can use a `MultiResponse` object to return multiple messages or multiple output streams.
+1. If necessary, [add a reference to the package that supports your binding extension](#install-binding-extensions).
+
+1. Update the function method to add an attribute that defines the binding parameter, like `QueueOutput` for a queue output binding. You can use a `MultiResponse` object to return multiple messages or multiple output streams.
### [In-process](#tab/in-process)
-Update the function method to add a binding parameter defined by using the `Queue` attribute. You can use an `ICollector<T>` type to represent a collection of messages.
+1. If necessary, [add a reference to the package that supports your binding extension](#install-binding-extensions).
+
+1. Update the function method to add an attribute that defines the binding parameter, such as `Queue` for a Queue binding. You can use an `ICollector<T>` type to represent a collection of messages.
For example, to add an output binding that writes data to a storage queue you update the function method to add a binding parameter defined by using the [`QueueOutput`](/java/api/com.microsoft.azure.functions.annotation.queueoutput) annotation. The [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding) object represents the messages that are written to an output binding when the function completes. ::: zone pivot="programming-language-javascript"
-For example, the way you define the output binding that writes data to a storage queue depends on your Node.js model version:
+For example, the way that you define the output binding that writes data to a storage queue depends on your Node.js model version:
### [v4](#tab/node-v4)
-Using the Node.js v4 model, you must manually add a `return:` option in the function definition using the `storageQueue` function on the `output` object, which defines the storage queue to write the `return` output. Output is written when the function completes.
+Using the Node.js v4 model, you must manually add a `return:` option in the function definition using the `storageQueue` function on the `output` object, which defines the storage queue to write the `return` output. The output is written when the function completes.
### [v3](#tab/node-v3)
Using the Node.js v4 model, you must manually add a `return:` option in the func
::: zone pivot="programming-language-powershell" [!INCLUDE [functions-add-output-binding-vs-code](../../includes/functions-add-output-binding-vs-code.md)] ::: zone-end+ ::: zone pivot="programming-language-python" For example, the way you define the output binding that writes data to a storage queue depends on your Python model version:
The `@queue_output` decorator on the function is used to define a named binding
::: zone-end+ [!INCLUDE [functions-add-output-binding-example-all-langs](../../includes/functions-add-output-binding-example-all-languages.md)] [!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)] ## <a name="publish-to-azure"></a>Create Azure resources
-Before you can publish your Functions project to Azure, you must have a function app and related resources in your Azure subscription to run your code. The function app provides an execution context for your functions. When you publish to a function app in Azure from Visual Studio Code, the project is packaged and deployed to the selected function app in your Azure subscription.
+Before you can publish your Functions project to Azure, you must have a function app and related resources in your Azure subscription to run your code. The function app provides an execution context for your functions. When you publish from Visual Studio Code to a function app in Azure, the project is packaged and deployed to the selected function app in your Azure subscription.
-When you create a function app in Azure, you can choose either a quick function app create path using defaults or an advanced path. This way you have more control over the remote resources created.
+When you create a function app in Azure, you can choose either a quick function app create path that uses defaults or an advanced path. The advanced path gives you more control over how the remote resources are created.
### Quick function app create
When you create a function app in Azure, you can choose either a quick function
The following steps publish your project to a new function app created with advanced create options:
-1. In the command pallet, enter **Azure Functions: Create function app in Azure...(Advanced)**.
+1. In the command palette, enter **Azure Functions: Create function app in Azure...(Advanced)**.
1. If you're not signed in, you're prompted to **Sign in to Azure**. You can also **Create a free Azure account**. After signing in from the browser, go back to Visual Studio Code. 1. Following the prompts, provide this information:
- | Prompt | Selection |
- | | -- |
- | Enter a globally unique name for the new function app. | Type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. |
- | Select a runtime stack. | Choose the language version on which you've been running locally. |
- | Select an OS. | Choose either Linux or Windows. Python apps must run on Linux. |
- | Select a resource group for new resources. | Choose **Create new resource group** and type a resource group name, like `myResourceGroup`, and then select enter. You can also select an existing resource group. |
- | Select a location for new resources. | Select a location in a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
- | Select a hosting plan. | Choose **Consumption** for serverless [Consumption plan hosting](consumption-plan.md), where you're only charged when your functions run. |
- | Select a storage account. | Choose **Create new storage account** and at the prompt, type a globally unique name for the new storage account used by your function app and then select Enter. Storage account names must be between 3 and 24 characters long and can contain only numbers and lowercase letters. You can also select an existing account. |
- | Select an Application Insights resource for your app. | Choose **Create new Application Insights resource** and at the prompt, type a name for the instance used to store runtime data from your functions.|
+ | Prompt | Selection |
+ | | |
+ | Enter a globally unique name for the new function app. | Type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. |
+   | Select a runtime stack. | Choose the language version that you've been running locally. |
+ | Select an OS. | Choose either Linux or Windows. Python apps must run on Linux. |
+ | Select a resource group for new resources. | Choose **Create new resource group**, and enter a resource group name such as **myResourceGroup**. You can also select an existing resource group. |
+ | Select a location for new resources. | Select a location in a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
+ | Select a hosting plan. | Choose **Consumption** for serverless [Consumption plan hosting](consumption-plan.md), where you're charged only when your functions run. |
+ | Select a storage account. | Choose **Create new storage account**, and at the prompt, enter a globally unique name for the new storage account used by your function app. Storage account names must be between 3 and 24 characters long and can contain only numbers and lowercase letters. You can also select an existing account. |
+ | Select an Application Insights resource for your app. | Choose **Create new Application Insights resource**, and at the prompt, enter a name for the instance used to store runtime data from your functions. |
- A notification appears after your function app is created and the deployment package is applied. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created.
+ A notification appears after your function app is created, and the deployment package is applied. To view the creation and deployment results, including the Azure resources that you created, select **View Output** in this notification.
### <a name="get-the-url-of-the-deployed-function"></a>Get the URL of an HTTP triggered function in Azure
-To call an HTTP-triggered function from a client, you need the URL of the function when it's deployed to your function app. This URL includes any required function keys. You can use the extension to get these URLs for your deployed functions. If you just want to run the remote function in Azure, [use the Execute function now](#run-functions-in-azure) functionality of the extension.
+To call an HTTP-triggered function from a client, you need the function's URL, which is available after deployment to your function app. This URL includes any required function keys. You can use the extension to get these URLs for your deployed functions. If you just want to run the remote function in Azure, [use the Execute function now](#run-functions-in-azure) functionality of the extension.
-1. Select F1 to open the command palette, and then search for and run the command **Azure Functions: Copy Function URL**.
+1. Select F1 to open the command palette, and then find and run the command **Azure Functions: Copy Function URL**.
1. Follow the prompts to select your function app in Azure and then the specific HTTP trigger that you want to invoke.
-The function URL is copied to the clipboard, along with any required keys passed by the `code` query parameter. Use an HTTP tool to submit POST requests, or a browser for GET requests to the remote function.
+The function URL is copied to the clipboard, along with any required keys passed by the `code` query parameter. Use an HTTP tool to submit POST requests, or a browser to submit GET requests to the remote function.
-When the extension gets the URL of functions in Azure, it uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
+When the extension gets the URL of a function in Azure, the extension uses your Azure account to automatically retrieve the keys needed to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
## <a name="republish-project-files"></a>Deploy project files
-We recommend setting-up [continuous deployment](functions-continuous-deployment.md) so that your function app in Azure is updated when you update source files in the connected source location. You can also deploy your project files from Visual Studio Code.
-
-When you publish from Visual Studio Code, you take advantage of the [Zip deploy](functions-deployment-technologies.md#zip-deploy) technology.
+We recommend setting up [continuous deployment](functions-continuous-deployment.md) so that your function app in Azure is updated when you update source files in the connected source location. You can also deploy your project files from Visual Studio Code. When you publish from Visual Studio Code, you can take advantage of the [Zip deploy technology](functions-deployment-technologies.md#zip-deploy).
[!INCLUDE [functions-deploy-project-vs-code](../../includes/functions-deploy-project-vs-code.md)]
When you publish from Visual Studio Code, you take advantage of the [Zip deploy]
The Azure Functions extension lets you run individual functions. You can run functions either in your project on your local development computer or in your Azure subscription.
-For HTTP trigger functions, the extension calls the HTTP endpoint. For other kinds of triggers, it calls administrator APIs to start the function. The message body of the request sent to the function depends on the type of trigger. When a trigger requires test data, you're prompted to enter data in a specific JSON format.
+For HTTP trigger functions, the extension calls the HTTP endpoint. For other kinds of triggers, the extension calls administrator APIs to start the function. The message body of the request sent to the function depends on the trigger type. When a trigger requires test data, you're prompted to enter data in a specific JSON format.
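For instance, for a function created from the default HTTP trigger template, a request body like this minimal sketch is usually sufficient (the `name` property is an assumption based on that template, not a requirement of the extension):

```json
{
  "name": "Azure"
}
```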
### Run functions in Azure
-To execute a function in Azure from Visual Studio Code.
+To execute a function in Azure from Visual Studio Code, follow these steps:
+
+1. In the command palette, enter **Azure Functions: Execute function now**, and select your Azure subscription.
-1. In the command pallet, enter **Azure Functions: Execute function now** and choose your Azure subscription.
+1. From the list, choose your function app in Azure. If you don't see your function app, make sure you're signed in to the correct subscription.
-1. Choose your function app in Azure from the list. If you don't see your function app, make sure you're signed in to the correct subscription.
+1. From the list, choose the function that you want to run. In **Enter request body**, type the message body of the request, and press Enter to send this request message to your function.
-1. Choose the function you want to run from the list and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, a notification error is shown with this error.
+   The default text in **Enter request body** indicates the body's format. If your function app has no functions, an error notification appears.
-1. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
+ When the function executes in Azure and returns a response, Visual Studio Code shows a notification.
-You can also run your function from the **Azure: Functions** area by right-clicking (Ctrl-clicking on Mac) the function you want to run from your function app in your Azure subscription and choosing **Execute Function Now...**.
+You can also run your function from the **Azure: Functions** area by opening the shortcut menu for the function that you want to run from your function app in your Azure subscription, and then selecting **Execute Function Now...**.
-When you run your functions in Azure from Visual Studio Code, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
+When you run your functions in Azure from Visual Studio Code, the extension uses your Azure account to automatically retrieve the keys needed to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
### Run functions locally
-The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings). To run your Functions project locally, you must meet [more requirements](#run-local-requirements).
+The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings). To run your Functions project locally, you must meet [more requirements](#prerequisites).
#### Configure the project to run locally
To debug your functions, select F5. If [Core Tools][Azure Functions Core Tools]
When the project is running, you can use the **Execute Function Now...** feature of the extension to trigger your functions as you would when the project is deployed to Azure. With the project running in debug mode, breakpoints are hit in Visual Studio Code as you would expect.
-1. In the command pallet, enter **Azure Functions: Execute function now** and choose **Local project**.
+1. In the command palette, enter **Azure Functions: Execute function now** and choose **Local project**.
1. Choose the function you want to run in your project and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** should indicate the format of the body. If your function app has no functions, an error notification appears.
C# script uses [extension bundles](functions-bindings-register.md#extension-bund
If for some reason you can't use an extension bundle to install binding extensions for your project, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions). ::: zone-end - ## Monitoring functions When you [run functions locally](#run-functions-locally), log data is streamed to the Terminal console. You can also get log data when your Functions project is running in a function app in Azure. You can connect to streaming logs in Azure to see near-real-time log data. You should enable Application Insights for a more complete understanding of how your function app is behaving. ### Streaming logs
-When you're developing an application, it's often useful to see logging information in near-real time. You can view a stream of log files being generated by your functions. This output is an example of streaming logs for a request to an HTTP-triggered function:
+When you're developing an application, it's often useful to see logging information in near-real time. You can view a stream of log files being generated by your functions. Turn on logs from the command palette with the `Azure Functions: Start streaming logs` command. This output is an example of streaming logs for a request to an HTTP-triggered function:
:::image type="content" source="media/functions-develop-vs-code/streaming-logs-vscode-console.png" alt-text="Screenshot for streaming logs output for H T T P trigger."::: To learn more, see [Streaming logs](functions-monitoring.md?tabs=vs-code#streaming-logs). ### Application Insights
You should monitor the execution of your functions by integrating your function
To learn more about monitoring using Application Insights, see [Monitor Azure Functions](functions-monitoring.md).
-### Enable emulation in Visual Studio Code
-
-Now that you've configured the Terminal with Rosetta to run x86 emulation for Python development, you can use the following steps to integrate this terminal emulation with Visual Studio Code:
-
-1. Open the Command Palette by pressing Cmd+Shift+P, select **Preferences: Open Settings (JSON)**, and add the following JSON to your configuration:
-
- ```json
- "terminal.integrated.profiles.osx": {
- "rosetta": {
- "path": "arch",
- "args": ["-x86_64", "zsh", "-l"],
- "overrideName": true
- }
- }
- ```
-1. Open a new Terminal and choose **rosetta**.
-
- ![Screenshot of starting a new Rosetta terminal in Visual Studio Code.](./media/functions-develop-vs-code/vs-code-rosetta.png)
-- ::: zone pivot="programming-language-csharp" ## C\# script projects
azure-functions Functions How To Azure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-azure-devops.md
Title: Continuously update function app code using Azure Pipelines
description: Learn how to set up an Azure DevOps pipeline that targets Azure Functions. Previously updated : 05/15/2023 Last updated : 03/23/2024 ms.devlang: azurecli
Choose your task version at the top of the article. YAML pipelines aren't availa
## Prerequisites
-* A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com).
- * An Azure DevOps organization. If you don't have one, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up). If your team already has one, then make sure you're an administrator of the Azure DevOps project that you want to use. * An ability to run pipelines on Microsoft-hosted agents. You can either purchase a [parallel job](/azure/devops/pipelines/licensing/concurrent-jobs) or you can request a free tier.
-* A function app with its code in a GitHub repository. If you don't yet have an Azure Functions code project, you can create one by completing the following language-specific article:
- # [C\#](#tab/csharp)
+* If you plan to use GitHub instead of Azure Repos, you also need a GitHub repository. If you don't have a GitHub account, you can [create one for free](https://github.com).
+
+* An existing function app in Azure that has its source code in a supported repository. If you don't yet have an Azure Functions code project, you can create one by completing the following language-specific article:
+ ### [C\#](#tab/csharp)
[Quickstart: Create a C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md)
- # [JavaScript](#tab/javascript)
+ ### [JavaScript](#tab/javascript)
[Quickstart: Create a JavaScript function in Azure using Visual Studio Code](create-first-function-vs-code-node.md)
- # [Python](#tab/python)
+ ### [Python](#tab/python)
[Quickstart: Create a function in Azure with Python using Visual Studio Code](create-first-function-vs-code-python.md)
- # [PowerShell](#tab/powershell)
+ ### [PowerShell](#tab/powershell)
[Quickstart: Create a PowerShell function in Azure using Visual Studio Code](create-first-function-vs-code-powershell.md)
+
+   Remember to upload the local code project to your GitHub or Azure Repos repository after you publish it to your function app.
::: zone pivot="v1"
Choose your task version at the top of the article. YAML pipelines aren't availa
# [YAML](#tab/yaml) 1. Sign in to your Azure DevOps organization and navigate to your project.
-1. In your project, navigate to the **Pipelines** page. Then choose the action to create a new pipeline.
-1. Walk through the steps of the wizard by first selecting **GitHub** as the location of your source code.
-1. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
+1. In your project, navigate to the **Pipelines** page. Then select **New pipeline**.
+1. Select one of these options for **Where is your code?**:
+   + **GitHub**: You might be redirected to GitHub to sign in. If so, enter your GitHub credentials. If this is your first connection to GitHub, the wizard also walks you through the process of connecting DevOps to your GitHub account.
+   + **Azure Repos Git**: You can immediately choose a repository in your current DevOps project.
1. When the list of repositories appears, select your sample app repository.
-1. Azure Pipelines will analyze your repository and recommend a template. Select **Save and run**, then select **Commit directly to the main branch**, and then choose **Save and run** again.
+1. Azure Pipelines analyzes your repository and, in **Configure your pipeline**, provides a list of potential templates. Choose the appropriate **function app** template for your language. If you don't see the correct template, select **Show more**.
+1. Select **Save and run**, then select **Commit directly to the main branch**, and then choose **Save and run** again.
1. A new run is started. Wait for the run to finish. # [Classic](#tab/classic)
steps:
artifactName: 'drop' ```
+To learn about potential issues with these pipeline tasks, see [Functions not found after deployment](recover-python-functions.md#functions-not-found-after-deployment).
+ # [PowerShell](#tab/powershell) You can use the following sample to create a YAML file to package a PowerShell app. PowerShell is supported only for Windows Azure Functions.
steps:
artifactName: 'drop' ```
+Check the generated archive to ensure that the deployed file has the right format.
+To learn about potential issues with these pipeline tasks, see [Functions not found after deployment](recover-python-functions.md#functions-not-found-after-deployment).
+ # [PowerShell](#tab/powershell) You can use the following sample to create a YAML file to package a PowerShell app. PowerShell is supported only for Windows Azure Functions.
azure-functions Functions How To Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-custom-container.md
You can enable Azure Functions to automatically update your deployment of an ima
:::image type="content" source="./media/functions-create-function-linux-custom-image/dockerhub-set-continuous-webhook.png" alt-text="Screenshot showing how to add the webhook in your Docker Hub window."::: 1. With the webhook set, Azure Functions redeploys your image whenever you update it in Docker Hub. ## Enable SSH connections
SSH enables secure communication between a container and a client. With SSH enab
1. After a connection is established with your container, run the `top` command to view the currently running processes. :::image type="content" source="media/functions-create-function-linux-custom-image/linux-custom-kudu-ssh-top.png" alt-text="Screenshot that shows Linux top command running in an SSH session.":::
+<!-- For when we support connecting to the container console -->
## Next steps
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
Title: Use GitHub Actions to make code updates in Azure Functions description: Learn how to use GitHub Actions to define a workflow to build and deploy Azure Functions projects in GitHub. Previously updated : 05/16/2023 Last updated : 03/16/2024 zone_pivot_groups: github-actions-deployment-options
You can get started quickly with GitHub Actions through the Deployment tab when
### For an existing function app
-You can also add GitHub Actions to an existing function app. To add a GitHub Actions workflow to an existing function app:
-
-1. Navigate to your function app in the Azure portal.
-
-1. Select **Deployment Center**.
-
-1. Under Continuous Deployment (CI / CD), select **GitHub**. You see a default message, *Building with GitHub Actions*.
-
-1. Enter your GitHub organization, repository, and branch.
-
-1. Select **Preview file** to see the workflow file that will be added to your GitHub repository in `github/workflows/`.
-
-1. Select **Save** to add the workflow file to your repository.
::: zone-end ::: zone pivot="method-cli"
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Title: Automate function app resource deployment to Azure
description: Learn how to build, validate, and use a Bicep file or an Azure Resource Manager template to deploy your function app and related Azure resources. ms.assetid: d20743e3-aab6-442c-a836-9bcea09bfd32 Previously updated : 01/31/2024 Last updated : 04/01/2024 zone_pivot_groups: functions-hosting-plan
Keep the following things in mind when including zip deployment resources in you
+ Consumption plans on Linux don't support `WEBSITE_RUN_FROM_PACKAGE = 1`. You must instead set the URI of the deployment package directly in the `WEBSITE_RUN_FROM_PACKAGE` setting. For more information, see [WEBSITE\_RUN\_FROM\_PACKAGE](functions-app-settings.md#website_run_from_package). For an example template, see [Function app hosted on Linux in a Consumption plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption). :::zone-end :::zone pivot="dedicated-plan,premium-plan,consumption-plan"
-+ The `packageUri` must be a location that can be accessed by Functions. Consider using Azure blob storage with a shared access signature (SAS). After the SAS expires, Functions can no longer access the share for deployments. When you regenerate your SAS, remember to update the `WEBSITE_RUN_FROM_PACKAGE` setting with the new URI value.
++ The `packageUri` must be a location that can be accessed by Functions. Consider using Azure blob storage with a shared access signature (SAS). After the SAS expires, Functions can no longer access the share for deployments. When you regenerate your SAS, remember to update the `WEBSITE_RUN_FROM_PACKAGE` setting with the new URI value.
++ When setting `WEBSITE_RUN_FROM_PACKAGE` to a URI, you must [manually sync triggers](functions-deployment-technologies.md#trigger-syncing).
+ Make sure to always set all required application settings in the `appSettings` collection when adding or updating settings. Existing settings not explicitly set are removed by the update. For more information, see [Application configuration](#application-configuration).
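As a hedged illustration of how such a setting might appear in the site resource of an ARM template, consider the following sketch; the storage account, container, blob name, and SAS token are placeholders:

```json
{
  "properties": {
    "siteConfig": {
      "appSettings": [
        {
          "name": "WEBSITE_RUN_FROM_PACKAGE",
          "value": "https://mystorageaccount.blob.core.windows.net/deployments/functionapp.zip?<sas-token>"
        }
      ]
    }
  }
}
```

In a real template, this array would also carry every other required setting, because settings that aren't explicitly listed are removed by the update.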
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
Billing for the Premium plan is based on the number of core seconds and memory a
When you create a function app in the Azure portal, the Consumption plan is the default. To create a function app that runs in a Premium plan, you must explicitly create or choose an Azure Functions Premium hosting plan using one of the _Elastic Premium_ SKUs. The function app you create is then hosted in this plan. The Azure portal makes it easy to create both the Premium plan and the function app at the same time. You can run more than one function app in the same Premium plan, but they must both run on the same operating system (Windows or Linux).
-The following articles show you how to create a function app with a Premium plan, either programmatically or in the Azure portal:
+The following articles show you how to programmatically create a function app with a Premium plan:
-+ [Azure portal](create-premium-plan-function-app-portal.md)
+ [Azure CLI](scripts/functions-cli-create-premium-plan.md) + [Azure Resource Manager template](functions-infrastructure-as-code.md?pivots=premium-plan)
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
The following considerations apply to Core Tools installations:
+ Version 1.x of Core Tools is required when using version 1.x of the Functions Runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today. ::: zone-end
-When using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code).
- ## Next steps Learn how to [develop, test, and publish Azure functions by using Azure Functions core tools](/training/modules/develop-test-deploy-azure-functions-with-core-tools/). Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli). To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues).
azure-functions Monitor Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions-reference.md
See [Monitor executions in Azure Functions](functions-monitoring.md) for details
[!INCLUDE [horz-monitor-ref-metrics-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-intro.md)]
-There are two metrics specific to Functions that are of interest:
+There are two metrics that are of specific interest to function apps:
| Metric | Description | | - | - |
There are two metrics specific to Functions that are of interest:
These metrics are used specifically when [estimating Consumption plan costs](functions-consumption-costs.md). ### Supported metrics for Microsoft.Web/sites
-The following table lists the metrics available for the Microsoft.Web/sites resource type. Most of these metrics apply to both FunctionApps and WebApps.
+
+The following table lists the metrics available for the Microsoft.Web/sites resource type. Most of these metrics apply to both function apps and web apps, which both run on App Service.
+
+>[!NOTE]
+>These metrics aren't available when your function app runs on Linux in a [Consumption plan](./consumption-plan.md).
[!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] [!INCLUDE [Microsoft.Web/sites](~/azure-reference-other-repo/azure-monitor-ref/supported-metrics/includes/microsoft-web-sites-metrics-include.md)]
azure-functions Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions.md
For more information about the resource types for Azure Functions, see [Azure Fu
[!INCLUDE [horz-monitor-platform-metrics](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-platform-metrics.md)] For a list of available metrics for Azure Functions, see [Azure Functions monitoring data reference](monitor-functions-reference.md#metrics).
+>[!NOTE]
+>App Service metrics (Microsoft.Web/sites) aren't available when your function app runs on Linux in a [Consumption plan](./consumption-plan.md).
+ [!INCLUDE [horz-monitor-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-resource-logs.md)] Azure Functions integrates with Azure Monitor Logs to monitor functions. For detailed instructions on how to set up diagnostic settings to configure and route resource logs, see [Create diagnostic settings in Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings).
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
This error is a result of how extensions are loaded from the bundle locally. To
* Create a storage account and add a connection string to the `AzureWebJobsStorage` environment variable in the *localsettings.json* file. Use this option when you're using a storage account trigger or binding with your application, or if you have an existing storage account. To get started, see [Create a storage account](../storage/common/storage-account-create.md). ::: zone-end
+## Functions not found after deployment
+
+There are several common build issues that can cause Python functions to not be found by the host after an apparently successful deployment:
+
+* The agent pool must be running on Ubuntu to guarantee that packages are restored correctly from the build step. Make sure your deployment template requires an Ubuntu environment for build and deployment.
+
+* When the function app isn't at the root of the source repo, make sure that the `pip install` step references the correct location in which to create the `.python_packages` folder. Keep in mind that this location is case sensitive, as shown in this command example:
+
+ ```
+ pip install --target="./FunctionApp1/.python_packages/lib/site-packages" -r ./FunctionApp1/requirements.txt
+ ```
+
+* The template must generate a deployment package that can be loaded into `/home/site/wwwroot`. In Azure Pipelines, this is done by the `ArchiveFiles` task.
+ ## Development issues in the Azure portal When using the [Azure portal](https://portal.azure.com/), take into account these known issues and their workarounds:
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Application Gateway](../../application-gateway/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Automation](../../automation/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Microsoft Entra ID (Free)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Microsoft Entra ID (P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Microsoft Entra ID (P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Entra multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
azure-maps Power Bi Visual Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-data-residency.md
The Azure Maps Power BI visual can get a users' tenant location and use it to call the correct Azure Maps geographic endpoints. For instance, if a user's tenant is located in Europe, Power BI calls the Azure Maps' `eu` endpoint `eu.atlas.microsoft.com`, ensuring that their data doesn't leave the Europe region. Similarly if users' tenant is in the US, `us.atlas.microsoft.com` is called and users' data doesn't leave the US region.
-## Tenent location
+## Tenant location
To discover your tenant's location in Power BI:
To discover your tenant's location in Power BI:
:::image type="content" source="media/power-bi-visual/help-menu.png" alt-text="Screenshot showing the help menu in Power BI."::: 1. Select **About Power BI**
-1. Once the **About Power BI** dialog box opens, notice the **your data is stored in** followed by the tenent location, which is, in this example, Ireland.
+1. Once the **About Power BI** dialog box opens, notice the **your data is stored in** text, followed by the tenant location, which in this example is Ireland.
   :::image type="content" source="media/power-bi-visual/about-power-bi.png" alt-text="Screenshot showing the About Power BI dialog box.":::
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
description: This article provides guidance for migrating from the existing lega
-+ Last updated 7/19/2023 # Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
# Migrate to Azure Monitor Agent from Log Analytics agent
-[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for Windows and Linux machines, in Azure and non-Azure environments, including on-premises and third-party clouds. The agent introduces a simplified, flexible method of configuring data collection using [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). This article provides guidance on how to implement a successful migration from the Log Analytics agent to Azure Monitor Agent.
+[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as the Microsoft Monitoring Agent (MMA) and OMS) for Windows and Linux machines, in Azure and non-Azure environments, including on-premises and third-party clouds. The agent introduces a simplified, flexible method of configuring data collection using [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article provides guidance on how to implement a successful migration from the Log Analytics agent to Azure Monitor Agent.
-If you're currently using the Log Analytics agent with Azure Monitor or [other supported features and services](#migrate-additional-services-and-features), start planning your migration to Azure Monitor Agent by using the information in this article. If you are using the Log Analytics Agent for SCOM you will need to [migrate to the SCOM Agent](../vm/scom-managed-instance-overview.md)
+If you're currently using the Log Analytics agent with Azure Monitor or [other supported features and services](#migrate-additional-services-and-features), start planning your migration to Azure Monitor Agent by using the information in this article. If you are using the Log Analytics Agent for SCOM, you need to [migrate to the SCOM Agent](../vm/scom-managed-instance-overview.md).
The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). You can expect the following when you use the MMA or OMS agent after this date. > - **Data upload**: You can still upload data. At some point when major customer have finished migrating and data volumes significantly drop, upload will be suspended. You can expect this to take at least 6 to 9 months. You will not receive a breaking change notification of the suspension.
In addition to consolidating and improving on the legacy Log Analytics agents, A
## Migration guidance
-Before you begin migrating from the Log Analytics agent to Azure Monitor Agent, review the checklist below.
+Before you begin migrating from the Log Analytics agent to Azure Monitor Agent, review the checklist.
### Before you begin > [!div class="checklist"] > - **Check the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for installing Azure Monitor Agent.**<br>To monitor non-Azure and on-premises servers, you must [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md). The Arc agent makes your on-premises servers visible to Azure as resources that it can target. You won't incur any additional cost for installing the Azure Arc agent. > - **Understand your current needs.**<br>Use the **Workspace overview** tab of the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to see connected agents and discover solutions enabled on your Log Analytics workspaces that use legacy agents, including per-solution migration recommendations.
-> - **Verify that Azure Monitor Agent can address all of your needs.**<br>Azure Monitor Agent is generally available for data collection and is used for data collection by various Azure Monitor features and other Azure services. For details, see [Supported services and features](#migrate-additional-services-and-features).
+> - **Verify that Azure Monitor Agent can address all of your needs.**<br>Azure Monitor Agent is generally available (GA) for data collection and is used for data collection by various Azure Monitor features and other Azure services. For details, see [Supported services and features](#migrate-additional-services-and-features).
> - **Consider installing Azure Monitor Agent together with a legacy agent for a transition period.**<br>Run Azure Monitor Agent alongside the legacy Log Analytics agent on the same machine to continue using existing functionality during evaluation or migration. Keep in mind that running two agents on the same machine doubles resource consumption, including but not limited to CPU, memory, storage space, and network bandwidth.<br> > - If you're setting up a new environment with resources, such as deployment scripts and onboarding templates, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later. > - If you have two agents on the same machine, avoid collecting duplicate data.<br> Collecting duplicate data from the same machine can skew query results, affect downstream features like alerts, dashboards, and workbooks, and generate extra charges for data ingestion and retention.<br>
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
> - Defender for Cloud natively deduplicates data when you use both agents, and [you'll be billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when you run the agents side by side. > - For Sentinel, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
-### Migration steps
+### Migration services and features
1. Use the [DCR generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to convert your legacy agent configuration into [data collection rules](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule) automatically.<sup>1</sup>
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
To avoid double ingestion, you can disable data collection from legacy agents during the testing phase without uninstalling the agents yet, by [removing the workspace configurations for legacy agents](./agent-data-sources.md#configure-data-sources).
- 1. Compare the data ingested by Azure Monitor Agent with legacy agent data to ensure there are no gaps. You can do this on any table by using the [join operator](/azure/data-explorer/kusto/query/joinoperator?pivots=azuremonitor) to add the `Category` column from the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which indicates `Azure Monitor Agent` for data collected by the Azure Monitor Agent.
+ 1. To ensure there are no gaps, compare the data ingested by the legacy agent with the data ingested by Azure Monitor Agent. You can do the comparison on any table by using the [join operator](/azure/data-explorer/kusto/query/joinoperator?pivots=azuremonitor) to add the `Category` column from the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which indicates `Azure Monitor Agent` for data collected by the Azure Monitor Agent.
For example, this query adds the `Category` column from the `Heartbeat` table to data retrieved from the `Event` table:
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
1. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, **disable or uninstall the legacy Log Analytics agents**.
- - If you've migrated to Azure Monitor Agent for all your requirements, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. Continue using the legacy Log Analytics for features and solutions that Azure Monitor Agent doesn't support.
+   - Once Azure Monitor Agent is installed for all your requirements, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. Continue using the legacy Log Analytics agent for features and solutions that Azure Monitor Agent doesn't support.
Use the [MMA removal tool](../agents/azure-monitor-agent-mma-removal-tool.md) to discover and remove the Log Analytics agent extension from all machines within your tenant. - Don't uninstall the legacy agent if you need to use it to upload data to System Center Operations Manager.
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
## Migrate additional services and features
-Azure Monitor Agent is generally available for data collection. Most services that used Log Analytics agent for data collection have migrated to Azure Monitor Agent.
+Azure Monitor Agent is GA for data collection. Most services that used Log Analytics agent for data collection have migrated to Azure Monitor Agent.
The following features and services now have an Azure Monitor Agent version (some are still in Public Preview). This means you can already choose to use Azure Monitor Agent to collect data when you enable the feature or service. | Service or feature | Migration recommendation | Current state | More information | | : | : | : | : |
-| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally Available | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
+| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | GA | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public Preview | [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). |
-| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | Generally Available | [Migration for Change Tracking and inventory](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) |
-| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally Available | [Monitor network connectivity using connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
-| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally Available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) |
-| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally Available | [Azure Virtual Desktop Insights](../../virtual-desktop/insights.md#session-host-data-settings) |
-| [Container Monitoring Solution](../containers/containers.md) | Migrate to new service called Container Insights with Azure Monitor Agent | Generally Available | [Enable Container Insights](../containers/container-insights-transition-solution.md) |
-| [DNS Collector](../../sentinel/connect-dns-ama.md) | Use new Sentinel Connector | Generally Available | [Enable DNS Connector](../../sentinel/connect-dns-ama.md)|
+| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | GA | [Migration for Change Tracking and inventory](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | GA | [Monitor network connectivity using connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
+| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | GA| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) |
+| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |GA | [Azure Virtual Desktop Insights](../../virtual-desktop/insights.md#session-host-data-settings) |
+| [Container Monitoring Solution](../containers/containers.md) | Migrate to new service called Container Insights with Azure Monitor Agent | GA | [Enable Container Insights](../containers/container-insights-transition-solution.md) |
+| [DNS Collector](../../sentinel/connect-dns-ama.md) | Use new Sentinel Connector | GA | [Enable DNS Connector](../../sentinel/connect-dns-ama.md)|
When you migrate the following services, which currently use Log Analytics agent, to their respective replacements (v2), you no longer need either of the monitoring agents: | Service | Migration recommendation | Current state | More information | | : | : | : | : |
-| [Microsoft Defender for Cloud, Servers, SQL, and Endpoint](../../security-center/security-center-introduction.md) | Migrate to Microsoft Defender for Cloud (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Defender for Cloud plan for Log Analytics agent deprecation](../../defender-for-cloud/upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation)|
-| [Update Management](../../automation/update-management/overview.md) | Migrate to Azure Update Manager (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Update Manager documentation](../../update-manager/update-manager-faq.md#la-agent-also-known-as-mma-is-retiring-and-will-be-replaced-with-ama-is-it-necessary-to-move-to-update-manager-or-can-i-continue-to-use-automation-update-management-with-ama) |
-| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension (no dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Migrate to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
+| [Microsoft Defender for Cloud, Servers, SQL, and Endpoint](../../security-center/security-center-introduction.md) | Migrate to Microsoft Defender for Cloud (No dependency on Log Analytics agents or Azure Monitor Agent) | GA | [Defender for Cloud plan for Log Analytics agent deprecation](../../defender-for-cloud/upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation)|
+| [Update Management](../../automation/update-management/overview.md) | Migrate to Azure Update Manager (No dependency on Log Analytics agents or Azure Monitor Agent) | GA | [Update Manager documentation](../../update-manager/update-manager-faq.md#la-agent-also-known-as-mma-is-retiring-and-will-be-replaced-with-ama-is-it-necessary-to-move-to-update-manager-or-can-i-continue-to-use-automation-update-management-with-ama) |
+| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension (no dependency on Log Analytics agents or Azure Monitor Agent) | GA | [Migrate to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
## Known parity gaps for solutions that may impact your migration - ***Sentinel***: Windows firewall logs are not yet GA+ - ***SQL Assessment Solution***: This is now part of SQL best practice assessment. The deployment policies require one Log Analytics Workspace per subscription, which is not the best practice recommended by the AMA team. -- ***Microsoft Defender for cloud***: Some features for the new agentless solution are in development. Your migration maybe impacted if you use FIM, Endpoint protection discovery recommendations, OS Misconfigurations (ASB recommendations) and Adaptive Application controls.
+- ***Microsoft Defender for Cloud***: Some features for the new agentless solution are in development. Your migration may be impacted if you use File Integrity Monitoring (FIM), Endpoint protection discovery recommendations, OS Misconfigurations (Azure Security Benchmark (ASB) recommendations), and Adaptive Application Controls.
- ***Container Insights***: The Windows version is in public preview. ## Frequently asked questions
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Azure Monitor action groups
description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure functions. Previously updated : 05/02/2023 Last updated : 04/01/2024
azure-monitor Activity Log Alerts Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/activity-log-alerts-webhook.md
Title: Configure a webhook to get activity log alerts description: Learn about the schema of the JSON that's posted to a webhook URL when an activity log alert activates. -+ Previously updated : 03/31/2017 Last updated : 04/01/2024 # Configure a webhook to get activity log alerts
azure-monitor Alerts Smart Detections Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smart-detections-migration.md
description: Learn about the steps required to upgrade your Azure Monitor Applic
Previously updated : 2/23/2022 Last updated : 04/01/2024 # Migrate Azure Monitor Application Insights smart detection to alerts (preview)
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
This error message can be returned when creating or editing your alert rule quer
- You're referencing a column that doesn't exist in the table schema. - You're referencing a column that wasn't used in a prior project clause of the query.
-To mitigate this, you can either add the column to the previous project clause or use the [columnifexists](https://learn.microsoft.com/azure/data-explorer/kusto/query/column-ifexists-function) operator.
+To mitigate this, you can either add the column to the previous project clause or use the [column_ifexists()](/azure/data-explorer/kusto/query/column-ifexists-function) function.
### ScheduledQueryRules API isn't supported for read only OMS Alerts
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
When a metric value exhibits large fluctuations, dynamic thresholds may build a
If you want to alert on a specific metric but you can't see it when you create an alert rule, check to determine: - If you can't see any metrics for the resource, [check if the resource type is supported for metric alerts](./alerts-metric-near-real-time.md).-- If you can see some metrics for the resource but can't find a specific metric, [check if that metric is available](https://learn.microsoft.com/azure/azure-monitor/reference/supported-metrics/metrics-index). If so, see the metric description to check if it's only available in specific versions or editions of the resource.
+- If you can see some metrics for the resource but can't find a specific metric, [check if that metric is available](/azure/azure-monitor/reference/supported-metrics/metrics-index). If so, see the metric description to check if it's only available in specific versions or editions of the resource.
- If the metric isn't available for the resource, it might be available in the resource logs and can be monitored by using log alerts. For more information, see how to [collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md). ### Can't find the metric to alert on: Virtual machines guest metrics
Refer to [Metrics not supported by dynamic thresholds](alerts-dynamic-thresholds
#### The metric isn't available for the selected scope. This might happen if the metric only applies to a specific version or SKU error
-Review the metric description in [Supported metrics with Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/reference/supported-metrics/metrics-index) to check if it's only available in specific versions or editions of the resource or this specific type.
+Review the metric description in [Supported metrics with Azure Monitor](/azure/azure-monitor/reference/supported-metrics/metrics-index) to check if it's only available in specific versions or editions of the resource or this specific type.
For example, in SQL Database resources or Storage file services, there are specific metrics only supported on specific versions of the resource.
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
If you received an error while trying to create, update or delete an [alert proc
1. **Check the alert processing rule parameters.**
- Check the [alert processing rule documentation](./alerts-processing-rules.md), or the [alert processing rule PowerShell Set-AzAlertProcessingRule](https://learn.microsoft.com/powershell/module/az.alertsmanagement/set-azalertprocessingrule) command.
+ Check the [alert processing rule documentation](./alerts-processing-rules.md), or the [alert processing rule PowerShell Set-AzAlertProcessingRule](/powershell/module/az.alertsmanagement/set-azalertprocessingrule) command.
## Next steps
azure-monitor Proactive Application Security Detection Pack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-application-security-detection-pack.md
Title: Security detection Pack with Azure Application Insights
description: Monitor application with Azure Application Insights and smart detection for potential security issues. Previously updated : 12/12/2017 Last updated : 04/01/2024
azure-monitor Proactive Arm Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-arm-config.md
description: Automate management and configuration of Application Insights smart
Previously updated : 02/14/2021 Last updated : 04/01/2024
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md
Title: Smart detection in Application Insights | Microsoft Docs
description: Application Insights performs automatic deep analysis of your app telemetry and warns you about potential problems. Previously updated : 02/07/2019 Last updated : 04/01/2024
azure-monitor Proactive Email Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-email-notification.md
Title: Smart Detection notification change - Azure Application Insights
description: Change to the default notification recipients from Smart Detection. Smart Detection lets you monitor application traces with Azure Application Insights for unusual patterns in trace telemetry. Previously updated : 02/14/2021 Last updated : 04/01/2024
azure-monitor Proactive Exception Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-exception-volume.md
Title: Abnormal rise in exception volume - Azure Application Insights
description: Monitor application exceptions with smart detection in Azure Application Insights for unusual patterns in exception volume. Previously updated : 12/08/2017 Last updated : 04/01/2024
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md
Title: Smart Detection of Failure Anomalies in Application Insights | Microsoft
description: Alerts you to unusual changes in the rate of failed requests to your web app, and provides diagnostic analysis. No configuration is needed. Previously updated : 12/18/2018- Last updated : 04/01/2024+ # Smart Detection - Failure Anomalies
azure-monitor Proactive Potential Memory Leak https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-potential-memory-leak.md
Title: 'Detect memory leak: Application Insights smart detection'
description: Monitor applications with Application Insights for potential memory leaks. Previously updated : 12/12/2017 Last updated : 04/01/2024
azure-monitor Proactive Trace Severity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-trace-severity.md
Title: Degradation in trace severity ratio - Azure Application Insights
description: Monitor application traces with Azure Application Insights for unusual patterns in trace telemetry with smart detection. Previously updated : 11/27/2017 Last updated : 04/01/2024
azure-monitor Prometheus Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md
Previously updated : 09/15/2022 Last updated : 04/01/2024 # Prometheus alerts in Azure Monitor
azure-monitor Resource Manager Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-activity-log.md
Previously updated : 12/28/2022 Last updated : 04/01/2024 # Resource Manager template samples for Azure Monitor activity log alert rules (Administrative category)
azure-monitor Resource Manager Alerts Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-resource-health.md
Previously updated : 05/11/2022 Last updated : 04/01/2024 # Resource Manager template samples for resource health alert rules in Azure Monitor
azure-monitor Smart Detection Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/smart-detection-performance.md
description: Smart detection analyzes your app telemetry and warns you of potent
Previously updated : 05/04/2017 Last updated : 04/01/2024 # Smart detection - Performance Anomalies
azure-monitor Kubernetes Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-metric-alerts.md
The following tables list the details of each recommended alert rule. Source cod
### Prometheus community alert rules
-**Cluster level alerts**
+#### Cluster level alerts
| Alert name | Description | Default threshold | Timeframe (minutes) | |:|:|::|::|
The following tables list the details of each recommended alert rule. Source cod
| KubeQuotaAlmostFull | The utilization of Kubernetes resource quotas is between 90% and 100% of the hard limits for the last 15 minutes. | >0.9 <1 | 15 |
-**Node level alerts**
+#### Node level alerts
| Alert name | Description | Default threshold | Timeframe (minutes) | |:|:|::|::| | KubeNodeUnreachable | A node has been unreachable for the last 15 minutes. | 1 | 15 | | KubeNodeReadinessFlapping | The readiness status of a node has changed more than 2 times for the last 15 minutes. | 2 | 15 |
-**Pod level alerts**
+#### Pod level alerts
| Alert name | Description | Default threshold | Timeframe (minutes) | |:|:|::|::|
azure-monitor Prometheus Rule Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md
$rules = @($rule1, $rule2)
$scope = "/subscriptions/fffffffff-ffff-ffff-ffff-ffffffffffff/resourcegroups/MyresourceGroup/providers/microsoft.monitor/accounts/MyAccounts" New-AzPrometheusRuleGroup -ResourceGroupName MyresourceGroup -RuleGroupName MyRuleGroup -Location eastus -Rule $rules -Scope $scope -Enabled ```- ## View Prometheus rule groups
-You can view the rule groups and their included rules in the Azure portal by selecting **Rule groups** from the Azure Monitor workspace.
--
-## Disable and enable rules
-To enable or disable a rule, select the rule in the Azure portal. Select either **Enable** or **Disable** to change its status.
--
-> [!NOTE]
-> After you disable or re-enable a rule or a rule group, it may take few minutes for the rule group list to reflect the updated status of the rule or the group.
----
+You can view your Prometheus rule groups and their included rules in the Azure portal in one of the following ways:
+* In the [portal home page](https://portal.azure.com/), in the search box, look for **Prometheus Rule Groups**.
+* In the [portal home page](https://portal.azure.com/), select **Monitor** > **Alerts**, then select **Prometheus Rule Groups**.
+ :::image type="content" source="media/prometheus-metrics-rule-groups/prometheus-rule-groups-from-alerts.png" alt-text="Screenshot that shows how to view Prometheus rule groups from the alerts screen.":::
+* On the page of a specific Azure Kubernetes Service (AKS) resource or a specific Azure Monitor workspace (AMW), select **Monitor** > **Alerts**, then select **Prometheus Rule Groups** to view a list of rule groups for that specific resource.
+You can select a rule group from the list to view or edit its details.
+
+## View the resource health states of your Prometheus rule groups
+You can now view the [resource health state](../../service-health/resource-health-overview.md) of your Prometheus rule group in the portal. This allows you to detect problems in your rule groups, such as incorrect configuration or query throttling.
+
+1. In the [portal](https://portal.azure.com/), go to the overview page of the Prometheus rule group you would like to monitor.
+1. From the left pane, under **Help**, select **Resource health**.
+ :::image type="content" source="media/prometheus-metrics-rule-groups/prometheus-rule-groups-resource-health.png" alt-text="Screenshot that shows how to view resource health state of a Prometheus rule group.":::
+1. In the rule group resource health screen, you can see the current availability state of the rule group, as well as a history of recent resource health events, up to 30 days back.
+ :::image type="content" source="media/prometheus-metrics-rule-groups/prometheus-rule-groups-resource-health-history.png" alt-text="Screenshot that shows how to view the resource health history of a Prometheus rule group.":::
+
+* If the rule group is marked as **Available**, it is working as expected.
+* If the rule group is marked as **Degraded**, one or more rules in the group are not working as expected. This can be due to the rule query being throttled, or to other issues that may cause the rule evaluation to fail. Expand the status entry for more information on the detected problem, as well as suggestions for mitigation or for further troubleshooting.
+* If the rule group is marked as **Unavailable**, the entire rule group is not working as expected. This can be due to a configuration issue (for example, the Azure Monitor workspace can't be detected) or to internal service issues. Expand the status entry for more information on the detected problem, as well as suggestions for mitigation or for further troubleshooting.
+* If the rule group is marked as **Unknown**, the entire rule group is disabled or is in an unknown state.
+
+## Disable and enable rule groups
+To enable or disable a rule group, select the rule group in the Azure portal. Select either **Enable** or **Disable** to change its status.
## Next steps- - [Learn more about the Azure alerts](../alerts/alerts-types.md). - [Prometheus documentation for recording rules](https://aka.ms/azureprometheus-promio-recrules). - [Prometheus documentation for alerting rules](https://aka.ms/azureprometheus-promio-alertrules).
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
You can use an Azure Key Vault that is configured to use Azure role-based access
], "permissions": [ {
- "actions": ["Microsoft.KeyVault/vaults/keys/read"],
+ "actions": [],
"notActions": [], "dataActions": [
+ "Microsoft.KeyVault/vaults/keys/read",
"Microsoft.KeyVault/vaults/keys/encrypt/action", "Microsoft.KeyVault/vaults/keys/decrypt/action" ],
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
# Standard storage with cool access in Azure NetApp Files
-Using Azure NetApp Files standard storage with cool access, you can configure inactive data to move from Azure NetApp Files Standard service-level storage (the *hot tier*) to an Azure storage account (the *cool tier*). Enabling cool access moves inactive data blocks from the volume and the volume's snapshots snapshots to the cool tier, resulting in cost savings.
+Using Azure NetApp Files standard storage with cool access, you can configure inactive data to move from Azure NetApp Files Standard service-level storage (the *hot tier*) to an Azure storage account (the *cool tier*). Enabling cool access moves inactive data blocks from the volume and the volume's snapshots to the cool tier, resulting in cost savings.
Most cold data is associated with unstructured data. It can account for more than 50% of the total storage capacity in many storage environments. Infrequently accessed data associated with productivity software, completed projects, and old datasets are an inefficient use of a high-performance storage.
azure-netapp-files Cross Region Replication Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-manage-disaster-recovery.md
The details are described below.
## Fail over to destination volume
-When you need to activate the destination volume (for example, when you want to failover to the destination region), you need to break replication peering and then mount the destination volume.
+Failover is a manual process. When you need to activate the destination volume (for example, when you want to fail over to the destination region), you need to break replication peering and then mount the destination volume.
-1. To break replication peering, select the destination volume. Click **Replication** under Storage Service.
+1. To break replication peering, select the destination volume. Select **Replication** under Storage Service.
2. Check the following fields before continuing: * Ensure that Mirror State shows ***Mirrored***.
azure-resource-manager Move Resources Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resources-overview.md
The move operation doesn't support moving resources to new [Microsoft Entra tena
If you actually want to upgrade your Azure subscription (such as switching from free to pay-as-you-go), you need to convert your subscription. -- To upgrade a free trial, see [Upgrade your Free Trial or Microsoft Imagine Azure subscription to Pay-As-You-Go](../../cost-management-billing/manage/upgrade-azure-subscription.md).-- To change a pay-as-you-go account, see [Change your Azure Pay-As-You-Go subscription to a different offer](../../cost-management-billing/manage/switch-azure-offer.md).
+- To upgrade a free trial, see [Upgrade your Free Trial or Microsoft Imagine Azure subscription to pay-as-you-go](../../cost-management-billing/manage/upgrade-azure-subscription.md).
+- To change a pay-as-you-go account, see [Change your Azure pay-as-you-go subscription to a different offer](../../cost-management-billing/manage/switch-azure-offer.md).
If you can't convert the subscription, [create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Select **Subscription Management** for the issue type.
You can move resources to another region using a couple of different methods:
- **Start moving resources from a resource group**: With this method, you kick off the region move from within a resource group. After selecting the resources you want to move, the process continues in the Resource Mover hub, to check resource dependencies, and orchestrate the move process. [Learn more](../../resource-mover/move-region-within-resource-group.md). - **Start moving resources directly from the Resource Mover hub**: With this method, you kick off the region move process directly in the hub. [Learn more](../../resource-mover/tutorial-move-region-virtual-machines.md).
+### Move resources manually through redeployment
+
+To move resources that aren't supported by Azure Resource Mover or to move any service manually, see [Azure services relocation guidance overview](/azure/operational-excellence/overview-relocation).
+
+### Move resources from regions without availability zones to regions with availability zone support
+
+To move resources from a region that doesn't support availability zones to one that does, see [Availability zone migration guidance overview for Microsoft Azure products and services](/azure/reliability/availability-zones-migration-overview).
+ ## Next steps - To check if a resource type supports being moved, see [Move operation support for resources](move-support-resources.md). - To learn more about the region move process, see [About the move process](../../resource-mover/about-move-process.md).
+- To learn more deeply about service relocation and planning recommendations, see [Relocated cloud workloads](/azure/cloud-adoption-framework/relocate/).
azure-signalr Signalr Concept Client Negotiation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-client-negotiation.md
Previously updated : 12/08/2023 Last updated : 04/02/2024 # Client negotiation
The response to the `POST [endpoint-base]/negotiate` request contains one of thr
```json {
+ "connectionToken":"05265228-1e2c-46c5-82a1-6a5bcc3f0143",
"connectionId":"807809a5-31bf-470d-9e23-afaee35d8a0d",
- "negotiateVersion":0,
+ "negotiateVersion":1,
"availableTransports":[ { "transport": "WebSockets",
The response to the `POST [endpoint-base]/negotiate` request contains one of thr
The payload that this endpoint returns provides the following data: * The `connectionId` value is required by the `LongPolling` and `ServerSentEvents` transports to correlate sending and receiving.
- * The `negotiateVersion` value is the negotiation protocol version that you use between the server and the client.
+ * The `negotiateVersion` value is the negotiation protocol version used between the server and the client. For details, see [Transport Protocols](https://github.com/dotnet/aspnetcore/blob/main/src/SignalR/docs/specs/TransportProtocols.md).
+ * `negotiateVersion: 0` returns only `connectionId`, and the client should use the value of `connectionId` as `id` in connect requests.
+ * `negotiateVersion: 1` returns both `connectionId` and `connectionToken`, and the client should use the value of `connectionToken` as `id` in connect requests, as shown in the sketch that follows.
* The `availableTransports` list describes the transports that the server supports. For each transport, the payload lists the name of the transport (`transport`) and a list of transfer formats that the transport supports (`transferFormats`).
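A minimal, illustrative sketch of how a client chooses the `id` value based on the negotiated version follows; the `NegotiationResponse` type here is an assumption for illustration and not an SDK type:

```csharp
using System;

// Illustrative type that models the negotiation payload shown above (not an SDK type).
class NegotiationResponse
{
    public int NegotiateVersion { get; set; }
    public string ConnectionId { get; set; }
    public string ConnectionToken { get; set; }
}

class NegotiationDemo
{
    static void Main()
    {
        var response = new NegotiationResponse
        {
            NegotiateVersion = 1,
            ConnectionId = "807809a5-31bf-470d-9e23-afaee35d8a0d",
            ConnectionToken = "05265228-1e2c-46c5-82a1-6a5bcc3f0143"
        };

        // negotiateVersion 0: pass connectionId as "id"; negotiateVersion 1: pass connectionToken as "id".
        string id = response.NegotiateVersion >= 1 ? response.ConnectionToken : response.ConnectionId;
        Console.WriteLine($"Connect request: GET [endpoint-base]?id={Uri.EscapeDataString(id)}");
    }
}
```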
- > [!NOTE]
- > Azure SignalR Service supports only `Version 0` for the negotiation protocol. A client that has a `negotiateVersion` value greater than zero will get a response with `negotiateVersion=0` by design. For protocol details, see [Transport Protocols](https://github.com/dotnet/aspnetcore/blob/main/src/SignalR/docs/specs/TransportProtocols.md).
- * A redirect response that tells the client which URL and (optionally) access token to use as a result: ```json
azure-signalr Signalr Concept Serverless Development Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md
description: Details on how to develop and configure serverless real-time applic
Previously updated : 03/20/2024 Last updated : 04/02/2024 ms.devlang: csharp # ms.devlang: csharp, javascript
A serverless real-time application built with Azure Functions and Azure SignalR
- A `negotiate` function that the client calls to obtain a valid SignalR Service access token and endpoint URL. - One or more functions that handle messages sent from SignalR Service to clients.
-### negotiate function
+### Negotiation function
A client application requires a valid access token to connect to Azure SignalR Service. An access token can be anonymous or authenticated to a user ID. Serverless SignalR Service applications require an HTTP endpoint named `negotiate` to obtain a token and other connection information, such as the SignalR Service endpoint URL. Use an HTTP-triggered Azure Function and the `SignalRConnectionInfo` input binding to generate the connection information object. The function must have an HTTP route that ends in `/negotiate`.
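For illustration, the following is a minimal sketch of such a negotiation function, assuming the in-process model; the hub name `chat` is an assumption and not something defined in this article:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class NegotiateFunction
{
    // HTTP-triggered function whose route ends in /negotiate.
    // The SignalRConnectionInfo input binding supplies the service endpoint URL and access token.
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "chat")] SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }
}
```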
-With [class-based model](#class-based-model) in C#, you don't need the `SignalRConnectionInfo` input binding and can add custom claims much more easily. For more information, see [Negotiate experience in class-based model](#negotiate-experience-in-class-based-model).
+With the [class-based model](#class-based-model) in C#, you don't need the `SignalRConnectionInfo` input binding and can add custom claims much more easily. For more information, see [Negotiation experience in class-based model](#negotiation-experience-in-class-based-model).
-For more information about the `negotiate` function, see [Azure Functions development](#negotiate-function).
+For more information about the `negotiate` function, see [Azure Functions development](#negotiation-function).
To learn how to create an authenticated token, refer to [Using App Service Authentication](#using-app-service-authentication).
Use the `SignalRTrigger` binding to handle messages sent from SignalR Service. Y
For more information, see the [SignalR Service trigger binding reference](../azure-functions/functions-bindings-signalr-service-trigger.md).
-You also need to configure your function endpoint as an upstream endpoint so that service will trigger the function when there's message from a client. For more information about how to configure upstream endpoints, see [Upstream endpoints](concept-upstream.md).
+You also need to configure your function endpoint as an upstream endpoint so that the service triggers the function when there's a message from a client. For more information about how to configure upstream endpoints, see [Upstream endpoints](concept-upstream.md).
> [!NOTE] > SignalR Service doesn't support the `StreamInvocation` message from a client in Serverless Mode.
SignalR has a concept of _hubs_. Each client connection and each message sent fr
## Class-based model
-The class-based model is dedicated for C#. The class-based model provides a consistent SignalR server-side programming experience, with the following features:
+The class-based model is dedicated to C#.
-- Less configuration work: The class name is used as `HubName`, the method name is used as `Event` and the `Category` is decided automatically according to method name.
+# [Isolated worker model](#tab/isolated-process)
+
+The class-based model provides a better programming experience that can replace the SignalR input and output bindings, with the following features:
+- A more flexible experience for negotiation, sending messages, and managing groups.
+- More management functionality, including closing connections and checking whether a connection, user, or group exists.
+- Strongly typed hub.
+- Unified connection string setting in one place.
+
+The following code demonstrates how to write SignalR bindings in the class-based model:
+
+In the *Functions.cs* file, define your hub, which extends a base class `ServerlessHub`:
+```cs
+[SignalRConnection("AzureSignalRConnectionString")]
+public class Functions : ServerlessHub
+{
+ private const string HubName = nameof(Functions);
+
+ public Functions(IServiceProvider serviceProvider) : base(serviceProvider)
+ {
+ }
+
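+    // Negotiation endpoint: uses the "userId" request header as the SignalR user ID
+    // and returns the negotiation payload to the caller.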
+ [Function("negotiate")]
+ public async Task<HttpResponseData> Negotiate([HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
+ {
+ var negotiateResponse = await NegotiateAsync(new() { UserId = req.Headers.GetValues("userId").FirstOrDefault() });
+ var response = req.CreateResponse();
+ response.WriteBytes(negotiateResponse.ToArray());
+ return response;
+ }
+
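+    // Triggered by the client event "broadcast" in the "messages" category;
+    // sends a "newMessage" event to all connected clients.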
+ [Function("Broadcast")]
+ public Task Broadcast(
+ [SignalRTrigger(HubName, "messages", "broadcast", "message")] SignalRInvocationContext invocationContext, string message)
+ {
+ return Clients.All.SendAsync("newMessage", new NewMessage(invocationContext, message));
+ }
+
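+    // Triggered by the client event "JoinGroup"; adds the specified connection to the specified group.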
+ [Function("JoinGroup")]
+ public Task JoinGroup([SignalRTrigger(HubName, "messages", "JoinGroup", "connectionId", "groupName")] SignalRInvocationContext invocationContext, string connectionId, string groupName)
+ {
+ return Groups.AddToGroupAsync(connectionId, groupName);
+ }
+}
+```
+
+In the *Program.cs* file, register your serverless hub:
+```cs
+var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults(b => b.Services
+ .AddServerlessHub<Functions>())
+ .Build();
+```
+
+### Negotiation experience in class-based model
+
+Instead of using the SignalR input binding `[SignalRConnectionInfoInput]`, negotiation in the class-based model can be more flexible. The base class `ServerlessHub` has a `NegotiateAsync` method, which allows you to customize negotiation options such as `userId`, `claims`, and so on.
+
+```cs
+Task<BinaryData> NegotiateAsync(NegotiationOptions? options = null)
+```
++
+### Sending messages and management experience in class-based model
+
+You can send messages, manage groups, or manage clients by accessing the members provided by the base class `ServerlessHub`, as shown in the sketch after this list.
+- `ServerlessHub.Clients` for sending messages to clients.
+- `ServerlessHub.Groups` for managing connections with groups, such as adding connections to groups or removing connections from groups.
+- `ServerlessHub.UserGroups` for managing users with groups, such as adding users to groups or removing users from groups.
+- `ServerlessHub.ClientManager` for checking whether a connection exists, closing connections, and so on.
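As a brief, hypothetical sketch (the `RemoveUser` event name and its parameters are illustrative and not part of this article), the following method inside the serverless hub class shown earlier uses the `UserGroups` and `ClientManager` members:

```cs
[Function("RemoveUser")]
public async Task RemoveUser(
    [SignalRTrigger(HubName, "messages", "RemoveUser", "userId", "groupName", "connectionId")] SignalRInvocationContext invocationContext,
    string userId, string groupName, string connectionId)
{
    // Remove the user from the group through the UserGroups member.
    await UserGroups.RemoveFromGroupAsync(userId, groupName);

    // If the connection still exists, close it through the ClientManager member.
    if (await ClientManager.ConnectionExistsAsync(connectionId))
    {
        await ClientManager.CloseConnectionAsync(connectionId, "Removed by the server.");
    }
}
```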
+
+### Strongly Typed Hub
+
+[Strongly typed hubs](/aspnet/core/signalr/hubs?#strongly-typed-hubs) allow you to use strongly typed methods when you send messages to clients. To use a strongly typed hub in the class-based model, extract the client methods into an interface `T`, and derive your hub class from `ServerlessHub<T>`.
+
+The following code is an interface sample for client methods.
+```cs
+public interface IChatClient
+{
+ Task newMessage(NewMessage message);
+}
+```
+
+Then you can use the strongly typed methods as follows:
+```cs
+[SignalRConnection("AzureSignalRConnectionString")]
+public class Functions : ServerlessHub<IChatClient>
+{
+ private const string HubName = nameof(Functions);
+
+ public Functions(IServiceProvider serviceProvider) : base(serviceProvider)
+ {
+ }
+
+ [Function("Broadcast")]
+ public Task Broadcast(
+ [SignalRTrigger(HubName, "messages", "broadcast", "message")] SignalRInvocationContext invocationContext, string message)
+ {
+ return Clients.All.newMessage(new NewMessage(invocationContext, message));
+ }
+}
+```
+
+> [!NOTE]
+> You can get a complete project sample from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/DotnetIsolated-ClassBased/).
+
+### Unified connection string setting in one place
+
+You might have noticed the `SignalRConnection` attribute used on serverless hub classes. It looks like this:
+```cs
+[SignalRConnection("AzureSignalRConnectionString")]
+public class Functions : ServerlessHub<IChatClient>
+```
+
+It allows you to customize where the SignalR Service bindings look for the connection string. If it's absent, the default value `AzureSignalRConnectionString` is used.
+
+> [!IMPORTANT]
+> The `SignalRConnection` attribute doesn't change the connection string setting of SignalR triggers, even when you use SignalR triggers inside the serverless hub. You should specify the connection string setting for each SignalR trigger if you want to customize it.
+
+# [In-process model](#tab/in-process)
+
+The class-based model provides a consistent SignalR server-side programming experience, with the following features:
+
+- Less configuration work: The class name is used as `HubName`, the method name is used as `Event`, and the `Category` is decided automatically according to the method name.
- Auto parameter binding: `ParameterNames` and attribute `[SignalRParameter]` aren't needed. Parameters are automatically bound to arguments of Azure Function methods in order.-- Convenient output and negotiate experience.
+- Convenient output and negotiation experience.
The following code demonstrates these features:
In class based model, `[SignalRParameter]` is unnecessary because all the argume
- The argument's type is `ILogger` or `CancellationToken` - The argument is decorated by attribute `[SignalRIgnore]`
-### Negotiate experience in class-based model
+### Negotiation experience in class-based model
-Instead of using SignalR input binding `[SignalR]`, negotiation in class-based model can be more flexible. Base class `ServerlessHub` has a method
+Instead of using the SignalR input binding `[SignalR]`, negotiation in the class-based model can be more flexible. The base class `ServerlessHub` has the following method:
```cs SignalRConnectionInfo Negotiate(string userId = null, IList<Claim> claims = null, TimeSpan? lifeTime = null)
public async Task Broadcast([SignalRTrigger]InvocationContext invocationContext,
{ } ```+ ## Client development
To connect to SignalR Service, a client must complete a successful connection ne
1. Make a request to the `negotiate` HTTP endpoint discussed above to obtain valid connection information 1. Connect to SignalR Service using the service endpoint URL and access token obtained from the `negotiate` endpoint
-SignalR client SDKs already contain the logic required to perform the negotiation handshake. Pass the negotiate endpoint's URL, minus the `negotiate` segment, to the SDK's `HubConnectionBuilder`. Here's an example in JavaScript:
+SignalR client SDKs already contain the logic required to perform the negotiation handshake. Pass the negotiation endpoint's URL, minus the `negotiate` segment, to the SDK's `HubConnectionBuilder`. Here's an example in JavaScript:
```javascript const connection = new signalR.HubConnectionBuilder()
connection.send("method1", "arg1", "arg2");
Azure Function apps that integrate with Azure SignalR Service can be deployed like any typical Azure Function app, using techniques such as [continuously deployment](../azure-functions/functions-continuous-deployment.md), [zip deployment](../azure-functions/deployment-zip-push.md), and [run from package](../azure-functions/run-functions-from-deployment-package.md).
-However, there are a couple of special considerations for apps that use the SignalR Service bindings. If the client runs in a browser, CORS must be enabled. And if the app requires authentication, you can integrate the negotiate endpoint with App Service Authentication.
+However, there are a couple of special considerations for apps that use the SignalR Service bindings. If the client runs in a browser, CORS must be enabled. And if the app requires authentication, you can integrate the negotiation endpoint with App Service Authentication.
### Enabling CORS
-The JavaScript/TypeScript client makes HTTP request to the negotiate function to initiate the connection negotiation. When the client application is hosted on a different domain than the Azure Function app, cross-origin resource sharing (CORS) must be enabled on the function app or the browser will block the requests.
+The JavaScript/TypeScript client makes an HTTP request to the negotiation function to initiate the connection negotiation. When the client application is hosted on a different domain than the Azure Function app, cross-origin resource sharing (CORS) must be enabled on the function app, or the browser blocks the requests.
#### Localhost
To enable CORS on an Azure Function app, go to the CORS configuration screen und
> [!NOTE] > CORS configuration is not yet available in Azure Functions Linux Consumption plan. Use [Azure API Management](#cloudazure-api-management) to enable CORS.
-CORS with Access-Control-Allow-Credentials must be enabled for the SignalR client to call the negotiate function. Select the checkbox to enable it.
+CORS with Access-Control-Allow-Credentials must be enabled for the SignalR client to call the negotiation function. To enable it, select the checkbox.
In the **Allowed origins** section, add an entry with the origin base URL of your web application.
Configure your SignalR clients to use the API Management URL.
### Using App Service Authentication
-Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Microsoft Entra ID. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID.
+Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Microsoft Entra ID. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that are authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID.
In the Azure portal, in your Function app's _Platform features_ tab, open the _Authentication/authorization_ settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice.
-Once configured, authenticated HTTP requests will include `x-ms-client-principal-name` and `x-ms-client-principal-id` headers containing the authenticated identity's username and user ID, respectively.
+Once configured, authenticated HTTP requests include `x-ms-client-principal-name` and `x-ms-client-principal-id` headers containing the authenticated identity's username and user ID, respectively.
-You can use these headers in your `SignalRConnectionInfo` binding configuration to create authenticated connections. Here's an example C# negotiate function that uses the `x-ms-client-principal-id` header.
+You can use these headers in your `SignalRConnectionInfo` binding configuration to create authenticated connections. Here's an example C# negotiation function that uses the `x-ms-client-principal-id` header.
```csharp [FunctionName("negotiate")]
For information on other languages, see the [Azure SignalR Service bindings](../
## Next steps
-In this article, you've learned how to develop and configure serverless SignalR Service applications using Azure Functions. Try creating an application yourself using one of the quick starts or tutorials on the [SignalR Service overview page](index.yml).
+In this article, you learn how to develop and configure serverless SignalR Service applications using Azure Functions. Try creating an application yourself using one of the quick starts or tutorials on the [SignalR Service overview page](index.yml).
batch Batch Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-upgrade-policy.md
Title: Provision a pool with Auto OS Upgrade description: Learn how to create a Batch pool with Auto OS Upgrade so that customers can have control over their OS upgrade strategy to ensure safe, workload-aware OS upgrade deployments. Previously updated : 03/27/2024 Last updated : 04/02/2024 # Create an Azure Batch pool with Automatic Operating System (OS) Upgrade
-> [!IMPORTANT]
-> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- When you create an Azure Batch pool, you can provision the pool with nodes that have Auto OS Upgrade enabled. This article explains how to set up a Batch pool with Auto OS Upgrade. ## Why use Auto OS Upgrade?
communication-services Phone Number Management For Canada https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-canada.md
More details on eligible subscription types are as follows:
|Canada| |Denmark| |France|
+|Germany|
|Ireland| |Italy|
+|Japan|
|Netherlands| |Puerto Rico| |Spain|
communication-services Phone Number Management For United Kingdom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-kingdom.md
More details on eligible subscription types are as follows:
|Canada| |Denmark| |France|
+|Germany|
|Ireland| |Italy|
+|Japan|
|Netherlands| |Puerto Rico| |Spain|
communication-services Phone Number Management For United States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-states.md
More details on eligible subscription types are as follows:
|Canada| |Denmark| |France|
+|Germany|
|Ireland| |Italy|
+|Japan|
|Netherlands| |Puerto Rico| |Spain|
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
> [!IMPORTANT] > Number Retention and Portability: Phone numbers that are assigned to you during any preview program may need to be returned to Microsoft if you do not meet regulatory requirements before General Availability. During private preview and public preview, telephone numbers are not eligible for porting. [Details on offers in Public Preview / GA](../concepts/numbers/sub-eligibility-number-capability.md) + Numbers are billed on a per month basis, and pricing differs based on the type of a number and the source (country/region) of the number. Once a number is purchased, Customers can make / receive calls using that number and are billed on a per minute basis. PSTN call pricing is based on the type of number and location in which a call is terminated (destination), with few scenarios having rates based on origination location. In most cases, customers with Azure subscriptions locations that match the country/region of the Number offer are able to buy the Number. See here for details on [in-country/region and cross-country/region purchases](../concepts/numbers/sub-eligibility-number-capability.md).
-All prices shown below are in USD.
+Pricing for all countries/regions is subject to change as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees. All prices shown below are in USD.
## United States telephony offers
All prices shown below are in USD.
|-||--| |Toll-free |N/A |USD 0.2161/min |
-***
-Note: Pricing for all countries/regions is subject to change as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees.
-***
## Direct routing pricing For Azure Communication Services direct routing, there is a flat rate regardless of the geography:
communication-services Add Azure Managed Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-azure-managed-domains.md
# Quickstart: How to add Azure Managed Domains to Email Communication Service
-In this quick start, you learn about how to provision the Azure Managed domain in Azure Communication Services to send email.
+In this quick start, you learn how to provision an Azure Managed Domain for an Email Communication Service resource in Azure Communication Services.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet/).-- An Azure Email Communication Services Resource created and ready to provision the domains [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md)
+- An Azure Communication Services Email Resource created and ready to add the domains. See [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md).
-## Azure Managed Domains vs. Custom Domains
+## Azure Managed Domains compared to Custom Domains
-Before provisioning an Azure Managed Domain, review the following table to determine which domain type is most appropriate for your particular use case.
+Before provisioning an Azure Managed Domain, review the following table to decide which domain type best meets your needs.
| | [Azure Managed Domains](./add-azure-managed-domains.md) | [Custom Domains](./add-custom-verified-domains.md) | |||| |**Pros:** | - Setup is quick & easy<br/>- No domain verification required<br /> | - Emails are sent from your own domain |
-|**Cons:** | - Sender domain is not personalized and cannot be changed<br/>- Sender usernames cannot be personalized<br/>- Very limited sending volume<br />- User Engagement Tracking cannot be enabled <br /> | - Requires verification of domain records <br /> - Longer setup for verification |
+|**Cons:** | - Sender domain is not personalized and cannot be changed<br/>- Sender usernames can't be personalized<br/>- Very limited sending volume<br />- User Engagement Tracking can't be enabled <br /> | - Requires verification of domain records <br /> - Longer setup for verification |
## Provision Azure Managed Domain
-1. Go the overview page of the Email Communications Service resource that you created earlier.
-2. Create the Azure Managed Domain.
- - (Option 1) Click the **1-click add** button under **Add a free Azure subdomain**. Move to the next step.
+1. Open the Overview page of the Email Communications Service resource that you created in [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md).
+2. Create an Azure Managed Domain using one of the following options.
+ - (Option 1) Click the **1-click add** button under **Add a free Azure subdomain**. Continue to **step 3**.
:::image type="content" source="./media/email-add-azure-domain.png" alt-text="Screenshot that highlights the adding a free Azure Managed Domain.":::
Before provisioning an Azure Managed Domain, review the following table to deter
- Click **Add domain** on the upper navigation bar. - Select **Azure domain** from the dropdown.
+
3. Wait for the deployment to complete. :::image type="content" source="./media/email-add-azure-domain-progress.png" alt-text="Screenshot that shows the Deployment Progress." lightbox="media/email-add-azure-domain-progress-expanded.png":::
-4. After domain creation is completed, you'll see a list view with the created domain.
+4. Once the domain is created, you see a list view with the new domain.
:::image type="content" source="./media/email-add-azure-domain-created.png" alt-text="Screenshot that shows the list of provisioned email domains." lightbox="media/email-add-azure-domain-created-expanded.png":::
-5. Click the name of the provisioned domain, which navigates you to the overview page for the domain resource type.
+5. Click the name of the provisioned domain to open the overview page for the domain resource type.
:::image type="content" source="./media/email-azure-domain-overview.png" alt-text="Screenshot that shows Azure Managed Domain overview page." lightbox="media/email-azure-domain-overview-expanded.png"::: ## Sender authentication for Azure Managed Domain
-Azure communication Services Email automatically configures the required email authentication protocols to set proper authentication for the email as detailed in [Email Authentication best practices](../../concepts/email/email-authentication-best-practice.md).
+
+Azure Communication Services automatically configures the required email authentication protocols for the email as described in [Email Authentication best practices](../../concepts/email/email-authentication-best-practice.md).
**Your email domain is now ready to send emails.** ## Next steps
-* [Get started by connecting Email Communication Service with Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
+* [Quickstart: How to connect a verified email domain](../../quickstarts/email/connect-email-communication-resource.md)
* [How to send an email using Azure Communication Service](../../quickstarts/email/send-email.md)
-The following documents may be interesting to you:
+## Related articles
-- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)-- How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+* Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
+* Learn how to send emails with custom verified domains in [Quickstart: How to add custom verified email domains](../../quickstarts/email/add-custom-verified-domains.md)
communication-services Add Custom Verified Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-custom-verified-domains.md
Title: How to add custom verified domains to Email Communication Service
+ Title: How to add custom verified email domains
-description: Learn about adding Custom domains for Email Communication Services.
+description: Learn about adding custom email domains in Azure Communication Services.
Last updated 03/31/2023
-# Quickstart: How to add custom verified domains to Email Communication Service
+# Quickstart: How to add custom verified email domains
-In this quick start, you learn about how to add a custom domain and verify in Azure Communication Services to send email.
+In this quick start, you learn how to provision a custom verified email domain in Azure Communication Services.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet/).-- An Azure Email Communication Services Resource created and ready to provision the domains [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md)
+- An Azure account with an active subscription. See [Create an account for free](https://azure.microsoft.com/free/dotnet/).
+- An Azure Communication Services Email Resource created and ready to add the domains. See [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md).
-## Azure Managed Domains vs. Custom Domains
+## Azure Managed Domains compared to Custom Domains
-Before provisioning a Custom Domain, review the following table to determine which domain type is most appropriate for your particular use case.
+Before provisioning a custom email domain, review the following table to decide which domain type best meets your needs.
| | [Azure Managed Domains](./add-azure-managed-domains.md) | [Custom Domains](./add-custom-verified-domains.md) | |||| |**Pros:** | - Setup is quick & easy<br/>- No domain verification required<br /> | - Emails are sent from your own domain |
-|**Cons:** | - Sender domain is not personalized and cannot be changed<br/>- Sender usernames cannot be personalized<br/>- Very limited sending volume<br />- User Engagement Tracking cannot be enabled <br /> | - Requires verification of domain records <br /> - Longer setup for verification |
+|**Cons:** | - Sender domain isn't personalized and can't be changed<br/>- Sender usernames can't be personalized<br/>- Limited sending volume<br />- User Engagement Tracking can't be enabled<br /> | - Requires verification of domain records<br /> - Longer setup for verification |
+
+## Provision a custom domain
-## Provision custom domain
To provision a custom domain, you need to:
-* Verify the custom domain ownership by adding TXT record in your DNS.
-* Configure the sender authentication by adding SPF and DKIM records.
+* Verify the custom domain ownership by adding a TXT record in your Domain Name System (DNS).
+* Configure the sender authentication by adding Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) records.
### Verify custom domain
-1. Go the overview page of the Email Communications Service resource that you created earlier.
-2. Setup Custom Domain.
- - (Option 1) Click the **Setup** button under **Setup a custom domain**. Move to the next step.
+In this section, you verify the custom domain ownership by adding a TXT record in your DNS.
+1. Open the Overview page of the Email Communication Service resource that you created in [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md).
+2. Create a custom domain using one of the following options.
+ - (Option 1) Click the **Setup** button under **Setup a custom domain**. Continue to **step 3**.
:::image type="content" source="./media/email-domains-custom.png" alt-text="Screenshot that shows how to set up a custom domain.":::
To provision a custom domain, you need to:
- Click **Add domain** on the upper navigation bar. - Select **Custom domain** from the dropdown.
-3. Navigate to "Add a custom Domain."
-4. Enter your "Domain Name" and re enter domain name.
-5. Click **Confirm**
+3. Click **Add a custom Domain**.
+4. Enter your domain name in the text box.
+5. Re-enter your domain name in the next text box.
+6. Click **Confirm**.
:::image type="content" source="./media/email-domains-custom-add.png" alt-text="Screenshot that shows where to enter the custom domain value.":::
-6. Ensure that domain name isn't misspelled or click edit to correct the domain name and confirm.
-7. Click **Add**
+7. Make sure the domain name you entered is correct and both text boxes are the same. If needed, click **Edit** to correct the domain name before confirming.
+8. Click **Add**.
:::image type="content" source="./media/email-domains-custom-add-confirm.png" alt-text="Screenshot that shows how to add a custom domain of your choice.":::
-8. A custom domain configuration is created for your domain.
+9. Azure Communication Services creates a custom domain configuration for your domain.
:::image type="content" source="./media/email-domains-custom-add-progress.png" alt-text="Screenshot that shows the progress of custom domain Deployment.":::
-9. You can verify the ownership of the domain by clicking **Verify Domain**
+10. To verify domain ownership, click **Verify Domain**.
:::image type="content" source="./media/email-domains-custom-added.png" alt-text="Screenshot that shows custom domain is successfully added for verification.":::.
-10. If you would like to resume the verification later, you can click **Close** and resume the verification from **Provision Domains** by clicking **Configure** .
+11. To resume the verification later, click **Close**. To continue the verification from **Provision Domains**, click **Configure**.
:::image type="content" source="./media/email-domains-custom-configure.png" alt-text="Screenshot that shows the added domain ready for verification in the list of provisioned domains." lightbox="media/email-domains-custom-configure-expanded.png":::
-11. Clicking **Verify Domain** or **Configure** navigates to "Verify Domain via TXT record" to follow.
-
- :::image type="content" source="./media/email-domains-custom-verify.png" alt-text="Screenshot that shows the Configure link that you need to click to verify domain ownership." lightbox="media/email-domains-custom-verify-expanded.png":::
-12. Add the above TXT record to your domain's registrar or DNS hosting provider. Refer to the [adding DNS records in popular domain registrars table](#txt-records) for information on how to add a TXT record for your DNS provider.
+12. When you select either **Verify Domain** or **Configure**, it opens the **Verify Domain via TXT record** dialog box.
-Click **Next** once you've completed this step.
+ :::image type="content" source="./media/email-domains-custom-verify.png" alt-text="Screenshot that shows the Configure link that you need to click to verify domain ownership." lightbox="media/email-domains-custom-verify-expanded.png":::
-13. Verify that TXT record is created successfully in your DNS and Click **Done**
-14. DNS changes take up to 15 to 30 minutes. Click **Close**
+13. Add the preceding TXT record to your domain's registrar or DNS hosting provider. Refer to the [TXT records](#txt-records) section for information about adding a TXT record for your DNS provider.
+
+ Once you complete this step, click **Next**.
+
+14. Verify that the TXT record was successfully created in your DNS, then click **Done**.
+15. DNS changes require 15 to 30 minutes to take effect. Click **Close**.
:::image type="content" source="./media/email-domains-custom-verify-progress.png" alt-text="Screenshot that shows the domain verification is in progress.":::
-15. Once your domain is verified, you can add your SPF and DKIM records to authenticate your domains.
+
+16. Once you verify your domain, you can add your SPF and DKIM records to authenticate your domains.
:::image type="content" source="./media/email-domains-custom-verified.png" alt-text="Screenshot that shows the custom domain is verified." lightbox="media/email-domains-custom-verified-expanded.png"::: ### Configure sender authentication for custom domain
-To configure sender authentication for your domains, additional DNS records need to be added to your domain. Below, we provide steps where Azure Communication Services will offer records that should be added to your DNS. However, depending on whether the domain you are registering is a root domain or a subdomain, you will need to add the records to the respective zone or make appropriate alterations to the records that we generate.
-As an example, let's consider adding SPF and DKIM records for the custom domain "sales.us.notification.azurecommtest.net." The following are different methods for adding these records to the DNS, depending on the level of the Zone where the records are being added.
+To configure sender authentication for your domains, you need to add more DNS records. This section describes how Azure Communication Services offers records for you to add to your DNS. However, depending on whether the domain you're registering is a root domain or a subdomain, you need to add the records to the respective zone or make changes to the automatically generated records.
+
+This section shows how to add SPF and DKIM records for the custom domain **sales.us.notification.azurecommtest.net**. The following examples describe four different methods for adding these records to the DNS, depending on the level of the zone where you're adding the records.
1. Zone: **sales.us.notification.azurecommtest.net**
As an example, let's consider adding SPF and DKIM records for the custom domain
   | DKIM | CNAME | selector1-azurecomm-prod-net._domainkey | selector1-azurecomm-prod-net._domainkey.azurecomm.net |
   | DKIM2 | CNAME | selector2-azurecomm-prod-net._domainkey | selector2-azurecomm-prod-net._domainkey.azurecomm.net |
-The records that get generated in our portal assumes that you will be adding these records in DNS in this Zone **sales.us.notification.azurecommtest.net**.
+The records generated by the portal assume that you are adding these records to the DNS in this zone **sales.us.notification.azurecommtest.net**.
2. Zone: **us.notification.azurecommtest.net**
The records that get generated in our portal assumes that you will be adding the
   |SPF | TXT | sales.us | v=spf1 include:spf.protection.outlook.com -all |
   | DKIM | CNAME | selector1-azurecomm-prod-net._domainkey.**sales.us** | selector1-azurecomm-prod-net._domainkey.azurecomm.net |
   | DKIM2 | CNAME | selector2-azurecomm-prod-net._domainkey.**sales.us** | selector2-azurecomm-prod-net._domainkey.azurecomm.net |
-
4. Zone: **azurecommtest.net**
The records that get generated in our portal assumes that you will be adding the
| DKIM2 | CNAME | selector2-azurecomm-prod-net._domainkey.**sales.us.notification** | selector2-azurecomm-prod-net._domainkey.azurecomm.net |
+### Add SPF and DKIM Records
-#### Adding SPF and DKIM Records
--
-1. Navigate to **Provision Domains** and confirm that **Domain Status** is in "Verified" state.
-2. You can add SPF and DKIM by clicking **Configure**. Add the following TXT record and CNAME records to your domain's registrar or DNS hosting provider. Refer to the [adding DNS records in popular domain registrars table](#cname-records) for information on how to add a TXT & CNAME record for your DNS provider.
+In this section, you configure the sender authentication by adding Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) records.
- Click **Next** once you've completed this step.
+1. Open **Provision Domains** and confirm that **Domain Status** is in the `Verified` state.
+2. To add SPF and DKIM information, click **Configure**.
+3. Add the following TXT record and CNAME records to your domain's registrar or DNS hosting provider. Refer to the [adding DNS records in popular domain registrars table](#cname-records) for information about adding a TXT and CNAME record for your DNS provider.
+
:::image type="content" source="./media/email-domains-custom-spf.png" alt-text="Screenshot that shows the D N S records that you need to add for S P F validation for your verified domains."::: :::image type="content" source="./media/email-domains-custom-dkim-1.png" alt-text="Screenshot that shows the D N S records that you need to add for D K I M."::: :::image type="content" source="./media/email-domains-custom-dkim-2.png" alt-text="Screenshot that shows the D N S records that you need to add for additional D K I M records.":::
-3. Verify that TXT and CNAME records are created successfully in your DNS and Click **Done**
+   When you're done adding TXT and CNAME information, click **Next** to continue.
+
+4. Verify that TXT and CNAME records were successfully created in your DNS. Then click **Done**.
:::image type="content" source="./media/email-domains-custom-spf-dkim-verify.png" alt-text="Screenshot that shows the DNS records that you need to add for S P F and D K I M.":::
-4. DNS changes take up to 15 to 30 minutes. Click **Close**
+5. DNS changes take effect in 15 to 30 minutes. Click **Close** and wait for verification to complete.
:::image type="content" source="./media/email-domains-custom-spf-dkim-verify-progress.png" alt-text="Screenshot that shows that the sender authentication verification is in progress.":::
-5. Wait for Verification to complete. You can check the verification status from **Provision Domains** page.
+6. Check the verification status at the **Provision Domains** page.
:::image type="content" source="./media/email-domains-custom-verification-status.png" alt-text="Screenshot that shows that the sender authentication verification is done." lightbox="media/email-domains-custom-verification-status-expanded.png":::
-6. Once your sender authentication configurations are successfully verified, your email domain is ready to send emails using custom domain.
+7. Once you verify sender authentication configurations, your email domain is ready to send emails using the custom domain.
:::image type="content" source="./media/email-domains-custom-ready.png" alt-text="Screenshot that shows that your verified custom domain is ready to send Email." lightbox="media/email-domains-custom-ready-expanded.png":::
-## Changing MailFrom and FROM display name for custom domains
+## Change MailFrom and FROM display names for custom domains
-You can optionally configure your MailFrom address to be something other than the default DoNotReply, and also add more than one sender username to your domain. To understand how to configure your sender address, see how to [add multiple sender addresses](add-multiple-senders.md).
+You can optionally configure your `MailFrom` address to be something other than the default `DoNotReply` and add more than one sender username to your domain. For more information about how to configure your sender address, see [Quickstart: How to add multiple sender addresses](add-multiple-senders.md).
**Your email domain is now ready to send emails.**
-## Adding DNS records in popular domain registrars
+## Add DNS records in popular domain registrars
### TXT records
-The following links provide additional information on how to add a TXT record using many of the popular domain registrars.
+The following links provide instructions about how to add a TXT record using popular domain registrars.
| Registrar Name | Documentation Link |
| --- | --- |
The following links provide additional information on how to add a TXT record us
### CNAME records
-The following links provide additional information on how to add a CNAME record using many of the popular domain registrars (Make sure to use the values from the configuration window rather than the ones in the documentation link.)
+The following links provide more information about how to add a CNAME record using popular domain registrars. Make sure to use your values from the configuration window rather than the examples in the documentation link.
| Registrar Name | Documentation Link |
| --- | --- |
The following links provide additional information on how to add a CNAME record
* [Get started by connecting Email Communication Service with an Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
-* [How to send an email using Azure Communication Service](../../quickstarts/email/send-email.md)
+* [How to send an email using Azure Communication Services](../../quickstarts/email/send-email.md)
-The following documents may be interesting to you:
+## Related articles
-- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
+* Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
communication-services Add Multiple Senders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-multiple-senders.md
Title: How to add and remove multiple sender addresses in Azure Communication Services to send email.
+ Title: How to add and remove multiple email sender addresses.
-description: Learn about how to add multiple sender address to Email Communication Services.
+description: Learn about how to add multiple email sender addresses in Email Communication Service.
Last updated 03/29/2023
-# Quickstart: How to add and remove Multiple Sender Addresses to Email Communication Service
+# Quickstart: How to add and remove Multiple Sender Addresses to Email Communication Service
-In this quick start, you learn about how to add and remove multiple sender addresses in Azure Communication Services to send email.
+In this quick start, you learn about how to add and remove multiple email sender addresses in Azure Communication Services.
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet/).
-- An Azure Email Communication Services Resource created and ready to provision the domains [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md)
-- A [Custom Domain](../../quickstarts/email/add-custom-verified-domains.md) with higher than default sending limits provisioned and ready.
+- An Azure Communication Services Email Resource created and ready to add the domains. See [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md).
+- A custom domain with higher than default sending limits provisioned and ready. See [Quickstart: How to add custom verified email domains](../../quickstarts/email/add-custom-verified-domains.md).
-## Creating multiple sender usernames
-When Email Domain is provisioned to send mail, it has default MailFrom address as donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net or
-if you have configured custom domain such as "notification.azuremails.net" then the default MailFrom address as "donotreply@notification.azurecommtest.net" added. You can configure and add additional MailFrom addresses and FROM display name to more user friendly value.
+## Create multiple sender usernames
+
+An email domain that is provisioned to send email has a default MailFrom address, formatted as `donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net`. If you configure a custom domain such as `notification.azurecommtest.net`, the default MailFrom address is `donotreply@notification.azurecommtest.net`. You can configure and add more MailFrom addresses and FROM display names to use values that are easier to read.
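Once a sender username is configured, you use it as the sender address when you send mail. The following is a minimal sketch, not part of this quickstart, assuming the `azure-communication-email` Python package and hypothetical connection-string, sender, and recipient values.

```python
# pip install azure-communication-email
from azure.communication.email import EmailClient

client = EmailClient.from_connection_string("<your-connection-string>")  # hypothetical value

message = {
    "senderAddress": "sales@notification.azurecommtest.net",  # a configured MailFrom address (hypothetical)
    "recipients": {"to": [{"address": "customer@contoso.com"}]},
    "content": {
        "subject": "Welcome",
        "plainText": "Hello from a friendlier sender address.",
    },
}

# begin_send returns a poller; result() waits for the send operation to complete.
poller = client.begin_send(message)
print(poller.result())
```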
> [!NOTE]
-> Sender usernames cannot be enabled for Azure Managed Domains or Custom Domains with default sending limits. For more information, see [Service limits for Azure Communication Services](../../concepts/service-limits.md#rate-limits).
+> Sender usernames cannot be enabled for Azure Managed Domains or custom domains with default sending limits. For more information, see [Service limits for Azure Communication Services](../../concepts/service-limits.md#rate-limits).
-1. Go the overview page of the Email Communications Service resource that you created earlier.
-2. Click **Provision Domains** on the left navigation panel. You see list of provisioned domains.
-3. Click on the one of the provisioned domains.
+1. Open the Overview page of the Email Communication Service resource that you created in [Get started with Creating Email Communication Resource](../../quickstarts/email/create-email-communication-resource.md).
+2. Click **Provision Domains** on the left navigation panel to see list of provisioned domains.
+3. Click on the one of the provisioned domains to open the Domain Overview page.
:::image type="content" source="../../quickstarts/email/media/email-provisioned-domains.png" alt-text="Screenshot that shows Domain link in list of provisioned email domains." lightbox="../../quickstarts/email/media/email-provisioned-domains-expanded.png":::
-4. The navigation lands in Domain Overview page. Click on **MailFrom Addresses** link in left navigation. You see the default donotreply in MailFrom addresses list.
+4. Click the **MailFrom Addresses** link in left navigation to see the default `donotreply` in MailFrom addresses list.
:::image type="content" source="../../quickstarts/email/media/email-mailfrom-overview.png" alt-text="Screenshot that explains how to list of MailFrom addresses.":::
-5. Click on **Add**.
+5. Click **Add**.
+
:::image type="content" source="../../quickstarts/email/media/email-domains-mailfrom-add.png" alt-text="Screenshot that explains how to change MailFrom address and display name."::: 6. Enter the Display Name and MailFrom address. Click **Save**. :::image type="content" source="../../quickstarts/email/media/email-domains-mailfrom-add-save.png" alt-text="Screenshot that explains how to save MailFrom address and display name.":::
-7. Click **Save**. You see the updated list with newly added MailFrom address in the overview page.
+7. Click **Save** to see the updated list with newly added MailFrom address in the overview page.
:::image type="content" source="../../quickstarts/email/media/email-mailfrom-overview-updated.png" alt-text="Screenshot that shows Mailfrom addresses list with updated values." lightbox="../../quickstarts/email/media/email-mailfrom-overview-updated-expanded.png":::
if you have configured custom domain such as "notification.azuremails.net" then
## Removing multiple sender usernames
-1. Go the Domains overview page Click on **MailFrom addresses** link in left navigation. You'll able to see the MailFrom addresses in list.
+1. Open the Domains overview page.
+2. Click on **MailFrom addresses** link in left navigation to see the MailFrom addresses list.
+ :::image type="content" source="../../quickstarts/email/media/email-mailfrom-overview-updated.png" alt-text="Screenshot that shows MailFrom addresses." lightbox="../../quickstarts/email/media/email-mailfrom-overview-updated-expanded.png":::
-2. Select the MailFrom address that needs to be removed and Click on **Delete** button.
+3. Select the MailFrom address that needs to be removed and click **Delete**.
+ :::image type="content" source="../../quickstarts/email/media/email-domains-mailfrom-delete.png" alt-text="Screenshot that shows MailFrom addresses list with deletion.":::
-3. You see the updated list with newly added MailFrom address in the overview page.
+4. Review the updated list of MailFrom addresses in the overview page.
:::image type="content" source="../../quickstarts/email/media/email-mailfrom-overview.png" alt-text="Screenshot that shows MailFrom addresses list after deletion." lightbox="../../quickstarts/email/media/email-mailfrom-overview-expanded.png"::: ## Next steps
-* [Get started with create and manage Email Communication Service in Azure Communication Service](../../quickstarts/email/create-email-communication-resource.md)
+* [Quickstart: Create and manage Email Communication Service resources](../../quickstarts/email/create-email-communication-resource.md)
-* [Get started by connecting Email Communication Service with a Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
+* [Quickstart: How to connect a verified email domain](../../quickstarts/email/connect-email-communication-resource.md)
-The following documents may be interesting to you:
+## Related articles
-- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
-- How to send emails with custom verified domains?[Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+* Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
+* Learn how to send emails with custom verified domains in [Quickstart: How to add custom verified email domains](../../quickstarts/email/add-custom-verified-domains.md)
communication-services Connect Email Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/connect-email-communication-resource.md
Title: How to connect a verified email domain with Azure Communication Service resource
+ Title: How to connect a verified email domain
-description: Learn about how to connect verified email domains with Azure Communication Services Resource.
+description: Learn about how to connect verified email domains in Azure Communication Services.
zone_pivot_groups: acs-js-csharp-java-python-portal-rest
-# Quickstart: How to connect a verified email domain with Azure Communication Service resource
+# Quickstart: How to connect a verified email domain
-In this quick start, you'll learn how to connect a verified domain in Azure Communication Services to send email.
+In this quick start, you learn how to connect a verified domain in Azure Communication Services to send email.
::: zone pivot="azure-portal" [!INCLUDE [connect-domain-portal](./includes/connect-domain-portal.md)]
In this quick start, you'll learn how to connect a verified domain in Azure Comm
* [How to send an Email](../../quickstarts/email/send-email.md)
-* [What is Email Communication Resource for Azure Communication Service](../../concepts/email/prepare-email-communication-resource.md)
+* [What is Email Communication Resource for Azure Communication Services](../../concepts/email/prepare-email-communication-resource.md)
-The following documents may be interesting to you:
+## Related articles
- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
communication-services Create Email Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/create-email-communication-resource.md
Title: Quickstart - Create and manage Email Communication Service resource in Azure Communication Service
+ Title: Quickstart - Create and manage Email Communication Service resource in Azure Communication Services
-description: In this quickstart, you'll learn how to create and manage your first Azure Email Communication Services resource.
+description: In this quickstart, you'll learn how to create and manage your first Azure Email Communication Service resource.
Last updated 03/31/2023
-# Quickstart - Create and manage Email Communication Service resource in Azure Communication Service
+# Quickstart: Create and manage Email Communication Service resources
-Get started with Email by provisioning your first Email Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com/) or with the .NET management client library. The management client library and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the client libraries is available in the Azure portal.
+Get started with Email by provisioning your first Email Communication Service resource. Provision Email Communication Service resources through the [Azure portal](https://portal.azure.com/) or with the .NET management client library. The management client library and the Azure portal enable you to create, configure, update, and delete your resources and interface with [Azure Resource Manager](../../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functions available in the client libraries are available in the Azure portal.
## Create the Email Communications Service resource using portal
-1. Navigate to the [Azure portal](https://portal.azure.com/) to create a new resource.
-2. Search for Email Communication Services and hit enter. Select **Email Communication Services** and press **Create**
+1. Open the [Azure portal](https://portal.azure.com/) to create a new resource.
+2. Search for **Email Communication Services**.
+3. Select **Email Communication Services** and press **Create**
:::image type="content" source="./media/email-communication-search.png" alt-text="Screenshot that shows how to search Email Communication Service in market place."::: :::image type="content" source="./media/email-communication-create.png" alt-text="Screenshot that shows Create link to create Email Communication Service.":::
-3. Complete the required information on the basics tab:
+4. Complete the required information on the basics tab:
   - Select an existing Azure subscription.
   - Select an existing resource group, or create a new one by clicking the **Create new** link.
   - Provide a valid name for the resource.
Get started with Email by provisioning your first Email Communication Services r
:::image type="content" source="./media/email-communication-create-review.png" alt-text="Screenshot that shows how to the summary for review and create Email Communication Service.":::
-4. Wait for the validation to pass. Click **Create**
-5. Wait for the Deployment to complete. Click **Go to Resource** will land on Email Communication Service Overview Page.
+5. Wait for the validation to pass, then click **Create**.
+6. Wait for the Deployment to complete, then click **Go to Resource**. This opens the Email Communication Service Overview.
:::image type="content" source="./media/email-communication-overview.png" alt-text="Screenshot that shows the overview of Email Communication Service resource.":::
Get started with Email by provisioning your first Email Communication Services r
* [Email domains and sender authentication for Azure Communication Services](../../concepts/email/email-domain-and-sender-authentication.md)
-* [Get started by connecting Email Communication Service with Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
+* [Quickstart: How to connect a verified email domain](../../quickstarts/email/connect-email-communication-resource.md)
-The following documents may be interesting to you:
+## Related articles
- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
-- How to send emails with custom verified domains?[Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
-- How to send emails with Azure Managed Domains?[Add Azure Managed domains](../../quickstarts/email/add-azure-managed-domains.md)
+- Learn how to send emails with custom verified domains in [Quickstart: How to add custom verified email domains](../../quickstarts/email/add-custom-verified-domains.md)
+- Learn how to send emails with Azure Managed Domains in [Quickstart: How to add Azure Managed Domains to email](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Enable User Engagement Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md
Title: How to configure user engagement tracking to an email domain with Azure Communication Service resource.
+ Title: How to enable user engagement tracking for an email domain with Azure Communication Services resource.
-description: Learn about how to enable user engagement for the email domains with Azure Communication Services resource.
+description: Learn about how to enable user engagement tracking for an email domain with Azure Communication Services resource.
Last updated 03/31/2023
-# Quickstart: How to enable user engagement tracking for the email domain with Azure Communication Service resource
+# Quickstart: How to enable user engagement tracking for an email domain
-Configuring email engagement enables the insights on your customers' engagement with emails to help build customer relationships. Only the emails that are sent from Azure Communication Services verified Email Domains that are enabled for user engagement analysis will get the engagement tracking metrics.
+To gain insights into your customer email engagements, enable user engagement tracking. Only emails sent from Azure Communication Services verified email domains that are enabled for user engagement analysis can receive engagement tracking metrics.
> [!IMPORTANT]
> By enabling this feature, you are acknowledging that you are enabling open/click tracking and giving consent to collect your customers' email activity.
-In this quick start, you'll learn about how to enable user engagement tracking for verified domain in Azure Communication Services.
+In this quick start, you learn how to enable user engagement tracking for a verified email domain in Azure Communication Services.
## Enable email engagement
-1. Go the overview page of the Email Communications Service resource that you created earlier.
-2. Click Provision Domains on the left navigation panel. You'll see list of provisioned domains.
-3. Click on the Custom Domain name that you would like to update.
+1. Go to the overview page of the Email Communication Service resource that you created in [Quickstart: Create and manage an Email Communication Service resource](./create-email-communication-resource.md).
+2. In the left navigation panel, click **Provision Domains** to open a list of provisioned domains.
+3. Click on the name of the custom domain that you would like to update.
:::image type="content" source="./media/email-domains-custom-provision-domains.png" alt-text="Screenshot that shows how to get to overview page for Domain from provisioned domains list.":::
-4. The navigation lands in Domain Overview page where you'll able to see User interaction tracking Off by default.
+ When you click the custom domain name, it opens the Domain Overview page. The first time you open this page, User interaction tracking is **Off** by default.
+
+4. Click **Turn On** to enable engagement tracking.
:::image type="content" source="./media/email-domains-custom-overview.png" alt-text="Screenshot that shows the overview page of the domain." lightbox="media/email-domains-custom-overview-expanded.png":::
-5. Click turn on to enable engagement tracking.
+5. A confirmation dialog box opens. Click **Turn On** to confirm that you want to enable engagement tracking.
:::image type="content" source="./media/email-domains-user-engagement.png" alt-text="Screenshot that shows the user engagement turn-on page of the domain." lightbox="media/email-domains-user-engagement-expanded.png":::
-**Your email domain is now ready to send emails with user engagement tracking. Please be aware that user engagement tracking is applicable to HTML content and will not function if you submit the payload in plaintext.**
+**Your email domain is now ready to send emails with user engagement tracking. Note that user engagement tracking applies to HTML content and does not function if you submit the payload in plaintext.**
+
+You can now subscribe to Email User Engagement operational logs, which provide information about **open** and **click** user engagement metrics for messages sent from the email service.
-You can now subscribe to Email User Engagement operational logs - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.
> [!NOTE]
-> User Engagement Tracking cannot be enabled for Azure Managed Domains or Custom Domains with default sending limits. For more information, see [Service limits for Azure Communication Services](../../concepts/service-limits.md#rate-limits).
+> User Engagement Tracking cannot be enabled for Azure Managed Domains or custom domains with default sending limits. For more information, see [Service limits for Azure Communication Services](../../concepts/service-limits.md#rate-limits).
> [!IMPORTANT]
-> If you plan to enable open/click tracking for your email links, ensure that you are formatting the email content in HTML correctly. Specifically, make sure your tracking content is properly encapsulated within the payload, as demonstrated below:
+> If you plan to enable open/click tracking for your email links, ensure that you are correctly formatting the email content in HTML. Specifically, make sure that your tracking content is properly encapsulated within the payload, as follows:
```html
- <a href="https://www.contoso.com">Contoso Inc.,</a>.
+ <a href="https://www.contoso.com">Contoso Inc.</a>
```

## Next steps

- Access logs for [Email Communication Service](../../concepts/analytics/logs/email-logs.md).
-The following documents might be interesting to you:
+## Related articles
- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
-- [Get started by connecting Email Communication Service with a Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
+- [Quickstart: How to connect Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
communication-services Number Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/number-lookup.md
Title: Quickstart - Look up operator information for a phone number using Azure Communication Services
-description: Learn how to look up operator information for any phone number using Azure Communication Services
+description: Learn how to look up operator information for any phone number using Azure Communication Services.
zone_pivot_groups: acs-js-csharp-java-python
Common questions and issues:

-- The data returned by this endpoint is subject to various international laws and regulations, therefore the accuracy of the results depends on several factors. These factors include whether the number has been ported, the country code, and the approval status of the caller. Based on these factors, operator information may not be available for some phone numbers or may reflect the original operator of the phone number, not the current operator.
+- Changes to environment variables may not take effect in programs that are already running. If you notice your environment variables aren't working as expected, try closing and reopening any programs you're using to run and edit code. A minimal check for this is sketched after this list.
+- The data returned by this endpoint is subject to various international laws and regulations, therefore the accuracy of the results depends on several factors. These factors include whether the number was ported, the country code, and the approval status of the caller. Based on these factors, operator information may not be available for some phone numbers or may reflect the original operator of the phone number, not the current operator.
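As a quick illustration of the environment-variable point above, here is a minimal sketch (the variable name is hypothetical, not from the quickstart) that fails fast when the value isn't visible to the current process:

```python
import os

# Hypothetical variable name. New environment variables are only visible to processes
# started after the variable was set, so restart your terminal or IDE first.
connection_string = os.environ.get("COMMUNICATION_SERVICES_CONNECTION_STRING")
if not connection_string:
    raise RuntimeError(
        "COMMUNICATION_SERVICES_CONNECTION_STRING is not set in this process; "
        "set it, then restart the program running this code."
    )
```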
## Next steps

In this quickstart you learned how to:

> [!div class="checklist"]
+> * Look up number formatting
> * Look up operator information for a phone number

> [!div class="nextstepaction"]
container-apps Connect Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-services.md
Title: 'Tutorial: Connect services in Azure Container Apps (preview)'
+ Title: 'Tutorial: Connect to an Azure Cache for Redis service in Azure Container Apps (preview)'
description: Connect a service in development and then promote to production in Azure Container Apps.
Last updated 06/13/2023
-# Tutorial: Connect services in Azure Container Apps (preview)
+# Tutorial: Connect to an Azure Cache for Redis service in Azure Container Apps (preview)
Azure Container Apps allows you to connect to services that support your app that run in the same environment as your container app.
When in development, your application can quickly create and connect to [dev ser
As you move to production, your application can connect production-grade managed services.
-This tutorial shows you how to connect both dev and production grade services to your container app.
+This tutorial shows you how to connect both dev and production grade Azure Cache for Redis service to your container app.
In this tutorial, you learn to:
container-instances Container Instances Tutorial Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-app.md
In this section, you use the Azure CLI to deploy the image built in the [first t
When you deploy an image that's hosted in a private Azure container registry like the one created in the [second tutorial](container-instances-tutorial-prepare-acr.md), you must supply credentials to access the registry.
-A best practice for many scenarios is to create and configure a Microsoft Entra service principal with *pull* permissions to your registry. See [Authenticate with Azure Container Registry from Azure Container Instances](../container-registry/container-registry-auth-aci.md) for sample scripts to create a service principal with the necessary permissions. Take note of the *service principal ID* and *service principal password*. You use these credentials to access the registry when you deploy the container.
+A best practice for many scenarios is to create and configure a Microsoft Entra service principal with *pull* permissions to your registry. Take note of the *service principal ID* and *service principal password*. You use these credentials to access the registry when you deploy the container.
You also need the full name of the container registry login server (replace `<acrName>` with the name of your registry):
container-registry Container Registry Auth Aci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-aci.md
- Title: Access from Container Instances
-description: Learn how to provide access to images in your private container registry from Azure Container Instances by using a Microsoft Entra service principal.
----- Previously updated : 10/31/2023--
-# Authenticate with Azure Container Registry from Azure Container Instances
-
-You can use a Microsoft Entra service principal to provide access to your private container registries in Azure Container Registry.
-
-In this article, you learn to create and configure a Microsoft Entra service principal with *pull* permissions to your registry. Then, you start a container in Azure Container Instances (ACI) that pulls its image from your private registry, using the service principal for authentication.
-
-## When to use a service principal
-
-You should use a service principal for authentication from ACI in **headless scenarios**, such as in applications or services that create container instances in an automated or otherwise unattended manner.
-
-For example, if you have an automated script that runs nightly and creates a [task-based container instance](../container-instances/container-instances-restart-policy.md) to process some data, it can use a service principal with pull-only permissions to authenticate to the registry. You can then rotate the service principal's credentials or revoke its access completely without affecting other services and applications.
-
-Service principals should also be used when the registry [admin user](container-registry-authentication.md#admin-account) is disabled.
--
-## Authenticate using the service principal
-
-To launch a container in Azure Container Instances using a service principal, specify its ID for `--registry-username`, and its password for `--registry-password`.
-
-```azurecli-interactive
-az container create \
- --resource-group myResourceGroup \
- --name mycontainer \
- --image mycontainerregistry.azurecr.io/myimage:v1 \
- --registry-login-server mycontainerregistry.azurecr.io \
- --registry-username <service-principal-ID> \
- --registry-password <service-principal-password>
-```
-
->[!Note]
-> We recommend running the commands in the most recent version of the Azure Cloud Shell. Set `export MSYS_NO_PATHCONV=1` for running on-perm bash environment.
-
-## Sample scripts
-
-You can find the preceding sample scripts for Azure CLI on GitHub, as well versions for Azure PowerShell:
-
-* [Azure CLI][acr-scripts-cli]
-* [Azure PowerShell][acr-scripts-psh]
-
-## Next steps
-
-The following articles contain additional details on working with service principals and ACR:
-
-* [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md)
-* [Authenticate with Azure Container Registry from Azure Kubernetes Service (AKS)](../aks/cluster-container-registry-integration.md)
-
-<!-- IMAGES -->
-
-<!-- LINKS - External -->
-[acr-scripts-cli]: https://github.com/Azure/azure-docs-cli-python-samples/tree/master/container-registry/create-registry/create-registry-service-principal-assign-role.sh
-[acr-scripts-psh]: https://github.com/Azure/azure-docs-powershell-samples/tree/master/container-registry
-
-<!-- LINKS - Internal -->
container-registry Container Registry Auth Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-service-principal.md
For example, configure your web application to use a service principal that prov
You should use a service principal to provide registry access in **headless scenarios**. That is, an application, service, or script that must push or pull container images in an automated or otherwise unattended manner. For example:
-* *Pull*: Deploy containers from a registry to orchestration systems including Kubernetes, DC/OS, and Docker Swarm. You can also pull from container registries to related Azure services such as [Azure Container Instances](container-registry-auth-aci.md), [App Service](../app-service/index.yml), [Batch](../batch/index.yml), [Service Fabric](../service-fabric/index.yml), and others.
+* *Pull*: Deploy containers from a registry to orchestration systems including Kubernetes, DC/OS, and Docker Swarm. You can also pull from container registries to related Azure services such as [App Service](../app-service/index.yml), [Batch](../batch/index.yml), [Service Fabric](../service-fabric/index.yml), and others.
> [!TIP] > A service principal is recommended in several [Kubernetes scenarios](authenticate-kubernetes-options.md) to pull images from an Azure container registry. With Azure Kubernetes Service (AKS), you can also use an automated mechanism to authenticate with a target registry by enabling the cluster's [managed identity](../aks/cluster-container-registry-integration.md).
The **Username** value has the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
You can use service principal credentials from any Azure service that authenticates with an Azure container registry. Use service principal credentials in place of the registry's admin credentials for a variety of scenarios.
-For example, use the credentials to pull an image from an Azure container registry to [Azure Container Instances](container-registry-auth-aci.md).
- ### Use with docker login You can run `docker login` using a service principal. In the following example, the service principal application ID is passed in the environment variable `$SP_APP_ID`, and the password in the variable `$SP_PASSWD`. For recommended practices to manage Docker credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
If you are an existing Azure AI or GitHub Copilot customer, you may try Azure Co
> [!div class="nextstepaction"] > [90-day Free Trial with Azure AI Advantage](ai-advantage.md)
-If you are not an Azure customer, you may use the [30-day Free Trial without an Azure subscription](https://azure.microsoft.com/try/cosmosdb/). No commitment follows the end of your trial period.
+If you are not an Azure customer, you may use the 30-day Free Trial without an Azure subscription. No commitment follows the end of your trial period.
-Alternatively, you may use the [Azure Cosmos DB lifetime free tier](free-tier.md) with the first 1000 [RU/s](request-units.md) of throughput and 25 GB of storage free.
+> [!div class="nextstepaction"]
+> [30-day Free Trial without an Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
+
+Alternatively, you may use the Azure Cosmos DB lifetime free tier with the first 1000 [RU/s](request-units.md) of throughput and 25 GB of storage free.
+
+> [!div class="nextstepaction"]
+> [Azure Cosmos DB lifetime free tier](free-tier.md)
> [!TIP]
> To learn more about Azure Cosmos DB, join us every Thursday at 1PM Pacific on Azure Cosmos DB Live TV. See the [Upcoming session schedule and past episodes](https://gotcosmos.com/tv).
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/introduction.md
Azure Cosmos DB for MongoDB vCore provides developers with a fully managed Mongo
## Build AI-Driven Applications with a Single Database Solution
-Azure Cosmos DB for MongoDB vCore empowers generative AI applications with an integrated **Vector Search** feature. This enables efficient indexing and querying of data by characteristics for advanced use cases such as generative AI, without the complexity of external integrations. Unlike MongoDB Atlas and similar platforms, Azure Cosmos DB for MongoDB vCore keeps all data within the database for vector searches, ensuring simplicity and security. Even our free tier offers this capability, making sophisticated AI features accessible without additional cost.
+Azure Cosmos DB for MongoDB vCore empowers generative AI applications with an integrated **vector database**. This enables efficient indexing and querying of data by characteristics for advanced use cases such as generative AI, without the complexity of external integrations. Unlike MongoDB Atlas and similar platforms, Azure Cosmos DB for MongoDB vCore keeps all original data and vector data within the database, ensuring simplicity and security. Even our free tier offers this capability, making sophisticated AI features accessible without additional cost.
## Effortless integration with the Azure platform
cosmos-db Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/javascript-query-api.md
In addition to issuing queries using the API for NoSQL in Azure Cosmos DB, the [
When included inside predicate and/or selector functions, the following JavaScript constructs get automatically optimized to run directly on Azure Cosmos DB indices:

-- Simple operators: `=` `+` `-` `*` `/` `%` `|` `^` `&` `==` `!=` `===` `!===` `<` `>` `<=` `>=` `||` `&&` `<<` `>>` `>>>!` `~`
-- Literals, including the object literal: {}
+- Simple operators: `=` `+` `-` `*` `/` `%` `|` `^` `&` `==` `!=` `===` `!==` `<` `>` `<=` `>=` `||` `&&` `<<` `>>` `>>>` `~`
+- Literals, including the object literal: `{}`
- var, return

The following JavaScript constructs do not get optimized for Azure Cosmos DB indices:

-- Control flow (for example, if, for, while)
+- Control flow: `if` `for` `while`
- Function calls

For more information, see the [Azure Cosmos DB Server Side JavaScript Documentation](https://github.com/Azure/azure-cosmosdb-js-server/).

## SQL to JavaScript cheat sheet
-The following table presents various SQL queries and the corresponding JavaScript queries. As with SQL queries, properties (for example, item.id) are case-sensitive.
+The following table presents various SQL queries and the corresponding JavaScript queries. As with SQL queries, properties (for example, `item.id`) are case-sensitive.
> [!NOTE]
> `__` (double-underscore) is an alias to `getContext().getCollection()` when using the JavaScript query API.
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
The client library is available through the Python Package Index, as the `azure-
| --- | --- |
| [`CosmosClient`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) | This class is the primary client class and is used to manage account-wide metadata or databases. |
| [`DatabaseProxy`](/python/api/azure-cosmos/azure.cosmos.database.databaseproxy) | This class represents a database within the account. |
-| [`CotnainerProxy`](/python/api/azure-cosmos/azure.cosmos.container.containerproxy) | This class is primarily used to perform read, update, and delete operations on either the container or the items stored within the container. |
+| [`ContainerProxy`](/python/api/azure-cosmos/azure.cosmos.container.containerproxy) | This class is primarily used to perform read, update, and delete operations on either the container or the items stored within the container. |
| [`PartitionKey`](/python/api/azure-cosmos/azure.cosmos.partition_key.partitionkey) | This class represents a logical partition key. This class is required for many common operations and queries. |

## Code examples
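As a quick, unofficial illustration of how these classes relate, here is a minimal sketch; the endpoint, key, database, container, and item values are placeholders rather than values from the quickstart.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key; use your account's values.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")

# DatabaseProxy and ContainerProxy instances are returned by the client rather than constructed directly.
database = client.create_database_if_not_exists(id="cosmicworks")
container = database.create_container_if_not_exists(
    id="products",
    partition_key=PartitionKey(path="/categoryId"),
)

container.upsert_item({"id": "item1", "categoryId": "bikes", "name": "Road bike"})
item = container.read_item(item="item1", partition_key="bikes")
print(item["name"])
```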
cosmos-db Transactional Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/transactional-batch.md
Title: Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK
-description: Learn how to use TransactionalBatch in the Azure Cosmos DB .NET or Java SDK to perform a group of point operations that either succeed or fail.
-
+ Title: Transactional batch operations in Azure Cosmos DB using the .NET, Java or Python SDK
+description: Learn how to use TransactionalBatch in the Azure Cosmos DB .NET, Java SDK or Python SDK to perform a group of point operations that either succeed or fail.
+ -+ Last updated 10/27/2020
-# Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK
+# Transactional batch operations in Azure Cosmos DB
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-Transactional batch describes a group of point operations that need to either succeed or fail together with the same partition key in a container. In the .NET and Java SDKs, the `TransactionalBatch` class is used to define this batch of operations. If all operations succeed in the order they're described within the transactional batch operation, the transaction will be committed. However, if any operation fails, the entire transaction is rolled back.
+Transactional batch describes a group of point operations that need to either succeed or fail together with the same partition key in a container. Operations are defined, added to the batch, and the batch is executed. If all operations succeed in the order they're described within the transactional batch operation, the transaction will be committed. However, if any operation fails, the entire transaction is rolled back.
## What's a transaction in Azure Cosmos DB
if (response.isSuccessStatusCode())
> [!IMPORTANT] > If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). If the operation fails because it tries to create an item that already exists, a status code of 409 (conflict) is returned. The status code enables one to identify the cause of transaction failure.
+### [Python](#tab/python)
+
+Get or create a container instance:
+
+```python
+from azure.cosmos import PartitionKey, exceptions
+
+# 'database' is an existing DatabaseProxy obtained from your CosmosClient.
+container = database.create_container_if_not_exists(id="batch_container",
+ partition_key=PartitionKey(path='/road_bikes'))
+```
+In Python, Transactional Batch operations look very similar to the singular operation APIs, and are tuples containing (operation_type_string, args_tuple, batch_operation_kwargs_dictionary). The following sample items are used to demonstrate batch operations functionality:
+
+```python
+
+create_demo_item = {
+ "id": "68719520766",
+ "category": "road-bikes",
+ "name": "Chropen Road Bike"
+}
+
+# for demo, assume that this item already exists in the container.
+# the item id will be used for read operation in the batch
+read_demo_item1 = {
+ "id": "68719519884",
+ "category": "road-bikes",
+ "name": "Tronosuros Tire",
+ "productId": "68719520766"
+}
+
+# for demo, assume that this item already exists in the container.
+# the item id will be used for read operation in the batch
+read_demo_item2 = {
+ "id": "68719519886",
+ "category": "road-bikes",
+ "name": "Tronosuros Tire",
+ "productId": "68719520766"
+}
+
+# for demo, assume that this item already exists in the container.
+# the item id will be used for read operation in the batch
+read_demo_item3 = {
+ "id": "68719519887",
+ "category": "road-bikes",
+ "name": "Tronosuros Tire",
+ "productId": "68719520766"
+}
+
+# for demo, we'll upsert the item with id 68719519885
+upsert_demo_item = {
+ "id": "68719519885",
+ "category": "road-bikes",
+ "name": "Tronosuros Tire Upserted",
+ "productId": "68719520768"
+}
+
+# for replace demo, we'll replace the read_demo_item2 with this item
+replace_demo_item = {
+ "id": "68719519886",
+ "category": "road-bikes",
+ "name": "Tronosuros Tire replaced",
+ "productId": "68719520769"
+}
+
+# for replace with etag match demo, we'll replace the read_demo_item3 with this item
+# The use of etags and if-match/if-none-match options allows users to run conditional replace operations
+# based on the etag value passed. When using if-match, the request will only succeed if the item's latest etag
+# matches the passed in value. For more on optimistic concurrency control, see the link below:
+# https://learn.microsoft.com/azure/cosmos-db/nosql/database-transactions-optimistic-concurrency
+replace_demo_item_if_match_operation = {
+ "id": "68719519887",
+ "category": "road-bikes",
+ "name": "Tronosuros Tireh",
+ "wasReplaced": "Replaced based on etag match"
+ "productId": "68719520769"
+}
+
+```
+
+Prepare the operations to be added to the batch:
+
+```python
+create_item_operation = ("create", (create_demo_item,), {})
+read_item_operation = ("read", ("68719519884",), {})
+delete_item_operation = ("delete", ("68719519885",), {})
+upsert_item_operation = ("upsert", (upsert_demo_item,), {})
+replace_item_operation = ("replace", ("68719519886", replace_demo_item), {})
+replace_item_if_match_operation = ("replace",
+ ("68719519887", replace_demo_item_if_match_operation),
+ {"if_match_etag": container.client_connection.last_response_headers.get("etag")})
+```
+Add the operations to the batch:
+
+```python
+batch_operations = [
+ create_item_operation,
+ read_item_operation,
+ delete_item_operation,
+ upsert_item_operation,
+ replace_item_operation,
+ replace_item_if_match_operation
+ ]
+```
+
+Finally, execute the batch:
+
+```python
+try:
+ # Run that list of operations
+ batch_results = container.execute_item_batch(batch_operations=batch_operations, partition_key="road_bikes")
+ # Batch results are returned as a list of item operation results - or raise a CosmosBatchOperationError if
+ # one of the operations failed within your batch request.
+ print("\nResults for the batch operations: {}\n".format(batch_results))
+except exceptions.CosmosBatchOperationError as e:
+ error_operation_index = e.error_index
+ error_operation_response = e.operation_responses[error_operation_index]
+ error_operation = batch_operations[error_operation_index]
+ print("\nError operation: {}, error operation response: {}\n".format(error_operation, error_operation_response))
+ # [END handle_batch_error]
+```
+> **Note for using patch operation and replace_if_match_etag operation in the batch** <br>
+The batch operation kwargs dictionary is limited, and only takes a total of three different key values. In the case of wanting to use conditional patching within the batch, the use of filter_predicate key is available for the patch operation, or in case of wanting to use etags with any of the operations, the use of the if_match_etag/if_none_match_etag keys is available as well.<br>
+>```python
+> batch_operations = [
+> ("replace", (item_id, item_body), {"if_match_etag": etag}),
+> ("patch", (item_id, operations), {"filter_predicate": filter_predicate, "if_none_match_etag": etag}),
+> ]
+>```
++
+> [!IMPORTANT]
+> If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). If the operation fails because it tries to create an item that already exists, a status code of 409 (conflict) is returned. The status code enables one to identify the cause of transaction failure.
## How are transactional batch operations executed
-When the `ExecuteAsync` method is called, all operations in the `TransactionalBatch` object are grouped, serialized into a single payload, and sent as a single request to the Azure Cosmos DB service.
+When the Transactional Batch is executed, all operations in the Transactional Batch are grouped, serialized into a single payload, and sent as a single request to the Azure Cosmos DB service.
The service receives the request and executes all operations within a transactional scope, and returns a response using the same serialization protocol. This response is either a success, or a failure, and supplies individual operation responses per operation.
The SDK exposes the response for you to verify the result and, optionally, extra
Currently, there are two known limits:
-* The Azure Cosmos DB request size limit constrains the size of the `TransactionalBatch` payload to not exceed 2 MB, and the maximum execution time is 5 seconds.
-* There's a current limit of 100 operations per `TransactionalBatch` to ensure the performance is as expected and within SLAs.
+* The Azure Cosmos DB request size limit constrains the size of the Transactional Batch payload to not exceed 2 MB, and the maximum execution time is 5 seconds.
+* There's a current limit of 100 operations per Transactional Batch to ensure the performance is as expected and within SLAs.
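+
+If you need to apply more than 100 operations to the same logical partition key, one approach is to split them into smaller batches, as in the illustrative sketch below. Keep in mind that each chunk executes as its own transaction, so atomicity applies per chunk rather than across the whole list, and the 2-MB payload limit still applies to each request.
+
+```python
+MAX_OPERATIONS_PER_BATCH = 100
+
+def execute_in_chunks(container, operations, partition_key):
+    # Split the operation list into chunks of at most 100 operations each.
+    # Each chunk is a separate transactional batch that commits or fails on its own.
+    results = []
+    for start in range(0, len(operations), MAX_OPERATIONS_PER_BATCH):
+        chunk = operations[start:start + MAX_OPERATIONS_PER_BATCH]
+        results.extend(container.execute_item_batch(batch_operations=chunk, partition_key=partition_key))
+    return results
+```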
## Next steps
cosmos-db Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-regions.md
Previously updated : 01/21/2024 Last updated : 04/02/2024 # Regional availability for Azure Cosmos DB for PostgreSQL
Azure Cosmos DB for PostgreSQL is available in the following Azure regions:
| Central US | :heavy_check_mark: | :heavy_check_mark: | East US 2 | | East Asia | :heavy_check_mark: | :heavy_check_mark: | Southeast Asia | | East US | :heavy_check_mark: | :heavy_check_mark: | West US |
-| East US 2 | :heavy_check_mark: | :x: | Central US |
+| East US 2 | :heavy_check_mark: | :heavy_check_mark: | Central US |
| France Central | :heavy_check_mark: | :heavy_check_mark: | :x: | | Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | | Japan East | :heavy_check_mark: | :heavy_check_mark: | Japan West |
Azure Cosmos DB for PostgreSQL is available in the following Azure regions:
| Switzerland West † | :heavy_check_mark: | N/A | Switzerland North | | UK South | :heavy_check_mark: | :heavy_check_mark: | :x: | | West Central US | :heavy_check_mark: | N/A | West US 2 |
-| West Europe | :heavy_check_mark: | :x: | North Europe |
+| West Europe | :heavy_check_mark: | :heavy_check_mark: | North Europe |
| West US | :heavy_check_mark: | :x: | East US | | West US 2 | :heavy_check_mark: | :heavy_check_mark: | West Central US | | West US 3 | :heavy_check_mark: | :heavy_check_mark: | :x: |
cosmos-db Priority Based Execution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/priority-based-execution.md
To get started using priority-based execution, navigate to the **Features** page
- Java v4: [v4.45.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.45.0) or later - Spark 3.2: [v4.19.0](https://central.sonatype.com/artifact/com.azure.cosmos.spark/azure-cosmos-spark_3-2_2-12/4.19.0) or later - JavaScript v4: [v4.0.0](https://www.npmjs.com/package/@azure/cosmos) or later
+- Python SDK: [v4.6.0](https://pypi.org/project/azure-cosmos/4.6.0/) or later
## Code samples
container.createItem(family, new PartitionKey(family.getLastName()), requestOpti
}).subscribe(); ```
+#### [Python SDK](#tab/python)
+
+Priority-based execution is a preview feature available in Python SDK v4.6.0 or later. It must be enabled at the account level before you use it in the Python SDK.
+The request priority can be specified as "Low" or "High", as shown in the following example:
+
+```python
+item1_read = container.read_item("item1", "pk1", priority="High")
+item2_read = container.read_item("item2", "pk2", priority="Low")
+query = list(container.query_items("Select * from c", partition_key="pk1", priority="High"))
+```
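+
+Because low priority requests are throttled first when provisioned throughput is exhausted, you might also want to handle rate limiting explicitly. The SDK applies its own retry policy, so the following is only an illustrative sketch of reacting to a request that ultimately fails with a 429 response; the back-off value is an assumption, not an SDK default.
+
+```python
+import time
+from azure.cosmos import exceptions
+
+try:
+    item2_read = container.read_item("item2", "pk2", priority="Low")
+except exceptions.CosmosHttpResponseError as e:
+    if e.status_code == 429:
+        # The low priority request was rate limited; wait briefly and try once more.
+        time.sleep(1)  # illustrative back-off, tune for your workload
+        item2_read = container.read_item("item2", "pk2", priority="Low")
+    else:
+        raise
+```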
+ + ## Monitoring Priority-based execution You can monitor the behavior of requests with low and high priority using Azure Monitor metrics in the Azure portal.
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
The natively integrated vector database in our NoSQL API will become available i
- [Azure PostgreSQL Server pgvector Extension](../postgresql/flexible-server/how-to-use-pgvector.md) - [Azure AI Search](../search/search-what-is-azure-search.md)-- [Open Source Vector Database List](/semantic-kernel/memories/vector-db#available-connectors-to-vector-databases)
+- [Open Source Vector Databases](mongodb/vcore/vector-search-ai.md)
cost-management-billing Create Enterprise Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-enterprise-subscription.md
Previously updated : 02/16/2024 Last updated : 04/02/2024 # Create an Enterprise Agreement subscription
-This article helps you create an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) subscription for yourself or for someone else in your current Microsoft Entra directory/tenant. You may want another subscription to avoid hitting subscription quota limits, to create separate environments for security, or to isolate data for compliance reasons.
+This article helps you create an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) subscription for yourself or for someone else in your current Microsoft Entra directory/tenant. You can create another subscription to avoid hitting subscription quota limits, to create separate environments for security, or to isolate data for compliance reasons.
If you want to create subscriptions for Microsoft Customer Agreements, see [Create a Microsoft Customer Agreement subscription](create-subscription.md). If you're a Microsoft Partner and you want to create a subscription for a customer, see [Create a subscription for a partner's customer](create-customer-subscription.md). Or, if you have a Microsoft Online Service Program (MOSP) billing account, also called pay-as-you-go, you can create subscriptions starting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and then you complete the process at https://signup.azure.com/.
For more information, see [Understand Azure Enterprise Agreement administrative
## Create an EA subscription
-A user with Enterprise Administrator or Account Owner permissions can use the following steps to create a new EA subscription.
+A user with Enterprise Administrator or Account Owner permissions can use the following steps to create a new EA subscription for themselves or for another user. If the subscription is for another user, the user is sent a notification that they must approve.
>[!NOTE] > If you want to create an Enterprise Dev/Test subscription, an enterprise administrator must enable account owners to create them. Otherwise, the option to create them isn't available. To enable the dev/test offer for an enrollment, see [Enable the enterprise dev/test offer](direct-ea-administration.md#enable-the-enterprise-devtest-offer).
A user with Enterprise Administrator or Account Owner permissions can use the fo
1. Navigate to **Subscriptions** and then select **Add**. :::image type="content" source="./media/create-enterprise-subscription/subscription-add.png" alt-text="Screenshot showing the Subscription page where you Add a subscription." lightbox="./media/create-enterprise-subscription/subscription-add.png" ::: 1. On the Create a subscription page, on the **Basics** tab, type a **Subscription name**.
-1. Select the **Billing account** where the new subscription will get created.
-1. Select the **Enrollment account** where the subscription will get created.
-1. Select an **Offer type**, select **Enterprise Dev/Test** if the subscription will be used for development or testing workloads. Otherwise, select **Microsoft Azure Enterprise**.
+1. Select the **Billing account** where the new subscription gets created.
+1. Select the **Enrollment account** where the subscription gets created.
+1. For **Offer type**, select **Enterprise Dev/Test** if the subscription is for development or testing workloads. Otherwise, select **Microsoft Azure Enterprise**.
:::image type="content" source="./media/create-enterprise-subscription/create-subscription-basics-tab-enterprise-agreement.png" alt-text="Screenshot showing the Basics tab where you enter basic information about the enterprise subscription." lightbox="./media/create-enterprise-subscription/create-subscription-basics-tab-enterprise-agreement.png" ::: 1. Select the **Advanced** tab.
-1. Select your **Subscription directory**. It's the Microsoft Entra ID where the new subscription will get created.
+1. Select your **Subscription directory**. It's the Microsoft Entra ID where the new subscription gets created.
1. Select a **Management group**. It's the Microsoft Entra management group that the new subscription is associated with. You can only select management groups in the current directory. 1. Select one or more **Subscription owners**. You can select only users or service principals in the selected subscription directory. You can't select guest directory users. If you select a service principal, enter its App ID. :::image type="content" source="./media/create-enterprise-subscription/create-subscription-advanced-tab.png" alt-text="Screenshot showing the Advanced tab where you specify the directory, management group, and owner for the EA subscription." lightbox="./media/create-enterprise-subscription/create-subscription-advanced-tab.png" :::
A user with Enterprise Administrator or Account Owner permissions can use the fo
1. Enter tag pairs for **Name** and **Value**. :::image type="content" source="./media/create-enterprise-subscription/create-subscription-tags-tab.png" alt-text="Screenshot showing the tags tab where you enter tag and value pairs." lightbox="./media/create-enterprise-subscription/create-subscription-tags-tab.png" ::: 1. Select **Review + create**. You should see a message stating `Validation passed`.
-1. Verify that the subscription information is correct, then select **Create**. You'll see a notification that the subscription is getting created.
+1. Verify that the subscription information is correct, then select **Create**. You get a notification that the subscription is getting created.
After the new subscription is created, the account owner can see it in on the **Subscriptions** page.
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 03/11/2024 Last updated : 04/02/2024
This article explains the common tasks that an Enterprise Agreement (EA) adminis
To start managing the EA enrollment, the initial enterprise administrator signs in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes) using the account where they received the invitation email.
-If you've been set up as the enterprise administrator, then go to the Azure portal and sign in with your work, school, or Microsoft account.
+If you're set up as the enterprise administrator, then go to the Azure portal and sign in with your work, school, or Microsoft account.
If you have more than one billing account, select a billing account from billing scope menu. You can view your billing account properties and policy from the left menu.
Enterprise agreements and the customers accessing the agreements can have multip
## Activate your enrollment To activate your enrollment, the initial enterprise administrator signs in to the Azure portal using their work, school, or Microsoft account.
-If you've been set up as the enterprise administrator, you don't need to receive the activation email. You can login to Azure portal and activate the enrollment.
+If you're set up as the enterprise administrator, you don't need to receive the activation email. You can sign in to the Azure portal and activate the enrollment.
### To activate an enrollment
Enterprise administrators automatically get department administrator permissions
The structure of accounts and subscriptions affect how they're administered and how they appear on your invoices and reports. Examples of typical organizational structures include business divisions, functional teams, and geographic locations.
-After a new account is added to the enrollment, the account owner is sent an account ownership email that's used to confirm ownership.
+After a new account is added to the enrollment, the account owner is sent an account ownership email to confirm ownership.
Check out the [EA admin manage accounts](https://www.youtube.com/watch?v=VKWAEx6qfPc) video. It's part of the [Enterprise Customer Billing Experience in the Azure portal](https://www.youtube.com/playlist?list=PLeZrVF6SXmsoHSnAgrDDzL0W5j8KevFIm) series of videos.
Azure Active Directory is now Microsoft Entra ID. For more information, see [New
1. In the left menu, select **Billing scopes** and then select a billing account scope. 1. In the left menu, select **Accounts**. 1. Select **+ Add**.
-1. On the Add an account page, type a friendly name to identify the account that's used for reporting.
+1. On the Add an account page, type a friendly name to identify the account used for reporting.
1. Enter the **Account Owner email** address to associate it with the new account. 1. Select a department or leave it as unassigned. 1. When completed, select **Add**.
If you're a new EA account owner with a .onmicrosoft.com account, you might not
1. In the left menu under **Settings**, select **Activate Account**. 1. On the Activate Account page, select **Yes, I wish to continue** and the select **Activate this account**. :::image type="content" source="./media/direct-ea-administration/activate-account.png" alt-text="Screenshot showing the Activate Account page for onmicrosoft.com accounts." lightbox="./media/direct-ea-administration/activate-account.png" :::
-1. After the activation process completes, copy and paste the following link to your browser. The page opens and creates a subscription that's associated with your enrollment.
+1. After the activation process completes, copy and paste the following link to your browser. The page opens and creates a subscription that gets associated with your enrollment.
- For Azure global, the URL is `https://signup.azure.com/signup?offer=MS-AZR-0017P&appId=IbizaCatalogBlade`. - For Azure Government, the URL is `https://signup.azure.us/signup?offer=MS-AZR-0017P&appId=IbizaCatalogBlade`.
After subscriptions are activated under your Azure EA enrollment, cancel the Azu
## MSDN subscription transfer
-When your transfer an MSDN subscription to an EA enrollment, it's automatically converted to an [Enterprise Dev/Test subscription](https://azure.microsoft.com/pricing/offers/ms-azr-0148p/). After conversion, the subscription loses any existing monetary credit. So, we recommended that you use all your credit before you transfer it to your Enterprise Agreement.
+When you transfer an MSDN subscription to an enrollment, it gets converted to an [Enterprise Dev/Test subscription](https://azure.microsoft.com/pricing/offers/ms-azr-0148p/). After conversion, the subscription loses any existing monetary credit. So, we recommend that you use all your credit before you transfer it to your Enterprise Agreement.
## Azure in Open subscription transfer
If your Enterprise Agreement doesn't have a support plan and you try to transfer
## Manage department and account spending with budgets
-EA customers can set budgets for each department and account under an enrollment. Budgets in Cost Management help you plan for and drive organizational accountability. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spend limit. When the budget thresholds you've created are exceeded, only notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. For more information about how to create budgets, see [Tutorial: Create and manage budgets](../costs/tutorial-acm-create-budgets.md).
+EA customers can set budgets for each department and account under an enrollment. Budgets in Cost Management help you plan for and drive organizational accountability. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spend limit. When the budget thresholds are exceeded, only notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. For more information about how to create budgets, see [Tutorial: Create and manage budgets](../costs/tutorial-acm-create-budgets.md).
## Enterprise Agreement user roles
Each role has a different level of access and authority. For more information ab
## Add an Azure EA account
-An Azure EA account is an organizational unit in the Azure portal. In the Azure portal, it's referred to as _account_. It's used to administer subscriptions and it's also used for reporting. To access and use Azure services, you need to create an account or have one created for you. For more information about accounts, see [Add an account](#add-an-account-and-account-owner).
+An Azure EA account is an organizational unit in the Azure portal. In the Azure portal, it appears as an _account_. You use it to administer subscriptions and for reporting. To access and use Azure services, you need to create an account or have one created for you. For more information about accounts, see [Add an account](#add-an-account-and-account-owner).
## Enable the Enterprise Dev/Test offer
-As an EA admin, you can allow account owners in your organization to create subscriptions based on the EA Dev/Test offer. To do so, select the **Dev/Test** option in the Edit account window. After you've selected the Dev/Test option, let the account owner know so that they can create EA Dev/Test subscriptions needed for their teams of Dev/Test subscribers. The offer enables active Visual Studio subscribers to run development and testing workloads on Azure at special Dev/Test rates. It provides access to the full gallery of Dev/Test images including Windows 8.1 and Windows 10.
+As an EA admin, you can allow account owners in your organization to create subscriptions based on the EA Dev/Test offer. To do so, select the **Dev/Test** option in the Edit account window. After you select the Dev/Test option, let the account owner know so that they can create EA Dev/Test subscriptions needed for their teams of Dev/Test subscribers. The offer enables active Visual Studio subscribers to run development and testing workloads on Azure at special Dev/Test rates. It provides access to the full gallery of Dev/Test images including Windows 8.1 and Windows 10.
>[!NOTE] > The Enterprise Dev/Test Offer isn't available for Azure Government customers. If you're an Azure Government customer, you can't enable the Dev/Test option.
_Microsoft Azure Enterprise_ is the default name when a subscription is create
The subscription name appears on reports. It's the name of the project associated with the subscription in the development portal.
-New subscriptions can take up to 24 hours to appear in the subscriptions list. After you've created a subscription, you can:
+New subscriptions can take up to 24 hours to appear in the subscriptions list. After you create a subscription, you can:
- Edit subscription details - Manage subscription services
You can delete an enrollment account only when there are no active subscriptions
## Manage notification contacts
-Notifications allow enterprise administrators to enroll their team members to receive usage, invoice, and user management notifications without giving them billing account access in the Azure portal.
+Notifications allow enterprise administrators to enroll their team members to receive usage notifications and user management notifications without giving them billing account access in the Azure portal.
-Notification contacts are shown in the Azure portal in the Notifications under Settings. Managing your notification contacts makes sure that the right people in your organization get Azure EA notifications.
+Notification contacts are shown in the Azure portal on the Notifications page under Settings. Managing your notification contacts makes sure that the right people in your organization get Azure EA notifications.
+
+> [!NOTE]
+> Invoices are only sent to the person set to receive invoices for the enrollment, the **Bill to contact**. The bill-to contact can send others a copy of the invoice, if needed.
To view current notifications settings and add contacts:
Azure Enterprise users can convert from a Microsoft Account (MSA or Live ID) to
### To begin
-1. Add the work or school account to the Azure portal in the role(s) needed.
-1. If you get errors, the account may not be valid in Microsoft Entra ID. Azure uses User Principal Name (UPN), which isn't always identical to the email address.
+1. Add the work or school account to the Azure portal with the needed roles.
+1. If you get errors, the account might not be valid in Microsoft Entra ID. Azure uses User Principal Name (UPN), which isn't always identical to the email address.
1. Authenticate to the Azure portal using the work or school account. ### To convert subscriptions from Microsoft accounts to work or school accounts
Azure Enterprise users can convert from a Microsoft Account (MSA or Live ID) to
## Azure EA term glossary **Account**<br>
-An organizational unit. It's used to administer subscriptions and for reporting.
+An organizational unit used to administer subscriptions and for reporting.
**Account owner**<br> The person who manages subscriptions and service administrators on Azure. They can view usage data on this account and its associated subscriptions.
The person who manages departments, creates new accounts and account owners, vie
A unique identifier supplied by Microsoft to identify the specific enrollment associated with an Enterprise Agreement. **Enterprise administrator**<br>
-The person who manages departments, department owners, accounts, and account owners on Azure. They can manage enterprise administrators and view usage data, billed quantities, and unbilled charges across all accounts and subscriptions associated with the enterprise enrollment.
+The person who manages departments, department owners, accounts, and account owners on Azure. They can also manage enterprise administrators and view usage data, billed quantities, and unbilled charges across all accounts and subscriptions associated with the enterprise enrollment.
**Enterprise agreement**<br> A Microsoft licensing agreement for customers with centralized purchasing who want to standardize their entire organization on Microsoft technology and maintain an information technology infrastructure on a standard of Microsoft software.
The person who accesses and manages subscriptions and development projects.
Represents an Azure EA subscription and is a container of Azure services managed by the same service administrator. **Work or school account**<br>
-For organizations that have set up Microsoft Entra ID with federation to the cloud and all accounts are on a single tenant.
+For organizations that set up Microsoft Entra ID with federation to the cloud and all accounts are on a single tenant.
## Enrollment status
The enrollment administrator needs to sign in to the Azure portal. After they si
The enrollment is Active and accounts and subscriptions can be created in the Azure portal. The enrollment remains active until the Enterprise Agreement end date. **Indefinite extended term**<br>
-An indefinite extended term takes place after the Enterprise Agreement end date has passed. It enables Azure EA customers who are opted in to the extended term to continue to use Azure services indefinitely at the end of their Enterprise Agreement.
+An indefinite extended term takes place after the Enterprise Agreement end date. It enables Azure EA customers who are opted in to the extended term to continue to use Azure services indefinitely at the end of their Enterprise Agreement.
Before the Azure EA enrollment reaches the Enterprise Agreement end date, the enrollment administrator should decide which of the following options to take:
Before the Azure EA enrollment reaches the Enterprise Agreement end date, the en
- Confirm disablement of all services associated with the enrollment. **Expired**<br>
-The Azure EA customer is opted out of the extended term, and the Azure EA enrollment has reached the Enterprise Agreement end date. The enrollment expires, and all associated services are disabled.
+The Azure EA customer is opted out of the extended term, and the Azure EA enrollment reached the Enterprise Agreement end date. The enrollment expires, and all associated services are disabled.
**Transferred**<br>
-Enrollments where all associated accounts and services have been transferred to a new enrollment appear with a transferred status.
+Enrollments where all associated accounts and services were transferred to a new enrollment appear with a transferred status.
> [!NOTE] > Enrollments don't automatically transfer if a new enrollment number is generated at renewal. You must include your prior enrollment number in your renewal paperwork to facilitate an automatic transfer.
cost-management-billing Billing Meter Id Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/billing-meter-id-updates.md
+
+ Title: Azure billing meter ID updates
+
+description: Learn about how Azure billing meter ID updates might affect your Azure consumption and billing.
+++++ Last updated : 03/29/2024+++
+# Azure billing meter ID updates
+
+On March 1, 2024, some Azure billing meters were updated for improved meter ID metadata. More updates are underway. A billing meter is used to determine the cost of using a specific service or resource in Azure. It helps calculate the amount you're charged based on the quantity of the resource consumed. The billing meter varies depending on the type of service or resource used.
+
+The meter ID updates result in every meter being individual and unique. A unique meter means that every Azure service, resource, and region has its own billing meter ID that precisely reflects its consumption and price. The change ensures that you see the correct meter ID on your invoice, and that you're charged the correct price for each service or resource consumed.
+
+*The meter ID changes don't affect prices*. However, you might notice some changes in how your Azure consumption is shown on your invoice, price sheet, API, usage details file, and Cost Management + Billing experiences.
+
+Here's an example showing updated meter information.
+
+| Service Name | Product Name | Region | Feature | Meter Type | Meter ID (new) | Meter ID (previous) |
+||||||||
+| Virtual Machines | Virtual Machines DSv3 Series Windows | CH West | Low Priority | 1 Compute Hour | 59f7c6d9-3658-5693-8925-4aae24068de8 | 0ce7683b-0630-4103-a9a7-75a68fbf6140 |
+
+## Download updated meters
+
+The following download links are CSV files of the latest meter IDs released to date, with their corresponding service, product, region, and previous meter IDs. More meter ID changes will occur over time. When new files are available, we update this page to add their download links.
+
+- [March 1, 2024 updated meters](https://download.microsoft.com/download/5/f/8/5f8d3499-eaab-4e8b-8d1d-7835923c238f/20240301_new_meterIds.csv)
+- [April 1, 2024 updated meters](https://download.microsoft.com/download/5/f/8/5f8d3499-eaab-4e8b-8d1d-7835923c238f/20240401_new_meterIds.csv)
+
+## Recommendations
+
+We recommend you review the list of updated meters and familiarize yourself with the new meter IDs and names that apply to your Azure consumption. You should check reports that you have for analysis, budgets, and any saved views to see if they use the updated meters. If so, you might need to update them accordingly for the new meter IDs and names. If you don't use any meters in the updated meters list, the changes don't affect you.
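+
+As a starting point for that review, you could load a downloaded meter update file and check whether any meter IDs referenced in your own reports appear in it. The sketch below assumes the CSV columns match the example table earlier in this article (`Meter ID (new)` and `Meter ID (previous)`); verify the actual headers in the file you download, and treat the meter ID set as a placeholder for IDs taken from your own usage data.
+
+```python
+import csv
+
+def find_affected_meters(updated_meters_csv, meter_ids_in_use):
+    """Map old meter IDs that you use to their new meter IDs."""
+    affected = {}
+    with open(updated_meters_csv, newline="") as f:
+        for row in csv.DictReader(f):
+            # Assumption: column names match the example table in this article.
+            old_id = row.get("Meter ID (previous)")
+            new_id = row.get("Meter ID (new)")
+            if old_id in meter_ids_in_use:
+                affected[old_id] = new_id
+    return affected
+
+# Hypothetical usage with a meter ID taken from your own cost reports.
+print(find_affected_meters("20240401_new_meterIds.csv", {"0ce7683b-0630-4103-a9a7-75a68fbf6140"}))
+```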
+
+## See also
+
+To learn how to create and manage budgets and save and share customized views, see the following articles:
+
+- [Tutorial - Create and manage budgets](../costs/tutorial-acm-create-budgets.md)
+- [Save and share customized views](../costs/save-share-views.md)
+- If you want to learn more about how to manage your billing account and subscriptions, see the [Cost Management + Billing documentation](../index.yml).
data-manager-for-agri Concepts Llm Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-llm-apis.md
# Generative AI in Azure Data Manager for Agriculture
-The copilot templates for agriculture enable seamless retrieval of data stored in Azure Data Manager for Agriculture so that farming-related context and insights can be queried in a conversational context. These capabilities enable customers and partners to build their own agriculture copilots.
-
-Customers and partners can deliver insights to users around disease, yield, harvest windows, and more, by using actual planning and observational data. Although Azure Data Manager for Agriculture isn't required to operationalize copilot templates for agriculture, it enables customers to more easily integrate generative AI scenarios for their users.
+Microsoft copilot templates empower organizations to build agriculture copilots. Our copilot templates enable seamless retrieval of data stored in Azure Data Manager for Agriculture so that farming-related context and insights can be queried in a conversational context.
Many customers have proprietary data outside Azure Data Manager for Agriculture; for example, agronomy PDFs or market price data. These customers can benefit from an orchestration framework that allows for plugins, embedded data structures, and subprocesses to be selected as part of the query flow. Customers who have farm operations data in Azure Data Manager for Agriculture can use plugins that enable seamless selection of APIs mapped to farm operations. These plugins allow for a combination of results, calculation of area, ranking, and summarizing to help serve customer prompts.
-The copilot templates for agriculture make generative AI in agriculture a reality.
+Customers and partners can deliver insights to users around disease, yield, harvest windows, and more, by using actual planning and observational data. Although Azure Data Manager for Agriculture isn't required to operationalize copilot templates for agriculture, it enables customers to more easily integrate generative AI scenarios for their users.
+
+Our copilot templates make generative AI in agriculture a reality.
> [!NOTE] > Azure might include preview, beta, or other prerelease features, services, software, or regions offered by Microsoft for optional evaluation. Previews are licensed to you as part of [your agreement](https://azure.microsoft.com/support) governing the use of Azure, and are subject to terms applicable to previews.
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
description: This article lists all Microsoft Defender for Cloud security recomm
Previously updated : 03/13/2024 Last updated : 04/01/2024 ai-usage: ai-assisted
Secure your storage account with greater flexibility using customer-managed keys
**Severity**: Low
-### [Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243)
-
-**Description**: Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges.
-(Related policy: [Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f037eea7a-bd0a-46c5-9a66-03aea78705d3)).
-
-**Severity**: Medium
- ### [Cognitive Services accounts should use customer owned storage or enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/aa395469-1687-78a7-bf76-f4614ef72977) **Description**: This policy audits any Cognitive Services account not using customer owned storage nor data encryption. For each Cognitive Services account with storage, use either customer owned storage or enable data encryption.
Configure a private endpoint connection to enable access to traffic coming only
**Severity**: Medium
-### [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7)
-
-**Description**: This policy audits any Cognitive Services account in your environment with public network access enabled. Public network access should be disabled so that only connections from private endpoints are allowed.
-(Related policy: [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0725b4dd-7e76-479c-a735-68e7ee23d5ca)).
-
-**Severity**: Medium
- ### [Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ab153e43-2fb5-0670-2117-70340851ea9b) **Description**: Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules.
Note that the following subnet types will be listed as not applicable: GatewaySu
**Severity**: Medium
+### [Azure AI Services resources should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243)
+
+**Description**: By restricting network access, you can ensure that only allowed networks can access the service. To achieve this, configure network rules so that only applications from allowed networks can access the Azure AI Services resource.
+
+**Severity**: Medium
+
+### [Azure AI Services resources should have key access disabled (disable local authentication)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/13b10b36-aa99-4db6-b00c-dcf87c4761e6)
+
+**Description**: We recommend disabling key access (local authentication) for security. Azure OpenAI Studio, typically used in development and testing, requires key access and doesn't function if key access is disabled. After disabling, Microsoft Entra ID becomes the only access method, which allows you to maintain the principle of least privilege and granular control. [Learn more](https://aka.ms/AI/auth).
+
+**Severity**: Medium
+ ## Deprecated recommendations ### Over-provisioned identities in subscriptions should be investigated to reduce the Permission Creep Index (PCI)
Learn more about how endpoint protection for machines is evaluated in [Endpoint
**Severity**: High
+### [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7)
+
+**Description**: This policy audits any Cognitive Services account in your environment with public network access enabled. Public network access should be disabled so that only connections from private endpoints are allowed.
+(Related policy: [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0725b4dd-7e76-479c-a735-68e7ee23d5ca)).
+
+**Severity**: Medium
+ ## Related content - [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 03/25/2024 Last updated : 04/02/2024 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md). ## April 2024
-|Date | Update |
-|--|--|
-| April 2| [Containers multicloud recommendations (GA)](#containers-multicloud-recommendations-ga) |
+
+| Date | Update |
+| - | - |
+| April 2 | [Update to recommendations to align with Azure AI Services resources](#update-to-recommendations-to-align-with-azure-ai-services-resources) |
+| April 2 | [Deprecation of Cognitive Services recommendation](#deprecation-of-cognitive-services-recommendation) |
+| April 2 | [Containers multicloud recommendations (GA)](#containers-multicloud-recommendations-ga) |
+
+### Update to recommendations to align with Azure AI Services resources
+
+April 2, 2024
+
+The following recommendations are updated to the Azure AI Services category (formerly known as Cognitive Services and Cognitive Search) to comply with the new Azure AI Services naming format and to align with the relevant resources.
+
+| Old recommendation | Updated recommendation |
+| - | - |
+| Cognitive Services accounts should restrict network access | [Azure AI Services resources should restrict network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243) |
+| Cognitive Services accounts should have local authentication methods disabled | [Azure AI Services resources should have key access disabled (disable local authentication)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/13b10b36-aa99-4db6-b00c-dcf87c4761e6) |
+| Diagnostic logs in Search services should be enabled | [Diagnostic logs in Azure AI services resources should be enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/dea5192e-1bb3-101b-b70c-4646546f5e1e) |
+
+See the [list of security recommendations](recommendations-reference.md).
+
+### Deprecation of Cognitive Services recommendation
+
+April 2, 2024
+
+The recommendation [`Public network access should be disabled for Cognitive Services accounts`](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7) is deprecated. The related policy definition [`Cognitive Services accounts should disable public network access`](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) has been removed from the regulatory compliance dashboard.
+
+This recommendation is already being covered by another networking recommendation for Azure AI Services, [`Cognitive Services accounts should restrict network access`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243/showSecurityCenterCommandBar~/false).
+
+See the [list of security recommendations](recommendations-reference.md).
### Containers multicloud recommendations (GA) April 2, 2024
-As part of Defender for Containers multicloud general availability, following recommendations are announced GA as well:
+As part of Defender for Containers multicloud general availability, the following recommendations are announced GA as well:
- For Azure
-| **Recommendation** | **Description** | **Assessment Key** |
+| **Recommendation** | **Description** | **Assessment Key** |
| | | | | Azure registry container images should have vulnerabilities resolved| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 | | Azure running container images should have vulnerabilities resolved| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 | - For GCP
-| **Recommendation** | **Description** | **Assessment Key** |
+| **Recommendation** | **Description** | **Assessment Key** |
| | | | | GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure | Scans your GCP registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce | | GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Google Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 5cc3a2c1-8397-456f-8792-fe9d0d4c9145 | - For AWS
-| **Recommendation** | **Description** | **Assessment Key** |
+| **Recommendation** | **Description** | **Assessment Key** |
| | | | | AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Scans your AWS registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce | | AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Elastic Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 682b2595-d045-4cff-b5aa-46624eb2dd8f |
-Please note that those recommendations would affect the secure score calculation.
-
+The recommendations affect the secure score calculation.
+ ## March 2024 |Date | Update |
Please note that those recommendations would affect the secure score calculation
| March 5 | [Deprecation of two recommendations related to PCI](#deprecation-of-two-recommendations-related-to-pci) | | March 3 | [Defender for Cloud Containers Vulnerability Assessment powered by Qualys retirement](#defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys-retirement) | - ### Windows container images scanning is now generally available (GA) March 31, 2024
-We are announcing the general availability (GA) of the Windows container images support for scanning by Defender for Containers.
+We're announcing the general availability (GA) of the Windows container images support for scanning by Defender for Containers.
### Continuous export now includes attack path data March 25, 2024
-We are announcing that continuous export now includes attack path data. This feature allows you to stream security data to Log Analytics in Azure Monitor, to Azure Event Hubs, or to another Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), or IT classic deployment model solution.
+We're announcing that continuous export now includes attack path data. This feature allows you to stream security data to Log Analytics in Azure Monitor, to Azure Event Hubs, or to another Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), or IT classic deployment model solution.
Learn more about [continuous export](benefits-of-continuous-export.md).
Learn more about [continuous export](benefits-of-continuous-export.md).
March 21, 2024
-Until now agentless scanning covered CMK encrypted VMs in AWS and GCP. With this release we are completing support for Azure as well. The capability employs a unique scanning approach for CMK in Azure:
+Until now agentless scanning covered CMK encrypted VMs in AWS and GCP. With this release we're completing support for Azure as well. The capability employs a unique scanning approach for CMK in Azure:
-- Defender for Cloud does not handle the key or decryption process. Key handling and decryption is seamlessly handled by Azure Compute and is transparent to Defender for Cloud's agentless scanning service.
+- Defender for Cloud doesn't handle the key or decryption process. Key handling and decryption are seamlessly handled by Azure Compute and is transparent to Defender for Cloud's agentless scanning service.
- The unencrypted VM disk data is never copied or re-encrypted with another key.-- The original key is not replicated during the process. Purging it eradicates the data on both your production VM and Defender for CloudΓÇÖs temporary snapshot.
+- The original key isn't replicated during the process. Purging it eradicates the data on both your production VM and Defender for CloudΓÇÖs temporary snapshot.
-During public preview this capability is not automatically enabled. If you are using Defender for Servers P2 or Defender CSPM and your environment has VMs with CMK encrypted disks, you can now have them scanned for vulnerabilities, secrets and malware following these [enablement steps](enable-agentless-scanning-vms.md#agentless-vulnerability-assessment-on-azure).
+During public preview this capability isn't automatically enabled. If you're using Defender for Servers P2 or Defender CSPM and your environment has VMs with CMK encrypted disks, you can now have them scanned for vulnerabilities, secrets and malware following these [enablement steps](enable-agentless-scanning-vms.md#agentless-vulnerability-assessment-on-azure).
- [Learn more on agentless scanning for VMs](concept-agentless-data-collection.md) - [Learn more on agentless scanning permissions](faq-permissions.yml#which-permissions-are-used-by-agentless-scanning-)
During public preview this capability is not automatically enabled. If you are u
March 18, 2024
-We are announcing new endpoint detection and response recommendations that discover and assesses the configuration of supported endpoint detection and response solutions. If issues are found, these recommendations offer remediation steps.
+We're announcing new endpoint detection and response recommendations that discover and assess the configuration of supported endpoint detection and response solutions. If issues are found, these recommendations offer remediation steps.
-The following new agentless endpoint protection recommendations are now available if you have Defender for Servers Plan 2 or the Defender CSPM plan enabled on your subscription with the agentless machine scanning feature enabled. The recommendations support Azure and multicloud machines. On-premises machines are not supported.
+The following new agentless endpoint protection recommendations are now available if you have Defender for Servers Plan 2 or the Defender CSPM plan enabled on your subscription with the agentless machine scanning feature enabled. The recommendations support Azure and multicloud machines. On-premises machines aren't supported.
| Recommendation name | Description | Severity | |--|
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan. Previously updated : 03/28/2024 Last updated : 04/01/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
+| [Deprecation of virtual machine recommendation](#deprecation-of-virtual-machine-recommendation) | April 2, 2024 | April 30, 2024 |
| [General Availability of Unified Disk Encryption recommendations](#general-availability-of-unified-disk-encryption-recommendations) | March 28, 2024 | April 30, 2024 | | [Defender for open-source relational databases updates](#defender-for-open-source-relational-databases-updates) | March 6, 2024 | April, 2024 | | [Changes in where you access Compliance offerings and Microsoft Actions](#changes-in-where-you-access-compliance-offerings-and-microsoft-actions) | March 3, 2024 | September 30, 2025 | | [Microsoft Security Code Analysis (MSCA) is no longer operational](#microsoft-security-code-analysis-msca-is-no-longer-operational) | February 26, 2024 | February 26, 2024 |
-| [Update recommendations to align with Azure AI Services resources](#update-recommendations-to-align-with-azure-ai-services-resources) | February 20, 2024 | February 28, 2024 |
-| [Deprecation of data recommendation](#deprecation-of-data-recommendation) | February 12, 2024 | March 14, 2024 |
| [Decommissioning of Microsoft.SecurityDevOps resource provider](#decommissioning-of-microsoftsecuritydevops-resource-provider) | February 5, 2024 | March 6, 2024 | | [Change in pricing for multicloud container threat detection](#change-in-pricing-for-multicloud-container-threat-detection) | January 30, 2024 | April 2024 | | [Enforcement of Defender CSPM for Premium DevOps Security Capabilities](#enforcement-of-defender-cspm-for-premium-devops-security-value) | January 29, 2024 | March 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Deprecation of virtual machine recommendation
+
+**Announcement date: April 2, 2024**
+
+**Estimated date of change: April 30, 2024**
+
+The recommendation [Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/12018f4f-3d10-999b-e4c4-86ec25be08a1) is set to be deprecated. There should be no effect on customers as these resources no longer exist.
+ ## General Availability of Unified Disk Encryption recommendations **Announcement date: March 28, 2024**
In February 2021, the deprecation of the MSCA task was communicated to all custo
Customers can get the latest DevOps security tooling from Defender for Cloud through [Microsoft Security DevOps](azure-devops-extension.md) and more security tooling through [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security).
-## Update recommendations to align with Azure AI Services resources
-
-**Announcement date: February 20, 2024**
-
-**Estimated date of change: February 28, 2024**
-
-The Azure AI Services category (formerly known as Cognitive Services) is adding new resource types. As a result, the following recommendations and related policy are set to be updated to comply with the new Azure AI Services naming format and align with the relevant resources.
-
-| Current Recommendation | Updated Recommendation |
-| - | - |
-| Cognitive Services accounts should restrict network access | [Azure AI Services resources should restrict network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243) |
-| Cognitive Services accounts should have local authentication methods disabled | [Azure AI Services resources should have key access disabled (disable local authentication)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/13b10b36-aa99-4db6-b00c-dcf87c4761e6) |
-
-See the [list of security recommendations](recommendations-reference.md).
-
-## Deprecation of data recommendation
-
-**Announcement date: February 12, 2024**
-
-**Estimated date of change: March 14, 2024**
-
-The recommendation [`Public network access should be disabled for Cognitive Services accounts`](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7) is set to be deprecated. The related policy definition [`Cognitive Services accounts should disable public network access`](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) is also being removed from the regulatory compliance dashboard.
-
-This recommendation is already being covered by another networking recommendation for Azure AI Services, [`Cognitive Services accounts should restrict network access`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243/showSecurityCenterCommandBar~/false).
-
-See the [list of security recommendations](recommendations-reference.md).
- ## Decommissioning of Microsoft.SecurityDevOps resource provider **Announcement date: February 5, 2024**
defender-for-iot References Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-data-retention.md
Title: Data retention across Microsoft Defender for IoT
+ Title: Data retention and sharing across Microsoft Defender for IoT
description: Learn about the data retention periods and capacities for Microsoft Defender for IoT data stored in Azure, the OT sensor, and on-premises management console. Last updated 01/22/2023
-# Data retention across Microsoft Defender for IoT
+# Data retention and sharing across Microsoft Defender for IoT
-Microsoft Defender for IoT sensors learn a baseline of your network traffic during the initial learning period after deployment. This learned baseline is stored indefinitely on your sensors.
+Microsoft Defender for IoT sensors learn a baseline of your network traffic during the initial learning period after deployment. This learned baseline is stored indefinitely on your sensors.
Defender for IoT also stores other data in the Azure portal, on OT network sensors, and on-premises management consoles.
For more information, see:
- [Troubleshoot the sensor](how-to-troubleshoot-sensor.md) - [Troubleshoot the on-premises management console](legacy-central-management/how-to-troubleshoot-on-premises-management-console.md)
+## Data sharing
+
+Defender for IoT shares data, including customer data, among the following Microsoft products also licensed by the customer:
+
+- Microsoft Security Exposure Management
+ ## On-premises backup file capacity Both the OT network sensor and the on-premises management console have automated backups running daily.
expressroute Metro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/metro.md
The following diagram allows for a comparison between the standard ExpressRoute
| Metro location | Peering locations | Location address | Zone | Local Azure Region | ER Direct | Service Provider | |--|--|--|--|--|--|--|
-| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Equinix AMS8 | 1 | West Europe | &check; | Megaport<br>Equinix<sup>1</sup><br>Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Reality<sup>1</sup> |
-| Singapore Metro | Singapore<br>Singapore2 | Equinix SG1<br>Global Switch Tai Seng | 2 | West Europe | &check; | Megaport<sup>1</sup><br>Equinix<sup>1</sup><br>Console Connect<sup>1</sup> |
+| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Digital Reality AMS8 | 1 | West Europe | &check; | Megaport<br>Equinix<sup>1</sup><br>Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Reality<sup>1</sup> |
+| Singapore Metro | Singapore<br>Singapore2 | Equinix SG1<br>Global Switch Tai Seng | 2 | Southeast Asia | &check; | Megaport<sup>1</sup><br>Equinix<sup>1</sup><br>Console Connect<sup>1</sup> |
| Zurich Metro | Zurich<br>Zurich2 | Interxion ZUR2<br>Equinix ZH5 | 1 | Switzerland North | &check; | Colt<sup>1</sup><br>Digital Reality<sup>1</sup> | <sup>1<sup> These service providers will be available in the future.
hdinsight-aks Control Egress Traffic From Hdinsight On Aks Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/control-egress-traffic-from-hdinsight-on-aks-clusters.md
Title: Control network traffic from HDInsight on AKS Cluster pools and cluster
description: A guide to configure and manage inbound and outbound network connections from HDInsight on AKS. Previously updated : 03/26/2024 Last updated : 04/02/2024 # Control network traffic from HDInsight on AKS Cluster pools and clusters
For example, you may want to:
## Methods and tools to control egress traffic
-
You have different options and tools for managing how the egress traffic flows from HDInsight on AKS clusters. You can set up some of these at the cluster pool level and others at the cluster level. * **Outbound with load balancer.** When you deploy a cluster pool with this Egress path, a public IP address is provisioned and assigned to the load balancer resource. A custom virtual network (VNET) is not required; however, it is highly recommended. You can use Azure Firewall or Network Security Groups (NSGs) on the custom VNET to manage the traffic that leaves the network.
In the following sections, we describe each method in detail.
### Outbound with load balancer
-The load balancer is used for egress through an HDInsight on AKS assigned public IP. When you configure the outbound type of loadBalancer on your cluster pool, you can expect egress out of the load balancer created by the HDInsight on AKS.
+The load balancer is used for egress through an HDInsight on AKS assigned public IP. When you configure the outbound type of load balancer on your cluster pool, you can expect egress out of the load balancer created by the HDInsight on AKS.
You can configure the outbound with load balancer configuration using the Azure portal.
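If you choose to manage egress with Network Security Groups (NSGs) on the custom virtual network, the following Azure CLI sketch shows one possible approach. The resource names and the single allow rule are placeholders for illustration only, not part of the documented procedure.

```bash
# Sketch only: placeholder names; adjust the rules to your own egress requirements.
az network nsg create \
  --resource-group myResourceGroup \
  --name hdiaks-egress-nsg

# Example outbound rule that allows HTTPS traffic to the AzureCloud service tag.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name hdiaks-egress-nsg \
  --name AllowAzureCloudHttpsOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureCloud \
  --destination-port-ranges 443

# Associate the NSG with the subnet used by the cluster pool.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name hdiaks-subnet \
  --network-security-group hdiaks-egress-nsg
```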
To allow requests to be sent to the cluster, you need to [allowlist the traffic]
> [!NOTE] > The `userDefinedRouting` outbound type is an advanced networking scenario and requires proper network configuration, before you begin.
-> Changing the outbound type after cluster pool creation is not supported.
+> Changing the outbound type after cluster pool creation is not supported.
+
+If userDefinedRouting is set, HDInsight on AKS won't automatically configure egress paths. The egress setup must be done by the user.
-When `userDefinedRouting` is enabled, HDInsight on AKS doesn't have the ability to set up egress paths automatically. The user has to do the egress configuration.
-You need to set up the HDInsight on AKS cluster within an existing virtual network that has a pre-set subnet, and you need to create clear egress.
+You must deploy the HDInsight on AKS cluster into an existing virtual network with a subnet that has been previously configured, and you must establish explicit egress.
-This design needs to send egress traffic to a network appliance such as a firewall, gateway, or proxy. Then, the public IP attached to the appliance can take care of the Network Address Translation (NAT).
+This architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, or proxy, so a public IP assigned to the standard load balancer or appliance can handle the Network Address Translation (NAT).
-Unlike Outbound with load balancer cluster pools, HDInsight on AKS does not set up outbound public IP address or outbound rules. Your custom route table (UDR) is the only path for outgoing traffic.
+Unlike the outbound with load balancer type clusters described in the preceding section, HDInsight on AKS doesn't configure an outbound public IP address or outbound rules. Your UDR is the only path for egress traffic.
+
+For inbound traffic, you need to decide, based on your requirements, whether to use a private cluster (to secure traffic to the AKS control plane / API server), and select the private ingress option available on each cluster shape to use public or internal load balancer based traffic.
-The path for the inbound traffic is determined by whether you choose to Enable Private AKS on your cluster pool. Then, you can select the private ingress option available on each of the cluster to use public or internal load balancer based traffic.
### Cluster pool creation for outbound with `userDefinedRouting`
When you use HDInsight on AKS cluster pools and choose userDefinedRouting (UDR)
> [!IMPORTANT] > UDR egress path needs a route for 0.0.0.0/0 and a next hop destination of your Firewall or NVA in the route table. The route table already has a default 0.0.0.0/0 to the Internet. You can't get outbound Internet connectivity by just adding this route, because Azure needs a public IP address for SNAT. AKS checks that you don't create a 0.0.0.0/0 route pointing to the Internet, but to a gateway, NVA, etc. When you use UDR, a load balancer public IP address for inbound requests is only created if you configure a service of type loadbalancer. HDInsight on AKS never creates a public IP address for outbound requests when you use a UDR egress path. +
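As an illustrative sketch only (resource names are placeholders and `<firewall-private-ip>` stands for the private IP address of your firewall or NVA), the 0.0.0.0/0 route described in the note can be created with the Azure CLI:

```bash
# Sketch only: placeholder names. Creates a route table with a default route that
# sends all outbound traffic to the firewall/NVA, then associates the route table
# with the subnet that hosts the cluster pool.
az network route-table create \
  --resource-group myResourceGroup \
  --name hdiaks-udr

az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name hdiaks-udr \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-private-ip>

az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name hdiaks-subnet \
  --route-table hdiaks-udr
```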
+The following steps show how to lock down the outbound traffic from your HDInsight on AKS service to back-end Azure resources or other network resources with Azure Firewall. This configuration helps prevent data exfiltration and reduces the risk of malicious program implantation.
-This guide shows you how to secure the outbound traffic from your HDInsight on AKS service to back-end Azure resources or other network resources with Azure Firewall. This configuration helps protect against data leakage or the threat of malicious program installation.
+Azure Firewall lets you control outbound traffic at a much more granular level and filter traffic based on real-time threat intelligence from Microsoft Cyber Security. You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks.
-Azure Firewall gives you more fine-grained control over outbound traffic and filters it based on up-to-date threat data from Microsoft Cyber Security. You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks [see Azure Firewall features](/azure/firewall/features).
+Following is an example of setting up firewall rules and testing your outbound connections.
Here is an example of how to configure firewall rules, and check your outbound connections.
Here is an example of how to configure firewall rules, and check your outbound c
1. Configure the route table like the following example:
- :::image type="content" source="./media/control-egress traffic-from-hdinsight-on-aks-clusters/create-cluster-basic-tab.png" alt-text="Screenshot showing create cluster basic tab." lightbox="./media/control-egress traffic-from-hdinsight-on-aks-clusters/create-cluster-basic-tab.png":::
+ :::image type="content" source="./media/control-egress traffic-from-hdinsight-on-aks-clusters/create-route-table.png" alt-text="Screenshot showing how to create route table." lightbox="./media/control-egress traffic-from-hdinsight-on-aks-clusters/create-route-table.png":::
Make sure you select the same region as the firewall you created.
Here is an example of how to configure firewall rules, and check your outbound c
1. From the left navigation, select **Subnets > Associate**. 1. In **Virtual network**, select your integrated virtual network. 1. In **Subnet**, select the HDInsight on AKS subnet you wish to use.
-
-
+
:::image type="content" source="./media/control-egress traffic-from-hdinsight-on-aks-clusters/associate-subnet.png" alt-text="Screenshot showing how to associate subnet." lightbox="./media/control-egress traffic-from-hdinsight-on-aks-clusters/associate-subnet.png"::: 1. Select **OK**.
Once the cluster pool is created, you can observe in the MC Group that there's n
With private AKS, the control plane or API server has internal IP addresses that are defined in the [RFC1918 - Address Allocation for Private Internet document](https://datatracker.ietf.org/doc/html/rfc1918). By using this option of private AKS, you can ensure network traffic between your API server and your HDInsight on AKS workload clusters remains on the private network only.
-> [!IMPORTANT]
-> By default, a private DNS zone with a private FQDN and a public DNS zone with a public FQDN are created when you enable private AKS. The agent nodes use the A record in the private DNS zone to find the private IP address of the private endpoint to communicate with the API server. The HDInsight on AKS Resource provider adds the A record to the private DNS zone automatically for private ingress.
+When you provision a private AKS cluster, AKS by default creates a private FQDN with a private DNS zone and an additional public FQDN with a corresponding A record in Azure public DNS. The agent nodes continue to use the record in the private DNS zone to resolve the private IP address of the private endpoint for communication to the API server.
+
+HDInsight on AKS automatically adds the A record to the private DNS zone in the managed resource group that it creates, for private ingress.
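As an optional check (the AKS cluster lives in the HDInsight on AKS managed resource group, so the names below are placeholders), you can confirm that the API server FQDN resolves to a private RFC 1918 address from a machine inside the virtual network:

```bash
# Sketch only: placeholder names. Look up the private FQDN of the AKS cluster,
# then resolve it from inside the virtual network.
az aks show \
  --resource-group <managed-resource-group> \
  --name <aks-cluster-name> \
  --query privateFqdn -o tsv

# From a VM in the same (or a peered) virtual network:
nslookup <value-of-privateFqdn>
```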
### Clusters with private ingress
hdinsight-aks Cosmos Db For Apache Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/cosmos-db-for-apache-cassandra.md
Title: Using Azure Cosmos DB for Apache Cassandra® with HDInsight on AKS for Ap
description: Learn how to Sink Apache Kafka® message into Azure Cosmos DB for Apache Cassandra®, with Apache Flink® running on HDInsight on AKS. Previously updated : 10/30/2023 Last updated : 04/02/2024 # Sink Apache Kafka® messages into Azure Cosmos DB for Apache Cassandra, with Apache Flink® on HDInsight on AKS [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This example uses [Apache Flink](../flink/flink-overview.md) to sink [HDInsight for Apache Kafka](/azure/hdinsight/kafka/apache-kafka-introduction) messages into [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra/introduction)
+This example uses [Apache Flink](../flink/flink-overview.md) to sink [HDInsight for Apache Kafka](/azure/hdinsight/kafka/apache-kafka-introduction) messages into [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra/introduction).
This example is useful when engineers prefer real-time aggregated data for analysis. With access to historical aggregated data, you can build machine learning (ML) models to generate insights or drive actions. You can also ingest IoT data into Apache Flink to aggregate data in real time and store it in Apache Cassandra. ## Prerequisites
-* [Apache Flink 1.16.0 on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
+* [Apache Flink 1.17.0 on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
* [Apache Kafka 3.2 on HDInsight](../../hdinsight/kafk) * [Azure Cosmos DB for Apache Cassandra](../../cosmos-db/cassandra/index.yml) * An Ubuntu VM for maven project development environment in the same VNet as HDInsight on AKS cluster.
Get credentials uses it on Stream source code:
### Cloning repository of Azure Samples
-Refer GitHub readme to download maven, clone this repository using `Azure-Samples/azure-cosmos-db-cassandra-java-getting-started.git` from
-[Azure Samples ](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started)
+Refer to the GitHub readme to download Maven, and clone this repository using `Azure-Samples/azure-cosmos-db-cassandra-java-getting-started.git` from
+[Azure Samples](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started).
### Updating maven project for Cassandra
-Go to maven project folder **azure-cosmos-db-cassandra-java-getting-started-main** and update the changes required for this example
+Go to the maven project folder **azure-cosmos-db-cassandra-java-getting-started-main** and make the changes required for this example.
**maven pom.xml** ``` xml- <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
-
- <groupId>com.azure.cosmosdb.cassandra</groupId>
- <artifactId>cosmosdb-cassandra-examples</artifactId>
- <version>1.0-SNAPSHOT</version>
- <dependencies>
- <dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-java</artifactId>
- <version>1.16.0</version>
- </dependency>
- <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-java -->
- <dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-streaming-java</artifactId>
- <version>1.16.0</version>
- </dependency>
- <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-clients -->
- <dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-clients</artifactId>
- <version>1.16.0</version>
- </dependency>
- <dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-connector-files</artifactId>
- <version>1.16.0</version>
- </dependency>
- <dependency>
- <groupId>org.apache.flink</groupId>
- <artifactId>flink-connector-kafka</artifactId>
- <version>1.16.0</version>
- </dependency>
- <dependency>
- <groupId>com.datastax.cassandra</groupId>
- <artifactId>cassandra-driver-core</artifactId>
- <version>3.3.0</version>
- </dependency>
- <dependency>
- <groupId>com.datastax.cassandra</groupId>
- <artifactId>cassandra-driver-mapping</artifactId>
- <version>3.1.4</version>
- </dependency>
- <dependency>
- <groupId>com.datastax.cassandra</groupId>
- <artifactId>cassandra-driver-extras</artifactId>
- <version>3.1.4</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-api</artifactId>
- <version>1.7.5</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-log4j12</artifactId>
- <version>1.7.5</version>
- </dependency>
- </dependencies>
-
- <build>
- <plugins>
- <plugin>
- <artifactId>maven-assembly-plugin</artifactId>
- <configuration>
- <descriptorRefs>
- <descriptorRef>jar-with-dependencies</descriptorRef>
- </descriptorRefs>
- <finalName>cosmosdb-cassandra-examples</finalName>
- <appendAssemblyId>false</appendAssemblyId>
- </configuration>
- <executions>
- <execution>
- <id>make-assembly</id>
- <phase>package</phase>
- <goals>
- <goal>single</goal>
- </goals>
- </execution>
- </executions>
- </plugin>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-compiler-plugin</artifactId>
- <configuration>
- <source>1.8</source>
- <target>1.8</target>
- </configuration>
- </plugin>
- </plugins>
- </build>
-
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+
+ <groupId>com.azure.cosmosdb.cassandra</groupId>
+ <artifactId>cosmosdb-cassandra-examples</artifactId>
+ <version>1.0-SNAPSHOT</version>
+ <dependencies>
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-java</artifactId>
+ <version>1.17.0</version>
+ </dependency>
+ <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-java -->
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-streaming-java</artifactId>
+ <version>1.17.0</version>
+ </dependency>
+ <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-clients -->
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-clients</artifactId>
+ <version>1.17.0</version>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-connector-files</artifactId>
+ <version>1.17.0</version>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-connector-kafka</artifactId>
+ <version>1.17.0</version>
+ </dependency>
+ <dependency>
+ <groupId>com.datastax.cassandra</groupId>
+ <artifactId>cassandra-driver-core</artifactId>
+ <version>3.3.0</version>
+ </dependency>
+ <dependency>
+ <groupId>com.datastax.cassandra</groupId>
+ <artifactId>cassandra-driver-mapping</artifactId>
+ <version>3.1.4</version>
+ </dependency>
+ <dependency>
+ <groupId>com.datastax.cassandra</groupId>
+ <artifactId>cassandra-driver-extras</artifactId>
+ <version>3.1.4</version>
+ </dependency>
+ <dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-api</artifactId>
+ <version>1.7.5</version>
+ </dependency>
+ <dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-log4j12</artifactId>
+ <version>1.7.5</version>
+ </dependency>
+ </dependencies>
+
+ <build>
+ <plugins>
+ <plugin>
+ <artifactId>maven-assembly-plugin</artifactId>
+ <configuration>
+ <descriptorRefs>
+ <descriptorRef>jar-with-dependencies</descriptorRef>
+ </descriptorRefs>
+ <finalName>cosmosdb-cassandra-examples</finalName>
+ <appendAssemblyId>false</appendAssemblyId>
+ </configuration>
+ <executions>
+ <execution>
+ <id>make-assembly</id>
+ <phase>package</phase>
+ <goals>
+ <goal>single</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <configuration>
+ <source>1.8</source>
+ <target>1.8</target>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+
</project>- ```+ **Cosmos DB for Apache Cassandra's connection configuration** You're required to update your host-name and user-name, and keys in the below snippet.
public class CassandraDemo {
### Building the project
-Run **mvn clean install** from azure-cosmos-db-cassandra-java-getting-started-main folder to build the project. This command generates cosmosdb-cassandra-examples.jar under target folder
+Run **mvn clean install** from the azure-cosmos-db-cassandra-java-getting-started-main folder to build the project. This command generates cosmosdb-cassandra-examples.jar under the target folder.
``` root@flinkvm:/home/flinkvm/azure-cosmos-db-cassandra-java-getting-started-main/target# ll
bin/flink run -c com.azure.cosmosdb.cassandra.examples.UserProfile -j cosmosdb-c
## Sink Kafka Topics into Cosmos DB for Apache Cassandra
-Run CassandraDemo class to sink Kafka topic into Cosmos DB for Apache Cassandra
+Run the CassandraDemo class to sink the Kafka topic into Cosmos DB for Apache Cassandra.
``` bin/flink run -c com.azure.cosmosdb.cassandra.examples.CassandraDemo -j cosmosdb-cassandra-examples.jar
bin/flink run -c com.azure.cosmosdb.cassandra.examples.CassandraDemo -j cosmosdb
## Validate Apache Flink Job Submission
-Check job on Flink Web UI on HDInsight on AKS Cluster
+Check the job on the Flink Web UI of the HDInsight on AKS cluster.
:::image type="content" source="./media/cosmos-db-for-apache-cassandra/check-output-on-flink-ui.png" alt-text="Screenshot showing how to check the job on HDInsight on AKS Flink UI." lightbox="./media/cosmos-db-for-apache-cassandra/check-output-on-flink-ui.png"::: ## Producing Messages in Kafka
-Produce message into Kafka topic
+Produce messages into the Kafka topic.
``` python sshuser@hn0-flinkd:~$ cat user.py
sshuser@hn0-flinkd:~$ python user.py | /usr/hdp/current/kafka-broker/bin/kafka-c
* [Azure Cosmos DB for Apache Cassandra](../../cosmos-db/cassandr). * [Create a API for Cassandra account in Azure Cosmos DB](../../cosmos-db/cassandr) * [Azure Samples ](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started)
-* Apache, Apache Kafka, Kafka, Apache Flink, Flink, Apache Cassandra, Cassandra and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+* Apache, Apache Kafka, Kafka, Apache Flink, Flink, Apache Cassandra, Cassandra, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Web Ssh On Portal To Flink Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-web-ssh-on-portal-to-flink-sql.md
Title: How to enter the Apache Flink® CLI client using Secure Shell (SSH) on HDInsight on AKS clusters with Azure portal
-description: How to enter Apache Flink® SQL & DStream CLI client using webssh on HDInsight on AKS clusters with Azure portal
+description: How to enter Apache Flink® SQL & DStream CLI client using webssh on HDInsight on AKS clusters with Azure portal.
Previously updated : 10/27/2023 Last updated : 02/04/2024 # Access Apache Flink® CLI client using Secure Shell (SSH) on HDInsight on AKS clusters with Azure portal [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This example guides how to enter the Apache Flink CLI client on HDInsight on AKS clusters using SSH on Azure portal, we cover both SQL and Flink DataStream
+This example shows how to enter the Apache Flink CLI client on HDInsight on AKS clusters using SSH in the Azure portal. We cover both SQL and Flink DataStream.
## Prerequisites - You're required to select SSH during [creation](./flink-create-cluster-portal.md) of Flink Cluster
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
This table lists the versions of HDInsight that are available in the Azure porta
| HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability | | | | | | | | | | [HDInsight 5.1](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |November 1, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |
-| [HDInsight 5.0](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |March 11, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |
-| [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | March 19, 2025 | March 31, 2025 |Yes |
+| [HDInsight 5.0](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |March 11, 2022 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | March 31, 2025 | March 31, 2025| Yes |
+| [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | March 31, 2025 | March 31, 2025 |Yes |
**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You might not be able to create clusters from the Azure portal.
healthcare-apis Bulk Delete Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/bulk-delete-operation.md
Title: Bulk-delete operation for Azure API for FHIR
-description: This article describes the bulk-delete operation for Azure API for FHIR.
+ Title: Bulk delete resources from the Azure API for FHIR service in Azure Health Data Services
+description: Learn how to bulk delete resources from the Azure API for FHIR service in Azure Health Data Services.
Previously updated : 10/22/2022 Last updated : 04/01/2024
-# Bulk Delete operation
+# Bulk delete in Azure API for FHIR
-## Next steps
+## Related content
-In this article, you learned how to bulk delete resources in the FHIR service. For information about supported FHIR features, see
+[Supported FHIR features](fhir-features-supported.md)
->[!div class="nextstepaction"]
->[Supported FHIR features](fhir-features-supported.md)
+[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
->[!div class="nextstepaction"]
->[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-bulk-delete.md
Title: Bulk-delete operation for Azure Health Data Services FHIR service.
-description: This article describes the bulk-delete operation for the AHDS FHIR service.
+ Title: Bulk delete resources from the FHIR service in Azure Health Data Services
+description: Learn how to bulk delete resources from the FHIR service in Azure Health Data Services.
Previously updated : 10/22/2022 Last updated : 04/01/2024
-# Bulk Delete
+# Bulk delete in the FHIR service
-## Next steps
+## Related content
-In this article, you learned how to bulk delete resources in the FHIR service. For information about supported FHIR features, see
+[Supported FHIR features](fhir-features-supported.md)
->[!div class="nextstepaction"]
->[Supported FHIR features](fhir-features-supported.md)
+[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
->[!div class="nextstepaction"]
->[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Release Notes 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2024.md
By using Azure Data Lake Storage with the DICOM service, organizations are able
- Grant controls to manage storage permissions, access controls, tiers, and rules. Learn more:-- [Manage medical imaging data with the DICOM service and Azure Data Lake Storage](https://learn.microsoft.com/azure/healthcare-apis/dicom/dicom-data-lake)-- [Deploy the DICOM service with Azure Data Lake Storage](https://learn.microsoft.com/azure/healthcare-apis/dicom/deploy-dicom-services-in-azure-data-lake)
+- [Manage medical imaging data with the DICOM service and Azure Data Lake Storage](./dicom/dicom-data-lake.md)
+- [Deploy the DICOM service with Azure Data Lake Storage](./dicom/deploy-dicom-services-in-azure-data-lake.md)
## February 2024
iot-central Concepts Iiot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iiot-architecture.md
Title: Industrial IoT patterns with Azure IoT Central
-description: This article introduces common Industrial IoT patterns that you can implement using Azure IoT Central
+ Title: Industrial IoT solutions with Azure IoT Central
+description: This article introduces common Industrial IoT solutions that you can implement using Azure IoT Central
Previously updated : 03/01/2024 Last updated : 03/29/2024
-# Industrial IoT (IIoT) architecture patterns with Azure IoT Central
+# Industrial IoT (IIoT) solutions with Azure IoT Central
:::image type="content" source="media/concepts-iiot-architecture/industrial-iot-architecture.svg" alt-text="Diagram of high-level industrial IoT architecture." border="false":::
IoT Central lets you evaluate your IIoT scenario by using the following built-in
- Model and organize the data from your industrial assets and use the built-in analytics and monitoring capabilities - Integrate and extend your solution by connecting to first and third party applications and services
-By using the Azure IoT platform, IoT Central lets you evaluate solutions that are scalable and secure.
+By using the Azure IoT platform, IoT Central lets you evaluate solutions that are scalable and secure. To set up a sample to evaluate a solution, see the [Ingest Industrial Data with Azure IoT Central and Calculate OEE](https://github.com/Azure-Samples/iotc-solution-builder) sample.
-To set up a sample to evaluate a solution, see [Ingest Industrial Data with Azure IoT Central and Calculate OEE](https://github.com/Azure-Samples/iotc-solution-builder).
+> [!TIP]
+> Azure IoT Operations Preview is a new collection of services that includes native support for OPC UA, MQTT, and other industrial protocols. You can use Azure IoT Operations to connect and manage your industrial assets. To learn more, see [Azure IoT Operations Preview](../../iot-operations/get-started/overview-iot-operations.md).
## Connect your industrial assets
Manage industrial assets and perform software updates to OT using features such
View the health of your industrial assets in real-time with customizable dashboards: Drill in telemetry using queries in the IoT Central **Data Explorer**: ## Integrate data into applications
Extend your IIoT solution by using the following IoT Central features:
- Use data export to stream data from your industrial assets to other services. Data export can enrich messages, use filters, and transform the data. These capabilities can deliver business insights to industrial operators. ## Secure your solution
Alternate versions include:
Connectivity partner third-party IoT Edge modules help connect to PLCs and publish JSON data to Azure IoT Central: ### Connectivity partner OT solutions that connect to Azure IoT Central through an Azure IoT Edge device Connectivity partner third-party solutions help connect to PLCs and publish JSON data through IoT Edge to Azure IoT Central: ## Industrial network protocols
Industrial networks are crucial to the working of a manufacturing facility. With
## Next steps
-Now that you've learned about IIoT architecture patterns with Azure IoT Central, the suggested next step is to learn about [device connectivity](overview-iot-central-developer.md) in Azure IoT Central.
+Now that you've learned about IIoT solutions with Azure IoT Central, the suggested next step is to learn about [Azure IoT Operations](../../iot-operations/get-started/overview-iot-operations.md).
iot-operations Howto Test Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-connectivity/howto-test-connection.md
Title: Test connectivity to IoT MQ with MQTT clients
-description: Learn how to use common and standard MQTT tools to test connectivity to Azure IoT MQ.
+description: Learn how to use common and standard MQTT tools to test connectivity to Azure IoT MQ in a nonproduction environment.
- ignite-2023 Previously updated : 11/15/2023 Last updated : 03/18/2024 #CustomerIntent: As an operator or developer, I want to test MQTT connectivity with tools that I'm already familiar with to know that I set up my Azure IoT MQ broker correctly.
Last updated 11/15/2023
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-This article shows different ways to test connectivity to Azure IoT MQ Preview with MQTT clients.
+This article shows different ways to test connectivity to Azure IoT MQ Preview with MQTT clients in a nonproduction environment.
-By default:
+By default, Azure IoT MQ Preview:
-- IoT MQ deploys a [TLS-enabled listener](howto-configure-brokerlistener.md) on port 8883 with *ClusterIp* as the service type. *ClusterIp* means that the broker is accessible only from within the Kubernetes cluster. To access the broker from outside the cluster, you must configure a service of type *LoadBalancer* or *NodePort*.
+- Deploys a [TLS-enabled listener](howto-configure-brokerlistener.md) on port 8883 with *ClusterIp* as the service type. *ClusterIp* means that the broker is accessible only from within the Kubernetes cluster. To access the broker from outside the cluster, you must configure a service of type *LoadBalancer* or *NodePort*.
-- IoT MQ only accepts [Kubernetes service accounts for authentication](howto-configure-authentication.md) for connections from within the cluster. To connect from outside the cluster, you must configure a different authentication method.
+- Accepts [Kubernetes service accounts for authentication](howto-configure-authentication.md) for connections from within the cluster. To connect from outside the cluster, you must configure a different authentication method.
-Before you begin, [install or deploy IoT Operations](../get-started/quickstart-deploy.md). Use the following options to test connectivity to IoT MQ with MQTT clients.
+> [!CAUTION]
+> For production scenarios, you should use TLS and service accounts authentication to secure your IoT solution. For more information, see:
+> - [Configure TLS with automatic certificate management to secure MQTT communication in Azure IoT MQ Preview](./howto-configure-tls-auto.md)
+> - [Configure authentication in Azure IoT MQ Preview](./howto-configure-authentication.md)
+> - [Expose Kubernetes services to external devices](/azure/aks/hybrid/aks-edge-howto-expose-service) using port forwarding or a virtual switch with Azure Kubernetes Services Edge Essentials.
+
+Before you begin, [install or deploy IoT Operations](../get-started/quickstart-deploy.md). Use the following options to test connectivity to IoT MQ with MQTT clients in a nonproduction environment.
## Connect from a pod within the cluster with default configuration The first option is to connect from within the cluster. This option uses the default configuration and requires no extra updates. The following examples show how to connect from within the cluster using plain Alpine Linux and a commonly used MQTT client, using the service account and default root CA cert.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mqtt-client
- # Namespace must match IoT MQ BrokerListener's namespace
- # Otherwise use the long hostname: aio-mq-dmqtt-frontend.azure-iot-operations.svc.cluster.local
- namespace: azure-iot-operations
-spec:
- # Use the "mqtt-client" service account which comes with default deployment
- # Otherwise create it with `kubectl create serviceaccount mqtt-client -n azure-iot-operations`
- serviceAccountName: mqtt-client
- containers:
- # Mosquitto and mqttui on Alpine
- - image: alpine
- name: mqtt-client
- command: ["sh", "-c"]
- args: ["apk add mosquitto-clients mqttui && sleep infinity"]
- volumeMounts:
- - name: mq-sat
- mountPath: /var/run/secrets/tokens
- - name: trust-bundle
- mountPath: /var/run/certs
- volumes:
- - name: mq-sat
- projected:
- sources:
- - serviceAccountToken:
- path: mq-sat
- audience: aio-mq # Must match audience in BrokerAuthentication
- expirationSeconds: 86400
- - name: trust-bundle
- configMap:
- name: aio-ca-trust-bundle-test-only # Default root CA cert
-```
+1. Create a file named `client.yaml` with the following configuration:
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: mqtt-client
+ # Namespace must match IoT MQ BrokerListener's namespace
+ # Otherwise use the long hostname: aio-mq-dmqtt-frontend.azure-iot-operations.svc.cluster.local
+ namespace: azure-iot-operations
+ spec:
+ # Use the "mqtt-client" service account which comes with default deployment
+ # Otherwise create it with `kubectl create serviceaccount mqtt-client -n azure-iot-operations`
+ serviceAccountName: mqtt-client
+ containers:
+ # Mosquitto and mqttui on Alpine
+ - image: alpine
+ name: mqtt-client
+ command: ["sh", "-c"]
+ args: ["apk add mosquitto-clients mqttui && sleep infinity"]
+ volumeMounts:
+ - name: mq-sat
+ mountPath: /var/run/secrets/tokens
+ - name: trust-bundle
+ mountPath: /var/run/certs
+ volumes:
+ - name: mq-sat
+ projected:
+ sources:
+ - serviceAccountToken:
+ path: mq-sat
+ audience: aio-mq # Must match audience in BrokerAuthentication
+ expirationSeconds: 86400
+ - name: trust-bundle
+ configMap:
+ name: aio-ca-trust-bundle-test-only # Default root CA cert
+ ```
1. Use `kubectl apply -f client.yaml` to deploy the configuration. It should only take a few seconds to start.
spec:
For example, to publish a message to the broker, open a shell inside the pod: ```bash
- kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh
+ kubectl exec --stdin --tty mqtt-client --namespace azure-iot-operations -- sh
``` 1. Inside the pod's shell, run the following command to publish a message to the broker: ```console
- $ mosquitto_pub -h aio-mq-dmqtt-frontend -p 8883 -m "hello" -t "world" -u '$sat' -P $(cat /var/run/secrets/tokens/mq-sat) -d --cafile /var/run/certs/ca.crt
+ mosquitto_pub --host aio-mq-dmqtt-frontend --port 8883 --message "hello" --topic "world" --username '$sat' --pw $(cat /var/run/secrets/tokens/mq-sat) --debug --cafile /var/run/certs/ca.crt
+ ```
+
+ The output should look similar to the following:
+
+ ```Output
Client (null) sending CONNECT Client (null) received CONNACK (0) Client (null) sending PUBLISH (d0, q0, r0, m1, 'world', ... (5 bytes))
spec:
1. To subscribe to the topic, run the following command: ```console
- $ mosquitto_sub -h aio-mq-dmqtt-frontend -p 8883 -t "world" -u '$sat' -P $(cat /var/run/secrets/tokens/mq-sat) -d --cafile /var/run/certs/ca.crt
+ mosquitto_sub --host aio-mq-dmqtt-frontend --port 8883 --topic "world" --username '$sat' --pw $(cat /var/run/secrets/tokens/mq-sat) --debug --cafile /var/run/certs/ca.crt
+ ```
+
+ The output should look similar to the following:
+
+ ```Output
Client (null) sending CONNECT Client (null) received CONNACK (0) Client (null) sending SUBSCRIBE (Mid: 1, Topic: world, QoS: 0, Options: 0x00)
spec:
The mosquitto client uses the same service account token and root CA cert to authenticate with the broker and subscribe to the topic.
-1. To use *mqttui*, the command is similar:
+1. You can also use mqttui to connect to the broker using the service account token. The `--insecure` flag is required because mqttui doesn't support TLS certificate chain verification with a custom root CA cert.
+
+ > [!CAUTION]
+ > Using `--insecure` is not recommended for production scenarios. Only use it for testing or development purposes.
```console
- $ mqttui -b mqtts://aio-mq-dmqtt-frontend:8883 -u '$sat' --password $(cat /var/run/secrets/tokens/mq-sat) --insecure
+ mqttui --broker mqtts://aio-mq-dmqtt-frontend:8883 --username '$sat' --password $(cat /var/run/secrets/tokens/mq-sat) --insecure
```
- With the above command, mqttui connects to the broker using the service account token. The `--insecure` flag is required because mqttui doesn't support TLS certificate chain verification with a custom root CA cert.
- 1. To remove the pod, run `kubectl delete pod mqtt-client -n azure-iot-operations`. ## Connect clients from outside the cluster to default the TLS port ### TLS trust chain
-Since the broker uses TLS, the client must trust the broker's TLS certificate chain. To do so, you must configure the client to trust the root CA cert used by the broker. To use the default root CA cert, download it from the `aio-ca-trust-bundle-test-only` ConfigMap:
+Since the broker uses TLS, the client must trust the broker's TLS certificate chain. You need to configure the client to trust the root CA certificate used by the broker.
+
+To use the default root CA certificate, download it from the `aio-ca-trust-bundle-test-only` ConfigMap:
```bash kubectl get configmap aio-ca-trust-bundle-test-only -n azure-iot-operations -o jsonpath='{.data.ca\.crt}' > ca.crt ```
-Then, use the `ca.crt` file to configure your client to trust the broker's TLS certificate chain.
+Use the downloaded `ca.crt` file to configure your client to trust the broker's TLS certificate chain.
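As an optional sanity check (assuming `openssl` is installed on your machine), you can inspect the downloaded certificate before configuring your client with it:

```bash
# Print the subject, issuer, and validity dates of the downloaded root CA cert.
openssl x509 -in ca.crt -noout -subject -issuer -dates
```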
### Authenticate with the broker
By default, IoT MQ only accepts Kubernetes service accounts for authentication f
To turn off authentication for testing purposes, edit the `BrokerListener` resource and set the `authenticationEnabled` field to `false`:
+> [!CAUTION]
+> Turning off authentication should only be used for testing purposes with a test cluster that's not accessible from the internet.
+ ```bash kubectl patch brokerlistener listener -n azure-iot-operations --type='json' -p='[{"op": "replace", "path": "/spec/authenticationEnabled", "value": false}]' ```
-> [!WARNING]
-> Turning off authentication should only be used for testing purposes with a test cluster that's not accessible from the internet.
- ### Port connectivity Some Kubernetes distributions can [expose](https://k3d.io/v5.1.0/usage/exposing_services/) IoT MQ to a port on the host system (localhost). You should use this approach because it makes it easier for clients on the same host to access IoT MQ.
Some Kubernetes distributions can [expose](https://k3d.io/v5.1.0/usage/exposing_
For example, to create a K3d cluster with mapping the IoT MQ's default MQTT port 8883 to localhost:8883: ```bash
-k3d cluster create -p '8883:8883@loadbalancer'
+k3d cluster create --port '8883:8883@loadbalancer'
```
-But for this method to work with IoT MQ, you must configure it to use a load balancer instead of cluster IP.
+But for this method to work with IoT MQ, you must configure it to use a load balancer instead of cluster IP. There are two ways to do this: create a load balancer or patch the existing default BrokerListener resource service type to load balancer.
-To configure a load balancer:
+#### Option 1: Create a load balancer
+
+1. Create a file named `loadbalancer.yaml` with the following configuration:
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: iotmq-public-svc
+ spec:
+ type: LoadBalancer
+ ports:
+ - name: mqtt1
+ port: 8883
+ targetPort: 8883
+ selector:
+ app: broker
+ app.kubernetes.io/instance: broker
+ app.kubernetes.io/managed-by: dmqtt-operator
+ app.kubernetes.io/name: dmqtt
+ tier: frontend
+ ```
+
+1. Apply the configuration to create a load balancer service:
+
+ ```bash
+ kubectl apply -f loadbalancer.yaml
+ ```
+
+#### Option 2: Patch the default load balancer
1. Edit the `BrokerListener` resource and change the `serviceType` field to `loadBalancer`. ```bash
- kubectl patch brokerlistener listener -n azure-iot-operations --type='json' -p='[{"op": "replace", "path": "/spec/serviceType", "value": "loadBalancer"}]'
+ kubectl patch brokerlistener listener --namespace azure-iot-operations --type='json' --patch='[{"op": "replace", "path": "/spec/serviceType", "value": "loadBalancer"}]'
```
-1. Wait for the service to be updated, You should see output similar to the following:
+1. Wait for the service to be updated.
```console
- $ kubectl get service aio-mq-dmqtt-frontend -n azure-iot-operations
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- aio-mq-dmqtt-frontend LoadBalancer 10.43.107.11 XXX.XX.X.X 8883:30366/TCP 14h
+ kubectl get service aio-mq-dmqtt-frontend --namespace azure-iot-operations
```
-1. Use the external IP address to connect to IoT MQ from outside the cluster. If you used the K3d command with port forwarding, you can use `localhost` to connect to IoT MQ. For example, to connect with mosquitto client:
+ Output should look similar to the following:
- ```bash
- mosquitto_pub -q 1 -d -h localhost -m hello -t world -u client1 -P password --cafile ca.crt --insecure
+ ```Output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ aio-mq-dmqtt-frontend LoadBalancer 10.43.107.11 XXX.XX.X.X 8883:30366/TCP 14h
```
- In this example, the mosquitto client uses username/password to authenticate with the broker along with the root CA cert to verify the broker's TLS certificate chain. Here, the `--insecure` flag is required because the default TLS certificate issued to the load balancer is only valid for the load balancer's default service name (aio-mq-dmqtt-frontend) and assigned IPs, not localhost.
-
-1. If your cluster like Azure Kubernetes Service automatically assigns an external IP address to the load balancer, you can use the external IP address to connect to IoT MQ over the internet. Make sure to use the external IP address instead of `localhost` in the prior command, and remove the `--insecure` flag.
+1. You can use the external IP address to connect to IoT MQ over the internet. Make sure to use the external IP address instead of `localhost`.
```bash
- mosquitto_pub -q 1 -d -h XXX.XX.X.X -m hello -t world -u client1 -P password --cafile ca.crt
+ mosquitto_pub --qos 1 --debug -h XXX.XX.X.X --message hello --topic world --username client1 --pw password --cafile ca.crt
```
- > [!WARNING]
- > Never expose IoT MQ port to the internet without authentication and TLS. Doing so is dangerous and can lead to unauthorized access to your IoT devices and bring unsolicited traffic to your cluster.
+> [!TIP]
+> You can use the external IP address to connect to IoT MQ from outside the cluster. If you used the K3d command with port forwarding option, you can use `localhost` to connect to IoT MQ. For example, to connect with mosquitto client:
+>
+> ```bash
+> mosquitto_pub --qos 1 --debug -h localhost --message hello --topic world --username client1 --pw password --cafile ca.crt --insecure
+> ```
+>
+> In this example, the mosquitto client uses username and password to authenticate with the broker along with the root CA cert to verify the broker's TLS certificate chain. Here, the `--insecure` flag is required because the default TLS certificate issued to the load balancer is only valid for the load balancer's default service name (aio-mq-dmqtt-frontend) and assigned IPs, not localhost.
+>
+> Never expose IoT MQ port to the internet without authentication and TLS. Doing so is dangerous and can lead to unauthorized access to your IoT devices and bring unsolicited traffic to your cluster.
+>
+> For information on how to add localhost to the certificate's subject alternative name (SAN) to avoid using the insecure flag, see [Configure server certificate parameters](howto-configure-tls-auto.md#optional-configure-server-certificate-parameters).
#### Use port forwarding
With [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8
1. To access the broker, forward the broker listening port 8883 to the host. ```bash
- kubectl port-forward service/aio-mq-dmqtt-frontend 8883:mqtts-8883 -n azure-iot-operations
+ kubectl port-forward --namespace azure-iot-operations service/aio-mq-dmqtt-frontend 8883:mqtts-8883
``` 1. Use 127.0.0.1 to connect to the broker at port 8883 with the same authentication and TLS configuration as the example without port forwarding. Port forwarding is also useful for testing IoT MQ locally on your development machine without having to modify the broker's configuration.
-To learn more, see [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) for minikube.
+To learn more, see [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) for minikube and [Expose Kubernetes services to external devices](/azure/aks/hybrid/aks-edge-howto-expose-service) for Azure Kubernetes Services Edge Essentials.
## No TLS and no authentication
If you understand the risks and need to use an insecure port in a well-controlle
The `authenticationEnabled` and `authorizationEnabled` fields are set to `false` to turn off authentication and authorization. The `port` field is set to `1883` to use common MQTT port.
-1. Wait for the service to be updated. You should see output similar to the following:
+1. Wait for the service to be updated.
```console
- $ kubectl get service my-unique-service-name -n azure-iot-operations
+ kubectl get service my-unique-service-name --namespace azure-iot-operations
+ ```
+
+ Output should look similar to the following:
+
+ ```Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-unique-service-name LoadBalancer 10.43.144.182 XXX.XX.X.X 1883:31001/TCP 5m11s ```
If you understand the risks and need to use an insecure port in a well-controlle
1. Use mosquitto client to connect to the broker: ```console
- $ mosquitto_pub -q 1 -d -h localhost -m hello -t world
+ mosquitto_pub --qos 1 --debug -h localhost --message hello --topic world
+ ```
+
+ The output should look similar to the following:
+
+ ```Output
Client mosq-7JGM4INbc5N1RaRxbW sending CONNECT Client mosq-7JGM4INbc5N1RaRxbW received CONNACK (0) Client mosq-7JGM4INbc5N1RaRxbW sending PUBLISH (d0, q1, r0, m1, 'world', ... (5 bytes))
load-balancer Upgrade Basic Standard With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-with-powershell.md
This article introduces a PowerShell module that creates a Standard Load Balance
For an in-depth walk-through of the upgrade module and process, see the following video: > [!VIDEO https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6] -- 03:06 - <a href="https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h3m06s" target="_blank">Step-by-step</a>-- 32:54 - <a href="https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h32m45s" target="_blank">Recovery</a>-- 40:55 - <a href="https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h40m55s" target="_blank">Advanced Scenarios</a>-- 57:54 - <a href="https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h57m54s" target="_blank">Resources</a>
+- 03:06 - <a href="https://learn-video.azurefd.net/vod/player?id=8e203b99-41ff-4454-9cbd-58856708f1c6?#time=0h3m06s" target="_blank">Step-by-step</a>
+- 32:54 - <a href="https://learn-video.azurefd.net/vod/player?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h32m45s" target="_blank">Recovery</a>
+- 40:55 - <a href="https://learn-video.azurefd.net/vod/player?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h40m55s" target="_blank">Advanced Scenarios</a>
+- 57:54 - <a href="https://learn-video.azurefd.net/vod/player?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h57m54s" target="_blank">Resources</a>
## Upgrade Overview
The PowerShell module performs the following functions:
> If the Virtual Machine Scale Set in the Load Balancer backend pool has Public IP Addresses in its network configuration, the Public IP Addresses associated with each Virtual Machine Scale Set instance will change when they are upgraded to Standard SKU. This is because scale set instance-level Public IP addresses cannot be upgraded, only replaced with a new Standard SKU Public IP. All other Public IP addresses will be retained through the migration. >[!NOTE]
-> If the Virtual Machine Scale Set behind the Load Balancer is a **Service Fabric Cluster**, migration with this script will take more time. In testing, a 5-node Bronze cluster was unavailable for about 30 minutes and a 5-node Silver cluster was unavailable for about 45 minutes. For Service Fabric clusters that require minimal / no connectivity downtime, adding a new nodetype with Standard Load Balancer and IP resources is a better solution.
+> If the Virtual Machine Scale Set behind the Load Balancer is a **Service Fabric Cluster**, migration with this script will take more time, is higher risk to your application, and will cause downtime. Review [Service Fabric Cluster Load Balancer upgrade guidance](https://aka.ms/sfc-lb-upgrade) for migration options.
### Unsupported Scenarios
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
This article assumes the following configuration:
| [Use AutoML, the designer, the dataset, and the datastore from the studio](#scenario-use-automl-the-designer-the-dataset-and-the-datastore-from-the-studio) | Not applicable | Not applicable | <ul><li>Configure the workspace service principal</li><li>Allow access from trusted Azure services</li></ul>For more information, see [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). | | [Use a compute instance and a compute cluster](#scenario-use-a-compute-instance-and-a-compute-cluster) | <ul><li>Azure Machine Learning on port 44224</li><li>Azure Batch on ports 29876-29877</li></ul> | <ul><li>Microsoft Entra ID</li><li>Azure Resource Manager</li><li>Azure Machine Learning</li><li>Azure Storage</li><li>Azure Key Vault</li></ul> | If you use a firewall, create user-defined routes. For more information, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md). | | [Use Azure Kubernetes Service](#scenario-use-azure-kubernetes-service) | Not applicable | For information on the outbound configuration for AKS, see [Secure Azure Kubernetes Service inferencing environment](how-to-secure-kubernetes-inferencing-environment.md). | |
-| [Use Docker images that Azure Machine Learning manages](#scenario-use-docker-images-that-azure-machine-learning-manages) | Not applicable | <ul><li>Microsoft Artifact Registry</li><li>`viennaglobal.azurecr.io` global container registry</li></ul> | If the container registry for your workspace is behind the virtual network, configure the workspace to use a compute cluster to build images. For more information, see [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). |
+| [Use Docker images that Azure Machine Learning manages](#scenario-use-docker-images-that-azure-machine-learning-manages) | Not applicable | Microsoft Artifact Registry | If the container registry for your workspace is behind the virtual network, configure the workspace to use a compute cluster to build images. For more information, see [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). |
## Purposes of storage accounts
If your model requires extra inbound or outbound connectivity, such as to an ext
## Scenario: Use Docker images that Azure Machine Learning manages
-Azure Machine Learning provides Docker images that you can use to train models or perform inference. These images are hosted on Microsoft Artifact Registry. They're also hosted on a geo-replicated Azure Container Registry instance named `viennaglobal.azurecr.io`.
+Azure Machine Learning provides Docker images that you can use to train models or perform inference. These images are hosted on Microsoft Artifact Registry.
-If you provide your own Docker images, such as on a container registry that you provide, you don't need the outbound communication with Artifact Registry or `viennaglobal.azurecr.io`.
+If you provide your own Docker images, such as on a container registry that you provide, you don't need the outbound communication with Artifact Registry.
> [!TIP] > If your container registry is secured in the virtual network, Azure Machine Learning can't use it to build Docker images. Instead, you must designate an Azure Machine Learning compute cluster to build images. For more information, see [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
machine-learning How To Deploy Mlflow Models Online Progressive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md
ms.devlang: azurecli
# Progressive rollout of MLflow models to Online Endpoints
-In this article, you'll learn how you can progressively update and deploy MLflow models to Online Endpoints without causing service disruption. You'll use blue-green deployment, also known as a safe rollout strategy, to introduce a new version of a web service to production. This strategy will allow you to roll out your new version of the web service to a small subset of users or requests before rolling it out completely.
+In this article, you learn how you can progressively update and deploy MLflow models to Online Endpoints without causing service disruption. You use blue-green deployment, also known as a safe rollout strategy, to introduce a new version of a web service to production. This strategy allows you to roll out your new version of the web service to a small subset of users or requests before rolling it out completely.
## About this example
Additionally, you will need to:
pip install mlflow azureml-mlflow ``` -- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md) for more details.
+- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. Learn how to [configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
The workspace is the top-level resource for Azure Machine Learning, providing a
from mlflow.deployments import get_deploy_client ```
-1. Configure the deployment client
+1. Configure the MLflow client and the deployment client:
```python
+ mlflow_client = mlflow.MlflowClient()
deployment_client = get_deploy_client(mlflow.get_tracking_uri()) ```
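    The deployment client drives the endpoint and deployment operations used in the rest of this article. As a rough sketch of the call shape, assuming a placeholder endpoint name and a configuration written to a local JSON file (the exact `config` keys can vary across `azureml-mlflow` plugin versions):

    ```python
    import json

    # Placeholder endpoint name; the following sections define the real configuration.
    endpoint_name = "my-endpoint"
    endpoint_config = {"auth_mode": "key", "identity": {"type": "system_assigned"}}

    # The plugin reads the endpoint configuration from a file on disk.
    endpoint_config_path = "endpoint_config.json"
    with open(endpoint_config_path, "w") as outfile:
        outfile.write(json.dumps(endpoint_config))

    endpoint = deployment_client.create_endpoint(
        name=endpoint_name,
        config={"endpoint-config-file": endpoint_config_path},
    )
    ```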
We are going to exploit this functionality by deploying multiple versions of the
# [Python (MLflow SDK)](#tab/mlflow)
- We can configure the properties of this endpoint using a configuration file. In this case, we are configuring the authentication mode of the endpoint to be "key".
+ We can configure the properties of this endpoint using a configuration file. We configure the authentication mode of the endpoint to be "key" in the following example:
```python endpoint_config = {
We are going to exploit this functionality by deploying multiple versions of the
# [Python (MLflow SDK)](#tab/mlflow)
- This functionality is not available in the MLflow SDK. Go to [Azure Machine Learning studio](https://ml.azure.com), navigate to the endpoint and retrieve the secret key from there.
+ This functionality is not available in the MLflow SDK. Go to [Azure Machine Learning studio](https://ml.azure.com), navigate to the endpoint, and retrieve the secret key from there.
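    As an alternative to the studio, if you also have the Azure Machine Learning Python SDK (`azure-ai-ml`) installed, a sketch like the following may retrieve the keys; the workspace values are placeholders and `endpoint_name` is assumed to hold your endpoint's name.

    ```python
    from azure.ai.ml import MLClient
    from azure.identity import DefaultAzureCredential

    # Placeholder workspace details.
    ml_client = MLClient(DefaultAzureCredential(), "<subscription>", "<resource-group>", "<workspace>")

    # Retrieve the authentication keys for the online endpoint.
    keys = ml_client.online_endpoints.get_keys(name=endpoint_name)
    print(keys.primary_key)
    ```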
### Create a blue deployment
-So far, the endpoint is empty. There are no deployments on it. Let's create the first one by deploying the same model we were working on before. We will call this deployment "default" and it will represent our "blue deployment".
+So far, the endpoint is empty. There are no deployments on it. Let's create the first one by deploying the same model we were working on before. We will call this deployment "default", representing our "blue deployment".
1. Configure the deployment
machine-learning How To Integrate Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-integrate-azure-policy.md
description: Learn how to use Azure Policy to use built-in policies for Azure Machine Learning to make sure your workspaces are compliant with your requirements. Previously updated : 03/25/2024 Last updated : 04/01/2024
# Audit and manage Azure Machine Learning
-When teams collaborate on Azure Machine Learning, they might face varying requirements to the configuration and organization of resources. Machine learning teams might look for flexibility in how to organize workspaces for collaboration, or size compute clusters to the requirements of their use cases. In these scenarios, it might lead to most productivity if the application team can manage their own infrastructure.
+When teams collaborate on Azure Machine Learning, they might face varying requirements to configure and organize resources. Machine learning teams might look for flexibility in how to organize workspaces for collaboration, or how to size compute clusters for the requirements of their use cases. In these scenarios, productivity could benefit if application teams can manage their own infrastructure.
-As a platform administrator, you can use policies to lay out guardrails for teams to manage their own resources. [Azure Policy](../governance/policy/index.yml) helps audit and govern resource state. In this article, you learn about available auditing controls and governance practices for Azure Machine Learning.
+As a platform administrator, you can use policies to lay out guardrails for teams to manage their own resources. [Azure Policy](../governance/policy/index.yml) helps audit and govern resource state. This article explains how you can use audit controls and governance practices for Azure Machine Learning.
## Policies for Azure Machine Learning [Azure Policy](../governance/policy/index.yml) is a governance tool that allows you to ensure that Azure resources are compliant with your policies.
-Azure Machine Learning provides a set of policies that you can use for common scenarios with Azure Machine Learning. You can assign these policy definitions to your existing subscription or use them as the basis to create your own custom definitions.
+Azure Policy provides a set of policies that you can use for common scenarios with Azure Machine Learning. You can assign these policy definitions to your existing subscription or use them as the basis to create your own custom definitions.
The following table lists the built-in policies you can assign with Azure Machine Learning. For a list of all Azure built-in policies, see [Built-in policies](../governance/policy/samples/built-in-policies.md).
To view the built-in policy definitions related to Azure Machine Learning, use t
1. Go to __Azure Policy__ in the [Azure portal](https://portal.azure.com). 1. Select __Definitions__.
-1. For __Type__, select _Built-in_, and for __Category__, select __Machine Learning__.
+1. For __Type__, select __Built-in__. For __Category__, select __Machine Learning__.
-From here, you can select policy definitions to view them. While viewing a definition, you can use the __Assign__ link to assign the policy to a specific scope, and configure the parameters for the policy. For more information, see [Assign a policy - portal](../governance/policy/assign-policy-portal.md).
+From here, you can select policy definitions to view them. While viewing a definition, you can use the __Assign__ link to assign the policy to a specific scope, and configure the parameters for the policy. For more information, see [Create a policy assignment to identify non-compliant resources using Azure portal](../governance/policy/assign-policy-portal.md).
-You can also assign policies by using [Azure PowerShell](../governance/policy/assign-policy-powershell.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), and [templates](../governance/policy/assign-policy-template.md).
+You can also assign policies by using [Azure PowerShell](../governance/policy/assign-policy-powershell.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), or [templates](../governance/policy/assign-policy-template.md).
## Conditional access policies
-To control who can access your Azure Machine Learning workspace, use Microsoft Entra [Conditional Access](../active-directory/conditional-access/overview.md). To use Conditional Access for Azure Machine Learning workspaces, [assign the Conditional Access policy](../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to the app named __Azure Machine Learning__. The app ID is __0736f41a-0425-bdb5-1563eff02385__.
+To control who can access your Azure Machine Learning workspace, use [Microsoft Entra Conditional Access](../active-directory/conditional-access/overview.md). To use Conditional Access for Azure Machine Learning workspaces, [assign the Conditional Access policy](../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to the app named __Azure Machine Learning__. The app ID is __0736f41a-0425-bdb5-1563eff02385__.
## Enable self-service using landing zones
-Landing zones are an architectural pattern to set up Azure environments that accounts for scale, governance, security, and productivity. A data landing zone is an administator-configured environment that an application team uses to host a data and analytics workload.
+Landing zones are an architectural pattern that accounts for scale, governance, security, and productivity when setting up Azure environments. A data landing zone is an administrator-configured environment that an application team uses to host a data and analytics workload.
-The purpose of the landing zone is to ensure when a team starts in the Azure environment, all infrastructure configuration work is done. For instance, security controls are set up in compliance with organizational standards and network connectivity is set up.
+The purpose of the landing zone is to ensure that all infrastructure configuration work is done when a team starts in the Azure environment. For instance, security controls are set up in compliance with organizational standards and network connectivity is set up.
-Using the landing zones pattern, machine learning teams can be enabled to self-service deploy and manage their own resources. By use of Azure policy, as an administrator you can audit and manage Azure resources for compliance and make sure workspaces are compliant to meet your requirements.
+Using the landing zones pattern, machine learning teams can deploy and manage their own resources on a self-service basis. By using Azure Policy as an administrator, you can audit and manage Azure resources for compliance.
-Azure Machine Learning integrates with [data landing zones](https://github.com/Azure/data-landing-zone) in the [Cloud Adoption Framework data management and analytics scenario](/azure/cloud-adoption-framework/scenarios/data-management/). This reference implementation provides an optimized environment to migrate machine learning workloads onto and includes policies for Azure Machine Learning preconfigured.
+Azure Machine Learning integrates with [data landing zones](https://github.com/Azure/data-landing-zone) in the [Cloud Adoption Framework data management and analytics scenario](/azure/cloud-adoption-framework/scenarios/data-management/). This reference implementation provides an optimized environment to migrate machine learning workloads onto Azure Machine Learning and includes preconfigured policies.
## Configure built-in policies
-### Compute instances should have idle shutdown
+### Compute instance should have idle shutdown
-Controls whether an Azure Machine Learning compute instance should have idle shutdown enabled. Idle shutdown automatically stops the compute instance when it's idle for a specified period of time. This policy is useful for cost savings and to ensure that resources aren't being used unnecessarily.
+This policy controls whether an Azure Machine Learning compute instance should have idle shutdown enabled. Idle shutdown automatically stops the compute instance when it's idle for a specified period of time. This policy is useful for cost savings and to ensure that resources aren't being used unnecessarily.
To configure this policy, set the effect parameter to __Audit__, __Deny__, or __Disabled__. If set to __Audit__, you can create a compute instance without idle shutdown enabled and a warning event is created in the activity log. ### Compute instances should be recreated to get software updates
-Controls whether Azure Machine Learning compute instances should be audited to make sure they are running the latest available software updates. This policy is useful to ensure that compute instances are running the latest software updates to maintain security and performance. For more information, see [Vulnerability management for Azure Machine Learning](concept-vulnerability-management.md#compute-instance).
+Controls whether Azure Machine Learning compute instances should be audited to make sure they're running the latest available software updates. This policy is useful to ensure that compute instances are running the latest software updates to maintain security and performance. For more information, see [Vulnerability management for Azure Machine Learning](concept-vulnerability-management.md#compute-instance).
To configure this policy, set the effect parameter to __Audit__ or __Disabled__. If set to __Audit__, a warning event is created in the activity log when a compute isn't running the latest software updates.
If the policy is set to __Deny__, then you can't create a compute unless SSH is
### Workspaces should be encrypted with customer-managed key
-Controls whether a workspace should be encrypted with a customer-managed key, or using a Microsoft-managed key to encrypt metrics and metadata. For more information on using customer-managed key, see the [Azure Cosmos DB](concept-data-encryption.md#azure-cosmos-db) section of the data encryption article.
+Controls whether a workspace should be encrypted with a customer-managed key, or with a Microsoft-managed key to encrypt metrics and metadata. For more information on using customer-managed key, see the [Azure Cosmos DB](concept-data-encryption.md#azure-cosmos-db) section of the data encryption article.
To configure this policy, set the effect parameter to __Audit__ or __Deny__. If set to __Audit__, you can create a workspace without a customer-managed key and a warning event is created in the activity log. If the policy is set to __Deny__, then you can't create a workspace unless it specifies a customer-managed key. Attempting to create a workspace without a customer-managed key results in an error similar to `Resource 'clustername' was disallowed by policy` and creates an error in the activity log. The policy identifier is also returned as part of this error.
-### Workspaces should disable public network access
+### Configure workspaces to disable public network access
Controls whether a workspace should disable network access from the public internet.
If the policy is set to __Deny__, then you can't create a workspace that allows
Controls whether a workspace should enable V1LegacyMode to support network isolation backward compatibility. This policy is useful if you want to keep Azure Machine Learning control plane data inside your private networks. For more information, see [Network isolation change with our new API platform](how-to-configure-network-isolation-with-v2.md).
-To configure this policy, set the effect parameter to __Audit__ or __Deny__, or __Disabled__ . If set to __Audit__, you can create a workspace without enabling V1LegacyMode and a warning event is created in the activity log.
+To configure this policy, set the effect parameter to __Audit__, __Deny__, or __Disabled__. If set to __Audit__, you can create a workspace without enabling V1LegacyMode and a warning event is created in the activity log.
If the policy is set to __Deny__, then you can't create a workspace unless it enables V1LegacyMode.
-### Workspace should use private link
+### Workspaces should use private link
-Controls whether a workspace should use Azure Private Link to communicate with Azure Virtual Network. For more information on using private link, see [Configure private link for a workspace](how-to-configure-private-link.md).
+Controls whether a workspace should use Azure Private Link to communicate with Azure Virtual Network. For more information on using private link, see [Configure a private endpoint for an Azure Machine Learning workspace](how-to-configure-private-link.md).
To configure this policy, set the effect parameter to __Audit__ or __Deny__. If set to __Audit__, you can create a workspace without using private link and a warning event is created in the activity log. If the policy is set to __Deny__, then you can't create a workspace unless it uses a private link. Attempting to create a workspace without a private link results in an error. The error is also logged in the activity log. The policy identifier is returned as part of this error.
-### Workspace should use user-assigned managed identity
+### Workspaces should use user-assigned managed identity
-Controls whether a workspace is created using a system-assigned managed identity (default) or a user-assigned managed identity. The managed identity for the workspace is used to access associated resources such as Azure Storage, Azure Container Registry, Azure Key Vault, and Azure Application Insights. For more information, see [Use managed identities with Azure Machine Learning](how-to-identity-based-service-authentication.md).
+Controls whether a workspace is created using a system-assigned managed identity (default) or a user-assigned managed identity. The managed identity for the workspace is used to access associated resources such as Azure Storage, Azure Container Registry, Azure Key Vault, and Azure Application Insights. For more information, see [Set up authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md).
To configure this policy, set the effect parameter to __Audit__, __Deny__, or __Disabled__. If set to __Audit__, you can create a workspace without specifying a user-assigned managed identity. A system-assigned identity is used and a warning event is created in the activity log. If the policy is set to __Deny__, then you can't create a workspace unless you provide a user-assigned identity during the creation process. Attempting to create a workspace without providing a user-assigned identity results in an error. The error is also logged to the activity log. The policy identifier is returned as part of this error.
-### Configure computes to Modify/disable local authentication
+### Configure computes to modify/disable local authentication
-Modifies any Azure Machine Learning compute cluster or instance creation request to disable local authentication (SSH).
+This policy modifies any Azure Machine Learning compute cluster or instance creation request to disable local authentication (SSH).
To configure this policy, set the effect parameter to __Modify__ or __Disabled__. If set to __Modify__, any creation of a compute cluster or instance within the scope where the policy applies will automatically have local authentication disabled.
-### Configure workspaces to use private DNS zones
+### Configure workspace to use private DNS zones
-Configures a workspace to use a private DNS zone, overriding the default DNS resolution for a private endpoint.
+This policy configures a workspace to use a private DNS zone, overriding the default DNS resolution for a private endpoint.
To configure this policy, set the effect parameter to __DeployIfNotExists__. Set the __privateDnsZoneId__ to the Azure Resource Manager ID of the private DNS zone to use. ### Configure workspaces to disable public network access
-Configures a workspace to disable network access from the public internet. This helps protect thee workspaces against data leakage risks. You can instead access your workspace by creating private endpoints. For more information, see [Configure private link for a workspace](how-to-configure-private-link.md).
+Configures a workspace to disable network access from the public internet. This helps protect the workspaces against data leakage risks. You can instead access your workspace by creating private endpoints. For more information, see [Configure a private endpoint for an Azure Machine Learning workspace](how-to-configure-private-link.md).
To configure this policy, set the effect parameter to __Modify__ or __Disabled__. If set to __Modify__, any creation of a workspace within the scope where the policy applies will automatically have public network access disabled.
Audits whether resource logs are enabled for an Azure Machine Learning workspace
To configure this policy, set the effect parameter to __AuditIfNotExists__ or __Disabled__. If set to __AuditIfNotExists__, the policy audits if resource logs aren't enabled for the workspace.
-## Next steps
+## Related content
* [Azure Policy documentation](../governance/policy/overview.md) * [Built-in policies for Azure Machine Learning](policy-reference.md) * [Working with security policies with Microsoft Defender for Cloud](../security-center/tutorial-security-policy.md)
-* The [Cloud Adoption Framework scenario for data management and analytics](/azure/cloud-adoption-framework/scenarios/data-management/) outlines considerations in running data and analytics workloads in the cloud.
-* [Cloud Adoption Framework data landing zones](https://github.com/Azure/data-landing-zone) provide a reference implementation for managing data and analytics workloads in Azure.
-* [Learn how to use policy to integrate Azure Private Link with Azure Private DNS zones](/azure/cloud-adoption-framework/ready/azure-best-practices/private-link-and-dns-integration-at-scale), to manage private link configuration for the workspace and dependent resources.
+* The [Cloud Adoption Framework scenario for data management and analytics](/azure/cloud-adoption-framework/scenarios/data-management/) outlines considerations in running data and analytics workloads in the cloud
+* [Cloud Adoption Framework data landing zones](https://github.com/Azure/data-landing-zone) provide a reference implementation for managing data and analytics workloads in Azure
+* [Learn how to use policy to integrate Azure Private Link with Azure Private DNS zones](/azure/cloud-adoption-framework/ready/azure-best-practices/private-link-and-dns-integration-at-scale)
machine-learning How To Use Batch Model Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-deployments.md
description: In this article, learn how to create a batch endpoint to continuous
-+ Previously updated : 11/04/2022 Last updated : 04/02/2024 #Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
[!INCLUDE [cli v2](includes/machine-learning-dev-v2.md)]
-Batch endpoints provide a convenient way to deploy models to run inference over large volumes of data. They simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. We call this type of deployments *model deployments*.
+Batch endpoints provide a convenient way to deploy models that run inference over large volumes of data. These endpoints simplify the process of hosting your models for batch scoring, so that your focus is on machine learning, rather than the infrastructure.
-Use batch endpoints to deploy models when:
+Use batch endpoints for model deployment when:
-> [!div class="checklist"]
-> * You have expensive models that requires a longer time to run inference.
-> * You need to perform inference over large amounts of data, distributed in multiple files.
-> * You don't have low latency requirements.
-> * You can take advantage of parallelization.
+- You have expensive models that require a longer time to run inference.
+- You need to perform inference over large amounts of data that is distributed in multiple files.
+- You don't have low latency requirements.
+- You can take advantage of parallelization.
-In this article, you'll learn how to use batch endpoints to deploy a machine learning model to perform inference.
+In this article, you use a batch endpoint to deploy a machine learning model that solves the classic MNIST (Modified National Institute of Standards and Technology) digit recognition problem. Your deployed model then performs batch inferencing over large amounts of data, in this case, image files. You begin by creating a batch deployment of a model that was created using Torch. This deployment becomes the default one in the endpoint. Later, you [create a second deployment](#add-deployments-to-an-endpoint) of a model that was created with TensorFlow (Keras), test the second deployment, and then set it as the endpoint's default deployment.
-## About this example
+To follow along with the code samples and files needed to run the commands in this article locally, see the __[Clone the examples repository](#clone-the-examples-repository)__ section. The code samples and files are contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository.
-In this example, we're going to deploy a model to solve the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem to perform batch inferencing over large amounts of data (image files). In the first section of this tutorial, we're going to create a batch deployment with a model created using Torch. Such deployment will become our default one in the endpoint. In the second half, [we're going to see how we can create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.
+## Prerequisites
++
+## Clone the examples repository
[!INCLUDE [machine-learning-batch-clone](includes/azureml-batch-clone-samples-with-studio.md)]
-The files for this example are in:
+## Prepare your system
+
+### Connect to your workspace
+
+# [Azure CLI](#tab/cli)
+
+First, connect to the Azure Machine Learning workspace where you'll work.
+
+If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, resource group, and location multiple times, run this code:
```azurecli
-cd endpoints/batch/deploy-models/mnist-classifier
+az account set --subscription <subscription>
+az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
```
-### Follow along in Jupyter Notebooks
+# [Python](#tab/python)
+
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, you connect to the workspace in which you'll perform deployment tasks.
-You can follow along this sample in the following notebooks. In the cloned repository, open the notebook: [mnist-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb).
+1. Import the required libraries:
-## Prerequisites
+ ```python
+ from azure.ai.ml import MLClient, Input, load_component
+ from azure.ai.ml.entities import BatchEndpoint, ModelBatchDeployment, ModelBatchDeploymentSettings, PipelineComponentBatchDeployment, Model, AmlCompute, Data, BatchRetrySettings, CodeConfiguration, Environment
+ from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
+ from azure.ai.ml.dsl import pipeline
+ from azure.identity import DefaultAzureCredential
+ ```
+ > [!NOTE]
+ > Classes `ModelBatchDeployment` and `PipelineComponentBatchDeployment` were introduced in version 1.7.0 of the SDK.
+
+2. Configure workspace details and get a handle to the workspace:
+
+ ```python
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+ ```
+
+# [Studio](#tab/azure-studio)
+
+Open the [Azure Machine Learning studio portal](https://ml.azure.com) and sign in using your credentials.
++ ### Create compute
-Batch endpoints run on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) or [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource so one cluster can host one or many batch deployments (along with other workloads if desired).
+Batch endpoints run on compute clusters and support both [Azure Machine Learning compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource, so one cluster can host one or many batch deployments (along with other workloads, if desired).
-This article uses a compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>` or create one as shown.
+Create a compute named `batch-cluster`, as shown in the following code. You can adjust as needed and reference your compute using `azureml:<your-compute-name>`.
# [Azure CLI](#tab/cli)
This article uses a compute created here named `batch-cluster`. Adjust as needed
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=create_compute)]
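A minimal sketch of what creating such a cluster looks like with the SDK v2, assuming `ml_client` is already connected to your workspace; the VM size and node counts here are illustrative only.

```python
from azure.ai.ml.entities import AmlCompute

# Illustrative sizing; adjust the VM size and node counts to your workload.
compute_cluster = AmlCompute(
    name="batch-cluster",
    size="Standard_DS3_v2",
    min_instances=0,
    max_instances=2,
)
ml_client.compute.begin_create_or_update(compute_cluster).result()
```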
-# [Studio](#tab/studio)
+# [Studio](#tab/azure-studio)
-*Create a compute cluster as explained in the following tutorial [Create an Azure Machine Learning compute cluster](./how-to-create-attach-compute-cluster.md?tabs=studio).*
+Follow the steps in the tutorial [Create an Azure Machine Learning compute cluster](./how-to-create-attach-compute-cluster.md?tabs=studio) to create a compute cluster.
> [!NOTE]
-> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
-
+> You're not charged for the compute at this point, as the cluster remains at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. For more information about compute costs, see [Manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
## Create a batch endpoint
-A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](concept-endpoints-batch.md)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
+A __batch endpoint__ is an HTTPS endpoint that clients can call to trigger a _batch scoring job_. A __batch scoring job__ is a job that scores multiple inputs. A __batch deployment__ is a set of compute resources hosting the model that does the actual batch scoring (or batch inferencing). One batch endpoint can have multiple batch deployments. For more information on batch endpoints, see [What are batch endpoints?](concept-endpoints-batch.md).
> [!TIP]
-> One of the batch deployments will serve as the default deployment for the endpoint. The default deployment will be used to do the actual batch scoring when the endpoint is invoked. Learn more about [batch endpoints and batch deployment](concept-endpoints-batch.md).
-
-### Steps
+> One of the batch deployments serves as the default deployment for the endpoint. When the endpoint is invoked, the default deployment does the actual batch scoring. For more information on batch endpoints and deployments, see [batch endpoints and batch deployment](concept-endpoints-batch.md).
-1. Decide on the name of the endpoint. The name of the endpoint will end-up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+1. Name the endpoint. The __endpoint's name must be unique within an Azure region__, since the name is included in the endpoint's URI. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
# [Azure CLI](#tab/cli)
-
- In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
-
+
+ Place the endpoint's name in a variable so you can easily reference it later.
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="name_endpoint" :::
-
+ # [Python](#tab/python)
-
- In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ Place the endpoint's name in a variable so you can easily reference it later.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=name_endpoint)]
-
- # [Studio](#tab/studio)
-
- *You'll configure the name of the endpoint later in the creation wizard.*
-
-1. Configure your batch endpoint
+ # [Studio](#tab/azure-studio)
+
+ You provide the endpoint's name later, at the point when you create the deployment.
+
+1. Configure the batch endpoint
# [Azure CLI](#tab/cli)
- The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint).
-
- __endpoint.yml__
+ The following YAML file defines a batch endpoint. You can use this file with the CLI command for [batch endpoint creation](#create-a-batch-endpoint).
+
+ _endpoint.yml_
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/endpoint.yml"::: The following table describes the key properties of the endpoint. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
-
+ | Key | Description | | | -- |
- | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level. |
| `description` | The description of the batch endpoint. This property is optional. |
- | `tags` | The tags to include in the endpoint. This property is optional.
-
+ | `tags` | The tags to include in the endpoint. This property is optional. |
+ # [Python](#tab/python)
-
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_endpoint)]
-
+
+ The following table describes the key properties of the endpoint. For more information on batch endpoint definition, see [BatchEndpoint Class](/python/api/azure-ai-ml/azure.ai.ml.entities.batchendpoint).
+ | Key | Description | | | -- | | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.| | `description` | The description of the batch endpoint. This property is optional. | | `tags` | The tags to include in the endpoint. This property is optional. |
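    As an illustrative sketch of that configuration (the description and tags are placeholders):

    ```python
    # Placeholder description and tags; endpoint_name was defined in the previous step.
    endpoint = BatchEndpoint(
        name=endpoint_name,
        description="A batch endpoint that scores MNIST digit images.",
        tags={"type": "deep-learning"},
    )
    ```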
-
- # [Studio](#tab/studio)
-
- *You'll create the endpoint in the same step you create the deployment.*
-
+
+ # [Studio](#tab/azure-studio)
+
+ You create the endpoint later, at the point when you create the deployment.
1. Create the endpoint: # [Azure CLI](#tab/cli)
-
- Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ Run the following code to create a batch endpoint.
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_endpoint" ::: # [Python](#tab/python)
-
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=create_endpoint)]
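    Assuming `endpoint` is the `BatchEndpoint` object configured in the previous step, the creation call looks roughly like this:

    ```python
    # Submit the endpoint definition and wait for the operation to finish.
    ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
    ```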
- # [Studio](#tab/studio)
-
- *You'll create the endpoint in the same step you are creating the deployment later.*
+ # [Studio](#tab/azure-studio)
+
+ You create the endpoint later, at the point when you create the deployment.
## Create a batch deployment
-A model deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch model deployment, you need all the following items:
+A model deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch model deployment, you need the following items:
-* A registered model in the workspace.
-* The code to score the model.
-* The environment with the model's dependencies installed.
-* The pre-created compute and resource settings.
+* A registered model in the workspace
+* The code to score the model
+* An environment with the model's dependencies installed
+* The pre-created compute and resource settings
-1. Let's start by registering the model we want to deploy. Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you're trying to deploy is already registered. In this case, we're registering a Torch model for the popular digit recognition problem (MNIST).
+1. Begin by registering the model to be deployed: a Torch model for the popular digit recognition problem (MNIST). Batch Deployments can only deploy models that are registered in the workspace. You can skip this step if the model you want to deploy is already registered.
> [!TIP]
- > Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions under the same endpoint as long as they are deployed in different deployments.
+ > Models are associated with the deployment, rather than with the endpoint. This means that a single endpoint can serve different models (or model versions), provided that they are deployed in different deployments.
-
# [Azure CLI](#tab/cli) :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="register_model" :::
A model deployment is a set of resources required for hosting the model that doe
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=register_model)]
- # [Studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
1. Navigate to the __Models__ tab on the side menu.
A model deployment is a set of resources required for hosting the model that doe
1. Select __Register__.
-1. Now it's time to create a scoring script. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch Endpoints support scripts created in Python. In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
+1. Now it's time to create a scoring script. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch endpoints support scripts created in Python. In this case, you deploy a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
> [!NOTE]
- > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+ > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the article [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
> [!WARNING]
- > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
+ > If you're deploying an Automated machine learning (AutoML) model under a batch endpoint, note that the scoring script that AutoML provides only works for online endpoints and is not designed for batch execution. For information on how to create a scoring script for your batch deployment, see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md).
- __deployment-torch/code/batch_driver.py__
+ _deployment-torch/code/batch_driver.py_
:::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/code/batch_driver.py" :::
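    The repository's driver decodes the images and runs Torch inference. As a minimal illustration of the contract a batch scoring script must satisfy, an `init()` function plus a `run(mini_batch)` function, here's a simplified sketch that only reports file sizes instead of real predictions:

    ```python
    import os

    import pandas as pd


    def init():
        # Called once per worker process before any mini-batch is scored.
        # A real script typically loads the model here, for example from the
        # folder referenced by the AZUREML_MODEL_DIR environment variable.
        pass


    def run(mini_batch):
        # mini_batch is a list of file paths. Return one row per processed file,
        # either as a pandas DataFrame or as an array.
        results = []
        for file_path in mini_batch:
            results.append(
                {"file": os.path.basename(file_path), "size_bytes": os.path.getsize(file_path)}
            )
        return pd.DataFrame(results)
    ```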
-1. Create an environment where your batch deployment will run. Such environment needs to include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependency your code requires for running. In this case, the dependencies have been captured in a `conda.yaml`:
-
- __deployment-torch/environment/conda.yaml__
-
+1. Create an environment where your batch deployment will run. The environment should include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependency your code requires for running. In this case, the dependencies have been captured in a `conda.yaml` file:
+
+ _deployment-torch/environment/conda.yaml_
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/environment/conda.yaml":::
-
+ > [!IMPORTANT] > The packages `azureml-core` and `azureml-dataset-runtime[fuse]` are required by batch deployments and should be included in the environment dependencies.
-
- Indicate the environment as follows:
-
+
+ Specify the environment as follows:
+ # [Azure CLI](#tab/cli)
-
+ The environment definition will be included in the deployment definition itself as an anonymous environment. You can see it in the following lines of the deployment definition:
-
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml" range="12-15":::
-
+ # [Python](#tab/python)
-
- Let's get a reference to the environment:
+
+ Get a reference to the environment:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_environment)]
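    As an illustration of what that reference typically contains, assuming the repository layout described earlier (the environment name matches the one used in the studio steps that follow):

    ```python
    # Environment built from a base image plus the repo's conda file.
    environment = Environment(
        name="torch-batch-env",
        conda_file="deployment-torch/environment/conda.yaml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
    )
    ```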
- # [Studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
- On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
+ In the [Azure Machine Learning studio](https://ml.azure.com), follow these steps:
1. Navigate to the __Environments__ tab on the side menu.
-
+ 1. Select the tab __Custom environments__ > __Create__.
-
+ 1. Enter the name of the environment, in this case `torch-batch-env`.
-
- 1. On __Select environment type__ select __Use existing docker image with conda__.
-
- 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
-
- 1. On __Customize__ section copy the content of the file `deployment-torch/environment/conda.yaml` included in the repository into the portal.
-
- 1. Select __Next__ and then on __Create__.
-
- 1. The environment is ready to be used.
-
+
+ 1. For __Select environment source__, select __Use existing docker image with optional conda file__.
+
+ 1. For __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
+
+ 1. Select **Next** to go to the "Customize" section.
+
+ 1. Copy the content of the file _deployment-torch/environment/conda.yaml_ from the GitHub repo into the portal.
+
+ 1. Select __Next__ until you get to the "Review" page.
+
+ 1. Select __Create__ and wait until the environment is ready for use.
+
-
+ > [!WARNING]
- > Curated environments are not supported in batch deployments. You will need to indicate your own environment. You can always use the base image of a curated environment as yours to simplify the process.
+ > Curated environments are not supported in batch deployments. You need to specify your own environment. You can always use the base image of a curated environment as yours to simplify the process.
1. Create a deployment definition # [Azure CLI](#tab/cli)
-
- __deployment-torch/deployment.yml__
-
+
+ _deployment-torch/deployment.yml_
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml":::
-
- For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
+
+ The following table describes the key properties of the batch deployment. For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
| Key | Description | | | -- | | `name` | The name of the deployment. | | `endpoint_name` | The name of the endpoint to create the deployment under. |
- | `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](./reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
+ | `model` | The model to be used for batch scoring. The example defines a model inline using `path`. This definition allows model files to be automatically uploaded and registered with an autogenerated name and version. See the [Model schema](./reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
| `code_configuration.code` | The local directory that contains all the Python source code to score the model. |
- | `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](how-to-batch-scoring-script.md#understanding-the-scoring-script). |
- | `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](./reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
- | `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using `azureml:<compute-name>` syntax. |
+ | `code_configuration.scoring_script` | The Python file in the `code_configuration.code` directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, to load the model in memory). `init()` will be called only once at the start of the process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author a scoring script, see [Understanding the scoring script](how-to-batch-scoring-script.md#understanding-the-scoring-script). |
+ | `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. See the [Environment schema](./reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
+ | `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using the `azureml:<compute-name>` syntax. |
| `resources.instance_count` | The number of instances to be used for each batch scoring job. | | `settings.max_concurrency_per_instance` | [Optional] The maximum number of parallel `scoring_script` runs per instance. | | `settings.mini_batch_size` | [Optional] The number of files the `scoring_script` can process in one `run()` call. |
- | `settings.output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and only calculate `error_threshold`. |
+ | `settings.output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and will only calculate `error_threshold`. |
| `settings.output_file_name` | [Optional] The name of the batch scoring output file for `append_row` `output_action`. | | `settings.retry_settings.max_retries` | [Optional] The number of max tries for a failed `scoring_script` `run()`. | | `settings.retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
- | `settings.error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
+ | `settings.error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
| `settings.logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. | | `settings.environment_variables` | [Optional] Dictionary of environment variable name-value pairs to set for each batch scoring job. |
-
+ # [Python](#tab/python) [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_deployment)]
-
- This class allows user to configure the following key aspects:
+
+ The [BatchDeployment Class](/python/api/azure-ai-ml/azure.ai.ml.entities.batchdeployment) allows you to configure the following key properties of a batch deployment:
| Key | Description | | | -- |
A model deployment is a set of resources required for hosting the model that doe
| `endpoint_name` | Name of the endpoint to create the deployment under. | | `model` | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. | | `environment` | The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification (optional for MLflow models). |
- | `code_configuration` | The configuration about how to run inference for the model (optional for MLflow models). |
- | `code_configuration.code` | Path to the source code directory for scoring the model |
- | `code_configuration.scoring_script` | Relative path to the scoring file in the source code directory |
- | `compute` | Name of the compute target to execute the batch scoring jobs on |
+ | `code_configuration` | The configuration about how to run inference for the model (optional for MLflow models). |
+ | `code_configuration.code` | Path to the source code directory for scoring the model. |
+ | `code_configuration.scoring_script` | Relative path to the scoring file in the source code directory. |
+ | `compute` | Name of the compute target on which to execute the batch scoring jobs. |
| `instance_count` | The number of nodes to use for each batch scoring job. |
- | `settings` | The model deployment inference configuration |
- | `settings.max_concurrency_per_instance` | The maximum number of parallel scoring_script runs per instance.
- | `settings.mini_batch_size` | The number of files the code_configuration.scoring_script can process in one `run`() call.
+ | `settings` | The model deployment inference configuration. |
+ | `settings.max_concurrency_per_instance` | The maximum number of parallel `scoring_script` runs per instance.|
+ | `settings.mini_batch_size` | The number of files the `code_configuration.scoring_script` can process in one `run()` call. |
| `settings.retry_settings` | Retry settings for scoring each mini batch. |
- | `settings.retry_settingsmax_retries` | The maximum number of retries for a failed or timed-out mini batch (default is 3) |
- | `settings.retry_settingstimeout` | The timeout in seconds for scoring a mini batch (default is 30) |
- | `settings.output_action` | Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row` |
+ | `settings.retry_settings.max_retries` | The maximum number of retries for a failed or timed-out mini batch (default is 3). |
+ | `settings.retry_settings.timeout` | The timeout in seconds for scoring a mini batch (default is 30). |
+ | `settings.output_action` | How the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row`. |
| `settings.logging_level` | The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`. | | `settings.environment_variables` | Dictionary of environment variable name-value pairs to set for each batch scoring job. |
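    As a rough sketch of how these properties come together with the `ModelBatchDeployment` class imported earlier (names and sizing values are illustrative, and exact field names can differ slightly between SDK versions, so check the class reference linked above):

    ```python
    from azure.ai.ml.entities import ResourceConfiguration

    # Illustrative deployment definition; `model` and `environment` are the objects
    # registered and configured in the previous steps.
    deployment = ModelBatchDeployment(
        name="mnist-torch-dpl",
        description="A deployment that scores MNIST digit images with a Torch model.",
        endpoint_name=endpoint_name,
        model=model,
        environment=environment,
        code_configuration=CodeConfiguration(
            code="deployment-torch/code", scoring_script="batch_driver.py"
        ),
        compute="batch-cluster",
        resources=ResourceConfiguration(instance_count=2),
        settings=ModelBatchDeploymentSettings(
            max_concurrency_per_instance=2,
            mini_batch_size=10,
            output_action=BatchDeploymentOutputAction.APPEND_ROW,
            output_file_name="predictions.csv",
            retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
            logging_level="info",
        ),
    )
    ```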
- # [Studio](#tab/studio)
-
- On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
-
+ # [Studio](#tab/azure-studio)
+
+ In the studio, follow these steps:
+ 1. Navigate to the __Endpoints__ tab on the side menu.
-
+ 1. Select the tab __Batch endpoints__ > __Create__.
-
+ 1. Give the endpoint a name, in this case `mnist-batch`. You can configure the rest of the fields or leave them blank.
-
- 1. Select __Next__.
-
- 1. On the model list, select the model `mnist` and select __Next__.
-
- 1. On the deployment configuration page, give the deployment a name.
-
- 1. On __Output action__, ensure __Append row__ is selected.
-
- 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
-
- 1. On __Mini batch size__, adjust the size of the files that will be included on each mini-batch. This will control the amount of data your scoring script receives per each batch.
-
- 1. On __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning), may require high values in this field.
-
- 1. On __Max concurrency per instance__, configure the number of executors you want to have per each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization but it also increases the memory pressure on the compute instance. Tune this value altogether with __Mini batch size__.
-
- 1. Once done, select __Next__.
-
- 1. On environment, go to __Select scoring file and dependencies__ and select __Browse__.
-
- 1. Select the scoring script file on `deployment-torch/code/batch_driver.py`.
-
- 1. On the section __Choose an environment__, select the environment you created a previous step.
-
- 1. Select __Next__.
-
- 1. On the section __Compute__, select the compute cluster you created in a previous step.
+
+ 1. Select __Next__ to go to the "Model" section.
+
+ 1. Select the model __mnist-classifier-torch__.
+
+ 1. Select __Next__ to go to the "Deployment" page.
+
+ 1. Give the deployment a name.
+
+ 1. For __Output action__, ensure __Append row__ is selected.
+
+ 1. For __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+
+ 1. For __Mini batch size__, adjust the size of the files that will be included in each mini-batch. This size will control the amount of data your scoring script receives per batch.
+
+ 1. For __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+
+ 1. For __Max concurrency per instance__, configure the number of executors you want to have for each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization but it also increases the memory pressure on the compute instance. Tune this value altogether with __Mini batch size__.
+
+ 1. Once done, select __Next__ to go to the "Code + environment" page.
+
+ 1. For "Select a scoring script for inferencing", browse to find and select the scoring script file *deployment-torch/code/batch_driver.py*.
+
+ 1. In the "Select environment" section, select the environment you created previously _torch-batch-env_.
+
+ 1. Select __Next__ to go to the "Compute" page.
+
+ 1. Select the compute cluster you created in a previous step.
    > [!WARNING]
    > Azure Kubernetes clusters are supported in batch deployments, but only when created using the Azure Machine Learning CLI or Python SDK.
- 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we'll use 2.
-
+ 1. For __Instance count__, enter the number of compute instances you want for the deployment. In this case, use 2.
+ 1. Select __Next__.

1. Create the deployment:

    # [Azure CLI](#tab/cli)
-
- Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ Run the following code to create a batch deployment under the batch endpoint, and set it as the default deployment.
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_deployment" ::: > [!TIP]
- > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#adding-deployments-to-an-endpoint) section.
+ > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you might want to create a new deployment without setting it as default. Verify that the deployment works as you expect, and then update the default deployment later. For more information on implementing this process, see the [Deploy a new model](#add-deployments-to-an-endpoint) section.
# [Python](#tab/python)
- Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+ Using the `MLClient` created earlier, create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=create_deployment)]
- Once the deployment is completed, we need to ensure the new deployment is the default deployment in the endpoint:
+ Once the deployment is completed, set the new deployment as the default deployment in the endpoint:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=set_default_deployment)]
- # [Studio](#tab/studio)
-
+ # [Studio](#tab/azure-studio)
+ In the wizard, select __Create__ to start the deployment process.
-
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
+
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/review-batch-wizard.png" alt-text="Screenshot of batch endpoints deployment review screen." lightbox="media/how-to-use-batch-model-deployments/review-batch-wizard.png":::
A model deployment is a set of resources required for hosting the model that doe
# [Azure CLI](#tab/cli)
- Use `show` to check endpoint and deployment details. To check a batch deployment, run the following code:
+ Use `show` to check the endpoint and deployment details. To check a batch deployment, run the following code:
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="query_deployment" :::
A model deployment is a set of resources required for hosting the model that doe
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=query_deployment)]
- # [Studio](#tab/studio)
-
+ # [Studio](#tab/azure-studio)
+
+ After creating the batch endpoint, the endpoint's details page opens up. You can also find this page by following these steps:
+ 1. Navigate to the __Endpoints__ tab on the side menu.
-
+ 1. Select the tab __Batch endpoints__.
-
- 1. Select the batch endpoint you want to get details from.
-
- 1. In the endpoint page, you'll see all the details of the endpoint along with all the deployments available.
+
+ 1. Select the batch endpoint you want to view.
+
+ 1. The endpoint's **Details** page shows the details of the endpoint along with all the deployments available in the endpoint.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
## Run batch endpoints and access results
-Invoking a batch endpoint triggers a batch scoring job. The job `name` will be returned from the invoke response and can be used to track the batch scoring progress. When running models for scoring in Batch Endpoints, you need to indicate the input data path where the endpoints should look for the data you want to score. The following example shows how to start a new job over a sample data of the MNIST dataset stored in an Azure Storage Account.
+Invoking a batch endpoint triggers a batch scoring job. The job `name` is returned from the invoke response and can be used to track the batch scoring progress. When running models for scoring in batch endpoints, you need to specify the path to the input data so that the endpoints can find the data you want to score. The following example shows how to start a new job over sample data of the MNIST dataset stored in an Azure Storage Account.
-You can run and invoke a batch endpoint using Azure CLI, Azure Machine Learning SDK, or REST endpoints. Read [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md) for details about all the options.
+You can run and invoke a batch endpoint using Azure CLI, Azure Machine Learning SDK, or REST endpoints. For more details about these options, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
> [!NOTE]
-> __How does parallelization work?__:
+> __How does parallelization work?__
>
-> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this will happen regardless of the size of the files involved. If your files are too big to be processed in large mini-batches we suggest to either split the files in smaller files to achieve a higher level of parallelism or to decrease the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file's size distribution.
+> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest that you either split the files into smaller files to achieve a higher level of parallelism or you decrease the number of files per mini-batch. Currently, batch deployments can't account for skews in a file's size distribution.
# [Azure CLI](#tab/cli)
You can run and invoke a batch endpoint using Azure CLI, Azure Machine Learning
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=start_batch_scoring_job)]
-# [Studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu.
You can run and invoke a batch endpoint using Azure CLI, Azure Machine Learning
1. Select __Create job__.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring." lightbox="media/how-to-use-batch-model-deployments/create-batch-job.png":::
-1. On __Deployment__, select the deployment you want to execute.
+1. For __Deployment__, select the deployment to execute.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job." lightbox="media/how-to-use-batch-model-deployments/job-setting-batch-scoring.png":::
-1. Select __Next__.
+1. Select __Next__ to go to the "Select data source" page.
+
+1. For the "Data source type", select __Datastore__.
+
+1. For the "Datastore", select __workspaceblobstore__ from the dropdown menu.
-1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and in the section __Path__ enter the full URL `https://azuremlexampledata.blob.core.windows.net/dat) for details.
+1. For "Path", enter the full URL `https://azuremlexampledata.blob.core.windows.net/data/mnist/sample`.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
+ > [!TIP]
+ > This path works only because the given path has public access enabled. In general, you need to register the data source as a __Datastore__. See [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md) for details.
+
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option." lightbox="media/how-to-use-batch-model-deployments/select-datastore-job.png":::
+
+1. Select __Next__.
-1. Start the job.
+1. Select __Create__ to start the job.
-Batch endpoints support reading files or folders that are located in different locations. To learn more about how the supported types and how to specify them read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+Batch endpoints support reading files or folders that are located in different locations. To learn more about the supported types and how to specify them, see [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
### Monitor batch job execution progress
The following code checks the job status and outputs a link to the Azure Machine
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=get_job)]
-# [Studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu.
The following code checks the job status and outputs a link to the Azure Machine
1. Select the batch endpoint you want to monitor.
-1. Select the tab __Jobs__.
+1. Select the __Jobs__ tab.
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+ :::image type="content" source="media/how-to-use-batch-model-deployments/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint." lightbox="media/how-to-use-batch-model-deployments/summary-jobs.png":::
-1. You'll see a list of the jobs created for the selected endpoint.
+1. From the displayed list of the jobs created for the selected endpoint, select the last job that is running.
-1. Select the last job that is running.
-
-1. You'll be redirected to the job monitoring page.
+1. You're now redirected to the job monitoring page.
### Check batch scoring results
-The job outputs will be stored in cloud storage, either in the workspace's default blob storage, or the storage you specified. See [Configure the output location](#configure-the-output-location) to know how to change the defaults. Follow the following steps to view the scoring results in Azure Storage Explorer when the job is completed:
+The job outputs are stored in cloud storage, either in the workspace's default blob storage, or the storage you specified. To learn how to change the defaults, see [Configure the output location](#configure-the-output-location). The following steps allow you to view the scoring results in Azure Storage Explorer when the job is completed:
-1. Run the following code to open batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
+1. Run the following code to open the batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
The job outputs will be stored in cloud storage, either in the workspace's defau
### Configure the output location
-The batch scoring results are by default stored in the workspace's default blob store within a folder named by job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.
+By default, the batch scoring results are stored in the workspace's default blob store within a folder named by job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.
# [Azure CLI](#tab/cli)
Use `output-path` to configure any folder in an Azure Machine Learning registere
# [Python](#tab/python)
-Use `params_override` to configure any folder in an Azure Machine Learning registered data store. Only registered data stores are supported as output paths. In this example we will use the default data store:
+Use `params_override` to configure any folder in an Azure Machine Learning registered data store. Only registered data stores are supported as output paths. In this example you use the default data store:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=get_data_store)]
-Once you identified the data store you want to use, configure the output as follows:
+Once you've identified the data store you want to use, configure the output as follows:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=start_batch_scoring_job_set_output)]
-# [Studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu.
Once you identified the data store you want to use, configure the output as foll
1. Select __Create job__.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
-
-1. On __Deployment__, select the deployment you want to execute.
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring." lightbox="media/how-to-use-batch-model-deployments/create-batch-job.png":::
-1. Select __Next__.
+1. For __Deployment__, select the deployment you want to execute.
-1. Check the option __Override deployment settings__.
+1. Select the option __Override deployment settings__.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
1. You can now configure __Output file name__ and some extra properties of the deployment execution. Only this execution will be affected.
-1. On __Select data source__, select the data input you want to use.
+1. Select __Next__.
+
+1. On the "Select data source" page, select the data input you want to use.
+
+1. Select __Next__.
-1. On __Configure output location__, check the option __Enable output configuration__.
+1. On the "Configure output location" page, select the option __Enable output configuration__.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/configure-output-location.png" alt-text="Screenshot of optionally configuring output location." lightbox="media/how-to-use-batch-model-deployments/configure-output-location.png":::
1. Configure the __Blob datastore__ where the outputs should be placed.
Once you identified the data store you want to use, configure the output as foll
    > You must use a unique output location. If the output file exists, the batch scoring job will fail.

> [!IMPORTANT]
-> As opposite as for inputs, only Azure Machine Learning data stores running on blob storage accounts are supported for outputs.
+> Unlike inputs, outputs can be stored only in Azure Machine Learning data stores that run on blob storage accounts.
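If the output location you want isn't registered yet, a minimal blob datastore definition sketch looks like the following; the datastore name, storage account, and container are placeholders:

```yaml
# Minimal sketch with placeholder names; adjust authentication to your environment.
$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
name: batch_output_datastore          # placeholder datastore name
type: azure_blob
description: Blob datastore used to collect batch scoring outputs.
account_name: mystorageaccount        # placeholder storage account name
container_name: batch-outputs         # placeholder container name
```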
-## Overwrite deployment configuration per each job
+## Overwrite deployment configuration for each job
-Some settings can be overwritten when invoke to make best use of the compute resources and to improve performance. The following settings can be configured in a per-job basis:
+When you invoke a batch endpoint, some settings can be overwritten to make best use of the compute resources and to improve performance. The following settings can be configured on a per-job basis:
-* Use __instance count__ to overwrite the number of instances to request from the compute cluster. For example, for larger volume of data inputs, you may want to use more instances to speed up the end to end batch scoring.
-* Use __mini-batch size__ to overwrite the number of files to include on each mini-batch. The number of mini batches is decided by total input file counts and mini_batch_size. Smaller mini_batch_size generates more mini batches. Mini batches can be run in parallel, but there might be extra scheduling and invocation overhead.
-* Other settings can be overwritten other settings including __max retries__, __timeout__, and __error threshold__. These settings might impact the end to end batch scoring time for different workloads.
+* __Instance count__: use this setting to overwrite the number of instances to request from the compute cluster. For example, for a larger volume of data inputs, you might want to use more instances to speed up the end-to-end batch scoring.
+* __Mini-batch size__: use this setting to overwrite the number of files to include in each mini-batch. The number of mini batches is determined by the total input file count and the mini-batch size. A smaller mini-batch size generates more mini batches. Mini batches can be run in parallel, but there might be extra scheduling and invocation overhead.
+* Other settings, such as __max retries__, __timeout__, and __error threshold__ can be overwritten. These settings might impact the end-to-end batch scoring time for different workloads.
# [Azure CLI](#tab/cli)
Some settings can be overwritten when invoke to make best use of the compute res
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=start_batch_scoring_job_overwrite)]
-# [Studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu.
Some settings can be overwritten when invoke to make best use of the compute res
1. Select __Create job__.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+1. For __Deployment__, select the deployment you want to execute.
+
+1. Select the option __Override deployment settings__.
-1. On __Deployment__, select the deployment you want to execute.
+1. Configure the job parameters. Only the current job execution will be affected by this configuration.
1. Select __Next__.
-1. Check the option __Override deployment settings__.
+1. On the "Select data source" page, select the data input you want to use.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+1. Select __Next__.
-1. Configure the job parameters. Only the current job execution will be affected by this configuration.
+1. On the "Configure output location" page, select the option __Enable output configuration__.
+
+1. Configure the __Blob datastore__ where the outputs should be placed.
-## Adding deployments to an endpoint
+## Add deployments to an endpoint
-Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments can't affect one to another.
+Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments don't affect one another.
-In this example, you'll learn how to add a second deployment __that solves the same MNIST problem but using a model built with Keras and TensorFlow__.
+In this example, you add a second deployment that uses a __model built with Keras and TensorFlow__ to solve the same MNIST problem.
-### Adding a second deployment
+### Add a second deployment
-1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. You'll also need to add the library `azureml-core` as it is required for batch deployments to work. The following environment definition has the required libraries to run a model with TensorFlow.
+1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. You also need to add the library `azureml-core`, as it's required for batch deployments to work. The following environment definition has the required libraries to run a model with TensorFlow.
# [Azure CLI](#tab/cli)
- The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see in the following lines in the deployment:
-
+ The environment definition is included in the deployment definition itself as an anonymous environment.
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/deployment.yml" range="12-15"::: # [Python](#tab/python)
- Let's get a reference to the environment:
+ Get a reference to the environment:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_environment_non_default)]
- # [Studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
1. Navigate to the __Environments__ tab on the side menu.
-
+ 1. Select the tab __Custom environments__ > __Create__.
-
+ 1. Enter the name of the environment, in this case `keras-batch-env`.
-
- 1. On __Select environment type__ select __Use existing docker image with conda__.
-
- 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.0`.
-
- 1. On __Customize__ section copy the content of the file `deployment-keras/environment/conda.yaml` included in the repository into the portal.
-
- 1. Select __Next__ and then on __Create__.
-
- 1. The environment is ready to be used.
+
+ 1. For __Select environment source__, select __Use existing docker image with optional conda file__.
+
+ 1. For __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
+
+ 1. Select **Next** to go to the "Customize" section.
+
+ 1. Copy the content of the file _deployment-keras/environment/conda.yaml_ from the GitHub repo into the portal (a hypothetical sketch of such a conda file follows these steps).
+
+ 1. Select __Next__ until you get to the "Review" page.
+
+ 1. Select __Create__ and wait until the environment is ready for use.
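    As a rough idea of what gets pasted into the customize step, a hypothetical conda file for a TensorFlow batch scoring environment might look like the following. This is not the exact content of _deployment-keras/environment/conda.yaml_; copy the real file from the repository for the correct package list and pinned versions.

    ```yaml
    # Hypothetical sketch only; the repository file is the source of truth for names and versions.
    name: keras-batch-env
    channels:
      - conda-forge
    dependencies:
      - python=3.9
      - pip
      - pip:
          - tensorflow        # framework the Keras model needs at scoring time
          - pandas            # commonly used by scoring scripts; adjust to your code
          - azureml-core      # required for batch deployments to work
    ```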
In this example, you'll learn how to add a second deployment __that solves the s
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_deployment_non_default)]
- # [Studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu.
In this example, you'll learn how to add a second deployment __that solves the s
1. Select __Add deployment__.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
- 1. On the model list, select the model `mnist` and select __Next__.
+ 1. Select __Next__ to go to the "Model" page.
+
+ 1. From the model list, select the model `mnist` and select __Next__.
1. On the deployment configuration page, give the deployment a name.
- 1. On __Output action__, ensure __Append row__ is selected.
+ 1. Clear the option __Make this new deployment the default for batch jobs__.
+
+ 1. For __Output action__, ensure __Append row__ is selected.
- 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+ 1. For __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
- 1. On __Mini batch size__, adjust the size of the files that will be included on each mini-batch. This will control the amount of data your scoring script receives per each batch.
+ 1. For __Mini batch size__, adjust the size of the files that will be included in each mini-batch. This will control the amount of data your scoring script receives for each batch.
- 1. On __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning), may require high values in this field.
+ 1. For __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
- 1. On __Max concurrency per instance__, configure the number of executors you want to have per each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization but it also increases the memory pressure on the compute instance. Tune this value altogether with __Mini batch size__.
- 1. Once done, select __Next__.
+ 1. For __Max concurrency per instance__, configure the number of executors you want to have for each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization but it also increases the memory pressure on the compute instance. Tune this value altogether with __Mini batch size__.
- 1. On environment, go to __Select scoring file and dependencies__ and select __Browse__.
+ 1. Select __Next__ to go to the "Code + environment" page.
- 1. Select the scoring script file on `deployment-keras/code/batch_driver.py`.
+ 1. For __Select a scoring script for inferencing__, browse to select the scoring script file *deployment-keras/code/batch_driver.py*.
- 1. On the section __Choose an environment__, select the environment you created a previous step.
+ 1. For __Select environment__, select the environment you created in a previous step.
1. Select __Next__.
- 1. On the section __Compute__, select the compute cluster you created in a previous step.
+ 1. On the __Compute__ page, select the compute cluster you created in a previous step.
- 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we'll use 2.
+ 1. For __Instance count__, enter the number of compute instances you want for the deployment. In this case, use 2.
1. Select __Next__.

+ 1. Create the deployment:

    # [Azure CLI](#tab/cli)
In this example, you'll learn how to add a second deployment __that solves the s
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_deployment_non_default" ::: > [!TIP]
- > The `--set-default` parameter is missing in this case. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later.
+ > The `--set-default` parameter is missing in this case. As a best practice for production scenarios, create a new deployment without setting it as default. Then verify it, and update the default deployment later.
# [Python](#tab/python)
- Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+ Using the `MLClient` created earlier, create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=create_deployment_non_default)]
- # [Studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
    In the wizard, select __Create__ to start the deployment process.

### Test a non-default batch deployment
-To test the new non-default deployment, you'll need to know the name of the deployment you want to run.
+To test the new non-default deployment, you need to know the name of the deployment you want to run.
# [Azure CLI](#tab/cli)

:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="test_deployment_non_default" :::
-Notice `--deployment-name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
+Notice `--deployment-name` is used to specify the deployment to execute. This parameter allows you to `invoke` a non-default deployment without updating the default deployment of the batch endpoint.
# [Python](#tab/python)

[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=test_deployment_non_default)]
-Notice `deployment_name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
+Notice `deployment_name` is used to specify the deployment to execute. This parameter allows you to `invoke` a non-default deployment without updating the default deployment of the batch endpoint.
-# [Studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu.
Notice `deployment_name` is used to specify the deployment we want to execute. T
1. Select __Create job__.
-1. On __Deployment__, select the deployment you want to execute. In this case, `mnist-keras`.
+1. For __Deployment__, select the deployment you want to execute. In this case, `mnist-keras`.
1. Complete the job creation wizard to get the job started.
Notice `deployment_name` is used to specify the deployment we want to execute. T
### Update the default batch deployment
-Although you can invoke a specific deployment inside of an endpoint, you'll usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+Although you can invoke a specific deployment inside an endpoint, you'll typically want to invoke the endpoint itself and let the endpoint decide which deployment to use: the default deployment. You can change the default deployment (and consequently, change the model serving the deployment) without changing your contract with the user invoking the endpoint. Use the following code to update the default deployment:
# [Azure CLI](#tab/cli)
Although you can invoke a specific deployment inside of an endpoint, you'll usua
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=update_default_deployment)]
-# [Studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu.
Although you can invoke a specific deployment inside of an endpoint, you'll usua
1. Select __Update default deployment__.
- :::image type="content" source="./media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
+ :::image type="content" source="./media/how-to-use-batch-model-deployments/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
-1. On __Select default deployment__, select the name of the deployment you want to be the default one.
+1. For __Select default deployment__, select the name of the deployment you want to set as the default.
1. Select __Update__.
Although you can invoke a specific deployment inside of an endpoint, you'll usua
# [Azure CLI](#tab/cli)
-If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion.
+If you won't be using the old batch deployment, delete it by running the following code. `--yes` is used to confirm the deletion.
::: code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="delete_deployment" :::
-Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
+Run the following code to delete the batch endpoint and all its underlying deployments. Batch scoring jobs won't be deleted.
::: code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="delete_endpoint" ::: # [Python](#tab/python)
-If you aren't going to use the old batch deployment, you should delete it by running the following code.
+If you won't be using the old batch deployment, delete it by running the following code.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=delete_deployment)]
-Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
+Run the following code to delete the batch endpoint and all its underlying deployments. Batch scoring jobs won't be deleted.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=delete_endpoint)]
-# [Studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu.
Run the following code to delete the batch endpoint and all the underlying deplo
-## Next steps
+## Related content
* [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
* [Authentication on batch endpoints](how-to-authenticate-batch-endpoint.md).
machine-learning How To Bulk Test Evaluate Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-bulk-test-evaluate-flow.md
If an evaluation method uses Large Language Models (LLMs) to measure the perform
> [!NOTE]
> Some evaluation methods require GPT-4 or GPT-3 to run. You must provide valid connections for these evaluation methods before using them.
+> Some evaluation processes may use up a lot of tokens, so it's recommended to use a model that can support >=16k tokens.
After you finish the input mapping, select **"Next"** to review your settings and select **"Submit"** to start the batch run with evaluation.
You can select **Evaluate** to start another round of evaluation.
After setting up the configuration, you can select **"Submit"** for this new round of evaluation. After submission, you'll be able to see a new record in the prompt flow run list.
-After the evaluation run completed, similarly, you can check the result of evaluation in the **"Outputs"** tab of the batch run detail panel. You need select the new evaluation run to view its result.
+After the evaluation run completes, you can similarly check the result of the evaluation in the **"Outputs"** tab of the batch run detail panel. You need to select the new evaluation run to view its result.
:::image type="content" source="./media/how-to-bulk-test-evaluate-flow/batch-run-detail-output-new-evaluation.png" alt-text="Screenshot of batch run detail page on the output tab with checking the new evaluation output." lightbox = "./media/how-to-bulk-test-evaluate-flow/batch-run-detail-output-new-evaluation.png":::
System message, sometimes referred to as a metaprompt or [system prompt](../../c
## Further reading: Guidance for creating Golden Datasets used for Copilot quality assurance
-The creation of copilot that use Large Language Models (LLMs) typically involves grounding the model in reality using source datasets. However, to ensure that the LLMs provide the most accurate and useful responses to customer queries, a "Golden Dataset" is necessary.
+The creation of a copilot that uses Large Language Models (LLMs) typically involves grounding the model in reality using source datasets. However, to ensure that the LLMs provide the most accurate and useful responses to customer queries, a "Golden Dataset" is necessary.
A Golden Dataset is a collection of realistic customer questions and expertly crafted answers. It serves as a Quality Assurance tool for LLMs used by your copilot. Golden Datasets are not used to train an LLM or inject context into an LLM prompt. Instead, they are utilized to assess the quality of the answers generated by the LLM.
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
You can also customize the environment that you use to run this flow by adding p
> You can change the location and even the file name of `requirements.txt`, but be sure to also change it in the `flow.dag.yaml` file in the flow folder.
>
> Don't pin the version of `promptflow` and `promptflow-tools` in `requirements.txt`, because we already include them in the runtime base image.
+>
+> `requirements.txt` doesn't support local wheel files. You need to build them into your custom image and update the customized base image in `flow.dag.yaml` (see the sketch after this note). Learn more about [how to build a custom base image](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
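As a minimal sketch, the `environment` section of `flow.dag.yaml` pointing at such a custom image might look like the following; the registry and image name are placeholders:

```yaml
# Fragment of flow.dag.yaml; placeholder image, assuming the image already contains your local wheels.
environment:
  image: myregistry.azurecr.io/promptflow/custom-runtime:latest   # placeholder custom base image
  python_requirements_txt: requirements.txt
```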
#### Add packages in a private feed in Azure DevOps
machine-learning How To Deploy For Real Time Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference.md
Select **Metrics** tab in the left navigation. Select **promptflow standard metr
## Troubleshoot endpoints deployed from prompt flow
+### Lack authorization to perform action "Microsoft.MachineLearningService/workspaces/datastores/read"
+
+If your flow contains the Index Look Up tool, after deploying the flow, the endpoint needs to access the workspace datastore to read the MLIndex yaml file or the FAISS folder containing chunks and embeddings. Hence, you need to manually grant the endpoint identity permission to do so.
+
+You can either grant the endpoint identity the **AzureML Data Scientist** role on the workspace scope, or a custom role that contains the "MachineLearningService/workspace/datastore/reader" action.
+ ### MissingDriverProgram Error If you deploy your flow with custom environment and encounter the following error, it might be because you didn't specify the `inference_config` in your custom environment definition.
If you deploy your flow with custom environment and encounter the following erro
There are 2 ways to fix this error.
-1. You can fix this error by adding `inference_config` in your custom environment definition. Learn more about [how to use customized environment](#use-customized-environment).
+- (Recommended) You can find the container image uri in your custom environment detail page, and set it as the flow base image in the flow.dag.yaml file. When you deploy the flow in UI, you just select **Use environment of current flow definition**, and the backend service will create the customized environment based on this base image and `requirement.txt` for your deployment. Learn more about [the environment specified in the flow definition](#use-environment-of-current-flow-definition).
+
+ :::image type="content" source="./media/how-to-deploy-for-real-time-inference/custom-environment-image-uri.png" alt-text="Screenshot of custom environment detail page. " lightbox = "./media/how-to-deploy-for-real-time-inference/custom-environment-image-uri.png":::
+
+ :::image type="content" source="./media/how-to-deploy-for-real-time-inference/flow-environment-image.png" alt-text="Screenshot of specifying base image in raw yaml file of the flow. " lightbox = "./media/how-to-deploy-for-real-time-inference/flow-environment-image.png":::
++
+- You can fix this error by adding `inference_config` in your custom environment definition. Learn more about [how to use customized environment](#use-customized-environment).
Following is an example of customized environment definition.
inference_config:
path: /score ```
-2. You can find the container image uri in your custom environment detail page, and set it as the flow base image in the flow.dag.yaml file. When you deploy the flow in UI, you just select **Use environment of current flow definition**, and the backend service will create the customized environment based on this base image and `requirement.txt` for your deployment. Learn more about [the environment specified in the flow definition](#use-environment-of-current-flow-definition).
-
- :::image type="content" source="./media/how-to-deploy-for-real-time-inference/custom-environment-image-uri.png" alt-text="Screenshot of custom environment detail page. " lightbox = "./media/how-to-deploy-for-real-time-inference/custom-environment-image-uri.png":::
-
- :::image type="content" source="./media/how-to-deploy-for-real-time-inference/flow-environment-image.png" alt-text="Screenshot of specifying base image in raw yaml file of the flow. " lightbox = "./media/how-to-deploy-for-real-time-inference/flow-environment-image.png":::
- ### Model response taking too long Sometimes, you might notice that the deployment is taking too long to respond. There are several potential factors for this to occur.
If you aren't going use the endpoint after completing this tutorial, you should
- [Iterate and optimize your flow by tuning prompts using variants](how-to-tune-prompts-using-variants.md) - [View costs for an Azure Machine Learning managed online endpoint](../how-to-view-online-endpoints-costs.md)
+- [Troubleshoot prompt flow deployments.](how-to-troubleshoot-prompt-flow-deployment.md)
machine-learning How To Deploy To Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-to-code.md
request_settings:
- Learn more about [managed online endpoint schema](../reference-yaml-endpoint-online.md) and [managed online deployment schema](../reference-yaml-deployment-managed-online.md).
- Learn more about how to [test the endpoint in UI](./how-to-deploy-for-real-time-inference.md#test-the-endpoint-with-sample-data) and [monitor the endpoint](./how-to-deploy-for-real-time-inference.md#view-managed-online-endpoints-common-metrics-using-azure-monitor-optional).
- Learn more about how to [troubleshoot managed online endpoints](../how-to-troubleshoot-online-endpoints.md).
+- [Troubleshoot prompt flow deployments.](how-to-troubleshoot-prompt-flow-deployment.md)
- Once you improve your flow, and would like to deploy the improved version with safe rollout strategy, see [Safe rollout for online endpoints](../how-to-safely-rollout-online-endpoints.md).
- Learn more about [deploy flows to other platforms, such as a local development service, Docker container, Azure APP service, etc.](https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/index.html)
machine-learning How To Troubleshoot Prompt Flow Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-troubleshoot-prompt-flow-deployment.md
+
+ Title: Troubleshoot prompt flow deployments
+
+description: This article provides instructions on how to troubleshoot your prompt flow deployments.
+++ Last updated : 04/01/2024+++++
+# Troubleshoot prompt flow deployments
+
+This article provides instructions on how to troubleshoot your deployments from prompt flow.
+
+## Lack authorization to perform action "Microsoft.MachineLearningService/workspaces/datastores/read"
+
+If your flow contains the Index Look Up tool, after deploying the flow, the endpoint needs to access the workspace datastore to read the MLIndex yaml file or the FAISS folder containing chunks and embeddings. Hence, you need to manually grant the endpoint identity permission to do so.
+
+You can either grant the endpoint identity the **AzureML Data Scientist** role on the workspace scope, or a custom role that contains the "MachineLearningService/workspace/datastore/reader" action.
+
+## Upstream request timeout issue when consuming the endpoint
+
+If you use the CLI or SDK to deploy the flow, you may encounter a timeout error. By default, `request_timeout_ms` is 5000. You can specify a maximum of 5 minutes, which is 300,000 ms. The following example shows how to specify the request timeout in the deployment yaml file. To learn more, see [deployment schema](../reference-yaml-deployment-managed-online.md).
+
+```yaml
+request_settings:
+ request_timeout_ms: 300000
+```
+
+## OpenAI API hits authentication error
+
+If you regenerate your Azure OpenAI key and manually update the connection used in prompt flow, you may encounter errors like "Unauthorized. Access token is missing, invalid, audience is incorrect or have expired." when invoking an existing endpoint that was created before the key was regenerated.
+
+This happens because the connections used in the endpoints/deployments aren't automatically updated. Any change to keys or secrets in deployments should be made by manual update, which is intended to avoid impacting online production deployments through unintentional offline operations.
+
+- If the endpoint was deployed in the studio UI, you can just redeploy the flow to the existing endpoint using the same deployment name.
+- If the endpoint was deployed using SDK or CLI, you need to make a small modification to the deployment definition, such as adding a dummy environment variable (see the sketch that follows), and then use `az ml online-deployment update` to update your deployment.
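As a minimal sketch, a hypothetical dummy variable added to the existing deployment YAML is enough to force an update; the variable name and value are arbitrary placeholders:

```yaml
# Fragment of the managed online deployment YAML; placeholder variable used only to trigger an update.
environment_variables:
  DUMMY_FORCE_REFRESH: "1"
```

After editing the file, run `az ml online-deployment update` with the modified definition so that the deployment picks up the refreshed connection secrets.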
++
+## Vulnerability issues in prompt flow deployments
+
+For prompt flow runtime-related vulnerabilities, the following approaches can help with mitigation:
+
+- Update the dependency packages in your requirements.txt in your flow folder.
+- If you're using a customized base image for your flow, you need to update the prompt flow runtime to the latest version and rebuild your base image, then redeploy the flow.
+
+For any other vulnerabilities of managed online deployments, Azure Machine Learning fixes the issues on a monthly basis.
+
+## "MissingDriverProgram Error" or "Could not find driver program in the request"
+
+If you deploy your flow and encounter the following error, it might be related to the deployment environment.
+
+```text
+'error':
+{
+ 'code': 'BadRequest',
+ 'message': 'The request is invalid.',
+ 'details':
+ {'code': 'MissingDriverProgram',
+ 'message': 'Could not find driver program in the request.',
+ 'details': [],
+ 'additionalInfo': []
+ }
+}
+```
+
+```text
+Could not find driver program in the request
+```
+
+There are two ways to fix this error.
+
+- (Recommended) You can find the container image uri in your custom environment detail page, and set it as the flow base image in the flow.dag.yaml file. When you deploy the flow in UI, you just select **Use environment of current flow definition**, and the backend service will create the customized environment based on this base image and `requirement.txt` for your deployment. Learn more about [the environment specified in the flow definition](how-to-deploy-for-real-time-inference.md#use-environment-of-current-flow-definition).
+
+ :::image type="content" source="./media/how-to-deploy-for-real-time-inference/custom-environment-image-uri.png" alt-text="Screenshot of custom environment detail page. " lightbox = "./media/how-to-deploy-for-real-time-inference/custom-environment-image-uri.png":::
+
+ :::image type="content" source="./media/how-to-deploy-for-real-time-inference/flow-environment-image.png" alt-text="Screenshot of specifying base image in raw yaml file of the flow. " lightbox = "./media/how-to-deploy-for-real-time-inference/flow-environment-image.png":::
+
+- You can fix this error by adding `inference_config` in your custom environment definition.
+
+ Following is an example of customized environment definition.
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
+name: pf-customized-test
+build:
+ path: ./image_build
+ dockerfile_path: Dockerfile
+description: promptflow customized runtime
+inference_config:
+ liveness_route:
+ port: 8080
+ path: /health
+ readiness_route:
+ port: 8080
+ path: /health
+ scoring_route:
+ port: 8080
+ path: /score
+```
+
+## Model response taking too long
+
+Sometimes, you might notice that the deployment is taking too long to respond. There are several potential factors for this to occur.
+
+- The model used in the flow isn't powerful enough (example: use GPT 3.5 instead of text-ada)
+- Index query isn't optimized and taking too long
+- Flow has many steps to process
+
+Consider optimizing the endpoint with the above considerations to improve the performance of the model.
+
+## Unable to fetch deployment schema
+
+After you deploy the endpoint, if the **Test tab** in the endpoint detail page shows **Unable to fetch deployment schema** when you try to test the endpoint, you can try the following two methods to mitigate this issue:
++
+- Make sure you have granted the correct permission to the endpoint identity. Learn more about [how to grant permission to the endpoint identity](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint).
+- It might be because you ran your flow in an old runtime version and then deployed the flow, so the deployment used the environment of that old runtime version as well. To update the runtime, follow [Update a runtime on the UI](./how-to-create-manage-runtime.md#update-a-runtime-on-the-ui), rerun the flow in the latest runtime, and then deploy the flow again.
+
+## Access denied to list workspace secret
+
+If you encounter an error like "Access denied to list workspace secret", check whether you have granted the correct permission to the endpoint identity. Learn more about [how to grant permission to the endpoint identity](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint).
+
+## Next steps
+
+- Learn more about [managed online endpoint schema](../reference-yaml-endpoint-online.md) and [managed online deployment schema](../reference-yaml-deployment-managed-online.md).
+- Learn more about how to [troubleshoot managed online endpoints](../how-to-troubleshoot-online-endpoints.md).
migrate Tutorial Discover Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-spring-boot.md
After you have performed server discovery and software inventory using the Azure
- | -
**Supported Linux OS** | Ubuntu 20.04, RHEL 9
**Hardware configuration required** | 8 GB RAM, with 30 GB storage, 4 Core CPU
- **Network Requirements** | Access to the following endpoints: <br/><br/>*.docker.io <br/></br>*.docker.com <br/><br/>api.snapcraft.io <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure Arc-enabled Kubernetes network requirements](https://learn.microsoft.com/azure/azure-arc/kubernetes/network-requirements?tabs=azure-cloud) <br/><br/>[Azure CLI endpoints for proxy bypass](https://learn.microsoft.com/cli/azure/azure-cli-endpoints?tabs=azure-cloud)
+ **Network Requirements** | Access to the following endpoints: <br/><br/>*.docker.io <br/></br>*.docker.com <br/><br/>api.snapcraft.io <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/network-requirements.md) <br/><br/>[Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
After copying the script, you can go to your Linux server, save the script as *Deploy.sh* on the server.
After copying the script, you can go to your Linux server, save the script as *D
- | -
**Supported Linux OS** | Ubuntu 20.04, RHEL 9
**Hardware configuration required** | 6 GB RAM, with 30 GB storage on root volume, 4 Core CPU
- **Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure CLI endpoints for proxy bypass](https://learn.microsoft.com/cli/azure/azure-cli-endpoints?tabs=azure-cloud)
+ **Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
5. After copying the script, go to your Linux server, save the script as *Deploy.sh* on the server.
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
Previously updated : 03/18/2024 Last updated : 04/02/2024
Using Azure DNS, you can host and resolve public domains, manage DNS resolution
### <a name="bastion"></a>Azure Bastion
-[Azure Bastion](../../bastion/bastion-overview.md) is a service that you can deploy to let you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you deploy inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software.
+[Azure Bastion](../../bastion/bastion-overview.md) is a service that you can deploy to let you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you deploy inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software. Azure Bastion offers several SKUs (tiers); the tier you select determines which features are available. For more information, see [About Bastion configuration settings](../../bastion/configuration-settings.md).
:::image type="content" source="../../bastion/media/bastion-overview/architecture.png" alt-text="Diagram showing Azure Bastion architecture.":::
operational-excellence Overview Relocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md
The following tables provide links to each Azure service relocation document. Th
| Product | Relocation with data | Relocation without data | Resource Mover | | | | | |
+[Azure Key Vault](relocation-key-vault.md)| ✅ | ✅| ❌ |
[Azure Event Hubs](relocation-event-hub.md)| ❌ | ✅| ❌ | [Azure Event Hubs Cluster](relocation-event-hub-cluster.md)| ❌ | ✅| ❌ | [Azure Key Vault](./relocation-key-vault.md)| ✅ | ✅| ❌ |
+[Azure Site Recovery (Recovery Services vaults)](../site-recovery/move-vaults-across-regions.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
[Azure Virtual Network](./relocation-virtual-network.md)| ❌ | ✅| ✅ | [Azure Virtual Network - Network Security Groups](./relocation-virtual-network-nsg.md)| ❌ | ✅| ✅ |
The following tables provide links to each Azure service relocation document. Th
| Product | Relocation with data | Relocation without data | Resource Mover | | | | | |
-[Azure Monitor - Log Analytics](./relocation-log-analytics.md)| ❌ | ✅| ❌ |
+[Azure API Management](../api-management/api-management-howto-migrate.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure App Service](../app-service/manage-move-across-regions.md?toc=/azure/operational-excellence/toc.json)|❌ | ✅| ❌ |
+[Azure Backup (Recovery Services vault)](../backup/azure-backup-move-vaults-across-regions.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure Batch](../batch/account-move.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
+[Azure Cache for Redis](../azure-cache-for-redis/cache-moving-resources.md?toc=/azure/operational-excellence/toc.json)|❌ | ✅| ❌ |
+[Azure Container Registry](../container-registry/manual-regional-move.md)|✅ | ✅| ❌ |
+[Azure Cosmos DB](../cosmos-db/how-to-move-regions.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
+[Azure Database for MariaDB Server](../mariadb/howto-move-regions-portal.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
+[Azure Database for MySQL Server](../mysql/howto-move-regions-portal.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
[Azure Database for PostgreSQL](./relocation-postgresql-flexible-server.md)| ✅ | ✅| ❌ |
+[Azure Functions](../azure-functions/functions-move-across-regions.md?toc=/azure/operational-excellence/toc.json)|❌ | ✅| ❌ |
+[Azure Logic Apps](../logic-apps/move-logic-app-resources.md?toc=/azure/operational-excellence/toc.json)|❌ | ✅| ❌ |
+[Azure Monitor - Log Analytics](./relocation-log-analytics.md)| ❌ | ✅| ❌ |
[Azure Private Link Service](./relocation-private-link.md) | ❌ | ✅| ❌ | [Azure Storage Account](relocation-storage-account.md)| ✅ | ✅| ❌ | [Managed identities for Azure resources](relocation-storage-account.md)| ❌ | ✅| ❌ |
+[Azure Stream Analytics - Stream Analytics jobs](../stream-analytics/copy-job.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure Stream Analytics - Stream Analytics cluster](../stream-analytics/move-cluster.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
### ![An icon that signifies this service is strategic.](./media/relocation/icon-strategic.svg) Strategic services
The following tables provide links to each Azure service relocation document. Th
| Product | Relocation with data | Relocation without data | Resource Mover | | | | | | [Azure Automation](./relocation-automation.md)| ✅ | ✅| ❌ |-
+[Azure IoT Hub](/azure/iot-hub/iot-hub-how-to-clone?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Power BI](/power-bi/admin/service-admin-region-move?toc=/azure/operational-excellence/toc.json)| ❌ | ✅| ❌ |
## Additional information
operational-excellence Relocation Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-event-hub.md
# Relocate Azure Event Hubs to another region - This article shows you how to copy an Event Hubs namespace and configuration settings to another region. If you have other resources in the Azure resource group that contains the Event Hubs namespace, you may want to export the template at the resource group level so that all related resources can be moved to the new region in one step. To learn how to export a **resource group** to the template, see [Move resources across regions (from resource group)](/azure/resource-mover/move-region-within-resource-group).
If you have other resources in the Azure resource group that contains the Event
- If the Event Hubs namespace is in an **Event Hubs cluster**, [move the dedicated cluster](../event-hubs/move-cluster-across-regions.md) to the **target region** before you go through steps in this article. You can also use the [quickstart template on GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventhub/eventhubs-create-cluster-namespace-eventhub/) to create an Event Hubs cluster. In the template, remove the namespace portion of the JSON to create only the cluster. -- Identify all resources dependencies. Depending on how you've deployed Event Hub, the following services *may* need deployment in the target region:
+- Identify all resource dependencies. Depending on how you've deployed Event Hubs, the following services *may* need deployment in the target region:
- [Public IP](/azure/virtual-network/move-across-regions-publicip-portal)
- - [Azure Private Link Service](./relocation-private-link.md)
- [Virtual Network](./relocation-virtual-network.md)
- - Event Hub Namespace
- - [Event Hub Cluster](./relocation-event-hub-cluster.md)
+ - Event Hubs Namespace
+ - [Event Hubs Cluster](./relocation-event-hub-cluster.md)
- [Storage Account](./relocation-storage-account.md) >[!TIP] >When Capture is enabled, you can either relocate a Storage Account from the source or use an existing one in the target region. -- Identify all dependent resources. Event Hub is a messaging system that lets applications publish and subscribe for messages. Consider whether or not your application at target requires messaging support for the same set of dependent services that it had at the source target.
+- Identify all dependent resources. Event Hubs is a messaging system that lets applications publish and subscribe to messages. Consider whether your application in the target region requires messaging support for the same set of dependent services that it had in the source region.
++
+## Considerations for Service Endpoints
+
+The virtual network service endpoints for Azure Event Hubs restrict access to a specified virtual network. The endpoints can also restrict access to a list of IPv4 (internet protocol version 4) address ranges. Any user connecting to the Event Hubs namespace from outside those sources is denied access. If service endpoints were configured in the source region for the Event Hubs resource, the same configuration is needed in the target region.
+
+For a successful recreation of the Event Hubs namespace in the target region, the VNet and subnet must be created beforehand. If these two resources are moved with the Azure Resource Mover tool, the service endpoints aren't configured automatically, so they need to be configured manually. You can do so through the [Azure portal](/azure/key-vault/general/quick-create-portal), the [Azure CLI](/azure/key-vault/general/quick-create-cli), or [Azure PowerShell](/azure/key-vault/general/quick-create-powershell).
+
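+For example, the following sketch configures the service endpoint on the target subnet and then adds the corresponding virtual network rule to the namespace in the target region. The resource names are placeholders, and the `network-rule` subgroup name can vary between Azure CLI versions:
+
+```azurecli
+# Enable the Event Hubs service endpoint on the target subnet (placeholder names)
+az network vnet subnet update \
+  --resource-group <target-resource-group> \
+  --vnet-name <target-vnet> \
+  --name <target-subnet> \
+  --service-endpoints Microsoft.EventHub
+
+# Allow that subnet on the target namespace
+az eventhubs namespace network-rule add \
+  --resource-group <target-resource-group> \
+  --namespace-name <target-namespace> \
+  --vnet-name <target-vnet> \
+  --subnet <target-subnet>
+```
+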
+## Considerations for Private Endpoint
+
+Azure Private Link provides private connectivity from a virtual network to [Azure platform as a service (PaaS), customer-owned, or Microsoft partner services](/azure/private-link/private-endpoint-overview). Private Link simplifies the network architecture and secures the connection between endpoints in Azure by eliminating data exposure to the public internet.
+
+For a successful recreation of the Event Hubs in the target region, the VNet and Subnet must be created before the actual recreation occurs.
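+
+As a minimal sketch, once the namespace, VNet, and subnet exist in the target region, a private endpoint can be recreated at the command line. All names are placeholders, and `namespace` is the group ID commonly used for Event Hubs private endpoints:
+
+```azurecli
+# Create a private endpoint for the target namespace (placeholder names)
+az network private-endpoint create \
+  --resource-group <target-resource-group> \
+  --name <private-endpoint-name> \
+  --location <target-region> \
+  --vnet-name <target-vnet> \
+  --subnet <target-subnet> \
+  --private-connection-resource-id $(az eventhubs namespace show \
+      --resource-group <target-resource-group> --name <target-namespace> --query id -o tsv) \
+  --group-id namespace \
+  --connection-name <connection-name>
+```
+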
## Prepare
To get started, export a Resource Manager template. This template contains setti
This zip file contains the .json files that include the template and scripts to deploy the template.
+## Modify the template
+
+Modify the template by changing the Event Hubs namespace name and region.
+
+# [portal](#tab/azure-portal)
++++
+1. Select **Template deployment**.
+
+1. In the Azure portal, select **Create**.
+
+1. Select **Build your own template in the editor**.
+
+1. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
+
+1. In the **template.json** file, name the Event Hubs namespace by setting the default value of the namespace name. This example sets the default value of the Event Hubs namespace name to `namespace-name`.
+
+ ```json
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "namespaces_name": {
+ "defaultValue": "namespace-name",
+ "type": "String"
+ },
+ },
+ ```
+1. Edit the **location** property in the **template.json** file to the target region. This example sets the target region to `centralus`.
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.KeyVault/vaults",
+ "apiVersion": "2023-07-01",
+ "name": "[parameters('vaults_name')]",
+ "location": "centralus",
+
+ },
+
+ ]
++
+ "resources": [
+ {
+ "type": "Microsoft.EventHub/namespaces",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[parameters('namespaces_name')]",
+ "location": "centralus",
+
+ },
+ {
+ "type": "Microsoft.EventHub/namespaces/authorizationrules",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[concat(parameters('namespaces_name'), '/RootManageSharedAccessKey')]",
+ "location": "centralus",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_name'))]"
+ ],
+ "properties": {
+ "rights": [
+ "Listen",
+ "Manage",
+ "Send"
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.EventHub/namespaces/networkrulesets",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[concat(parameters('namespaces_name'), '/default')]",
+ "location": "centralus",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_name'))]"
+ ],
+ "properties": {
+ "publicNetworkAccess": "Enabled",
+ "defaultAction": "Deny",
+ "virtualNetworkRules": [
+ {
+ "subnet": {
+ "id": "[concat(parameters('virtualNetworks_vnet_akv_externalid'), '/subnets/default')]"
+ },
+ "ignoreMissingVnetServiceEndpoint": false
+ }
+ ],
+ "ipRules": [],
+ "trustedServiceAccessEnabled": false
+ }
+ },
+ {
+ "type": "Microsoft.EventHub/namespaces/privateEndpointConnections",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[concat(parameters('namespaces_peterheesbus_name'), '/81263915-15d5-4f14-8d65-25866d745a66')]",
+ "location": "centralus",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_peterheesbus_name'))]"
+ ],
+ "properties": {
+ "provisioningState": "Succeeded",
+ "privateEndpoint": {
+ "id": "[parameters('privateEndpoints_pvs_eventhub_externalid')]"
+ },
+ "privateLinkServiceConnectionState": {
+ "status": "Approved",
+ "description": "Auto-Approved"
+ }
+ }
+ }
+ ```
+
+ To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, **Central US** = **centralus**.
+
+1. Remove resources of type private endpoint in the template.
+
+ ```json
+ {
+ "type": "Microsoft.EventHub/namespaces/privateEndpointConnections",
+
+ }
+ ```
+
+1. If you configured a service endpoint for your Event Hubs namespace, add the rule for the target subnet in the `networkrulesets` section, under `virtualNetworkRules`. Ensure that the `ignoreMissingVnetServiceEndpoint` flag is set to `false`, so that the deployment fails if the service endpoint isn't configured in the target region.
+
+    *parameter.json*
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+
+ "target_vnet_externalid": {
+ "value": "virtualnetwork-externalid"
+ },
+ "target_subnet_name": {
+ "value": "subnet-name"
+ }
+ }
+ }
+ ```
+
+    *template.json*
+ ```json
+ {
+ "type": "Microsoft.EventHub/namespaces/networkrulesets",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[concat(parameters('namespaces_name'), '/default')]",
+ "location": "centralus",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_name'))]"
+ ],
+ "properties": {
+ "publicNetworkAccess": "Enabled",
+ "defaultAction": "Deny",
+ "virtualNetworkRules": [
+ {
+ "subnet": {
+            "id": "[concat(parameters('target_vnet_externalid'), '/subnets/', parameters('target_subnet_name'))]"
+ },
+ "ignoreMissingVnetServiceEndpoint": false
+ }
+ ],
+ "ipRules": [],
+ "trustedServiceAccessEnabled": false
+ }
+ },
+
+ ```
+1. Select **Save** to save the template.
+
+# [PowerShell](#tab/azure-powershell)
++
+1. In the **template.json** file, name the Event Hubs namespace by setting the default value of the namespace name. This example sets the default value of the Event Hubs namespace name to `namespace-name`.
+
+ ```json
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "namespaces_name": {
+ "defaultValue": "namespace-name",
+ "type": "String"
+ },
+
+ },
+ ```
+
+2. Edit the **location** property in the **template.json** file to the target region. This example sets the target region to `centralus`.
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.KeyVault/vaults",
+ "apiVersion": "2023-07-01",
+ "name": "[parameters('vaults_name')]",
+ "location": "centralus",
+
+ },
+
+ ]
++
+ "resources": [
+ {
+ "type": "Microsoft.EventHub/namespaces",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[parameters('namespaces_name')]",
+ "location": "centralus",
+
+ },
+ {
+ "type": "Microsoft.EventHub/namespaces/authorizationrules",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[concat(parameters('namespaces_name'), '/RootManageSharedAccessKey')]",
+ "location": "centralus",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_name'))]"
+ ],
+ "properties": {
+ "rights": [
+ "Listen",
+ "Manage",
+ "Send"
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.EventHub/namespaces/networkrulesets",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[concat(parameters('namespaces_name'), '/default')]",
+ "location": "centralus",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_name'))]"
+ ],
+ "properties": {
+ "publicNetworkAccess": "Enabled",
+ "defaultAction": "Deny",
+ "virtualNetworkRules": [
+ {
+ "subnet": {
+ "id": "[concat(parameters('virtualNetworks_vnet_akv_externalid'), '/subnets/default')]"
+ },
+ "ignoreMissingVnetServiceEndpoint": false
+ }
+ ],
+ "ipRules": [],
+ "trustedServiceAccessEnabled": false
+ }
+ },
+ {
+ "type": "Microsoft.EventHub/namespaces/privateEndpointConnections",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[concat(parameters('namespaces_peterheesbus_name'), '/81263915-15d5-4f14-8d65-25866d745a66')]",
+ "location": "centralus",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_peterheesbus_name'))]"
+ ],
+ "properties": {
+ "provisioningState": "Succeeded",
+ "privateEndpoint": {
+ "id": "[parameters('privateEndpoints_pvs_eventhub_externalid')]"
+ },
+ "privateLinkServiceConnectionState": {
+ "status": "Approved",
+ "description": "Auto-Approved"
+ }
+ }
+ }
+ ```
+
+ You can obtain region codes by running the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) command.
+
+ ```azurepowershell-interactive
+ Get-AzLocation | format-table
+ ```
+
+3. Remove resources of type private endpoint in the template.
+
+ ```json
+ {
+ "type": "Microsoft.EventHub/namespaces/privateEndpointConnections",
+
+ }
+ ```
+
+4. If you configured a service endpoint for your Event Hubs namespace, add the rule for the target subnet in the `networkrulesets` section, under `virtualNetworkRules`. Ensure that the `ignoreMissingVnetServiceEndpoint` flag is set to `false`, so that the deployment fails if the service endpoint isn't configured in the target region.
+
+    *parameter.json*
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ ...
+ "target_vnet_externalid": {
+ "value": "virtualnetwork-externalid"
+ },
+ "target_subnet_name": {
+ "value": "subnet-name"
+ }
+ }
+ }
+ ```
+
+    *template.json*
+
+ ```json
+ {
+ "type": "Microsoft.EventHub/namespaces/networkrulesets",
+ "apiVersion": "2023-01-01-preview",
+ "name": "[concat(parameters('namespaces_name'), '/default')]",
+ "location": "centralus",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_name'))]"
+ ],
+ "properties": {
+ "publicNetworkAccess": "Enabled",
+ "defaultAction": "Deny",
+ "virtualNetworkRules": [
+ {
+ "subnet": {
+            "id": "[concat(parameters('target_vnet_externalid'), '/subnets/', parameters('target_subnet_name'))]"
+ },
+ "ignoreMissingVnetServiceEndpoint": false
+ }
+ ],
+ "ipRules": [],
+ "trustedServiceAccessEnabled": false
+ }
+ },
+ ```
+
+1. Select **Save** to save the template.
++ ## Redeploy
-Deploy the template to create an Event Hubs namespace in the target region.
+1. In the Azure portal, select **Create a resource**.
+1. In **Search the Marketplace**, type **template deployment**, and select **Template deployment (deploy using custom templates)**.
+
+1. Select **Build your own template in the editor**.
+
+1. Select **Load file**, and then follow the instructions to load the **template.json** file that you modified in the last section.
+
+1. On the **Custom deployment** page, follow these steps:
-1. In the Azure portal, select **Create a resource**.
-2. In **Search the Marketplace**, type **template deployment**, and select **Template deployment (deploy using custom templates)**.
-5. Select **Build your own template in the editor**.
-6. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
-1. Update the value of the `location` property to point to the new region. To obtain location codes, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, for example, `West US` is equal to `westus`.
-1. Select **Save** to save the template.
-1. On the **Custom deployment** page, follow these steps:
1. Select an Azure **subscription**. 2. Select an existing **resource group** or create one. If the source namespace was in an Event Hubs cluster, select the resource group that contains cluster in the target region. 3. Select the target **location** or region. If you selected an existing resource group, this setting is read-only.
Deploy the template to create an Event Hubs namespace in the target region.
``` /subscriptions/<AZURE SUBSCRIPTION ID>/resourceGroups/<CLUSTER'S RESOURCE GROUP>/providers/Microsoft.EventHub/clusters/<CLUSTER NAME> ```
- 3. If event hub in your namespace uses a Storage account for capturing events, specify the resource group name and the storage account for `StorageAccounts_<original storage account name>_external` field.
+    3. If an event hub in your namespace uses a storage account for capturing events, specify the resource group name and the storage account in the `StorageAccounts_<original storage account name>_external` field.
``` /subscriptions/0000000000-0000-0000-0000-0000000000000/resourceGroups/<STORAGE'S RESOURCE GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE ACCOUNT NAME> ``` 5. Select **Review + create** at the bottom of the page.
- 1. On the **Review + create** page, review settings, and then select **Create**.
+ 6. On the **Review + create** page, review settings, and then select **Create**.
+
+1. Reconfigure network settings (such as private endpoints) on the new Event Hubs namespace.
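+
+If you prefer to script the redeployment instead of using the **Custom deployment** page, the edited template can also be deployed from the command line. This is a sketch with placeholder resource group and file names:
+
+```azurecli
+# Deploy the modified template and its parameters file into the target resource group
+az deployment group create \
+  --resource-group <target-resource-group> \
+  --template-file template.json \
+  --parameters @parameters.json
+```
+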
## Discard or clean up After the deployment, if you want to start over, you can delete the **target Event Hubs namespace**, and repeat the steps described in the [Prepare](#prepare) and [Move](#redeploy) sections of this article.
To delete an Event Hubs namespace (source or target) by using the Azure portal:
In this how-to, you learned how to move an Event Hubs namespace from one region to another.
-See the [Relocate Event Hubs to another region](relocation-event-hub-cluster.md) article for instructions on moving an Event Hubs cluster from one region to another region.
+For instructions on moving an Event Hubs cluster from one region to another, see the [Relocate Event Hubs to another region](relocation-event-hub-cluster.md) article.
To learn more about moving resources between regions and disaster recovery in Azure, refer to:
operational-excellence Relocation Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-storage-account.md
This article shows you how to:
This article shows you how to relocate an Azure Storage Account to a new region by creating a copy of your storage account into another region. You also learn how to relocate your data to that account by using AzCopy, or another tool of your choice.
-### Prerequisites
+## Prerequisites
- Ensure that the services and features that your account uses are supported in the target region. - For preview features, ensure that your subscription is allowlisted for the target region.
operator-5g-core Quickstart Complete Prerequisites Deploy Nexus Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-5g-core/quickstart-complete-prerequisites-deploy-nexus-azure-kubernetes-service.md
Previously updated : 03/07/2024 Last updated : 04/02/2024 #CustomerIntent: As a < type of user >, I want < what? > so that < why? >.
Commands used in this article refer to the following resource groups:
- Managed resource group used to host Azure-ARC Kubernetes resources - Tenant resource groups: - Fabric - tenant networking resources (such as networks)
- - Compute - tenant compute resources (such as VMs, and Nexus AKS clusters)
+ - Compute - tenant compute resources (such as VMs, and Nexus Azure Kubernetes Services (AKS) clusters)
## Prerequisites Before provisioning a NAKS cluster: -- Configure external networks between CEs and PEs (or Telco Edge) that allow connectivity with the provider edge. Configuring access to external services like firewalls and services hosted on Azure (tenant not platform) is outside of the scope of this article.-- Configure elements on PEs/Telco Edge that aren't controlled by Nexus Network Fabric, such as Express Route Circuits configuration for tenant workloads connectivity to Azure.
+- Configure external networks between the Customer Edge (CE) and Provider Edge (PE) (or Telco Edge) devices that allow connectivity with the provider edge. Configuring access to external services like firewalls and services hosted on Azure (tenant not platform) is outside of the scope of this article.
+- Configure a jumpbox to connect routing domains. Configuring a jumpbox is outside of the scope of this article.
+- Configure network elements on PEs/Telco Edge that aren't controlled by Nexus Network Fabric, such as Express Route Circuits configuration for tenant workloads connectivity to Azure (optional for hybrid setups) or connectivity via the operator's core network.
- Review the [Nexus Kubernetes release calendar](../operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md) to identify available releases and support plans. - Review the [Nexus Kubernetes Cluster Overview](../operator-nexus/concepts-nexus-kubernetes-cluster.md). - Review the [Network Fabric Overview](../operator-nexus/concepts-network-fabric.md).
Complete these tasks to set up your internal network.
Use the following Azure CLI commands to create the isolation domain (ISD): ```azurecli
-export subscriptionId=ΓÇ¥<SUBSCRIPTION-ID>ΓÇ¥
-export rgManagedFabric=ΓÇ¥<RG-MANAGED-FABRIC>ΓÇ¥
-export nnfId=ΓÇ¥<NETWORK-FABRIC-ID>ΓÇ¥
-export rgFabric=ΓÇ¥<RG-FABRIC>ΓÇ¥
-export l3Isd=ΓÇ¥<ISD-NAME>ΓÇ¥
-export region=ΓÇ¥<REGION>ΓÇ¥
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgManagedFabric="<RG-MANAGED-FABRIC>"
+export nnfId="<NETWORK-FABRIC-ID>"
+export rgFabric="<RG-FABRIC>"
+export l3Isd="<ISD-NAME>"
+export region="<REGION>"
az networkfabric l3domain create ΓÇôresource-name $l3Isd \ --resource-group $rgFabric \ --location $region \ nf-id ΓÇ£/subscriptions/$subscriptionId/resourceGroups/$rgManagedFabric/providers/Microsoft.ManagedNetworkFabric/networkFabrics/$nnfIdΓÇ¥ \ redistribute-connected-subnets ΓÇ£TrueΓÇ¥ \ redistribute-static-routes ΓÇ£TrueΓÇ¥ \ subscription ΓÇ£$subscriptionIdΓÇ¥
+--nf-id "/subscriptions/$subscriptionId/resourceGroups/$rgManagedFabric/providers/Microsoft.ManagedNetworkFabric/networkFabrics/$nnfId" \
+--redistribute-connected-subnets "True" \
+--redistribute-static-routes "True" \
+--subscription "$subscriptionId"
```
-To view the new isolation domain in the Azure portal:
+To view the new L3 isolation domain (ISD), enter the following command:
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to for **Network Fabric (Operator Nexus)** resource.
-1. Select **network fabric** from the list.
-1. Select **Isolation Domain**.
-1. Select the relevant isolation domain (ISD).
+```azurecli
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgFabric="<RG-FABRIC>"
+export l3Isd="<ISD-NAME>"
+
+az networkfabric l3domain show --resource-name $l3Isd -g $rgFabric --subscription $subscriptionId
+```
## Create the internal network
Before creating or modifying the internal network, you must disable the ISD. Re-
Use the following commands to disable the ISD: ```azurecli
-export subscriptionId=ΓÇ¥<SUBSCRIPTION-ID>ΓÇ¥
-export rgFabric=ΓÇ¥<RG-FABRIC>ΓÇ¥
-export l3Isd=ΓÇ¥<ISD-NAME>ΓÇ¥
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgFabric="<RG-FABRIC>"
+export l3Isd="<ISD-NAME>"
-# Disable ISD in order to create internal networks, wait for 5 minutes and check the status is Disabled
+# Disable ISD to create internal networks, wait for 5 minutes and check the status is Disabled
-az networkfabric l3domain update-admin-state ΓÇôresource-name ΓÇ£$l3IsdΓÇ¥ \
resource-group ΓÇ£$rgFabricΓÇ¥ \ subscription ΓÇ£$subscriptionIdΓÇ¥ \
+az networkfabric l3domain update-admin-state --resource-name "$l3Isd" \
+--resource-group "$rgFabric" \
+--subscription "$subscriptionId" \
--state Disable # Check the status of the ISD
-az networkfabric l3domain show ΓÇôresource-name ΓÇ£$l3IsdΓÇ¥ \
resource-group ΓÇ£$rgFabricΓÇ¥ \ subscription ΓÇ£$subscriptionIdΓÇ¥
+az networkfabric l3domain show --resource-name "$l3Isd" \
+--resource-group "$rgFabric" \
+--subscription "$subscriptionId"
```
-With the ISD disabled, you can add, modify, or remove the internal network. When you're finished making changes, re-enable ISD.
+With the ISD disabled, you can add, modify, or remove the internal network. When you're finished making changes, re-enable ISD as described in [Enable isolation domain](#enable-isolation-domain).
## Create the default Azure Container Network Interface internal network Use the following commands to create the default Azure Container Network Interface (CNI) internal network: ```azurecli
-export subscriptionId=ΓÇ¥<SUBSCRIPTION-ID>ΓÇ¥
-export intnwDefCni=ΓÇ¥<DEFAULT-CNI-NAME>ΓÇ¥
-export l3Isd=ΓÇ¥<ISD-NAME>ΓÇ¥
-export rgFabric=ΓÇ¥<RG-FABRIC>ΓÇ¥
+export subscriptionId="<SUBSCRIPTION-ID>"
+export intnwDefCni="<DEFAULT-CNI-NAME>"
+export l3Isd="<ISD-NAME>"
+export rgFabric="<RG-FABRIC>"
export vlan=<VLAN-ID> export peerAsn=<PEER-ASN>
-export ipv4ListenRangePrefix=ΓÇ¥<DEFAULT-CNI-IPV4-PREFIX>/<PREFIX-LEN>ΓÇ¥
+export ipv4ListenRangePrefix="<DEFAULT-CNI-IPV4-PREFIX>/<PREFIX-LEN>"
export mtu=9000
-az networkfabric internalnetwork create ΓÇôresource-name ΓÇ£$intnwDefCniΓÇ¥ \
resource-group ΓÇ£$rgFabricΓÇ¥ \ subscription ΓÇ£$subscriptionIdΓÇ¥ \ l3domain ΓÇ£$l3IsdΓÇ¥ \
+az networkfabric internalnetwork create --resource-name "$intnwDefCni" \
+--resource-group "$rgFabric" \
+--subscription "$subscriptionId" \
+--l3domain "$l3Isd" \
--vlan-id $vlan \ --mtu $mtu \ connected-ipv4-subnets ΓÇ£[{prefix:$ipv4ListenRangePrefix}]ΓÇ¥ \
+--connected-ipv4-subnets "[{prefix:$ipv4ListenRangePrefix}]" \
--bgp-configuration
+"{peerASN:$peerAsn,allowAS:0,defaultRouteOriginate:True,ipv4ListenRangePrefixes:['$ipv4ListenRangePrefix']}"
```
-## Create internal networks for SMF ULB (S11/S5), UPF iPPE (N3, N6)
+## Create internal networks for User Plane Function (N3, N6) and Access and Mobility Management Function (N2) interfaces
-When creating the SMF ULB and UPF iPPE internal networks, make sure to include IP-v6 addressing. You don't need to configure the BGP fabric-side ASN. ASN is included in network fabric resource creation. Use the following commands to create these internal networks:
+When you're creating User Plane Function (UPF) internal networks, dual stack IPv4/IPv6 is supported. You don't need to configure the Border Gateway Protocol (BGP) fabric-side Autonomous System Number (ASN) because ASN is included in network fabric resource creation. Use the following commands to create these internal networks.
+
+> [!NOTE]
+> Create the number of internal networks as described in the [Prerequisites](#prerequisites) section.
```azurecli
-export subscriptionId=ΓÇ¥<SUBSCRIPTION-ID>ΓÇ¥
-export intnwName=ΓÇ¥<INTNW-NAME>ΓÇ¥
-export l3Isd=ΓÇ¥<ISD-NAME>ΓÇ¥
-export rgFabric=ΓÇ¥<RG-FABRIC>ΓÇ¥
+export subscriptionId="<SUBSCRIPTION-ID>"
+export intnwName="<INTNW-NAME>"
+export l3Isd="<ISD-NAME>" # N2, N3, N6
+export rgFabric="<RG-FABRIC>"
export vlan=<VLAN-ID> export peerAsn=<PEER-ASN>
-export ipv4ListenRangePrefix=ΓÇ¥<IPV4-PREFIX>/<PREFIX-LEN>ΓÇ¥
-export ipv6ListenRangePrefix=ΓÇ¥<IPV6-PREFIX>/<PREFIX-LEN>ΓÇ¥
+export ipv4ListenRangePrefix="<IPV4-PREFIX>/<PREFIX-LEN>"
+export ipv6ListenRangePrefix="<IPV6-PREFIX>/<PREFIX-LEN>"
export mtu=9000
-az networkfabric internalnetwork create ΓÇôresource-name ΓÇ£$intnwNameΓÇ¥ \
resource-group ΓÇ£$rgFabricΓÇ¥ \ subscription ΓÇ£$subscriptionIdΓÇ¥ \ l3domain ΓÇ£$l3IsdΓÇ¥ \
+az networkfabric internalnetwork create --resource-name "$intnwName" \
+--resource-group "$rgFabric" \
+--subscription "$subscriptionId" \
+--l3domain "$l3Isd" \
--vlan-id $vlan \ --mtu $mtu \ connected-ipv4-subnets ΓÇ£[{prefix:$ipv4ListenRangePrefix}]ΓÇ¥ \ connected-ipv6-subnets ΓÇ£[{prefix:ΓÇÖ$ipv6ListenRangePrefixΓÇÖ}]ΓÇ¥ \ bgp-configuration ΓÇ£{peerASN:$peerAsn,allowAS:0,defaultRouteOriginate:True,ipv4ListenRangePrefixes:[$ipv4ListenRangePrefix],ipv6ListenRangePrefixes:[ΓÇÿ$ipv6ListenRangePrefixΓÇÖ]}ΓÇ¥
+--connected-ipv4-subnets "[{prefix:$ipv4ListenRangePrefix}]" \
+--connected-ipv6-subnets "[{prefix:'$ipv6ListenRangePrefix'}]" \ # optional
+--bgp-configuration
+"{peerASN:$peerAsn,allowAS:0,defaultRouteOriginate:True,ipv4ListenRangePrefixes:[$ipv4ListenRangePrefix],ipv6ListenRangePrefixes:['$ipv6ListenRangePrefix']}"
```
-To view the fabric ASN from the Azure portal:
+To view the list of internal networks created, enter the following commands:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for the **Network Fabric (Operator Nexus)** resource.
-1. Select **network fabric** from the list.
-1. Review the ASN in properties ΓÇô **Fabric ASN** or in the **Internal Network** details.
+```azurecli
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgFabric="<RG-FABRIC>"
+export l3Isd="<ISD-NAME>"
+
+az networkfabric internalnetwork list -o table --subscription $subscriptionId -g $rgFabric --l3domain $l3Isd
+```
+
+To view the details of a specific internal network, enter the following commands:
+
+```azurecli
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgFabric="<RG-FABRIC>"
+export l3Isd="<ISD-NAME>"
+export intnwName="<INTNW-NAME>"
+
+az networkfabric internalnetwork show --resource-name $intnwName -g $rgFabric --l3domain $l3Isd
+```
### Enable isolation domain Use the following commands to enable the ISD: ```azurecli
-export subscriptionId=ΓÇ¥<SUBSCRIPTION-ID>ΓÇ¥
-export rgFabric=ΓÇ¥<RG-FABRIC>ΓÇ¥
-export l3Isd=ΓÇ¥<ISD-NAME>ΓÇ¥
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgFabric="<RG-FABRIC>"
+export l3Isd="<ISD-NAME>"
# Enable ISD, wait for 5 minutes and check the status is Enabled
-az networkfabric l3domain update-admin-state ΓÇôresource-name ΓÇ£$l3IsdΓÇ¥ \
resource-group ΓÇ£$rgFabricΓÇ¥ \ subscription ΓÇ£$subscriptionIdΓÇ¥ \
+az networkfabric l3domain update-admin-state --resource-name "$l3Isd" \
+--resource-group "$rgFabric" \
+--subscription "$subscriptionId" \
--state Enable # Check the status of the ISD
-az networkfabric l3domain show ΓÇôresource-name ΓÇ£$l3IsdΓÇ¥ \
resource-group ΓÇ£$rgFabricΓÇ¥ \ subscription ΓÇ£$subscriptionIdΓÇ¥
+az networkfabric l3domain show --resource-name "$l3Isd" \
+--resource-group "$rgFabric" \
+--subscription "$subscriptionId"
``` ### Recommended routing settings
-To configure BGP and BFD routing for internal networks, use the default settings. See [Nexus documentation](../operator-nexus/howto-configure-isolation-domain.md) for parameter descriptions.
+To configure BGP and Bidirectional Forwarding Detection (BFD) routing for internal networks, use the default settings. See the [Nexus documentation](../operator-nexus/howto-configure-isolation-domain.md) for parameter descriptions.
## Create L3 networks
-Before deploying the NAKS cluster, you must create NC L3 networking resources that map to network fabric (NF) resources.
-You must create L3 network NC resources for the default CNI interface, including the ISD/VLAN/IP prefix of a corresponding internal network. Attach these resources directly to VMs to perform VLAN tagging at the NIC (VF) level instead of the application level (access ports from application perspective) and/or if IP addresses are allocated by Nexus (using IP Address Management (ipam) functionality).
-An L3 network is used for the default CNI interface. Additional interfaces that require multiple VLANs per single interface must be trunk interfaces.
+Before deploying the NAKS cluster, you must create network cloud (NC) L3 networking resources that map to network fabric (NF) resources.
+You must create L3 network NC resources for the default CNI interface, including the ISD/VLAN/IP prefix of a corresponding internal network. Attach these resources directly to VMs to perform VLAN tagging at the network interface card (NIC) virtual function (VF) level instead of the application level (access ports from the application's perspective), and/or when IP addresses are allocated by Nexus (using IP Address Management (IPAM) functionality).
+
+An L3 network is used for the default CNI interface. Other interfaces that require multiple VLANs per single interface must be trunk interfaces.
Use the following commands to create the L3 network: ```azurecli
-Export subscriptionId=ΓÇ¥<SUBSCRIPTION-ID>ΓÇ¥
-export rgManagedUndercloudCluster=ΓÇ¥<RG-MANAGED-UNDERCLOUD-CLUSTER>ΓÇ¥
-export undercloudCustLocationName=ΓÇ¥<UNDERCLOUD-CUST-LOCATION-NAME>ΓÇ¥
-export rgFabric=ΓÇ¥<RG-FABRIC>ΓÇ¥
-export rgCompute=ΓÇ¥<RG-COMPUTE>ΓÇ¥
-export l3Name=ΓÇ¥<L3-NET-NAME>ΓÇ¥
-export l3Isd=ΓÇ¥<ISD-NAME>ΓÇ¥
-export region=ΓÇ¥<REGION>ΓÇ¥
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgManagedUndercloudCluster="<RG-MANAGED-UNDERCLOUD-CLUSTER>"
+export undercloudCustLocationName="<UNDERCLOUD-CUST-LOCATION-NAME>"
+export rgFabric="<RG-FABRIC>"
+export rgCompute="<RG-COMPUTE>"
+export l3Name="<L3-NET-NAME>"
+export l3Isd="<ISD-NAME>"
+export region="<REGION>"
export vlan=<VLAN-ID>
-export ipAllocationType=ΓÇ¥IPV4ΓÇ¥ // DualStack, IPV4, IPV6
-export ipv4ConnectedPrefix=ΓÇ¥<DEFAULT-CNI-IPV4-PREFIX>/<PREFIX-LEN>ΓÇ¥ // if IPV4 or DualStack
-export ipv6ConnectedPrefix=ΓÇ¥<DEFAULT-CNI-IPV6-PREFIX>/<PREFIX-LEN>ΓÇ¥ // if IPV6 or DualStack
+export ipAllocationType="IPV4" # DualStack, IPV4, IPV6
+export ipv4ConnectedPrefix="<DEFAULT-CNI-IPV4-PREFIX>/<PREFIX-LEN>" # if IPV4 or DualStack
+export ipv6ConnectedPrefix="<DEFAULT-CNI-IPV6-PREFIX>/<PREFIX-LEN>" # if IPV6 or DualStack
az networkcloud l3network create ΓÇôl3-network-name $l3Name \ l3-isolation-domain-id ΓÇ£/subscriptions/$subscriptionId/resourceGroups/$rgFabric/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/$l3IsdΓÇ¥ \
+--l3-isolation-domain-id
+"/subscriptions/$subscriptionId/resourceGroups/$rgFabric/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/$l3Isd" \
--vlan $vlan \ --ip-allocation-type $ipAllocationType \ --ipv4-connected-prefix $ipv4ConnectedPrefix \ extended-location name=ΓÇ¥/subscriptions/$subscriptionId/resourceGroups/$rgManagedUndercloudCluster/providers/Microsoft.ExtendedLocation/customLocations/$undercloudCustLocationNameΓÇ¥ type=ΓÇ¥CustomLocationΓÇ¥ \
+--extended-location name="/subscriptions/$subscriptionId/resourceGroups/$rgManagedUndercloudCluster/providers/Microsoft.ExtendedLocation/customLocations/$undercloudCustLocationName" type="CustomLocation" \
--resource-group $rgCompute \ --location $region \ --subscription $subscriptionId \ interface-name ΓÇ£vlan-$vlanΓÇ¥
+--interface-name "vlan-$vlan"
+```
+
+To view the L3 network created, enter the following commands:
++
+```azurecli
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgCompute="<RG-COMPUTE>"
+export l3Name="<L3-NET-NAME>"
+
+az networkcloud l3network show -n $l3Name -g $rgCompute --subscription $subscriptionId
``` ### Trunked networks A `trunkednetwork` network cloud resource is required if a single port/interface connected to a virtual machine must carry multiple virtual local area networks (VLANs). Tagging is performed at the application layer instead of NIC. A trunk interface can carry VLANs that are a part of different ISDs.
-You must create a trunked network for both SMF ULB (S11/S5) and UPF iPPE (N3, N6).
+
+You must create a trunked network for the Access and Mobility Management Function (AMF) (N2) and UPF (N3, N6).
Use the following commands to create a trunked network:
export vlanUlb=<VLAN-ULB-ID>
export region="<REGION>" az networkcloud trunkednetwork create --name $trunkName \ isolation-domain-ids "/subscriptions/$subscriptionId/resourceGroups/$rgFabric/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/$l3IsdUlb" \
+--isolation-domain-ids
+ "/subscriptions/$subscriptionId/resourceGroups/$rgFabric/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/$l3IsdUlb" \
--vlans $vlanUlb \ --extended-location name="/subscriptions/$subscriptionId/resourceGroups/$rgManagedUndercloudCluster/providers/Microsoft.ExtendedLocation/customLocations/$undercloudCustLocationName" type="CustomLocation" \ --resource-group $rgCompute \
az networkcloud trunkednetwork create --name $trunkName \
--interface-name "trunk-ulb" \ --subscription $subscriptionId ```
+To view the trunked network resource created, enter the following command:
+
+```azurecli
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgCompute="<RG-COMPUTE>"
+export trunkName="<TRUNK-NAME>"
+
+az networkcloud trunkednetwork show -n $trunkName -g $rgCompute --subscription $subscriptionId
+```
## Configure the Cloud Services Network proxy and allowlisted domains A Cloud Services Network proxy (CSN proxy) is used to access Azure and internet destinations. You must explicitly add these domains to an allowlist in the CSN configuration for a NAKS cluster to access Azure services and for Arc integration.
-### Azure Operator Service Manager/Network Function Manager-based Cloud Services Networks endpoints
-
-Add the following egress points for AOSM/NFM based deployment support (HybridNetwork RP, CustomLocation RP reachability, ACR, Arc):
+### Network Function Manager-based Cloud Services Networks endpoints
+
+Add the following egress points for Network Function Manager (NFM) based deployment support (HybridNetwork Resource Provider (RP), CustomLocation RP reachability, ACR, Arc):
+
+- .azurecr.io / port 80
+- .azurecr.io / port 443
+- .mecdevice.azure.com / port 443
+- eastus-prod.mecdevice.azure.com / port 443
+- .microsoftmetrics.com / port 443
+- crprivatemobilenetwork.azurecr.io / port 443
+- .guestconfiguration.azure.com / port 443
+- .kubernetesconfiguration.azure.com / port 443
+- eastus.obo.arc.azure.com / port 8084
+- .windows.net / port 80
+- .windows.net / port 443
+- .k8connecthelm.azureedge.net / port 80
+- .k8connecthelm.azureedge.net / port 443
+- .k8sconnectcsp.azureedge.net / port 80
+- .k8sconnectcsp.azureedge.net / port 443
+- .arc.azure.net / port 80
+- .arc.azure.net / port 443
-```azurecli
-.azurecr.io / port 80
-.azurecr.io / port 443
-.mecdevice.azure.com / port 443
-eastus-prod.mecdevice.azure.com / port 443
-.microsoftmetrics.com / port 443
-crprivatemobilenetwork.azurecr.io / port 443
-.guestconfiguration.azure.com / port 443
-.kubernetesconfiguration.azure.com / port 443
-eastus.obo.arc.azure.com / port 8084
-.windows.net / port 80
-.windows.net / port 443
-.k8connecthelm.azureedge.net / port 80
-.k8connecthelm.azureedge.net / port 443
-.k8sconnectcsp.azureedge.net / port 80
-.k8sconnectcsp.azureedge.net / port 443
-.arc.azure.net / port 80
-.arc.azure.net / port 443
-```
### Python Cloud Services Networks endpoints
-For python packages installation (part of fed-kube_addons pod-node_config command list used for NAKS), add the following commands:
+To install Python packages (part of the fed-kube_addons pod-node_config command list used for NAKS), add the following endpoints:
-```python
-pypi.org / port 443
-files.pythonhosted.org / port 443
-```
+- pypi.org / port 443
+- files.pythonhosted.org / port 443
> [!NOTE]
-> Additional ADX endpoints may need to be included in the allowlist if there is a requirement to inject data into Azure ADX.
+> Additional Azure Data Explorer (ADX) endpoints may need to be included in the allowlist if there's a requirement to ingest data into ADX.
### Optional Cloud Services Networks endpoints Use the following destination to run containers that have their endpoints stored in public container registries or to install more packages for the auxiliary virtual machines:
-```azurecli
-.ghcr.io / port 80
-.ghcr.io / port 443
-.k8s.gcr.io / port 80
-.k8s.gcr.io / port 443
-.k8s.io / port 80
-.k8s.io / port 443
-.docker.io / port 80
-.docker.io / port 443
-.docker.com / port 80
-.docker.com / port 443
-.pkg.dev / port 80
-.pkg.dev / port 443
-.ubuntu.com / port 80
-.ubuntu.com / port 443
-```
+- .ghcr.io / port 80
+- .ghcr.io / port 443
+- .k8s.gcr.io / port 80
+- .k8s.gcr.io / port 443
+- .k8s.io / port 80
+- .k8s.io / port 443
+- .docker.io / port 80
+- .docker.io / port 443
+- .docker.com / port 80
+- .docker.com / port 443
+- .pkg.dev / port 80
+- .pkg.dev / port 443
+- .ubuntu.com / port 80
+- .ubuntu.com / port 443
## Create Cloud Services Networks You must create a separate CSN instance for each NAKS cluster when you deploy Azure Operator 5G Core Preview on the Nexus platform.
-Adjust the additional-egress-endpoints list based on the previous description and lists.
+
+> [!NOTE]
+> Adjust the `additional-egress-endpoints` list based on the description and lists provided in the previous sections.
```azurecli export subscriptionId="<SUBSCRIPTION-ID>"
az networkcloud cloudservicesnetwork create --cloud-services-network-name $csnNa
]' 07- ```
-After you create the CSN, verify the `egress-endpoints` from the Azure portal. In the search bar, enter **Cloud Services Networks (Operator Nexus)** resource. Select **Overview**, then navigate to **Enabled egress endpoints** to see the list of endpoints you created.
+After you create the CSN, verify the `egress-endpoints` using the following commands at the command line:
+
+```azurecli
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgCompute="<RG-COMPUTE>"
+export csnName="<CSN-NAME>"
+
+az networkcloud cloudservicesnetwork show -n $csnName -g $rgCompute --subscription $subscriptionId
+```
## Create a Nexus Azure Kubernetes Services Cluster
-Nexus related resource providers must deploy self-managed resource groups that are used to deploy the necessary resources created by customers. When Nexus AKS clusters are provisioned, they must be Arc-enabled. The Network Cloud resource provider creates its own managed resource group and deploys it in an Azure Arc Kubernetes cluster resource. Following this deployment, this cluster resource is linked to the NAKS cluster resource.
+Nexus related resource providers must deploy self-managed resource groups used to deploy the necessary resources created by customers. When Nexus AKS clusters are provisioned, they must be Arc-enabled. The Network Cloud resource provider creates its own managed resource group and deploys it in an Azure Arc Kubernetes cluster resource. Following this deployment, this cluster resource is linked to the NAKS cluster resource.
> [!NOTE]
-> After the NAKS cluster deploys, and the managed resource group is created, you may need to grant privileges to all a user/entra group/service principal access to the managed resource group. This action is contingent upon the subscription level IAM settings.
+> After the NAKS cluster deploys and the managed resource group is created, you might need to grant a user, Microsoft Entra group, or service principal access to the managed resource group. This action is contingent upon the subscription-level Identity and Access Management (IAM) settings.
Use the following Azure CLI commands to create the NAKS cluster:
az networkcloud kubernetescluster create --name $naksName \
--subscription $subscriptionId ```
-After the cluster is created, you can enable the Network Function Manager (NFM) extension and set a custom location so the cluster can be deployed via Azure Operator Service Manager (AOSM) or NFM.
+To verify the list of created Nexus clusters, enter the following command:
+
+```azurecli
+export subscriptionId="<SUBSCRIPTION-ID>"
+
+az networkcloud kubernetescluster list -o table --subscription $subscriptionId
+```
+
+To verify the details of a created cluster, enter the following command:
+
+```azurecli
+export subscriptionId="<SUBSCRIPTION-ID>"
+export rgCompute="<RG-COMPUTE>"
+export naksName="<NAKS-NAME>"
+
+az networkcloud kubernetescluster show -n $naksName -g $rgCompute --subscription $subscriptionId
+```
+
+After the cluster is created, you can enable the NFM extension and set a custom location so that workloads can be deployed to the cluster via AOSM or NFM.
## Access the Nexus Azure Kubernetes Services cluster There are several ways to access the Tenant NAKS cluster's API server: -- Directly from the IP address/port (from a jumpbox) -- Use the Azure CLI and connectedk8s proxy option as described under the link to access clusters directly.
- You must have a custom role assigned to the managed resource group created by the Network Cloud RP. One of the following actions must be enabled in this role:
+- Directly from the IP address/port (from a jumpbox that has connectivity to the Tenant NAKS API server)
+- Using the Azure CLI with the `connectedk8s proxy` option to access the cluster directly (see the sketch after this list). The service principal or the user's Microsoft Entra ID group (used with the Azure CLI) must be provided during NAKS cluster provisioning. Additionally, you must have a custom role assigned to the managed resource group created by the Network Cloud RP. One of the following actions must be enabled in this role:
- Microsoft.Kubernetes/connectedClusters/listClusterUserCredential/action - A user or service provider as a contributor to the managed resource group
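+
+The following sketch shows the proxy-based option mentioned in the list above. It requires the Azure CLI `connectedk8s` extension; the cluster and resource group names are placeholders, and it assumes the signed-in identity already holds one of the permissions described earlier:
+
+```azurecli
+# Open a proxy channel to the Arc-connected NAKS cluster and write a kubeconfig for it.
+# The command keeps running while the proxy is active; use a second terminal for kubectl.
+az connectedk8s proxy \
+  --name <NAKS-ARC-CLUSTER-NAME> \
+  --resource-group <NAKS-MANAGED-RESOURCE-GROUP> \
+  --file ./naks-proxy-kubeconfig
+
+# In a second terminal, query the cluster through the proxy
+kubectl get nodes --kubeconfig ./naks-proxy-kubeconfig
+```
+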
-## Prepare the cluster for workloads via AO5GC resource provider/Azure Operator Service Manager/Network Function Manager
-
-Before [Azure Operator Services Manager](https://azure.microsoft.com/products/operator-service-manager) (AOSM) and [Azure Network Function Manager](https://azure.microsoft.com/products/azure-network-function-manager) (NFM) can be used to deploy applications on top of Nexus Azure Kubernetes clusters, you must enable the Network Function Operator extension and set a custom location. For more information, see the following sections.
-
-### Enable the Network Function Operator extension
+## Azure Edge services
-You must enable the Network Function Operator Kubernetes Arc extension so that Azure NFM service can install workloads on top of NAKS clusters. Enable the extension at Azure Arc connected cluster level in the managed resource group created by Network Cloud RP.
+Azure Operator 5G Core is a telecommunications workload that enables you to offer services to consumer and enterprise end-users. The Azure Operator 5G Core workloads run on a Network Functions Virtualization Infrastructure (NFVI) layer and may depend on other NFVI services.
-1. Enter the following Azure CLI commands to enable the NF Operator extension:
+### Edge NFVI functions (running on Azure Operator Nexus)
- ```azurecli
- az k8s-extension create -g <NAKS-MANAGED-RESOURCE-GRUP> \
- -c <NAKS-ARC-CLUSTER-NAME> \
- --cluster-type connectedClusters \
- --cluster-resource-provider ΓÇ£Microsoft.Kubernetes/connectedClustersΓÇ¥ \
- --name networkfunction-operator \
- --extension-type Microsoft.Azure.HybridNetwork \
- --auto-upgrade-minor-version true \
- --scope cluster \
- --release-namespace azurehybridnetwork \
- --release-train preview \
- --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator
- ```
-
-1. Enter the following command and note the connected cluster ID:
- `az connectedk8s show -n <NAKS-CLUSTER-NAME> -g <NAKS-RESOURCE-GRUP> --query id -o tsv`
-1. Enter the following command and note the cluster extension ID for which to enable the custom location:
- `az k8s-extension show -c <NAKS-CLUSTER-NAME> -g <NAKS-RESOURCE-GRUP> -t connectedClusters -n networkfunction-operator`
+> [!NOTE]
+> The Edge NFVI related services may be updated occasionally. For more information about these services, see the specific service's documentation.
-### Set the custom location
+- **Azure Operator Nexus** - Azure Operator Nexus is a carrier-grade, next-generation hybrid cloud platform for telecommunication operators. Azure Operator Nexus is purpose-built for operators' network-intensive workloads and mission-critical applications.
+
+- Any other hardware and services Azure Operator Nexus may depend on.
-A [custom location](/azure/azure-arc/kubernetes/conceptual-custom-locations) must be enabled for Nexus AKS clusters so that these clusters can be used as target locations for deploying Azure services instances.
-Refer to (link) to learn how to enable a customer location.
+- **Azure Arc** - Provides a unified management and governance platform for Azure Operator 5G Core applications and services across Azure and on-premises environments.
-> [!IMPORTANT]
-> A custom location must to be created in a resource group where NAKS cluster is created.
+- **Azure Monitor** - Provides a comprehensive solution for monitoring the performance and health of Azure Operator 5G Core applications and services across Azure and on-premises environments.
-Enter the following Azure CLI commands to set a custom location. Replace the connectedClusterID and clusterExtensionID variables with the names noted when you enabled the Network Function Operator extension.
+- **Microsoft Entra ID** - Provides identity and access management for Azure Operator 5G Core users and administrators across Azure and on-premises environments.
-```azurecli
-az customlocation create -n <CUSTOM-LOCATION-NAME> \
--g <NAKS-RESOURCE-GRUP> \ --l eastus \ namespace azurehybridnetwork \ host-resource-id <CONNECTED-CLUSTER-ID> \ cluster-extension-ids <CLUSTER-EXTENSION-ID>
-```
+- **Azure Key Vault** - Provides a secure and centralized store for managing encryption keys and secrets for Azure Operator 5G Core across Azure and on-premises environments.
## Related content
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Azure Database for PostgreSQL flexible server has introduced an [in-place major
## Related content > [!div class="nextstepaction"]
-> [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0)
+> [feedback forum](https://aka.ms/pgfeedback)
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
These alerts appear in Defender for Cloud's security alerts page and include:
- Recommended actions for how to investigate and mitigate the threat - Options for continuing your investigations with Microsoft Sentinel
-> [!NOTE]
-> Microsoft Defender for Azure Database for PostgreSQL - Flexible Server currently has following limitations:
-> - No Azure CLI or PowerShell support.
-> - No ability to enable Cloud Defender for Azure Database for PostgreSQL - Flexible Server on subscription level.
+ ### Microsoft Defender for Cloud and Brute Force Attacks
postgresql How To Perform Major Version Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-perform-major-version-upgrade-cli.md
description: This article describes how to perform a major version upgrade in Az
---++ Previously updated : 01/02/2024 Last updated : 04/02/2024 # Major version upgrade of Azure Database for PostgreSQL - Flexible Server with Azure CLI
You can run the following command to perform major version upgrade on an existin
> Major Version Upgrade action is irreversible. Please perform a point-in-time recovery (PITR) of your production server and test the upgrade in the non-production environment.
-**Usage**
+**Upgrade the major version of a flexible server.**
```azurecli
-az postgres flexible-server upgrade --source-server
- [--resource-group]
- [--postgres-version]
+az postgres flexible-server upgrade --version {16, 15, 14, 13, 12}
+ [--ids]
+ [--name] [-n]
+ [--resource-group] [-g]
+ [--subscription]
+ [--yes] [-y]
+
```
-**Example:**
-Upgrade a server from this PG 11 to PG 14
+### Example
+
+**Upgrade server 'testsvr' to PostgreSQL major version 16**
```azurecli
-az postgres server upgrade -g myresource-group -n myservername -v mypgversion
+az postgres flexible-server upgrade -g testgroup -n testsvr -v 16 -y
```
postgresql Reference Pg Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/reference-pg-azure-storage.md
LIMIT 5;
## Related content

- [overview](concepts-storage-extension.md)
-- [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0)
+- [feedback forum](https://aka.ms/pgfeedback)
reliability Availability Service By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md
Azure assigns service categories as foundational, mainstream, and strategic at g
Azure services are presented in the following tables by category. Note that some services are non-regional. That means they're available globally regardless of region. For information and a complete list of non-regional services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).

> [!div class="mx-tableFixed"]
-> | ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational | ![An icon that signifies this service is mainstream.](media/icon-mainstream.svg) Mainstream |
+> | ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational | ![An icon that signifies this service is mainstream.](media/icon-mainstream.svg) Mainstream |
> |-||
-> | Azure Application Gateway | Azure API Management |
-> | Azure Backup | Azure App Configuration |
-> | Azure Cosmos DB | Azure App Service |
-> | Azure Event Hubs | Microsoft Entra Domain Services |
+> | Azure Application Gateway | Azure API Management |
+> | Azure Backup | Azure App Configuration |
+> | Azure Cosmos DB | Azure App Service |
+> | Azure Event Hubs | Microsoft Entra Domain Services |
> | Azure ExpressRoute | Azure Bastion |
-> | Azure Key Vault | Azure Batch |
-> | Azure Load Balancer | Azure Cache for Redis |
-> | Azure Public IP | Azure AI Search |
-> | Azure Service Bus | Azure Container Registry |
+> | Azure Key Vault | Azure Batch |
+> | Azure Load Balancer | Azure Cache for Redis |
+> | Azure Public IP | Azure AI Search |
+> | Azure Service Bus | Azure Container Registry |
> | Azure Service Fabric | Azure Container Instances |
-> | Azure Site Recovery | Azure Data Explorer |
-> | Azure SQL | Azure Data Factory |
-> | Azure Storage: Disk Storage | Azure Database for MySQL |
-> | Azure Storage Accounts | Azure Database for PostgreSQL |
-> | Azure Storage: Blob Storage | Azure DDoS Protection |
-> | Azure Storage Data Lake Storage | Azure Event Grid |
-> | Azure Virtual Machines | Azure Firewall |
+> | Azure Site Recovery | Azure Data Explorer |
+> | Azure SQL | Azure Data Factory |
+> | Azure Storage: Disk Storage | Azure Database for MySQL |
+> | Azure Storage Accounts | Azure Database for PostgreSQL |
+> | Azure Storage: Blob Storage | Azure DDoS Protection |
+> | Azure Storage Data Lake Storage | Azure Event Grid |
+> | Azure Virtual Machines | Azure Firewall |
> | Azure Virtual Machine Scale Sets | Azure Firewall Manager |
> | Virtual Machines: Av2-series | Azure Functions |
-> | Virtual Machines: Bs-series | Azure HDInsight |
-> | Virtual Machines: Dv2 and DSv2-series | Azure IoT Hub |
-> | Virtual Machines: Dv3 and DSv3-series | Azure Kubernetes Service (AKS) |
-> | Virtual Machines: ESv3 abd ESv3-series | Azure Logic Apps |
-> | Azure Virtual Network | Azure Media Services |
-> | Azure VPN Gateway | Azure Monitor: Application Insights |
-> | | Azure Monitor: Log Analytics |
-> | | Azure Network Watcher |
-> | | Azure Private Link |
+> | Virtual Machines: Bs-series | Azure HDInsight |
+> | Virtual Machines: Dv2 and DSv2-series | Azure IoT Hub |
+> | Virtual Machines: Dv3 and DSv3-series | Azure Kubernetes Service (AKS) |
+> | Virtual Machines: Ev3 and ESv3-series | Azure Logic Apps |
+> | Azure Virtual Network | Azure Media Services |
+> | Azure VPN Gateway | Azure Monitor: Application Insights |
+> | | Azure Monitor: Log Analytics |
+> | | Azure Network Watcher |
+> | | Azure Private Link |
> | | Azure Storage: Files Storage |
-> | | Azure Virtual WAN |
-> | | Premium Blob Storage |
-> | | Virtual Machines: Ddsv4-series |
-> | | Virtual Machines: Ddv4-series |
-> | | Virtual Machines: Dsv4-series |
-> | | Virtual Machines: Dv4-series |
-> | | Virtual Machines: Edsv4-series |
-> | | Virtual Machines: Edv4-series |
-> | | Virtual Machines: Esv4-series |
-> | | Virtual Machines: Ev4-series |
-> | | Virtual Machines: Fsv2-series |
-> | | Virtual Machines: M-series |
+> | | Azure Virtual WAN |
+> | | Premium Blob Storage |
+> | | Virtual Machines: Ddsv4-series |
+> | | Virtual Machines: Ddv4-series |
+> | | Virtual Machines: Dsv4-series |
+> | | Virtual Machines: Dv4-series |
+> | | Virtual Machines: Edsv4-series |
+> | | Virtual Machines: Edv4-series |
+> | | Virtual Machines: Esv4-series |
+> | | Virtual Machines: Ev4-series |
+> | | Virtual Machines: Fsv2-series |
+> | | Virtual Machines: M-series |
### Strategic services As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and strategic. Service categories are assigned at general availability. Often, services start their lifecycle as a strategic service and as demand and utilization increases may be promoted to mainstream or foundational. The following table lists strategic services. > [!div class="mx-tableFixed"]
-> | ![An icon that signifies this service is strategic.](media/icon-strategic.svg)
-
-> Strategic |
-> ||
+> | ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic |
+> |-|
> | Azure API for FHIR | > | Azure Analysis Services | > | Azure AI services |
resource-mover Move Region Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-availability-zone.md
#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region using Azure Resource Mover.
-# Move Azure VMs to an availability zone in another region
+# Move Azure VMs to an availability zone in another region with Azure Resource Mover
-In this article, learn how to move Azure VMs (and related network/storage resources) to an availability zone in a different Azure region, using [Azure Resource Mover](overview.md).
+In this article, learn how to move Azure VMs (and related network/storage resources) to an availability zone in a different Azure region, using [Azure Resource Mover](overview.md). To use migration methods other than Resource Mover, see [Migrate Virtual Machines and Virtual Machine Scale Sets to availability zone support](../reliability/migrate-vm.md).
[Azure availability zones](../availability-zones/az-overview.md#availability-zones) help protect your Azure deployment from datacenter failures. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all [enabled regions](../availability-zones/az-region.md).

Using Resource Mover, you can move:
resource-mover Move Region Within Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-within-resource-group.md
Last updated 03/29/2024
-#Customer intent: As an Azure admin, I want to move Azure resources to a different Azure region using Azure Resource Mover.
+#Customer intent: As an Azure admin, I want to relocate Azure resources to a different Azure region with Azure Resource Mover
+
-# Move resources across regions (from resource group)
+# Move resources across regions (from resource group) with Azure Resource Mover
+
+In this article, learn how to move resources in a specific resource group to a different Azure region with [Azure Resource Mover](overview.md). In the resource group, you select the resources you want to move.
-In this article, learn how to move resources in a specific resource group to a different Azure region. In the resource group, you select the resources you want to move. Then, you move them using [Azure Resource Mover](overview.md).
+To move services and resources manually or to move services and resources that aren't supported by Azure Resource Mover, see [Azure services relocation guidance](/azure/operational-excellence/overview-relocation).
## Prerequisites
Delete as follows:
- The cache storage account name is ```resmovecache<guid>```
- The vault name is ```ResourceMove-<sourceregion>-<target-region>-GUID```.
-## Next steps
-
+## Related content
-[Learn about](about-move-process.md) the move process.
+- [Azure services relocation guidance](/azure/operational-excellence/overview-relocation)
+- [Cloud Adoption Framework - Relocate cloud workloads](/azure/cloud-adoption-framework/relocate/)
+- [Learn about](about-move-process.md) the move process with Resource Mover.
resource-mover Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/overview.md
Using Resource Mover, you can currently move the following resources across regi
- Internal and public load balancers
- Azure SQL databases and elastic pools
+To move services and resources that aren't supported by Resource Mover, or to move any service or resource by using manual methods, see:
+
+- [Availability zone migration guidance overview for Microsoft Azure products and services](../reliability/availability-zones-migration-overview.md).
+- [Azure services relocation guidance overview](/azure/operational-excellence/overview-relocation)
+ ## Next steps
role-based-access-control Conditions Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-format.md
Previously updated : 11/15/2023 Last updated : 04/01/2024 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
Depending on the selected actions, the attribute might be found in different pla
> [!div class="mx-tableFixed"]
> | Attribute source | Description | Code |
> | | | |
-> | [Environment](#environment-attributes) | Attribute is associated with the environment of the request, such as the network origin of the request or the current date and time.</br>***(Environment attributes are currently in preview.)*** | `@Environment` |
+> | [Environment](#environment-attributes) | Attribute is associated with the environment of the request, such as the network origin of the request or the current date and time.</br> | `@Environment` |
> | [Principal](#principal-attributes) | Attribute is a custom security attribute assigned to the principal, such as a user or enterprise application (service principal). | `@Principal` |
> | [Request](#request-attributes) | Attribute is part of the action request, such as setting the blob index tag. | `@Request` |
> | [Resource](#resource-attributes) | Attribute is a property of the resource, such as a container name. | `@Resource` |
For a complete list of the storage attributes you can use in conditions, see:
Environment attributes are associated with the circumstances under which the access request is made, such as the date and time of day or the network environment. The network environment might be whether access is over a specific private endpoint or a virtual network subnet, or perhaps over any private link.
-> [!IMPORTANT]
-> Environment attributes are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
The following table lists the supported environment attributes for conditions.

| Display name | Description | Attribute | Type |
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Previously updated : 12/01/2023 Last updated : 04/01/2024 #Customer intent: As a dev, devops, or it admin, I want to learn how to constrain access within a role assignment by using conditions.
For more information about the format of conditions, see [Azure role assignment
## Status of condition features
-Some features of conditions are still in preview. The following table lists the status of condition features:
+The following table lists the status of condition features:
| Feature | Status | Date | | | | |
-| Use [environment attributes](conditions-format.md#environment-attributes) in a condition | Preview | April 2023 |
+| Use [environment attributes](conditions-format.md#environment-attributes) in a condition | GA | April 2024 |
| Add conditions using the [condition editor in the Azure portal](conditions-role-assignments-portal.md) | GA | October 2022 |
| Add conditions using [Azure PowerShell](conditions-role-assignments-powershell.md), [Azure CLI](conditions-role-assignments-cli.md), or [REST API](conditions-role-assignments-rest.md) | GA | October 2022 |
| Use [resource and request attributes](conditions-format.md#attributes) for specific combinations of Azure storage resources, access attribute types, and storage account performance tiers. For more information, see [Status of condition features in Azure Storage](../storage/blobs/storage-auth-abac.md#status-of-condition-features-in-azure-storage). | GA | October 2022 |
role-based-access-control Conditions Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-portal.md
Previously updated : 11/15/2023 Last updated : 04/01/2024
Once you have the Add role assignment condition page open, you can review the ba
1. In the **Attribute source** list, select where the attribute can be found.
- - **Environment** (preview) indicates that the attribute is associated with the network environment over which the resource is accessed such as a private link, or the current date and time.
+ - **Environment** indicates that the attribute is associated with the network environment over which the resource is accessed such as a private link, or the current date and time.
- **Resource** indicates that the attribute is on the resource, such as container name.
- **Request** indicates that the attribute is part of the action request, such as setting the blob index tag.
- **Principal** indicates that the attribute is a Microsoft Entra custom security attribute principal, such as a user, enterprise application (service principal), or managed identity.
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Azure Route Server Keepalive timer is 60 seconds and the Hold timer is 180 secon
Azure Route Server supports ***NO_ADVERTISE*** BGP community. If a network virtual appliance (NVA) advertises routes with this community string to the route server, the route server doesn't advertise it to other peers including the ExpressRoute gateway. This feature can help reduce the number of routes sent from Azure Route Server to ExpressRoute.
-### When a VNet peering is created between your hub and spoke VNet, does this cause a BGP soft reset between Azure Route Server and its peered NVAs?
+### When a VNet peering is created between my hub VNet and spoke VNet, does this cause a BGP soft reset between Azure Route Server and its peered NVAs?
-Yes. If a VNet peering is created between your hub and spoke VNet, Azure Route Server will perform a BGP soft reset by sending route refresh requests to all its peered NVAs. If the NVAs do not support BGP route refresh, then Azure Route Server will perform a BGP hard reset with the peered NVAs, which may cause connectivity disruption for traffic traversing the NVAs.
+Yes. If a VNet peering is created between your hub VNet and spoke VNet, Azure Route Server will perform a BGP soft reset by sending route refresh requests to all its peered NVAs. If the NVAs do not support BGP route refresh, then Azure Route Server will perform a BGP hard reset with the peered NVAs, which may cause connectivity disruption for traffic traversing the NVAs.
### What Autonomous System Numbers (ASNs) can I use?
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
Previously updated : 01/16/2024 Last updated : 04/02/2024 # High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server
[1999351]:https://launchpad.support.sap.com/#/notes/1999351
[2388694]:https://launchpad.support.sap.com/#/notes/2388694
[401162]:https://launchpad.support.sap.com/#/notes/401162
+[2235581]:https://launchpad.support.sap.com/#/notes/2235581
+[2684254]:https://launchpad.support.sap.com/#/notes/2684254
[sles-for-sap-bp]:https://documentation.suse.com/sbp-supported.html
+[sles-for-sap-bp12]:https://documentation.suse.com/sbp/sap-12/
+[sles-for-sap-bp15]:https://documentation.suse.com/sbp/sap-15/
[sap-swcenter]:https://launchpad.support.sap.com/#/softwarecenter
Before you begin, read the following SAP Notes and papers:
- The supported SAP software, operating system (OS), and database combinations.
- The required SAP kernel versions for Windows and Linux on Microsoft Azure.
- SAP Note [2015553] lists the prerequisites for SAP-supported SAP software deployments in Azure.
-- SAP Note [2205917] has recommended OS settings for SUSE Linux Enterprise Server (SLES) for SAP Applications.
-- SAP Note [1944799] has SAP HANA guidelines for SLES for SAP Applications.
+- SAP Note [2205917] has recommended OS settings for SUSE Linux Enterprise Server 12 (SLES 12) for SAP Applications.
+- SAP Note [2684254] has recommended OS settings for SUSE Linux Enterprise Server 15 (SLES 15) for SAP Applications.
+- SAP Note [2235581] lists the operating systems supported by SAP HANA.
- SAP Note [2178632] has detailed information about all the monitoring metrics that are reported for SAP in Azure.
- SAP Note [2191498] has the required SAP host agent version for Linux in Azure.
- SAP Note [2243692] has information about SAP licensing for Linux in Azure.
Before you begin, read the following SAP Notes and papers:
- [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide] guide.
- [Azure Virtual Machines deployment for SAP on Linux][deployment-guide] guide.
- [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide] guide.
-- [SUSE Linux Enterprise Server for SAP Applications best practices guides][sles-for-sap-bp]:
+- [SUSE Linux Enterprise Server for SAP Applications 15 best practices guides][sles-for-sap-bp15] and [SUSE Linux Enterprise Server for SAP Applications 12 best practices guides][sles-for-sap-bp12]:
  - Setting up an SAP HANA SR Performance Optimized Infrastructure (SLES for SAP Applications). The guide contains all the required information to set up SAP HANA system replication for on-premises development. Use this guide as a baseline.
  - Setting up an SAP HANA SR Cost Optimized Infrastructure (SLES for SAP Applications).
service-bus-messaging Message Transfers Locks Settlement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-transfers-locks-settlement.md
When the receiving client fails to process a message but wants the message to be
If a receiving client fails to process a message and knows that redelivering the message and retrying the operation won't help, it can reject the message, which moves it into the dead-letter queue by calling the [DeadLetter](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.deadlettermessageasync) API on the message, which also allows setting a custom property including a reason code that can be retrieved with the message from the dead-letter queue.
+> [!NOTE]
+> A dead-letter subqueue exists for a queue or a topic subscription only when you have the [dead-letter feature](service-bus-dead-letter-queues.md) enabled for the queue or subscription.
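
For example, one way to configure dead-lettering behavior when creating a queue with the Azure CLI might look like the following sketch. The resource names are placeholders; check `az servicebus queue create --help` for the full set of dead-letter related options.

```azurecli
az servicebus queue create \
    --resource-group MyResourceGroup \
    --namespace-name MyNamespace \
    --name MyQueue \
    --enable-dead-lettering-on-message-expiration true
```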
+
A special case of settlement is deferral, which is discussed in a [separate article](message-deferral.md).

The `Complete`, `DeadLetter`, or `RenewLock` operations might fail due to network issues, if the held lock has expired, or there are other service-side conditions that prevent settlement. In one of the latter cases, the service sends a negative acknowledgment that surfaces as an exception in the API clients. If the reason is a broken network connection, the lock is dropped since Service Bus doesn't support recovery of existing AMQP links on a different connection.
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md
Title: Azure Service Bus premium and standard tiers
+ Title: Azure Service Bus premium messaging tier
description: This article describes standard and premium tiers of Azure Service Bus. Compares these tiers and provides technical differences. Last updated 05/02/2023
-# Service Bus Premium and Standard messaging tiers
+# Service Bus premium messaging tier
Service Bus Messaging, which includes entities such as queues and topics, combines enterprise messaging capabilities with rich publish-subscribe semantics at cloud scale. Service Bus Messaging is used as the communication backbone for many sophisticated cloud solutions.
The *Premium* tier of Service Bus Messaging addresses common customer requests a
Some high-level differences are highlighted in the following table.
-| Premium | Standard |
-| | |
-| High throughput |Variable throughput |
-| Predictable performance |Variable latency |
-| Fixed pricing |Pay as you go variable pricing |
-| Ability to scale workload up and down |N/A |
-| Message size up to 100 MB. For more information, see [Large message support](#large-messages-support). |Message size up to 256 KB |
+| Criteria | Premium | Standard |
+| | | |
+| Throughput | High throughput | Variable throughput |
+| Performance | Predictable performance |Variable latency |
+| Pricing | Fixed pricing |Pay as you go variable pricing |
+| Scale | Ability to scale workload up and down |N/A |
+| Message size | Message size up to 100 MB. For more information, see [Large message support](#large-messages-support). |Message size up to 256 KB |
+ **Service Bus Premium Messaging** provides resource isolation at the CPU and memory level so that each customer workload runs in isolation. This resource container is called a *messaging unit*. Each premium namespace is allocated at least one messaging unit. You can purchase 1, 2, 4, 8 or 16 messaging units for each Service Bus Premium namespace. A single workload or entity can span multiple messaging units and the number of messaging units can be changed at will. The result is predictable and repeatable performance for your Service Bus-based solution.
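
As a hedged illustration, a premium namespace with two messaging units could be created with the Azure CLI roughly like this (resource names and location are placeholders):

```azurecli
az servicebus namespace create \
    --resource-group MyResourceGroup \
    --name MyPremiumNamespace \
    --location eastus \
    --sku Premium \
    --capacity 2
```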
Azure Service Bus premium tier namespaces support the ability to send large mess
Here are some considerations when sending large messages on Azure Service Bus -

- Supported on Azure Service Bus premium tier namespaces only.
-- Supported only when using the AMQP protocol. Not supported when using SBMP or HTTP protocols, in the premium tier, the maximum message size for these protocols is 1MB.
+- Supported only when using the AMQP protocol. Not supported when using SBMP or HTTP protocols, in the premium tier, the maximum message size for these protocols is 1 MB.
- Supported when using the [Java Message Service (JMS) 2.0 client SDK](how-to-use-java-message-service-20.md) and other language client SDKs.
- Sending large messages results in decreased throughput and increased latency.
-- While 100 MB message payloads are supported, it's recommended to keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace.
+- While 100-MB message payloads are supported, it's recommended to keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace.
- The max message size is enforced only for messages sent to the queue or topic. The size limit isn't enforced for the receive operation. It allows you to update the max message size for a given queue (or topic).
- Batching isn't supported.
- Service Bus Explorer doesn't support sending or receiving large messages.
The following network security features are available only in the premium tier.
Configuring IP firewall using the Azure portal is available only for the premium tier namespaces. However, you can configure IP firewall rules for other tiers using Azure Resource Manager templates, CLI, PowerShell, or REST API. For more information, see [Configure IP firewall](service-bus-ip-filtering.md).

## Encryption of data at rest
-Azure Service Bus Premium provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). Service Bus Premium uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as Bring Your Own Key (BYOK) or customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key is encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the BYOK feature is a one time setup process on your namespace. For more information, see [Encrypting Azure Service Bus data at rest](configure-customer-managed-key.md).
+Azure Service Bus Premium provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). Service Bus Premium uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as customer-managed key (CMK)), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key is encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the CMK feature is a one-time setup process on your namespace. For more information, see [Encrypting Azure Service Bus data at rest](configure-customer-managed-key.md).
## Partitioning

There are some differences between the standard and premium tiers when it comes to partitioning.

-- Partitioning is available at entity creation for all queues and topics in basic or standard SKUs. A namespace can have both partitioned and non-partitioned entities. Partitioning is available at namespace creation for the premium tier, and all queues and topics in that namespace will be partitioned. Any previously migrated partitioned entities in premium namespaces continue to work as expected.
+- Partitioning is available at entity creation for all queues and topics in basic or standard SKUs. A namespace can have both partitioned and nonpartitioned entities. Partitioning is available at namespace creation for the premium tier, and all queues and topics in that namespace will be partitioned. Any previously migrated partitioned entities in premium namespaces continue to work as expected.
- When partitioning is enabled in the Basic or Standard SKUs, Service Bus creates 16 partitions. When partitioning is enabled in the premium tier, the number of partitions is specified during namespace creation. For more information, see [Partitioning in Service Bus](service-bus-partitioning.md).
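
For example, a partitioned queue in a standard-tier namespace could be created at entity creation time roughly as follows. This is a sketch with placeholder names; in the premium tier the partition count is set at namespace creation instead.

```azurecli
az servicebus queue create \
    --resource-group MyResourceGroup \
    --namespace-name MyStandardNamespace \
    --name MyPartitionedQueue \
    --enable-partitioning true
```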
Azure Service Bus spreads the risk of catastrophic failures of individual machin
For a premium tier namespace, the outage risk is further spread across three physically separated facilities availability zones, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of a datacenter. The all-active Azure Service Bus cluster model within a failure domain along with the availability zone support is superior to any on-premises message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures can't sufficiently defend against.
-The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good and without having to change your application configurations. Abandoning an Azure region typically involves several services and this feature primarily aims at helping to preserve the integrity of the composite application configuration. The feature is globally available for the Service Bus premium tier.
+The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good without having to change your application configurations. Abandoning an Azure region typically involves several services and this feature primarily aims at helping to preserve the integrity of the composite application configuration. The feature is globally available for the Service Bus premium tier.
For more information, see [Azure Service Bus Geo-disaster recovery](service-bus-geo-dr.md).
The standard tier supports only JMS 1.1 subset focused on queues. For more infor
## Next steps
-To learn more about Service Bus Messaging, see the following links:
+See the following article: [Automatically update messaging units](automate-update-messaging-units.md).
-- [Automatically update messaging units](automate-update-messaging-units.md).
-- [Introducing Azure Service Bus Premium Messaging (blog post)](https://azure.microsoft.com/blog/introducing-azure-service-bus-premium-messaging/)
service-connector How To Use Service Connector In Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-use-service-connector-in-aks.md
+
+ Title: How Service Connector helps Azure Kubernetes Service (AKS) connect to other Azure services
+description: Learn how to use Service Connector in Azure Kubernetes Service (AKS).
+++ Last updated : 03/01/2024++
+# How to use Service Connector in Azure Kubernetes Service (AKS)
+
+Azure Kubernetes Service (AKS) is one of the compute services supported by Service Connector. This article aims to help you understand:
+
+* What operations are made on the cluster when creating a service connection.
+* How to use the kubernetes resources Service Connector creates.
+* How to troubleshoot and view logs of Service Connector in an AKS cluster.
+
+## Prerequisites
+
+* This guide assumes that you already know the [basic concepts of Service Connector](concept-service-connector-internals.md).
+
+## What operations Service Connector makes on the cluster
+
+Depending on the target service and authentication type selected when creating a service connection, Service Connector performs different operations on the AKS cluster. The following sections list the possible operations made by Service Connector.
+
+### Add the Service Connector kubernetes extension
+
+A Kubernetes extension named `sc-extension` is added to the cluster the first time a service connection is created. Later on, the extension helps create Kubernetes resources in the user's cluster whenever a service connection request comes to Service Connector. You can find the extension in your AKS cluster in the Azure portal, in the **Extensions + applications** menu.
++
+The extension is also where the cluster's connection metadata is stored. Uninstalling the extension makes all the connections in the cluster unavailable. The extension operator is hosted in the cluster namespace `sc-system`.
+
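
If you want to confirm that the operator is running, a quick check of the `sc-system` namespace with `kubectl` might look like this (assuming you're already connected to the cluster):

```Bash
kubectl get pods -n sc-system
```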
+### Create kubernetes resources
+
+Service Connector creates some Kubernetes resources in the namespace the user specified when creating a service connection. The Kubernetes resources store the connection information, which is needed by the user's workload definitions or application code to talk to target services. Depending on the authentication type, different Kubernetes resources are created. For the `Connection String` and `Service Principal` auth types, a Kubernetes secret is created. For the `Workload Identity` auth type, a Kubernetes service account is also created in addition to a Kubernetes secret.
+
+You can find the kubernetes resources created by Service Connector for each service connection on the Azure portal in your kubernetes resource, in the Service Connector menu.
++
+Deleting a service connection doesn't delete the associated Kubernetes resources. If necessary, remove them manually, for example by using the `kubectl delete` command.
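
As a hedged sketch, removing a leftover secret created in the `default` namespace might look like the following; the secret name is a placeholder:

```Bash
# Delete a secret left behind by a removed service connection.
kubectl delete secret <SecretCreatedByServiceConnector> -n default
```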
+
+### Enable the `azureKeyvaultSecretsProvider` addon
+
+If the target service is Azure Key Vault and the Secrets Store CSI Driver is enabled when creating a service connection, Service Connector enables the `azureKeyvaultSecretsProvider` add-on for the cluster.
++
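
To verify that the add-on is enabled, one option is to query the cluster's add-on profiles with the Azure CLI, as in this sketch (resource names are placeholders):

```azurecli
az aks show \
    --resource-group MyClusterResourceGroup \
    --name MyCluster \
    --query "addonProfiles.azureKeyvaultSecretsProvider.enabled"
```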
+Follow the [Connect to Azure Key Vault using CSI driver tutorial](./tutorial-python-aks-keyvault-csi-driver.md) to set up a connection to Azure Key Vault using the Secrets Store CSI driver.
+
+### Enable workload identity and OpenID Connect (OIDC) issuer
+
+If the authentication type is `Workload Identity` when creating a service connection, Service Connector enables workload identity and OIDC issuer for the cluster.
++
+When the authentication type is `Workload Identity`, a user-assigned managed identity is needed to create the federated identity credential. Learn more from [what are workload identities](/entra/workload-id/workload-identities-overview). Follow the [Connect to Azure Storage using workload identity tutorial](./tutorial-python-aks-storage-workload-identity.md) to set up a connection to Azure Storage using workload identity.
+
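
To verify these cluster settings, a sketch like the following queries the OIDC issuer URL and the workload identity flag with the Azure CLI. Resource names are placeholders, and the property paths assume the current managed cluster schema.

```azurecli
az aks show \
    --resource-group MyClusterResourceGroup \
    --name MyCluster \
    --query "{oidcIssuer:oidcIssuerProfile.issuerUrl, workloadIdentity:securityProfile.workloadIdentity.enabled}"
```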
+## How to use the Service Connector created kubernetes resources
+
+Different Kubernetes resources are created depending on the target service type and authentication type. The following sections show how to use the Kubernetes resources created by Service Connector in your cluster workload definitions and application code.
+
+#### Kubernetes secret
+
+A Kubernetes secret is created when the authentication type is `Connection String` or `Service Principal`. Your cluster workload definition can reference the secret directly. The following snippet is an example.
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ namespace: default
+ name: sc-sample-job
+spec:
+ template:
+ spec:
+ containers:
+ - name: raw-linux
+ image: alpine
+ command: ['printenv']
+ envFrom:
+ - secretRef:
+ name: <SecretCreatedByServiceConnector>
+ restartPolicy: OnFailure
+
+```
+
+Then, your application code can consume the connection string in the secret through environment variables. You can check the [sample code](./how-to-integrate-storage-blob.md) to learn more about the environment variable names and how to use them in your application code to authenticate to different target services.
+
+#### Kubernetes service account
+
+Both a Kubernetes service account and a secret are created when the authentication type is `Workload Identity`. Your cluster workload definition can reference the service account and secret to authenticate through workload identity. The following snippet provides an example.
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ namespace: default
+ name: sc-sample-job
+ labels:
+ azure.workload.identity/use: "true"
+spec:
+ template:
+ spec:
+ serviceAccountName: <ServiceAccountCreatedByServiceConnector>
+ containers:
+ - name: raw-linux
+ image: alpine
+ command: ['printenv']
+ envFrom:
+ - secretRef:
+ name: <SecretCreatedByServiceConnector>
+ restartPolicy: OnFailure
+```
+
+You may check the tutorial to learn [how to connect to Azure Storage using workload identity](tutorial-python-aks-storage-workload-identity.md).
+
+## How to troubleshoot and view logs
+
+If an error happens when creating a service connection and can't be mitigated by retrying, the following methods can help you gather more information for troubleshooting.
+
+### Check Service Connector kubernetes extension
+
+The Service Connector Kubernetes extension is built on top of [Azure Arc-enabled Kubernetes cluster extensions](../azure-arc/kubernetes/extensions.md). Use the following commands to investigate whether there were any errors during the extension installation or update.
+
+1. Install the `k8s-extension` Azure CLI extension.
+
+    ```azurecli
+    az extension add --name k8s-extension
+    ```
+
+1. Get the Service Connector extension status. Check the `statuses` property in the command output to see if there are any errors.
+
+    ```azurecli
+    az k8s-extension show \
+        --resource-group MyClusterResourceGroup \
+        --cluster-name MyCluster \
+        --cluster-type managedClusters \
+        --name sc-extension
+    ```
+
+### Check kubernetes cluster logs
+
+If there's an error during the extension installation, and the error message in the `statuses` property doesn't provide enough information about what happened, you can further check the Kubernetes logs with the following steps.
+
+1. Connect to your AKS cluster.
+
+ ```azurecli
+ az aks get-credentials \
+ --resource-group MyClusterResourceGroup \
+ --name MyCluster
+ ```
+1. The Service Connector extension is installed in the namespace `sc-system` through a Helm chart. Check the namespace and the Helm release with the following commands.
+
+ - Check the namespace exists.
+
+ ```Bash
+ kubectl get ns
+ ```
+
+ - Check the helm release status.
+
+ ```Bash
+ helm list -n sc-system
+ ```
+1. During the extension installation or update, a Kubernetes job called `sc-job` creates the Kubernetes resources for the service connection. A failure of this job usually causes the extension operation to fail. Check the job status by running the following commands. If `sc-job` doesn't exist in the `sc-system` namespace, it has already run successfully; the job is automatically deleted after successful execution.
+
+ - Check the job exists.
+
+ ```Bash
+ kubectl get job -n sc-system
+ ```
+
+ - Get the job status.
+
+ ```Bash
+ kubectl describe job/sc-job -n sc-system
+ ```
+
+ - View the job logs.
+
+ ```Bash
+ kubectl logs job/sc-job -n sc-system
+ ```
+
+## Next steps
+
+Learn how to integrate different target services and read about their configuration settings and authentication methods.
+
+> [!div class="nextstepaction"]
+> [Learn about how to integrate storage blob](./how-to-integrate-storage-blob.md)
service-connector Quickstart Cli Aks Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-aks-connection.md
+
+ Title: Quickstart - Create a service connection in Azure Kubernetes Service (AKS) with the Azure CLI
+description: Quickstart showing how to create a service connection in Azure Kubernetes Service (AKS) with the Azure CLI
++++ Last updated : 03/01/2024
+ms.devlang: azurecli
++
+# Quickstart: Create a service connection in AKS cluster with the Azure CLI
+
+This quickstart shows you how to connect Azure Kubernetes Service (AKS) to other Cloud resources using Azure CLI and Service Connector. Service Connector lets you quickly connect compute services to cloud services, while managing your connection's authentication and networking settings.
+++
+* This quickstart requires version 2.30.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+* This quickstart assumes that you already have an AKS cluster. If you don't have one yet, [create an AKS cluster](../aks/learn/quick-kubernetes-deploy-cli.md).
+* This quickstart assumes that you already have an Azure Storage account. If you don't have one yet, [create an Azure Storage account](../storage/common/storage-account-create.md).
+
+## Initial set-up
+
+1. If you're using Service Connector for the first time, start by running the command [az provider register](/cli/azure/provider#az-provider-register) to register the Service Connector resource provider.
+
+ ```azurecli
+ az provider register -n Microsoft.ServiceLinker
+ ```
+
+ > [!TIP]
+ > You can check if the resource provider has already been registered by running the command `az provider show -n "Microsoft.ServiceLinker" --query registrationState`. If the output is `Registered`, then Service Connector has already been registered.
+
+1. Optionally, use the Azure CLI command to get a list of supported target services for AKS cluster.
+
+ ```azurecli
+ az aks connection list-support-types --output table
+ ```
+
+## Create a service connection
+
+### [Using an access key](#tab/Using-access-key)
+
+Run the following Azure CLI command to create a service connection to an Azure Blob Storage with an access key, providing the following information.
+
+```azurecli
+az aks connection create storage-blob --secret
+```
+
+Provide the following information as prompted, or pass the values directly as parameters as shown in the hedged example after this list:
+
+* **Source compute service resource group name:** the resource group name of the AKS cluster.
+* **AKS cluster name:** the name of your AKS cluster that connects to the target service.
+* **Target service resource group name:** the resource group name of the Blob Storage.
+* **Storage account name:** the account name of your Blob Storage.
+
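A non-interactive variant might look like the following sketch. The parameter names here are assumptions based on other Service Connector commands; confirm them with `az aks connection create storage-blob --help` before using them.

```azurecli
az aks connection create storage-blob \
    --resource-group "<your-aks-cluster-resource-group>" \
    --name "<your-aks-cluster-name>" \
    --target-resource-group "<your-storage-account-resource-group>" \
    --account "<your-storage-account-name>" \
    --secret
```
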
+> [!NOTE]
+> If you don't have a Blob Storage, you can run `az aks connection create storage-blob --new --secret` to provision a new one and directly get connected to your AKS cluster.
+
+### [Using a workload identity](#tab/Using-Managed-Identity)
+
+> [!IMPORTANT]
+> Using a managed identity requires that you have permission to perform [Azure AD role assignments](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Without this permission, your connection creation fails. You can ask your subscription owner to grant you the permission, or use an access key to create the connection.
+
+Use the Azure CLI command to create a service connection to a Blob Storage with a workload identity, providing the following information:
+
+* **Source compute service resource group name:** the resource group name of the AKS cluster.
+* **AKS cluster name:** the name of your AKS cluster that connects to the target service.
+* **Target service resource group name:** the resource group name of the Blob Storage.
+* **Storage account name:** the account name of your Blob Storage.
+* **User-assigned identity subscription ID:** the subscription ID of the user assigned identity that used to create workload identity
+* **User-assigned identity client ID:** the client ID of the user assigned identity used to create workload identity
+
+```azurecli
+az aks connection create storage-blob \
+ --workload-identity client-id="<your-user-assigned-identity-client-id>" subs-id="<your-user-assigned-identity-subscription-id>"
+```
+
+> [!NOTE]
+> If you don't have a Blob Storage, you can run `az aks connection create storage-blob --new --workload-identity client-id="<your-user-assigned-identity-client-id>" subs-id="<your-user-assigned-identity-subscription-id>"` to provision a new one and get connected to your AKS cluster straightaway.
+++
+## View connections
+
+Use the Azure CLI [az aks connection list](/cli/azure/aks/connection#az-aks-connection-list) command to list connections to your AKS cluster, providing the following information:
+
+* **Source compute service resource group name:** the resource group name of the AKS cluster.
+* **AKS cluster name:** the name of your AKS cluster that connects to the target service.
+
+```azurecli
+az aks connection list \
+ -g "<your-aks-cluster-resource-group>" \
+ -n "<your-aks-cluster-name>" \
+ --output table
+```
+
+## Next steps
+
+Go to the following tutorials to start connecting an AKS cluster to Azure services with Service Connector.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to Azure Key Vault using CSI driver](./tutorial-python-aks-keyvault-csi-driver.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to Azure Storage using workload identity](./tutorial-python-aks-storage-workload-identity.md)
service-connector Quickstart Portal Aks Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-aks-connection.md
+
+ Title: Quickstart - Create a service connection in Azure Kubernetes Service (AKS) from the Azure portal
+description: Quickstart showing how to create a service connection in Azure Kubernetes Service (AKS) from the Azure portal
++++ Last updated : 03/01/2024+
+# Quickstart: Create a service connection in an AKS cluster from the Azure portal
+
+Get started with Service Connector by using the Azure portal to create a new service connection in an Azure Kubernetes Service (AKS) cluster.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An AKS cluster in a [region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create an AKS cluster](../aks/learn/quick-kubernetes-deploy-cli.md).
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+## Create a new service connection in AKS cluster
+
+1. To create a new service connection in AKS cluster, select the **Search resources, services and docs (G +/)** search bar at the top of the Azure portal, type *AKS*, and select **Kubernetes services**.
+ :::image type="content" source="./media/aks-quickstart/select-aks-cluster.png" alt-text="Screenshot of the Azure portal, selecting AKS cluster.":::
+
+1. Select the AKS cluster you want to connect to a target resource.
+1. Select **Service Connector** from the left table of contents. Then select **Create**.
+ :::image type="content" source="./media/aks-quickstart/select-service-connector.png" alt-text="Screenshot of the Azure portal, selecting Service Connector and creating new connection.":::
+
+1. Select or enter the following settings.
+
+ | Setting | Example | Description |
+ ||-|-|
+ | **Kubernetes namespace**| *default* | The namespace where you need the connection in the cluster. |
+ | **Service type** | Storage - Blob | The target service type. If you don't have a Microsoft Blob Storage, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
+ | **Connection name** | *my_connection* | The connection name that identifies the connection between your AKS cluster and target service. Use the connection name provided by Service Connector or choose your own connection name. |
+ | **Subscription** | My subscription | The subscription for your target service (the service you want to connect to). The default value is the subscription for this AKS cluster. |
+ | **Storage account** | *my_storage_account* | The target storage account you want to connect to. Target service instances to choose from vary according to the selected service type. |
+ | **Client type** | *python* | The code language or framework you use to connect to the target service. |
+
+1. Select **Next: Authentication** to choose an authentication method.
+
+ ### [Workload identity](#tab/UMI)
+
+ Select **Workload identity** to authenticate through [Microsoft Entra workload identity](/entra/workload-id/workload-identities-overview) to one or more instances of an Azure service. Then select a user-assigned managed identity to enable workload identity.
+
+ ### [Connection string](#tab/CS)
+
+ Select **Connection string** to generate or configure one or multiple key-value pairs with pure secrets or tokens.
+
+ ### [Service principal](#tab/SP)
+
+ Select **Service principal** to use a service principal that defines the access policy and permissions for the user/application.
+
+
+
+1. Select **Next: Networking** to configure the network access to your target service and select **Configure firewall rules to enable access to your target service**.
+1. Select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. This operation may take a minute to complete.
+
+## View service connections in AKS cluster
+
+1. The **Service Connector** tab displays existing connections in this cluster.
+1. Select **Network View** to see all the service connections in a network topology view.
+ :::image type="content" source="./media/aks-quickstart/list-and-view.png" alt-text="Screenshot of the Azure portal, listing and viewing the connections.":::
+
+## Next steps
+
+See the following tutorials to start connecting to Azure services on an AKS cluster with Service Connector.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to Azure Key Vault using CSI driver](./tutorial-python-aks-keyvault-csi-driver.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to Azure Storage using workload identity](./tutorial-python-aks-storage-workload-identity.md)
service-connector Tutorial Python Aks Keyvault Csi Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-keyvault-csi-driver.md
+
+ Title: 'Tutorial: Use the Azure Key Vault provider for Secrets Store CSI Driver in an AKS cluster with Service Connector'
+description: Learn how to connect to Azure Key Vault using CSI driver in an AKS cluster with the help of Service Connector.
+++++ Last updated : 03/01/2024++
+# Tutorial: Use the Azure Key Vault provider for Secrets Store CSI Driver in an Azure Kubernetes Service (AKS) cluster
+
+Learn how to connect to Azure Key Vault using CSI driver in an Azure Kubernetes Service (AKS) cluster with the help of Service Connector. In this tutorial, you complete the following tasks:
+
+> [!div class="checklist"]
+>
+> * Create an AKS cluster and an Azure Key Vault.
+> * Create a connection between the AKS cluster and the Azure Key Vault with Service Connector.
+> * Create a `SecretProviderClass` CRD and a `pod` consuming the CSI provider to test the connection.
+> * Clean up resources.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+* [Install](/cli/azure/install-azure-cli) the Azure CLI, and sign in to Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+* Install [Docker](https://docs.docker.com/get-docker/) and [kubectl](https://kubernetes.io/docs/tasks/tools/) to manage container images and Kubernetes resources.
+* A basic understanding of containers and AKS. Get started from [preparing an application for AKS](../aks/tutorial-kubernetes-prepare-app.md).
+
+## Create Azure resources
+
+1. Create a resource group for this tutorial.
+
+ ```azurecli
+ az group create \
+ --name MyResourceGroup \
+ --location eastus
+ ```
+
+1. Create an AKS cluster with the following command, or refer to the [tutorial](../aks/learn/quick-kubernetes-deploy-cli.md). This is the cluster where we create the service connection and pod definition, and deploy the sample application.
+
+ ```azurecli
+ az aks create \
+ --resource-group MyResourceGroup \
+ --name MyAKSCluster \
+ --enable-managed-identity \
+ --node-count 1
+ ```
+
+1. Connect to the cluster with the following command.
+
+ ```azurecli
+ az aks get-credentials \
+ --resource-group MyResourceGroup \
+ --name MyAKSCluster
+ ```
+
+1. Create an Azure Key Vault with the following command, or refer to the [tutorial](../key-vault/general/quick-create-cli.md). This is the target service that is connected to the AKS cluster and that the CSI driver synchronizes secrets from.
+
+ ```azurecli
+ az keyvault create \
+ --resource-group MyResourceGroup \
+ --name MyKeyVault \
+ --location EastUS
+ ```
+
+1. Create a secret in the Key Vault with the following command.
+
+ ```azurecli
+ az keyvault secret set \
+ --vault-name MyKeyVault \
+ --name ExampleSecret \
+ --value MyAKSExampleSecret
+ ```
+
+## Create a service connection with Service Connector
+
+Create a service connection between an AKS cluster and an Azure Key Vault using the Azure portal or the Azure CLI.
+
+### [Portal](#tab/azure-portal)
+
+1. Open your **Kubernetes service** in the Azure portal and select **Service Connector** from the left menu.
+
+1. Select **Create** and fill in the settings as shown below. Leave the other settings with their default values.
+
+ | Setting | Choice | Description |
+ ||--|-|
+ | **Kubernetes namespace**| *default* | The namespace where you need the connection in the cluster. |
+ | **Service type** | *Key Vault (enable CSI)* | Choose Key Vault as the target service type and check the option to enable CSI. |
+ | **Connection name** | *keyvault_conn* | Use the connection name provided by Service Connector or choose your own connection name. |
+ | **Subscription** | `<MySubscription>` | The subscription for your Azure Key Vault target service. |
+ | **Key vault** | `<MyKeyVault>` | The target key vault you want to connect to. |
+ | **Client type** | *Python* | The code language or framework you use to connect to the target service. |
+
+1. Once the connection has been created, the Service Connector page displays information about the new connection.
+
+ :::image type="content" source="./media/aks-tutorial/kubernetes-resources.png" alt-text="Screenshot of the Azure portal, viewing kubernetes resources created by Service Connector.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the following Azure CLI command to create a service connection to an Azure Key Vault.
+
+```azurecli
+az aks connection create keyvault --enable-csi
+```
+
+Provide the following information as prompted:
+
+* **Source compute service resource group name:** the resource group name of the AKS cluster.
+* **AKS cluster name:** the name of your AKS cluster that connects to the target service.
+* **Target service resource group name:** the resource group name of the Azure Key Vault.
+* **Key vault name:** the Azure Key Vault that is connected.
+++
+## Test the connection
+
+1. Clone the sample repository:
+
+ ```Bash
+ git clone https://github.com/Azure-Samples/serviceconnector-aks-samples.git
+ ```
+
+1. Go to the repository's sample folder for Azure Key Vault:
+
+ ```Bash
+ cd serviceconnector-aks-samples/azure-keyvault-csi-provider
+ ```
+
+1. Replace the placeholders in the `secret_provider_class.yaml` file in the `azure-keyvault-csi-provider` folder. The CLI sketch after this list shows one way to look up these values.
+
+    * Replace `<AZURE_KEYVAULT_NAME>` with the name of the key vault we created and connected. You can get the value from the Service Connector page in the Azure portal.
+    * Replace `<AZURE_KEYVAULT_TENANTID>` with the tenant ID of the key vault. You can get the value from the Service Connector page in the Azure portal.
+    * Replace `<AZURE_KEYVAULT_CLIENTID>` with the identity client ID of the `azureKeyvaultSecretsProvider` add-on. You can get the value from the Service Connector page in the Azure portal.
+    * Replace `<KEYVAULT_SECRET_NAME>` with the name of the key vault secret we created, for example, `ExampleSecret`.
+
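    As a hedged sketch, the tenant ID and the add-on's identity client ID can also be looked up with the Azure CLI. The resource names match the ones created earlier in this tutorial, and the query paths assume the current resource schema.

    ```azurecli
    az keyvault show \
        --name MyKeyVault \
        --query "properties.tenantId" \
        --output tsv

    az aks show \
        --resource-group MyResourceGroup \
        --name MyAKSCluster \
        --query "addonProfiles.azureKeyvaultSecretsProvider.identity.clientId" \
        --output tsv
    ```
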
+1. Deploy the Kubernetes resources to your cluster with the `kubectl apply` command. Install `kubectl` locally using the [az aks install-cli](/cli/azure/aks#az_aks_install_cli) command if it isn't installed.
+
+ 1. Deploy the `SecretProviderClass` CRD.
+
+ ```Bash
+ kubectl apply -f secret_provider_class.yaml
+ ```
+
+ 1. Deploy the `pod`. The command creates a pod named `sc-demo-keyvault-csi` in the default namespace of your AKS cluster.
+
+ ```Bash
+ kubectl apply -f pod.yaml
+ ```
+
+1. Check the deployment is successful by viewing the pod with `kubectl`.
+
+ ```Bash
+ kubectl get pod/sc-demo-keyvault-csi
+ ```
+
+1. After the pod starts, the mounted content at the volume path specified in your deployment YAML is available. Use the following commands to validate your secrets and print a test secret.
+
+ * Show secrets held in the secrets store using the following command.
+
+ ```Bash
+ kubectl exec sc-demo-keyvault-csi -- ls /mnt/secrets-store/
+ ```
+
+ * Display a secret in the store using the following command. This example command shows the test secret `ExampleSecret`.
+
+ ```Bash
+ kubectl exec sc-demo-keyvault-csi -- cat /mnt/secrets-store/ExampleSecret
+ ```
+
+## Clean up resources
+
+If you don't need to reuse the resources you've created in this tutorial, delete all the resources you created by deleting your resource group.
+
+```azurecli
+az group delete \
+ --resource-group MyResourceGroup
+```
+
+## Next steps
+
+Read the following articles to learn more about Service Connector concepts and how it helps AKS connect to services.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
+
+> [!div class="nextstepaction"]
+> [Use Service Connector to connect an AKS cluster to other cloud services](./how-to-use-service-connector-in-aks.md)
service-connector Tutorial Python Aks Storage Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-storage-workload-identity.md
+
+ Title: 'Tutorial: Connect to Azure storage account in Azure Kubernetes Service (AKS) with Service Connector using workload identity'
+description: Learn how to connect to an Azure storage account using workload identity with the help of Service Connector
+++++ Last updated : 03/01/2024++
+# Tutorial: Connect to Azure storage account in Azure Kubernetes Service (AKS) with Service Connector using workload identity
+
+Learn how to create a pod in an AKS cluster that talks to an Azure storage account using workload identity with the help of Service Connector. In this tutorial, you complete the following tasks:
+
+> [!div class="checklist"]
+>
+> * Create an AKS cluster and an Azure storage account.
+> * Create a connection between the AKS cluster and the Azure storage account with Service Connector.
+> * Clone a sample application that will talk to the Azure storage account from an AKS cluster.
+> * Deploy the application to a pod in the AKS cluster and test the connection.
+> * Clean up resources.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+* [Install](/cli/azure/install-azure-cli) the Azure CLI, and sign in to Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+* Install [Docker](https://docs.docker.com/get-docker/) and [kubectl](https://kubernetes.io/docs/tasks/tools/) to manage container images and Kubernetes resources.
+* A basic understanding of containers and AKS. To get started, see [preparing an application for AKS](../aks/tutorial-kubernetes-prepare-app.md).
+* A basic understanding of [workload identity](/entra/workload-id/workload-identities-overview).
+
+## Create Azure resources
+
+1. Create a resource group for this tutorial.
+
+```azurecli
+az group create \
+ --name MyResourceGroup \
+ --location eastus
+```
+
+1. Create an AKS cluster with the following command, or refer to [this tutorial](../aks/learn/quick-kubernetes-deploy-cli.md). You'll create the service connection and pod definition, and deploy the sample application to this cluster.
+
+ ```azurecli
+ az aks create \
+ --resource-group MyResourceGroup \
+ --name MyAKSCluster \
+ --enable-managed-identity \
+ --node-count 1
+ ```
+
+1. Connect to the cluster with the following command.
+
+ ```azurecli
+ az aks get-credentials \
+ --resource-group MyResourceGroup \
+ --name MyAKSCluster
+ ```
+
+1. Create an Azure storage account with the following command, or refer to [this tutorial](../storage/common/storage-account-create.md). This is the target service that the AKS cluster connects to and that the sample application interacts with.
+
+```azurecli
+az storage account create \
+ --resource-group MyResourceGroup \
+ --name MyStorageAccount \
+ --location eastus \
+ --sku Standard_LRS
+```
+
+1. Create an Azure container registry with the following command, or refer to [this tutorial](../container-registry/container-registry-get-started-portal.md). The registry hosts the container image of the sample application, which the AKS pod definition consumes.
+
+```azurecli
+az acr create \
+ --resource-group MyResourceGroup \
+ --name MyRegistry \
+ --sku Standard
+```
+
+Then enable anonymous pull so that the AKS cluster can consume the images in the registry.
+
+```azurecli
+az acr update \
+ --resource-group MyResourceGroup \
+ --name MyRegistry \
+ --anonymous-pull-enabled
+```
+
+1. Create a user-assigned managed identity with the following command, or refer to [this tutorial](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities). The user-assigned managed identity is used during service connection creation to enable workload identity for AKS workloads.
+
+```azurecli
+az identity create \
+ --resource-group MyResourceGroup \
+ --name MyIdentity
+```
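+
+If you plan to create the connection with the Azure CLI later in this tutorial, the connection command needs the identity's resource ID, and the prompts ask for its subscription and client IDs. One way to look these up is shown below; `MyResourceGroup` and `MyIdentity` are the illustrative names used above.
+
+```azurecli
+# Resource ID of the user-assigned managed identity
+az identity show --resource-group MyResourceGroup --name MyIdentity --query id --output tsv
+
+# Client ID of the user-assigned managed identity
+az identity show --resource-group MyResourceGroup --name MyIdentity --query clientId --output tsv
+```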
+
+## Create service connection with Service Connector
+
+Create a service connection between an AKS cluster and an Azure storage account using the Azure portal or the Azure CLI.
+
+### [Portal](#tab/azure-portal)
+
+1. Open your **Kubernetes service** in the Azure portal and select **Service Connector** from the left menu.
+
+1. Select **Create** and fill in the settings as shown below. Leave the other settings with their default values.
+
+ Basics tab:
+
+ | Setting | Choice | Description |
+ ||-|-|
+ | **Kubernetes namespace**| *default* | The namespace where you need the connection in the cluster. |
+ | **Service type** | *Storage - Blob* | The target service type. |
+ | **Connection name** | *storage_conn* | Use the connection name provided by Service Connector or choose your own connection name. |
+ | **Subscription** | `<MySubscription>` | The subscription for your Azure Blob Storage target service. |
+ | **Storage account** | `<MyStorageAccount>` | The target storage account you want to connect to. |
+ | **Client type** | *Python* | The code language or framework you use to connect to the target service. |
+
+ Authentication tab:
+
+ | Authentication Setting | Choice | Description |
+ |||-|
+ | **Authentication type** | *Workload Identity* | Service Connector authentication type. |
+ | **User assigned managed identity** | `<MyIdentity>` | A user assigned managed identity is needed to enable workload identity. |
+
+1. Once the connection has been created, the Service Connector page displays information about the new connection.
++
+### [Azure CLI](#tab/azure-cli)
+
+Run the following Azure CLI command to create a service connection to the Azure storage account, replacing `<user-identity-resource-id>` with the resource ID of the user-assigned managed identity you created earlier.
+
+```azurecli
+az aks connection create storage-blob \
+ --workload-identity <user-identity-resource-id>
+```
+
+Provide the following information as prompted:
+
+* **Source compute service resource group name:** the resource group name of the AKS cluster.
+* **AKS cluster name:** the name of your AKS cluster that connects to the target service.
+* **Target service resource group name:** the resource group name of the Azure storage account.
+* **Storage account name:** the Azure storage account that is connected.
+* **User-assigned identity subscription ID:** the subscription ID of the user-assigned identity used to create workload identity.
+* **User-assigned identity client ID:** the client ID of the user-assigned identity used to create workload identity.
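+
+To skip the prompts, you can pass the values on the command line instead. The following is a sketch rather than a definitive form of the command: it reuses the illustrative names from earlier in this tutorial (`MyResourceGroup`, `MyAKSCluster`, `MyStorageAccount`), and flag names can vary by Azure CLI version, so confirm them with `az aks connection create storage-blob --help` before running the command.
+
+```azurecli
+az aks connection create storage-blob \
+    --resource-group MyResourceGroup \
+    --name MyAKSCluster \
+    --target-resource-group MyResourceGroup \
+    --account MyStorageAccount \
+    --workload-identity <user-identity-resource-id>
+```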
+++
+## Clone sample application
+
+1. Clone the sample repository:
+
+ ```Bash
+ git clone https://github.com/Azure-Samples/serviceconnector-aks-samples.git
+ ```
+
+1. Go to the repository's sample folder for Azure storage:
+
+ ```Bash
+ cd serviceconnector-aks-samples/azure-storage-workload-identity
+ ```
+
+## Build and push container image
+
+1. Build and push the image to your container registry using the Azure CLI [`az acr build`](/cli/azure/acr#az_acr_build) command.
+
+```azurecli
+az acr build --registry <MyRegistry> --image sc-demo-storage-identity:latest ./
+```
+
+1. View the images in your container registry using the [`az acr repository list`](/cli/azure/acr/repository#az_acr_repository_list) command.
+
+```azurecli
+az acr repository list --name <MyRegistry> --output table
+```
+
+## Run application and test connection
+
+1. Replace the placeholders in the `pod.yaml` file in the `azure-storage-workload-identity` folder.
+
+    * Replace `<YourContainerImage>` with the name of the image you built in the last step, for example, `<MyRegistry>.azurecr.io/sc-demo-storage-identity:latest`.
+    * Replace `<ServiceAccountCreatedByServiceConnector>` with the service account created by Service Connector after the connection creation. You can find the service account name on the Service Connector page in the Azure portal.
+    * Replace `<SecretCreatedByServiceConnector>` with the secret created by Service Connector after the connection creation. You can find the secret name on the Service Connector page in the Azure portal.
+
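+    If you'd rather look these names up from the cluster than from the portal, listing the resources in the connection's namespace is another option. This is an optional sketch that isn't part of the original steps; the exact names Service Connector generates vary per connection.
+
+    ```Bash
+    # List service accounts and secrets in the namespace used by the connection (default here).
+    kubectl get serviceaccounts --namespace default
+    kubectl get secrets --namespace default
+    ```
+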
+1. Deploy the pod to your cluster with the `kubectl apply` command. Install `kubectl` locally using the [az aks install-cli](/cli/azure/aks#az_aks_install_cli) command if it isn't installed. The command creates a pod named `sc-demo-storage-identity` in the default namespace of your AKS cluster.
+
+ ```Bash
+ kubectl apply -f pod.yaml
+ ```
+
+1. Check that the deployment was successful by viewing the pod with `kubectl`.
+
+ ```Bash
+    kubectl get pod/sc-demo-storage-identity
+ ```
+
+1. Check that the connection is established by viewing the pod logs with `kubectl`.
+
+ ```Bash
+ kubectl logs pod/sc-demo-storage-identity
+ ```
+
+## Clean up resources
+
+If you don't need to reuse the resources you've created in this tutorial, delete all the resources you created by deleting your resource group.
+
+```azurecli
+az group delete \
+ --resource-group MyResourceGroup
+```
+
+## Next steps
+
+Read the following articles to learn more about Service Connector concepts and how it helps AKS connect to services.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
+
+> [!div class="nextstepaction"]
+> [Use Service Connector to connect an AKS cluster to other cloud services](./how-to-use-service-connector-in-aks.md)
static-web-apps Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/getting-started.md
Previously updated : 08/10/2023 Last updated : 04/02/2024
-# Quickstart: Building your first static site with Azure Static Web Apps
+# Quickstart: Build your first static site with Azure Static Web Apps
Azure Static Web Apps publishes a website by building an app from a code repository. In this quickstart, you deploy an application to Azure Static Web Apps using the Visual Studio Code extension.
If you don't already have the [Azure Static Web Apps extension for Visual Studio
1. Select **View** > **Extensions**. 1. In the **Search Extensions in Marketplace**, type **Azure Static Web Apps**. 1. Select **Install** for **Azure Static Web Apps**.
-2. The extension installs into Visual Studio Code.
## Create a static web app
If you don't already have the [Azure Static Web Apps extension for Visual Studio
> [!NOTE] > You are required to sign in to Azure and GitHub in Visual Studio Code to continue. If you are not already authenticated, the extension prompts you to sign in to both services during the creation process.
-2. Select <kbd>F1</kbd> to open the Visual Studio Code command palette.
+1. Select <kbd>F1</kbd> to open the Visual Studio Code command palette.
-3. Enter **Create static web app** in the command box.
+1. Enter **Create static web app** in the command box.
-4. Select *Azure Static Web Apps: Create static web app...*.
+1. Select *Azure Static Web Apps: Create static web app...*.
- # [No Framework](#tab/vanilla-javascript)
+1. Select your Azure subscription.
- | Setting | Value |
- | | |
- | Name | Enter **my-first-static-web-app** |
- | Region | Select the region closest to you. |
- | Framework | Select **Custom**. |
+1. Enter **my-first-static-web-app** for the application name.
- # [Angular](#tab/angular)
-
- | Setting | Value |
- | | |
- | Name | Enter **my-first-static-web-app** |
- | Region | Select the region closest to you. |
- | Framework | Select **Angular**. |
+1. Select the region closest to you.
- # [Blazor](#tab/blazor)
-
- | Setting | Value |
- | | |
- | Name | Enter **my-first-static-web-app** |
- | Region | Select the region closest to you. |
- | Framework | Select **Blazor**. |
-
- # [React](#tab/react)
-
- | Setting | Value |
- | | |
- | Name | Enter **my-first-static-web-app** |
- | Region | Select the region closest to you. |
- | Framework | Select **React**. |
-
- # [Vue](#tab/vue)
-
- | Setting | Value |
- | | |
- | Name | Enter **my-first-static-web-app** |
- | Region | Select the region closest to you. |
- | Framework | Select **Vue.js**. |
-
-
-
-5. Enter the settings values that match your framework preset choice.
+1. Enter the settings values that match your framework choice.
# [No Framework](#tab/vanilla-javascript) | Setting | Value | | | |
- | Location of application code | Enter **/src** |
- | Build location | Enter **/src** |
+ | Framework | Select **Custom** |
+ | Location of application code | Enter `/src` |
+ | Build location | Enter `/src` |
# [Angular](#tab/angular) | Setting | Value | | | |
- | Location of application code | Enter **/** |
- | Build location | Enter **dist/angular-basic** |
+ | Framework | Select **Angular** |
+ | Location of application code | Enter `/` |
+ | Build location | Enter `dist/angular-basic` |
# [Blazor](#tab/blazor) | Setting | Value | | | |
- | Location of application code | Enter **Client** |
- | Build location | Enter **wwwroot** |
+ | Framework | Select **Blazor** |
+ | Location of application code | Enter `Client` |
+ | Build location | Enter `wwwroot` |
# [React](#tab/react) | Setting | Value | | | |
- | Location of application code | Enter **/** |
- | Build location | Enter **build** |
+ | Framework | Select **React** |
+ | Location of application code | Enter `/` |
+ | Build location | Enter `build` |
# [Vue](#tab/vue) | Setting | Value | | | |
- | Location of application code | Enter **/** |
- | Build location | Enter **dist** |
+ | Framework | Select **Vue.js** |
+ | Location of application code | Enter `/` |
+ | Build location | Enter `dist` |
-6. Once the app is created, a confirmation notification is shown in Visual Studio Code.
+1. Once the app is created, a confirmation notification is shown in Visual Studio Code.
:::image type="content" source="media/getting-started/extension-confirmation.png" alt-text="Created confirmation":::
+ If GitHub presents you with a button labeled **Enable Actions on this repository**, select the button to allow the build action to run on your repository.
+ As the deployment is in progress, the Visual Studio Code extension reports the build status to you. :::image type="content" source="media/getting-started/extension-waiting-for-deployment.png" alt-text="Waiting for deployment"::: Once the deployment is complete, you can navigate directly to your website.
-7. To view the website in the browser, right-click the project in the Static Web Apps extension, and select **Browse Site**.
+1. To view the website in the browser, right-click the project in the Static Web Apps extension, and select **Browse Site**.
:::image type="content" source="media/getting-started/extension-browse-site.png" alt-text="Browse site":::
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Here are the settings to add this condition using the Azure portal.
> | Operator | [ForAllOfAnyValues:StringEqualsIgnoreCase](../../role-based-access-control/conditions-format.md#forallofanyvalues) | > | Value | {'metadata', 'snapshots', 'versions'} | + # [Portal: Code editor](#tab/portal-code-editor) To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
Here are the settings to add this condition using the Azure portal.
> | Operator | [ForAllOfAllValues:StringNotEquals](../../role-based-access-control/conditions-format.md#forallofallvalues) | > | Value | {'metadata'} | + # [Portal: Code editor](#tab/portal-code-editor) To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
Previously updated : 01/26/2024 Last updated : 04/01/2024
The [Azure role assignment condition format](../../role-based-access-control/con
## Status of condition features in Azure Storage
-Currently, Azure attribute-based access control (Azure ABAC) is generally available (GA) for controlling access only to Azure Blob Storage, Azure Data Lake Storage Gen2, and Azure Queues using `request`, `resource`, and `principal` attributes in the standard storage account performance tier. It's either not available or in PREVIEW for other storage account performance tiers, resource types, and attributes.
+Azure attribute-based access control (Azure ABAC) is generally available (GA) for controlling access to Azure Blob Storage, Azure Data Lake Storage Gen2, and Azure Queues using `request`, `resource`, `environment`, and `principal` attributes in both the standard and premium storage account performance tiers. Currently, the container metadata resource attribute and the list blob include request attribute are in PREVIEW.
-See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The following table shows the current status of ABAC by storage account performance tier, storage resource type, and attribute type. Exceptions for specific attributes are also shown.
+The following table shows the current status of ABAC by storage resource type and attribute type. Exceptions for specific attributes are also shown.
-| Performance tier | Resource types | Attribute types | Attributes | Availability |
-||||||
-| Standard | Blobs<br/>Data Lake Storage Gen2<br/>Queues | request<br/>resource<br/>principal | All attributes except for the snapshot resource attribute for Data Lake Storage Gen2 | GA |
-| Standard | Data Lake Storage Gen2 | resource | snapshot | Preview |
-| Standard | Blobs<br/>Data Lake Storage Gen2<br/>Queues | environment | All attributes | Preview |
-| Premium | Blobs<br/>Data Lake Storage Gen2<br/>Queues | environment<br/>principal<br/>request<br/>resource | All attributes | Preview |
+| Resource types | Attribute types | Attributes | Availability |
+|||||
+| Blobs<br/>Data Lake Storage Gen2<br/>Queues | Request<br/>Resource<br/>Environment<br/>Principal | All attributes except those noted in this table | GA |
+| Data Lake Storage Gen2 | Resource | [Snapshot](storage-auth-abac-attributes.md#snapshot) | Preview |
+| Blobs<br/>Data Lake Storage Gen2 | Resource | [Container metadata](storage-auth-abac-attributes.md#container-metadata) | Preview |
+| Blobs | Request | [List blob include](storage-auth-abac-attributes.md#list-blob-include) | Preview |
+See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> [!NOTE] > Some storage features aren't supported for Data Lake Storage Gen2 storage accounts, which use a hierarchical namespace (HNS). To learn more, see [Blob storage feature support](storage-feature-support-in-storage-accounts.md).
The following table shows the current status of ABAC by storage account performa
> - [Blob index tags [Keys]](storage-auth-abac-attributes.md#blob-index-tags-keys) > - [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) > - [Version ID](storage-auth-abac-attributes.md#version-id)
+> - [List blob include](storage-auth-abac-attributes.md#list-blob-include)
## Next steps
storage Files Manage Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-manage-namespaces.md
description: Common DFS-N use cases with Azure Files
Previously updated : 3/02/2021 Last updated : 04/02/2024 # How to use DFS Namespaces with Azure Files
-[Distributed File Systems Namespaces](/windows-server/storage/dfs-namespaces/dfs-overview), commonly referred to as DFS Namespaces or DFS-N, is a Windows Server server role that is widely used to simplify the deployment and maintenance of SMB file shares in production. DFS Namespaces is a storage namespace virtualization technology, which means that it enables you to provide a layer of indirection between the UNC path of your file shares and the actual file shares themselves. DFS Namespaces works with SMB file shares, agnostic of where those file shares are hosted: it can be used with SMB shares hosted on an on-premises Windows File Server with or without Azure File Sync, Azure file shares directly, SMB file shares hosted in Azure NetApp Files or other third-party offerings, and even with file shares hosted in other clouds.
+
+[Distributed File Systems Namespaces](/windows-server/storage/dfs-namespaces/dfs-overview), commonly referred to as DFS Namespaces or DFS-N, is a Windows Server server role that's widely used to simplify the deployment and maintenance of SMB file shares in production. DFS Namespaces is a storage namespace virtualization technology, which means that it enables you to provide a layer of indirection between the UNC path of your file shares and the actual file shares themselves. DFS Namespaces works with SMB file shares, agnostic of where those file shares are hosted. It can be used with SMB shares hosted on an on-premises Windows File Server with or without Azure File Sync, Azure file shares directly, SMB file shares hosted in Azure NetApp Files or other third-party offerings, and even with file shares hosted in other clouds.
At its core, DFS Namespaces provides a mapping between a user-friendly UNC path, like `\\contoso\shares\ProjectX` and the underlying UNC path of the SMB share like `\\Server01-Prod\ProjectX` or `\\storageaccount.file.core.windows.net\projectx`. When the end user wants to navigate to their file share, they type in the user-friendly UNC path, but their SMB client accesses the underlying SMB path of the mapping. You can also extend this basic concept to take over an existing file server name, such as `\\MyServer\ProjectX`. You can use this capability to achieve the following scenarios:
At its core, DFS Namespaces provides a mapping between a user-friendly UNC path,
- Extend a logical set of data across size, IO, or other scale thresholds. This is common when dealing with user directories, where every user gets their own folder on a share, or with scratch shares, where users get arbitrary space to handle temporary data needs. With DFS Namespaces, you stitch together multiple folders into a cohesive namespace. For example, `\\contoso\shares\UserShares\user1` maps to `\\storageaccount.file.core.windows.net\user1`, `\\contoso\shares\UserShares\user2` maps to `\\storageaccount.file.core.windows.net\user2`, and so on.
-You can see an example of how to use DFS Namespaces with your Azure Files deployment in the following video overview.
+You can see an example of how to use DFS Namespaces with your Azure Files deployment in the following video overview. Note that Azure Active Directory is now Microsoft Entra ID. For more information, see [New name for Azure AD](https://aka.ms/azureadnewname).
[![Demo on how to set up DFS-N with Azure Files - click to play!](./media/files-manage-namespaces/video-snapshot-dfsn.png)](https://www.youtube.com/watch?v=jd49W33DxkQ) > [!NOTE] > Skip to 10:10 in the video to see how to set up DFS Namespaces.
-If you already have a DFS Namespace in place, no special steps are required to use it with Azure Files and File Sync. If you're accessing your Azure file share from on-premises, normal networking considerations apply; see [Azure Files networking considerations](./storage-files-networking-overview.md) for more information.
+If you already have a DFS Namespace in place, no special steps are required to use it with Azure Files and File Sync. If you're accessing your Azure file share from on-premises, normal networking considerations apply. See [Azure Files networking considerations](./storage-files-networking-overview.md) for more information.
## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
If you already have a DFS Namespace in place, no special steps are required to u
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ## Namespace types+ DFS Namespaces provides two main namespace types: -- **Domain-based namespace**: A namespace hosted as part of your Windows Server AD domain. Namespaces hosted as part of AD will have a UNC path containing the name of your domain, for example, `\\contoso.com\shares\myshare`, if your domain is `contoso.com`. Domain-based namespaces support larger scale limits and built-in redundancy through AD. Domain-based namespaces can't be a clustered resource on a failover cluster.
+- **Domain-based namespace**: A namespace hosted as part of your Windows Server AD domain. Namespaces hosted as part of AD will have a UNC path containing the name of your domain, for example, `\\contoso.com\shares\myshare`, if your domain is `contoso.com`. Domain-based namespaces support larger scale limits and built-in redundancy through AD. Domain-based namespaces can't be a clustered resource on a failover cluster.
- **Standalone namespace**: A namespace hosted on an individual server, not hosted as part of Windows Server AD. Standalone namespaces will have a name based on the name of the standalone server, such as `\\MyStandaloneServer\shares\myshare`, where your standalone server is named `MyStandaloneServer`. Standalone namespaces support lower scale targets than domain-based namespaces but can be hosted as a clustered resource on a failover cluster. ## Requirements+ To use DFS Namespaces with Azure Files and File Sync, you must have the following resources: - An Active Directory domain. This can be hosted anywhere you like, such as on-premises, in an Azure virtual machine (VM), or even in another cloud. - A Windows Server that can host the namespace. A common pattern deployment pattern for DFS Namespaces is to use the Active Directory domain controller to host the namespaces, however the namespaces can be setup from any server with the DFS Namespaces server role installed. DFS Namespaces is available on all supported Windows Server versions.-- An SMB file share hosted in a domain-joined environment, such as an Azure file share hosted within a domain-joined storage account, or a file share hosted on a domain-joined Windows File Server using Azure File Sync. For more on domain-joining your storage account, see [Identity-based authentication](storage-files-active-directory-overview.md). Windows File Servers are domain-joined the same way regardless of whether you are using Azure File Sync.-- The SMB file shares you want to use with DFS Namespaces must be reachable from your on-premises networks. This is primarily a concern for Azure file shares, however, technically applies to any file share hosted in Azure or any other cloud. For more information on networking, see [Networking considerations for direct access](storage-files-networking-overview.md).
+- An SMB file share hosted in a domain-joined environment, such as an Azure file share hosted within a domain-joined storage account, or a file share hosted on a domain-joined Windows File Server using Azure File Sync. For more on domain-joining your storage account, see [Identity-based authentication](storage-files-active-directory-overview.md). Windows File Servers are domain-joined the same way regardless of whether you're using Azure File Sync.
+- The SMB file shares you want to use with DFS Namespaces must be reachable from your on-premises networks. This is primarily a concern for Azure file shares, however, it technically applies to any file share hosted in the cloud. For more information, see [Networking considerations for direct access](storage-files-networking-overview.md).
## Install the DFS Namespaces server role
-If you are already using DFS Namespaces, or wish to set up DFS Namespaces on your domain controller, you may safely skip these steps.
+
+If you're already using DFS Namespaces, or wish to set up DFS Namespaces on your domain controller, you may safely skip these steps.
# [Portal](#tab/azure-portal)
-To install the DFS Namespaces server role, open the Server Manager on your server. Select **Manage**, and then select **Add Roles and Features**. The resulting wizard guides you through the installation of the necessary Windows components to run and manage DFS Namespaces.
+To install the DFS Namespaces server role, open the Server Manager on your server. Select **Manage**, and then select **Add Roles and Features**. The resulting wizard guides you through the installation of the necessary Windows components to run and manage DFS Namespaces.
-In the **Installation Type** section of the installation wizard, select the **Role-based or feature-based installation** radio button and select **Next**. On the **Server Selection** section, select the desired server(s) on which you would like to install the DFS Namespaces server role, and select **Next**.
+In the **Installation Type** section of the installation wizard, select the **Role-based or feature-based installation** radio button and select **Next**. On the **Server Selection** section, select the desired server(s) on which you want to install the DFS Namespaces server role, and select **Next**.
-In the **Server Roles** section, select and check the **DFS Namespaces** role from role list. You can find this under **File and Storage Services** > **File and ISCSI Services**. When you select the DFS Namespaces server role, it may also add any required supporting server roles or features that you don't already have installed.
+In the **Server Roles** section, select and check the **DFS Namespaces** role from role list. You can find this under **File and Storage Services** > **File and ISCSI Services**. When you select the DFS Namespaces server role, it might also add any required supporting server roles or features that you don't already have installed.
![A screenshot of the **Add Roles and Features** wizard with the **DFS Namespaces** role selected.](./media/files-manage-namespaces/dfs-namespaces-install.png)
Install-WindowsFeature -Name "FS-DFS-Namespace", "RSAT-DFS-Mgmt-Con"
## Take over existing server names with root consolidation
-An important use for DFS Namespaces is to take over an existing server name for the purposes of refactoring the physical layout of the file shares. For example, you may wish to consolidate file shares from multiple old file servers together on a single file server during a modernization migration. Traditionally, end user familiarity and document-linking limit your ability to consolidate file shares from disparate file servers together on one host, but the DFS Namespaces root consolidation feature allows you to stand-up a single server to multiple server names and route to the appropriate share name.
-Although useful for various datacenter migration scenarios, root consolidation is especially useful for adopting cloud-native Azure file shares because:
+An important use for DFS Namespaces is to take over an existing server name for the purposes of refactoring the physical layout of the file shares. For example, you may wish to consolidate file shares from multiple old file servers together on a single file server during a modernization migration. Traditionally, end-user familiarity and document-linking limit your ability to consolidate file shares from disparate file servers together on one host. However, the DFS Namespaces root consolidation feature allows you to stand up a single server to multiple server names and route to the appropriate share name.
+
+Although useful for various data center migration scenarios, root consolidation is especially useful for adopting cloud-native Azure file shares because:
- Azure file shares don't allow you to keep existing on-premises server names.-- Azure file shares must be accessed via the fully qualified domain name (FQDN) of the storage account. For example, an Azure file share called `share` in storage account `storageaccount` is always accessed through the `\\storageaccount.file.core.windows.net\share` UNC path. This can be confusing to end users who expect a short name (ex. `\\MyServer\share`) or a name that is a subdomain of the organization's domain name (ex. `\\MyServer.contoso.com\share`).
+- Azure file shares must be accessed via the fully qualified domain name (FQDN) of the storage account. For example, an Azure file share called `share` in storage account `storageaccount` is always accessed through the `\\storageaccount.file.core.windows.net\share` UNC path. This can be confusing to end users who expect a short name (ex. `\\MyServer\share`) or a name that is a subdomain of the organization's domain name (for example `\\MyServer.contoso.com\share`).
-Root consolidation may only be used with standalone namespaces. If you already have an existing domain-based namespace for your file shares, you do not need to create a root consolidated namespace.
+Root consolidation may only be used with standalone namespaces. If you already have an existing domain-based namespace for your file shares, you don't need to create a root consolidated namespace.
### Enabling root consolidation
-Root consolidation can be enabled by setting the following registry keys from an elevated PowerShell session (or using PowerShell remoting).
+
+Enable root consolidation by setting the following registry keys from an elevated PowerShell session (or using PowerShell remoting).
```PowerShell New-Item `
Set-ItemProperty `
``` ### Creating DNS entries for existing file server names
-In order for DFS Namespaces to respond to existing file server names, create alias (CNAME) records for your existing file servers that point at the DFS Namespaces server name. The exact procedure for updating your DNS records may depend on what servers your organization is using and if your organization is using custom tooling to automate the management of DNS. The following steps are shown for the DNS server included with Windows Server and automatically used by Windows AD.
+
+In order for DFS Namespaces to respond to existing file server names, create alias (CNAME) records for your existing file servers that point at the DFS Namespaces server name. The exact procedure for updating your DNS records may depend on what servers your organization is using and if your organization is using custom tooling to automate the management of DNS. The following steps are for the DNS server included with Windows Server and automatically used by Windows AD.
# [Portal](#tab/azure-portal)
-On a Windows DNS server, open the DNS management console. This can be found by selecting the **Start** button and typing **DNS**. Navigate to the forward lookup zone for your domain. For example, if your domain is `contoso.com`, the forward lookup zone can be found under **Forward Lookup Zones** > **`contoso.com`** in the management console. The exact hierarchy shown in this dialog will depend on the DNS configuration for your network.
+On a Windows DNS server, open the DNS management console. You can find this by selecting the **Start** button and typing **DNS**. Navigate to the forward lookup zone for your domain. For example, if your domain is `contoso.com`, the forward lookup zone can be found under **Forward Lookup Zones** > **`contoso.com`** in the management console. The exact hierarchy shown in this dialog depends on the DNS configuration for your network.
Right-click on your forward lookup zone and select **New Alias (CNAME)**. In the resulting dialog, enter the short name for the file server you're replacing (the fully qualified domain name will be auto-populated in the textbox labeled **Fully qualified domain name**). In the textbox labeled **Fully qualified domain name (FQDN) for the target host**, enter the name of the DFS-N server you have set up. You can use the **Browse** button to help you select the server if desired. Select **OK** to create the CNAME record for your server.
Add-DnsServerResourceRecordCName `
## Create a namespace
-The basic unit of management for DFS Namespaces is the namespace. The namespace root, or name, is the starting point of the namespace, such that in the UNC path `\\contoso.com\Public\`, the namespace root is `Public`.
-If you are using DFS Namespaces to take over an existing server name with root consolidation, the name of the namespace should be the name of server name you want to take over, prepended with the `#` character. For example, if you wanted to take over an existing server named `MyServer`, you would create a DFS-N namespace called `#MyServer`. The PowerShell section below takes care of prepending the `#`, but if you create via the DFS Management console, you will need to prepend as appropriate.
+The basic unit of management for DFS Namespaces is the namespace. The namespace root, or name, is the starting point of the namespace, such that in the UNC path `\\contoso.com\Public\`, the namespace root is `Public`.
+
+If you're using DFS Namespaces to take over an existing server name with root consolidation, the name of the namespace should be the name of the server you want to take over, prepended with the `#` character. For example, if you want to take over an existing server named `MyServer`, you would create a DFS-N namespace called `#MyServer`. The PowerShell section below takes care of prepending the `#`, but if you create via the DFS Management console, you'll need to prepend as appropriate.
# [Portal](#tab/azure-portal)
-To create a new namespace, open the **DFS Management** console. This can be found by selecting the **Start** button and typing **DFS Management**. The resulting management console has two sections **Namespaces** and **Replication**, which refer to DFS Namespaces and DFS Replication (DFS-R) respectively. Azure File Sync provides a modern replication and synchronization mechanism that may be used in place of DFS-R if replication is also desired.
+To create a new namespace, open the **DFS Management** console. You can find this by selecting the **Start** button and typing **DFS Management**. The resulting management console has two sections called **Namespaces** and **Replication**, which refer to DFS Namespaces and DFS Replication (DFS-R) respectively. Azure File Sync provides a modern replication and synchronization mechanism that may be used in place of DFS-R if replication is also desired.
-Select the **Namespaces** section, and select the **New Namespace** button (you may also right-click on the **Namespaces** section). The resulting **New Namespace Wizard** walks you through creating a namespace.
+Select the **Namespaces** section, and select the **New Namespace** button (you may also right-click on the **Namespaces** section). The resulting **New Namespace Wizard** walks you through creating a namespace.
-The first section in the wizard requires you to pick the DFS Namespace server to host the namespace. Multiple servers can host a namespace, but you will need to set up DFS Namespaces with one server at a time. Enter the name of the desired DFS Namespace server and select **Next**. In the **Namespace Name and Settings** section, you can enter the desired name of your namespace and select **Next**.
+The first section in the wizard requires you to pick the DFS Namespace server to host the namespace. Multiple servers can host a namespace, but you'll need to set up DFS Namespaces with one server at a time. Enter the name of the desired DFS Namespace server and select **Next**. In the **Namespace Name and Settings** section, enter the desired name of your namespace and select **Next**.
-The **Namespace Type** section allows you to choose between a **Domain-based namespace** and a **Stand-alone namespace**. If you intend to use DFS Namespaces to preserve an existing file server/NAS device name, you should select the standalone namespace option. For any other scenarios, you should select a domain-based namespace. Refer to [namespace types](#namespace-types) above for more information on choosing between namespace types.
+The **Namespace Type** section allows you to choose between a **Domain-based namespace** and a **Stand-alone namespace**. If you intend to use DFS Namespaces to preserve an existing file server/NAS device name, you should select the standalone namespace option. For any other scenario, select a domain-based namespace. Refer to [namespace types](#namespace-types) for more information on choosing between namespace types.
![A screenshot of selecting between a domain-based namespace and a standalone namespace in the **New Namespace Wizard**.](./media/files-manage-namespaces/dfs-namespace-type.png)
New-DfsnRoot -Path $namespacePath -TargetPath $targetPath -Type $type
## Configure folders and folder targets+ For a namespace to be useful, it must have folders and folder targets. Each folder can have one or more folder targets, which are pointers to the SMB file share(s) that host that content. When users browse a folder with folder targets, the client computer receives a referral that transparently redirects the client computer to one of the folder targets. You can also have folders without folder targets to add structure and hierarchy to the namespace.
-You can think of DFS Namespaces folders as analogous to file shares.
+You can think of DFS Namespaces folders as analogous to file shares.
# [Portal](#tab/azure-portal) In the DFS Management console, select the namespace you just created and select **New Folder**. The resulting **New Folder** dialog will allow you to create both the folder and its targets. ![A screenshot of the **New Folder** dialog.](./media/files-manage-namespaces/dfs-folder-targets.png)
-In the textbox labeled **Name** provide the name of the folder. Select **Add...** to add folder targets for this folder. The resulting **Add Folder Target** dialog provides a textbox labeled **Path to folder target** where you can provide the UNC path to the desired folder. Select **OK** on the **Add Folder Target** dialog. If you are adding a UNC path to an Azure file share, you may receive a message reporting that the server `storageaccount.file.core.windows.net` cannot be contacted. This is expected; select **Yes** to continue. Finally, select **OK** on the **New Folder** dialog to create the folder and folder targets.
+In the textbox labeled **Name** provide the name of the folder. Select **Add...** to add folder targets for this folder. The resulting **Add Folder Target** dialog provides a textbox labeled **Path to folder target** where you can provide the UNC path to the desired folder. Select **OK** on the **Add Folder Target** dialog. If you are adding a UNC path to an Azure file share, you might receive a message reporting that the server `storageaccount.file.core.windows.net` can't be contacted. This is expected; select **Yes** to continue. Finally, select **OK** on the **New Folder** dialog to create the folder and folder targets.
# [PowerShell](#tab/azure-powershell) ```PowerShell
New-DfsnFolder -Path $sharePath -TargetPath $targetUNC
-Now that you have created a namespace, a folder, and a folder target, you should be able to mount your file share through DFS Namespaces. If you are using a domain-based namespace, the full path for your share should be `\\<domain-name>\<namespace>\<share>`. If you are using a standalone namespace, the full path for your share should be `\\<DFS-server>\<namespace>\<share>`. If you are using a standalone namespace with root consolidation, you can access directly through your old server name, such as `\\<old-server>\<share>`.
+Now that you've created a namespace, a folder, and a folder target, you should be able to mount your file share through DFS Namespaces. If you're using a domain-based namespace, the full path for your share should be `\\<domain-name>\<namespace>\<share>`. If you are using a standalone namespace, the full path for your share should be `\\<DFS-server>\<namespace>\<share>`. If you're using a standalone namespace with root consolidation, you can access directly through your old server name, such as `\\<old-server>\<share>`.
## Access-Based Enumeration (ABE)
If each user is a subfolder *after* the redirection, ABE won't work:
`\\DFSServer\SomePath\users --> \\SA.file.core.windows.net\users` ## See also+ - Deploying an Azure file share: [Planning for an Azure Files deployment](storage-files-planning.md) and [How to create an file share](storage-how-to-create-file-share.md). - Configuring file share access: [Identity-based authentication](storage-files-active-directory-overview.md) and [Networking considerations for direct access](storage-files-networking-overview.md). - [Windows Server Distributed File System Namespaces](/windows-server/storage/dfs-namespaces/dfs-overview)
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
When you enable AD DS for Azure file shares over SMB, your AD DS-joined machines
## Videos
-To help you set up identity-based authentication for some common use cases, we published two videos with step-by-step guidance for the following scenarios:
+To help you set up identity-based authentication for some common use cases, we published two videos with step-by-step guidance for the following scenarios. Note that Azure Active Directory is now Microsoft Entra ID. For more information, see [New name for Azure AD](https://aka.ms/azureadnewname).
| Replacing on-premises file servers with Azure Files (including setup on private link for files and AD authentication) | Using Azure Files as the profile container for Azure Virtual Desktop (including setup on AD authentication and FSLogix configuration) | |-|-|
storage Storage Files Migration Nas Cloud Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md
The goal is to move the shares on your NAS appliance to Azure and have them beco
The migration process consists of several phases. You'll need to deploy Azure storage accounts and file shares and configure networking. Then you'll migrate your files using Azure DataBox, and RoboCopy to catch-up with changes. Finally, you'll cut-over your users and apps to the newly created Azure file shares. The following sections describe the phases of the migration process in detail. > [!TIP]
-> If you are returning to this article, use the navigation on the right side to jump to the migration phase where you left off.
+> If you're returning to this article, use the navigation on the right side to jump to the migration phase where you left off.
## Phase 1: Identify how many Azure file shares you need
To save time, you should proceed with this phase while you wait for your DataBox
:::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br>
- The video references dedicated documentation for some topics:
+ The video references dedicated documentation for the following topics. Note that Azure Active Directory is now Microsoft Entra ID. For more information, see [New name for Azure AD](https://aka.ms/azureadnewname).
* [Identity overview](storage-files-active-directory-overview.md) * [How to domain join a storage account](storage-files-identity-auth-active-directory-enable.md)
storage Storage Files Migration Robocopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-robocopy.md
With the information in this phase, you'll be able to decide how your servers an
:::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br>
- The video references dedicated documentation for some topics:
+ The video references dedicated documentation for the following topics. Note that Azure Active Directory is now Microsoft Entra ID. For more information, see [New name for Azure AD](https://aka.ms/azureadnewname).
* [Identity overview](storage-files-active-directory-overview.md) * [How to domain join a storage account](storage-files-identity-auth-active-directory-enable.md)
storage Storage Files Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md
description: An overview of networking options for Azure Files.
Previously updated : 05/23/2022 Last updated : 04/02/2024 # Azure Files networking considerations+ You can access your Azure file shares over the public internet accessible endpoint, over one or more private endpoints on your network(s), or by caching your Azure file share on-premises with Azure File Sync (SMB file shares only). This article focuses on how to configure Azure Files for direct access over public and/or private endpoints. To learn how to cache your Azure file share on-premises with Azure File Sync, see [Introduction to Azure File Sync](../file-sync/file-sync-introduction.md).
-We recommend reading [Planning for an Azure Files deployment](storage-files-planning.md) prior to reading this conceptual guide.
+We recommend reading [Planning for an Azure Files deployment](storage-files-planning.md) prior to reading this guide.
-Directly accessing the Azure file share often requires additional thought with respect to networking:
+Directly accessing an Azure file share often requires additional thought with respect to networking:
- SMB file shares communicate over port 445, which many organizations and internet service providers (ISPs) block for outbound (internet) traffic. This practice originates from legacy security guidance about deprecated and non-internet safe versions of the SMB protocol. Although SMB 3.x is an internet-safe protocol, organizational or ISP policies may not be possible to change. Therefore, mounting an SMB file share often requires additional networking configuration to use outside of Azure. - NFS file shares rely on network-level authentication and are therefore only accessible via restricted networks. Using an NFS file share always requires some level of networking configuration.
-Configuring public and private endpoints for Azure Files is done on the top-level management object for Azure Files, the Azure storage account. A storage account is a management construct that represents a shared pool of storage in which you can deploy multiple Azure file shares, as well as the storage resources for other Azure storage services, such as blob containers or queues.
+Configuring public and private endpoints for Azure Files is done on the top-level management object for Azure Files, the Azure storage account. A storage account is a management construct that represents a shared pool of storage in which you can deploy multiple Azure file shares, as well as the storage resources for other Azure storage services, such as blob containers or queues.
:::row::: :::column::: > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ] :::column-end::: :::column:::
- This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps. The sections below provide links and additional context to the documentation referenced in the video.
+ This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps. The sections below provide links and additional context to the documentation referenced in the video. Note that Azure Active Directory is now Microsoft Entra ID. For more information, see [New name for Azure AD](https://aka.ms/azureadnewname).
:::column-end::: :::row-end::: ## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
Configuring public and private endpoints for Azure Files is done on the top-leve
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ## Secure transfer+ By default, Azure storage accounts require secure transfer, regardless of whether data is accessed over the public or private endpoint. For Azure Files, the **require secure transfer** setting is enforced for all protocol access to the data stored on Azure file shares, including SMB, NFS, and FileREST. You can disable the **require secure transfer** setting to allow unencrypted traffic. In the Azure portal, you may also see this setting labeled as **require secure transfer for REST API operations**. The SMB, NFS, and FileREST protocols have slightly different behavior with respect to the **require secure transfer** setting:
The SMB, NFS, and FileREST protocols have slightly different behavior with respe
- When secure transfer is required, the FileREST protocol may only be used with HTTPS. FileREST is only supported on SMB file shares today. ## Public endpoint+ The public endpoint for the Azure file shares within a storage account is an internet exposed endpoint. The public endpoint is the default endpoint for a storage account, however, it can be disabled if desired. The SMB, NFS, and FileREST protocols can all use the public endpoint. However, each has slightly different rules for access: -- SMB file shares are accessible from anywhere in the world via the storage account's public endpoint with SMB 3.x with encryption. This means that authenticated requests, such as requests authorized by a user's logon identity, can originate securely from inside or outside of the Azure region. If SMB 2.1 or SMB 3.x without encryption is desired, two conditions must be met:
+- SMB file shares are accessible from anywhere in the world via the storage account's public endpoint with SMB 3.x with encryption. This means that authenticated requests, such as requests authorized by a user's logon identity, can originate securely from inside or outside of the Azure region. If SMB 2.1 or SMB 3.x without encryption is desired, two conditions must be met:
1. The storage account's **require secure transfer** setting must be disabled. 2. The request must originate from inside of the Azure region. As previously mentioned, encrypted SMB requests are allowed from anywhere, inside or outside of the Azure region.
The SMB, NFS, and FileREST protocols can all use the public endpoint. However, e
- FileREST is accessible via the public endpoint. If secure transfer is required, only HTTPS requests are accepted. If secure transfer is disabled, HTTP requests are accepted by the public endpoint regardless of origin. ### Public endpoint firewall settings+ The storage account firewall restricts access to the public endpoint for a storage account. Using the storage account firewall, you can restrict access to certain IP addresses/IP address ranges, to specific virtual networks, or disable the public endpoint entirely. When you restrict the traffic of the public endpoint to one or more virtual networks, you are using a capability of the virtual network called *service endpoints*. Requests directed to the service endpoint of Azure Files are still going to the storage account public IP address; however, the networking layer is doing additional verification of the request to validate that it is coming from an authorized virtual network. The SMB, NFS, and FileREST protocols all support service endpoints. Unlike SMB and FileREST, however, NFS file shares can only be accessed with the public endpoint through use of a *service endpoint*.
When you restrict the traffic of the public endpoint to one or more virtual netw
To learn more about how to configure the storage account firewall, see [configure Azure storage firewalls and virtual networks](storage-files-networking-endpoints.md#restrict-access-to-the-public-endpoint-to-specific-virtual-networks). ### Public endpoint network routing+ Azure Files supports multiple network routing options. The default option, Microsoft routing, works with all Azure Files configurations. The internet routing option does not support AD domain join scenarios or Azure File Sync. ## Private endpoints
-In addition to the default public endpoint for a storage account, Azure Files provides the option to have one or more private endpoints. A private endpoint is an endpoint that is only accessible within an Azure virtual network. When you create a private endpoint for your storage account, your storage account gets a private IP address from within the address space of your virtual network, much like how an on-premises file server or NAS device receives an IP address within the dedicated address space of your on-premises network.
+
+In addition to the default public endpoint for a storage account, Azure Files provides the option to have one or more private endpoints. A private endpoint is an endpoint that is only accessible within an Azure virtual network. When you create a private endpoint for your storage account, your storage account gets a private IP address from within the address space of your virtual network, much like how an on-premises file server or NAS device receives an IP address within the dedicated address space of your on-premises network.
An individual private endpoint is associated with a specific Azure virtual network subnet. A storage account may have private endpoints in more than one virtual network. Using private endpoints with Azure Files enables you to:+ - Securely connect to your Azure file shares from on-premises networks using a VPN or ExpressRoute connection with private-peering. - Secure your Azure file shares by configuring the storage account firewall to block all connections on the public endpoint. By default, creating a private endpoint does not block connections to the public endpoint. - Increase security for the virtual network by enabling you to block exfiltration of data from the virtual network (and peering boundaries).
Using private endpoints with Azure Files enables you to:
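As a rough sketch of what that configuration looks like in Azure PowerShell (placeholder names; an existing storage account, virtual network, and subnet are assumed), a private endpoint targets the storage account's `file` sub-resource:

```azurepowershell-interactive
# Sketch: create a private endpoint for the Azure Files ("file") sub-resource
# of an existing storage account. Placeholder names throughout.
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "privateEndpointSubnet"

$connection = New-AzPrivateLinkServiceConnection `
    -Name "myFilesConnection" `
    -PrivateLinkServiceId $storageAccount.Id `
    -GroupId "file"

New-AzPrivateEndpoint `
    -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount-pe" `
    -Location $vnet.Location `
    -Subnet $subnet `
    -PrivateLinkServiceConnection $connection
```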
To create a private endpoint, see [Configuring private endpoints for Azure Files](storage-files-networking-endpoints.md#create-a-private-endpoint). ### Tunneling traffic over a virtual private network or ExpressRoute+ To use private endpoints to access SMB or NFS file shares from on-premises, you must establish a network tunnel between your on-premises network and Azure. A [virtual network](../../virtual-network/virtual-networks-overview.md), or VNet, is similar to a traditional on-premises network. Like an Azure storage account or an Azure VM, a VNet is an Azure resource that is deployed in a resource group. Azure Files supports the following mechanisms to tunnel traffic between your on-premises workstations and servers and Azure SMB/NFS file shares:
Azure Files supports the following mechanisms to tunnel traffic between your on-
- [Site-to-Site (S2S) VPN](../../vpn-gateway/design.md#s2smulti), which is a VPN connection between Azure and your organization's network. An S2S VPN connection enables you to configure a VPN connection once for a VPN server or device hosted on your organization's network, rather than configuring a connection for every client device that needs to access your Azure file share. To simplify the deployment of an S2S VPN connection, see [Configure a Site-to-Site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md). - [ExpressRoute](../../expressroute/expressroute-introduction.md), which enables you to create a defined route between Azure and your on-premises network that doesn't traverse the internet. Because ExpressRoute provides a dedicated path between your on-premises datacenter and Azure, ExpressRoute may be useful when network performance is a consideration. ExpressRoute is also a good option when your organization's policy or regulatory requirements require a deterministic path to your resources in the cloud.
-> [!Note]
+> [!NOTE]
> Although we recommend using private endpoints to assist in extending your on-premises network into Azure, it is technically possible to route to the public endpoint over the VPN connection. However, this requires hard-coding the IP address for the public endpoint for the Azure storage cluster that serves your storage account. Because storage accounts may be moved between storage clusters at any time and new clusters are frequently added and removed, this requires regularly hard-coding all the possible Azure storage IP addresses into your routing rules. ### DNS configuration+ When you create a private endpoint, by default we also create a private DNS zone corresponding to the `privatelink` subdomain (or update an existing one). Strictly speaking, creating a private DNS zone is not required to use a private endpoint for your storage account. However, it is highly recommended in general and explicitly required when mounting your Azure file share with an Active Directory user principal or accessing it from the FileREST API.
-> [!Note]
-> This article uses the storage account DNS suffix for the Azure Public regions, `core.windows.net`. This commentary also applies to Azure Sovereign clouds such as the Azure US Government cloud and the Microsoft Azure operated by 21Vianet cloud - just substitute the appropriate suffixes for your environment.
+> [!NOTE]
+> This article uses the storage account DNS suffix for the Azure Public regions, `core.windows.net`. This commentary also applies to Azure Sovereign clouds such as the Azure US Government cloud and the Microsoft Azure operated by 21Vianet cloud - just substitute the appropriate suffixes for your environment.
In your private DNS zone, we create an A record for `storageaccount.privatelink.file.core.windows.net` and a CNAME record for the regular name of the storage account, which follows the pattern `storageaccount.file.core.windows.net`. Because your Azure private DNS zone is connected to the virtual network containing the private endpoint, you can observe the DNS configuration by calling the `Resolve-DnsName` cmdlet from PowerShell in an Azure VM (alternately `nslookup` in Windows and Linux):
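A minimal example of such a query, using `storageaccount` as a placeholder name; from a VM inside a linked virtual network, the name should resolve through the `privatelink` CNAME to the private endpoint's private IP address:

```azurepowershell-interactive
# From an Azure VM in a virtual network linked to the private DNS zone:
Resolve-DnsName -Name "storageaccount.file.core.windows.net"

# Or, on Windows or Linux:
# nslookup storageaccount.file.core.windows.net
```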
This reflects the fact that the storage account can expose both the public endpo
- Forward the `core.windows.net` zone from your on-premises DNS servers to your Azure private DNS zone. The Azure private DNS host can be reached through a special IP address (`168.63.129.16`) that is only accessible inside virtual networks that are linked to the Azure private DNS zone. To work around this limitation, you can run additional DNS servers within your virtual network that will forward `core.windows.net` on to the Azure private DNS zone. To simplify this setup, we have provided PowerShell cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To learn how to set up DNS forwarding, see [Configuring DNS with Azure Files](storage-files-networking-dns.md). ## SMB over QUIC+ Windows Server 2022 Azure Edition supports a new transport protocol called QUIC for the SMB server provided by the File Server role. QUIC is a replacement for TCP that is built on top of UDP, providing numerous advantages over TCP while still providing a reliable transport mechanism. One key advantage for the SMB protocol is that instead of using port 445, all transport is done over port 443, which is widely open outbound to support HTTPS. This effectively means that SMB over QUIC offers an "SMB VPN" for file sharing over the public internet. Windows 11 ships with an SMB over QUIC capable client. At this time, Azure Files doesn't directly support SMB over QUIC. However, you can get access to Azure file shares via Azure File Sync running on Windows Server as in the diagram below. This also gives you the option to have Azure File Sync caches either on-premises or in different Azure datacenters to provide local caches for a distributed workforce. To learn more about this option, see [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md) and [SMB over QUIC](/windows-server/storage/file-server/smb-over-quic).
At this time, Azure Files doesn't directly support SMB over QUIC. However, you c
:::image type="content" source="media/storage-files-networking-overview/smb-over-quic.png" alt-text="Diagram for creating a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition V M using Azure File Sync." border="false"::: ## See also+ - [Azure Files overview](storage-files-introduction.md) - [Planning for an Azure Files deployment](storage-files-planning.md)
trusted-signing Tutorial Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/tutorial-assign-roles.md
Last updated 03/21/2023 #Required; mm/dd/yyyy format.
# Assigning roles in Trusted Signing
-The Trusting Signing service has a few Trusted Signing specific roles (in addition to the standard Azure roles). Use [Azure role-based access control (RBAC)](https://docs.microsoft.com/azure/role-based-access-control/overview) to assign user and group roles for the Trusted Signing specific roles. In this tutorial, you review the different Trusted Signing supported roles and assign roles to your Trusted Signing account on the Azure portal.
+The Trusted Signing service has a few Trusted Signing-specific roles (in addition to the standard Azure roles). Use [Azure role-based access control (RBAC)](../role-based-access-control/overview.md) to assign user and group roles for the Trusted Signing-specific roles. In this tutorial, you review the roles that Trusted Signing supports and assign roles to your Trusted Signing account in the Azure portal.
## Supported roles with Trusted Signing The following table lists the roles that Trusted Signing supports, including what each role can access within the service's resources.
Complete the following steps to assign roles in Trusted Signing.
2. Select the **Roles** tab and search for "Trusted Signing". The following screenshot shows the two custom roles. ![Screenshot of Azure portal UI with the Trusted Signing custom RBAC roles.](./media/trusted-signing-rbac-roles.png)
-3. To assign these roles, select on the **Add** drop down and select **Add role assignment**. Follow the [Assign roles in Azure](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current) guide to assign the relevant roles to your identities.
+3. To assign these roles, select the **Add** dropdown and then select **Add role assignment**. Follow the [Assign roles in Azure](../role-based-access-control/role-assignments-portal.md) guide to assign the relevant roles to your identities.
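If you prefer to script the assignment, the portal steps above correspond roughly to a single `New-AzRoleAssignment` call. In this sketch, the sign-in name, role name, and scope are placeholders; substitute one of the Trusted Signing roles from the table above and the resource ID of your Trusted Signing account.

```azurepowershell-interactive
# Sketch: assign a Trusted Signing role at the scope of a Trusted Signing account.
# The sign-in name, role name, and scope below are placeholders.
New-AzRoleAssignment `
    -SignInName "user@contoso.com" `
    -RoleDefinitionName "Trusted Signing Certificate Profile Signer" `
    -Scope "<resource ID of your Trusted Signing account>"
```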
## Related content
-* [What is Azure role-based access control (RBAC)?](https://docs.microsoft.com/azure/role-based-access-control/overview)
+* [What is Azure role-based access control (RBAC)?](../role-based-access-control/overview.md)
* [Trusted Signing Quickstart](quickstart.md)
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
zone_pivot_groups: azure-virtual-desktop-windows-clients Previously updated : 03/19/2024 Last updated : 04/02/2024 # What's new in the Remote Desktop client for Windows
virtual-machines Disks Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-whats-new.md
Azure Disk Storage regularly receives updates for new features and enhancements.
#### Generally available: Azure VM Backup Support for Ultra Disks and Premium SSD v2
-Azure Backup enabled support on Azure VMs using Ultra Disks and Premium SSD v2 that offers high throughput, high IOPS, and low latency. Azure VM Backup support allows you to ensure business continuity for your virtual machines and to recover from any disasters or ransomware attacks. Enabling backup on VMs using Ultra Disks and Premium SSD v2 is available in all regions where creation of Ultra disks and Premium SSD v2 are supported. To learn more, refer to the [documentation](https://learn.microsoft.com/azure/backup/backup-support-matrix-iaas#vm-storage-support) and enable backup on your Azure VMs.
+Azure Backup now supports Azure VMs that use Ultra Disks and Premium SSD v2, which offer high throughput, high IOPS, and low latency. Azure VM Backup support allows you to ensure business continuity for your virtual machines and to recover from disasters or ransomware attacks. Enabling backup on VMs using Ultra Disks and Premium SSD v2 is available in all regions where creation of Ultra Disks and Premium SSD v2 is supported. To learn more, refer to the [documentation](../backup/backup-support-matrix-iaas.md#vm-storage-support) and enable backup on your Azure VMs.
#### Generally available: Trusted launch support for Ultra Disks and Premium SSD v2
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-hpc.md
> [!TIP] > Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-[HBv4-series](hbv4-series.md) VMs are optimized for various HPC workloads such as computational fluid dynamics, finite element analysis, frontend and backend EDA, rendering, molecular dynamics, computational geoscience, weather simulation, and financial risk analysis. HBv4 VMs will feature up to 176 AMD EPYC™ 9004-series (Genoa) CPU cores, 688 GB of RAM, and no simultaneous multithreading. HBv4-series VMs also provide 800 GB/s of DDR5 memory bandwidth and 768MB L3 cache per VM, up to 12 GB/s (reads) and 7 GB/s (writes) of block device SSD performance, and clock frequencies up to 3.7 GHz.
+[HBv4-series](hbv4-series.md) VMs are optimized for various HPC workloads such as computational fluid dynamics, finite element analysis, frontend and backend EDA, rendering, molecular dynamics, computational geoscience, weather simulation, and financial risk analysis. HBv4 VMs feature up to 176 AMD EPYC™ 9V33X (Genoa-X) CPU cores with AMD's 3D V-Cache, 768 GB of RAM, and no simultaneous multithreading. HBv4-series VMs also provide 780 GB/s of DDR5 memory bandwidth and 2304 MB L3 cache per VM, up to 12 GB/s (reads) and 7 GB/s (writes) of block device SSD performance, and clock frequencies up to 3.7 GHz.
All HBv4-series VMs feature 400 Gb/s NDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. NDR continues to support features like Adaptive Routing and the Dynamically Connected Transport (DCT). This newest generation of InfiniBand also brings greater support for offload of MPI collectives, optimized real-world latencies due to congestion control intelligence, and enhanced adaptive routing capabilities. These features enhance application performance, scalability, and consistency, and their usage is recommended.
-[HBv3-series](hbv3-series.md) VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7003-series (Milan) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth, up to 32 MB of L3 cache per core, up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.5 GHz.
+[HBv3-series](hbv3-series.md) VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7003-series (Milan-X) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth, up to 32 MB of L3 cache per core, up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.5 GHz.
All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. The HDR InfiniBand fabric also supports Adaptive Routing and the Dynamic Connected Transport (DCT, in addition to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage is strongly recommended.
HBv2 VMs feature 200 Gb/sec Mellanox HDR InfiniBand, while both HB and HC-series
[HC-series](hc-series.md) VMs are optimized for applications driven by dense computation, such as implicit finite element analysis, molecular dynamics, and computational chemistry. HC VMs feature 44 Intel Xeon Platinum 8168 processor cores, 8 GB of RAM per CPU core, and no hyperthreading. The Intel Xeon Platinum platform supports Intel's rich ecosystem of software tools such as the Intel Math Kernel Library.
-[HX-series](hx-series.md) VMs are optimized for workloads that require significant memory capacity with twice the memory capacity as HBv4. For example, workloads such as silicon design can use HX-series VMs to enable EDA customers targeting the most advanced manufacturing processes to run their most memory-intensive workloads. HX VMs feature up to 176 AMD EPYC 9004-series (Genoa) CPU cores, 1408 GB of RAM, and no simultaneous multithreading. HX-series VMs also provide 800 GB/s of DDR5 memory bandwidth and 768 MB L3 cache per VM, up to 12 GB/s (reads) and 7 GB/s (writes) of block device SSD performance, and clock frequencies up to 3.7 GHz.
+[HX-series](hx-series.md) VMs are optimized for workloads that require significant memory capacity, with twice the memory capacity of HBv4. For example, workloads such as silicon design can use HX-series VMs to enable EDA customers targeting the most advanced manufacturing processes to run their most memory-intensive workloads. HX VMs feature up to 176 AMD EPYC™ 9V33X (Genoa-X) CPU cores, 1408 GB of RAM, and no simultaneous multithreading. HX-series VMs also provide 780 GB/s of DDR5 memory bandwidth and 2304 MB L3 cache per VM, up to 12 GB/s (reads) and 7 GB/s (writes) of block device SSD performance, and clock frequencies up to 3.7 GHz.
> [!NOTE] > All HBv4, HBv3, HBv2, HB, HC and HX-series VMs have exclusive access to the physical servers. There is only 1 VM per physical server and there is no shared multi-tenancy with any other VMs for these VM sizes.
virtual-machines Oracle Database Backup Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-strategies.md
The ability to access backup storage across regions is an important aspect of bu
When you're using Azure Files with either the Server Message Block (SMB) protocol or the Network File System (NFS) 4.1 protocol to mount as backup storage, Azure Files doesn't support RA-GRS or RA-GZRS.
-If the backup storage requirement is greater than 5 tebibytes (TiB), Azure Files requires you to enable the [large file shares](../../../storage/files/storage-files-planning.md) feature. This feature doesn't support GRS or GZRS redundancy. It supports only LRS.
+Azure Files backup storage can scale up to 100 tebibytes (TiB), with support for LRS, GRS, and GZRS redundancy options.
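As an illustrative sketch (placeholder names), a geo-redundant standard storage account with a 100 TiB file share for Oracle backups might be provisioned like this:

```azurepowershell-interactive
# Sketch: a standard GRS storage account with a large file share for backups.
# 102400 GiB = 100 TiB, the current maximum share size. Placeholder names.
New-AzStorageAccount `
    -ResourceGroupName "oracle-backup-rg" `
    -Name "orabackupstorage" `
    -Location "eastus" `
    -SkuName Standard_GRS `
    -Kind StorageV2 `
    -EnableLargeFileShare

New-AzRmStorageShare `
    -ResourceGroupName "oracle-backup-rg" `
    -StorageAccountName "orabackupstorage" `
    -Name "orabackup" `
    -QuotaGiB 102400
```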
Azure Blob Storage mounted via the NFS 3.0 protocol currently supports only LRS and ZRS redundancy. Azure Blob Storage configured with any redundancy option can be mounted via Blobfuse.
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
In Azure, virtual machines created in a virtual network without explicit outbound connectivity defined are assigned a default outbound public IP address. This IP address enables outbound connectivity from the resources to the Internet. This access is referred to as default outbound access.
-Examples of explicit outbound connectivity are virtual machines:
+Examples of explicit outbound connectivity for virtual machines are:
* Created within a subnet associated with a NAT gateway (see the sketch below).
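The following Azure PowerShell sketch illustrates the NAT gateway case. All names and the address prefix are placeholders, and the exact parameter names can vary slightly between Az.Network versions.

```azurepowershell-interactive
# Sketch: give a subnet explicit outbound connectivity by attaching a NAT gateway.
# Placeholder names; the address prefix must match the subnet's existing prefix.
$pip = New-AzPublicIpAddress -ResourceGroupName "myResourceGroup" -Name "natgw-pip" `
    -Location "eastus" -Sku Standard -AllocationMethod Static

$natGateway = New-AzNatGateway -ResourceGroupName "myResourceGroup" -Name "myNatGateway" `
    -Location "eastus" -Sku Standard -PublicIpAddress $pip

$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVnet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
    -AddressPrefix "10.0.0.0/24" -NatGateway $natGateway
$vnet | Set-AzVirtualNetwork
```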
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **MicrosoftPurviewPolicyDistribution** | This tag should be used within the outbound security rules for a data source (e.g. Azure SQL MI) configured with private endpoint to retrieve policies from Microsoft Purview | Outbound| No | No | | **PowerBI** | Power BI platform backend services and API endpoints.<br/><br/>**Note:** does not include frontend endpoints at the moment (e.g., app.powerbi.com).<br/><br/>Access to frontend endpoints should be provided through AzureCloud tag (Outbound, HTTPS, can be regional). | Both | No | Yes | | **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Both | Yes | Yes |
-| **PowerPlatformPlex** | This tag represents the IP addresses used by the infrastructure to host Power Platform extension execution on behalf of the customer. | Inbound | Yes | Yes |
+| **PowerPlatformPlex** | This tag represents the IP addresses used by the infrastructure to host Power Platform extension execution on behalf of the customer. | Both | Yes | Yes |
| **PowerQueryOnline** | Power Query Online. | Both | No | Yes | | **Scuba** | Data connectors for Microsoft security products (Sentinel, Defender, etc). | Inbound | No | No| | **SerialConsole** | Limit access to boot diagnostics storage accounts from only Serial Console service tag | Inbound | No | Yes |
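Service tags such as the ones above are typically consumed in network security group rules instead of hand-maintained IP ranges. A minimal sketch (placeholder names) that allows outbound HTTPS to the `PowerBI` tag:

```azurepowershell-interactive
# Sketch: an outbound NSG rule that uses a service tag as its destination.
$rule = New-AzNetworkSecurityRuleConfig `
    -Name "Allow-PowerBI-Outbound" `
    -Description "Allow outbound HTTPS to Power BI backend services" `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 200 `
    -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
    -DestinationAddressPrefix "PowerBI" -DestinationPortRange "443"

New-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -Name "myNsg" -SecurityRules $rule
```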
virtual-wan Cross Tenant Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/cross-tenant-vnet.md
In the following steps, you'll use commands to add a static route to the virtual
```azurepowershell-interactive $hubVnetConnection = Get-AzVirtualHubVnetConnection -Name "[HubconnectionName]" -ParentResourceName "[Hub Name]" -ResourceGroupName "[resource group name]" ```
- 1. Add a static route to the virtual hub's route table. (The next hop is a virtual network connection.)
+ 1. Add a static route to the virtual hub's route table, with the next hop set to the virtual network connection. The following sample script preserves the existing routes in the route table.
```azurepowershell-interactive
+ $routeTable = Get-AzVHubRouteTable -ResourceGroupName "[Resource group name]" -VirtualHubName "[Virtual hub name]" -Name "defaultRouteTable"
$Route2 = New-AzVHubRoute -Name "[Route Name]" -Destination "[@("Destination prefix")]" -DestinationType "CIDR" -NextHop $hubVnetConnection.Id -NextHopType "ResourceId"
+ $routeTable.Routes.add($Route2)
```
+ 1. Verify the route table has the new routes:
+ ```azurepowershell-interactive
+ $routeTable.Routes
+ ```
+
1. Update the hub's current default route table:
- ```azurepowershell-interactive
- Update-AzVHubRouteTable -ResourceGroupName "[resource group name]"-VirtualHubName ["Hub Name"] -Name "defaultRouteTable" -Route @($Route2)
- ```
-
- 1. Update the route in the virtual network connection to specify the next hop as an IP address.
+ ```azurepowershell-interactive
+ Update-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName "[Hub Name]" -Name "defaultRouteTable" -Route @($routeTable.Routes)
+ ```
+
+ 1. Update the route in the virtual network connection to specify the next hop as an IP address. This sample script adds a new route to the VNET connection (preserving any existing routes).
> [!NOTE] > The route name should be the same as the one you used when you added a static route earlier. Otherwise, you'll create two routes in the routing table: one without an IP address and one with an IP address.
- ```azurepowershell-interactive
+ ```azurepowershell-interactive
$newroute = New-AzStaticRoute -Name "[Route Name]" -AddressPrefix "[@("Destination prefix")]" -NextHopIpAddress "[Destination NVA IP address]"
- $newroutingconfig = New-AzRoutingConfiguration -AssociatedRouteTable $hubVnetConnection.RoutingConfiguration.AssociatedRouteTable.id -Id $hubVnetConnection.RoutingConfiguration.PropagatedRouteTables.Ids[0].id -Label @("default") -StaticRoute @($newroute)
-
- Update-AzVirtualHubVnetConnection -ResourceGroupName $rgname -VirtualHubName "[Hub Name]" -Name "[Virtual hub connection name]" -RoutingConfiguration $newroutingconfig
+ $hubVNetConnection.RoutingConfiguration.VnetRoutes.StaticRoutes.add($newroute)
- ```
-
- This update command removes the previous manual configuration route in your routing table.
-
+ Update-AzVirtualHubVnetConnection -ResourceGroupName $rgname -VirtualHubName "[Hub Name]" -Name "[Virtual hub connection name]" -RoutingConfiguration $hubVNetConnection.RoutingConfiguration
+ ```
1. Verify that the static route is established to a next-hop IP address. ```azurepowershell-interactive
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
You can review per peer and instance metrics by selecting **Apply splitting** an
| **Tunnel Ingress Bytes** | Incoming bytes of a tunnel.| | **Tunnel Ingress Packet** | Incoming packet count of a tunnel.| | **Tunnel Peak PPS** | Number of packets per second per link connection in the last minute.|
-| **Tunnel Flow Count** | Number of distinct 3-tupe (protocol, local IP address, remote IP address) flows created per link connection.|
+| **Tunnel Flow Count** | Number of distinct 3-tuple (protocol, local IP address, remote IP address) flows created per link connection.|
### <a name="p2s-metrics"></a>Point-to-site VPN gateway metrics
vpn-gateway About Zone Redundant Vnet Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-zone-redundant-vnet-gateways.md
These SKUs are available in Azure regions that have Azure availability zones. Fo
### Can I change/migrate/upgrade my existing virtual network gateways to zone-redundant or zonal gateways?
-Migrating your existing virtual network gateways to zone-redundant or zonal gateways is currently not supported. You can, however, delete your existing gateway and re-create a zone-redundant or zonal gateway.
+* VPN gateway - migrating your existing virtual network gateways to zone-redundant or zonal gateways is currently not supported. You can, however, delete your existing gateway and re-create a zone-redundant or zonal gateway.
+* ExpressRoute gateway - migrating your existing ExpressRoute virtual network gateway to a zone-redundant or zonal gateway is currently in public preview. For more information, see [Migrate to an availability zone enabled ExpressRoute virtual network gateway](../expressroute/gateway-migration.md).
### Can I deploy both VPN and ExpressRoute gateways in same virtual network?