Updates from: 10/31/2024 02:06:04
Service Microsoft Docs article Related commit history on GitHub Change details
api-center Import Api Management Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/import-api-management-apis.md
This article shows two options for using the Azure CLI to add APIs to your API c
After importing API definitions or APIs from API Management, you can add metadata and documentation in your API center to help stakeholders discover, understand, and consume the API.
+> [!TIP]
+> You can also set up automatic synchronization of APIs from API Management to your API center. For more information, see [Link an API Management instance to synchronize APIs to your API center](synchronize-api-management-apis.md).
+ ## Prerequisites
+
+ * An API center in your Azure subscription. If you haven't created one, see [Quickstart: Create your API center](set-up-api-center.md).
az apim api export --api-id my-api --resource-group myResourceGroup \
```

```azurecli
-#! PowerShell syntax
+# Formatted for PowerShell
az apim api export --api-id my-api --resource-group myResourceGroup `
    --service-name myAPIManagement --export-format OpenApiJsonFile `
    --file-path '/path/to/folder'
link=$(az apim api export --api-id my-api --resource-group myResourceGroup \
```

```azurecli
-# PowerShell syntax
+# Formatted for PowerShell
$link=$(az apim api export --api-id my-api --resource-group myResourceGroup `
    --service-name myAPIManagement --export-format OpenApiJsonUrl --query properties.value.link `
    --output tsv)
az apic api definition import-specification \
```

```azurecli
-# PowerShell syntax
+# Formatted for PowerShell
az apic api definition import-specification `
    --resource-group myResourceGroup --service-name myAPICenter `
    --api-id my-api --version-id v1-0-0 `
When you add APIs from an API Management instance to your API center using `az a
### Add a managed identity in your API center
-For this scenario, your API center uses a [managed identity](/entra/identity/managed-identities-azure-resources/overview) to access APIs in your API Management instance. Depending on your needs, configure either a system-assigned or one or more user-assigned managed identities.
-
-The following examples show how to configure a system-assigned managed identity by using the Azure portal or the Azure CLI. At a high level, configuration steps are similar for a user-assigned managed identity.
-
-#### [Portal](#tab/portal)
-
-1. In the [portal](https://azure.microsoft.com), navigate to your API center.
-1. In the left menu, under **Security**, select **Managed identities**.
-1. Select **System assigned**, and set the status to **On**.
-1. Select **Save**.
-
-#### [Azure CLI](#tab/cli)
-
-Set the system-assigned identity in your API center using the following [az apic update](/cli/azure/apic#az-apic-update) command. Substitute the names of your API center and resource group:
-
-```azurecli
-az apic update --name <api-center-name> --resource-group <resource-group-name> --identity '{"type": "SystemAssigned"}'
-```
- ### Assign the managed identity the API Management Service Reader role
-To allow import of APIs, assign your API center's managed identity the **API Management Service Reader** role in your API Management instance. You can use the [portal](../role-based-access-control/role-assignments-portal-managed-identity.yml) or the Azure CLI.
-
-#### [Portal](#tab/portal)
-
-1. In the [portal](https://azure.microsoft.com), navigate to your API Management instance.
-1. In the left menu, select **Access control (IAM)**.
-1. Select **+ Add role assignment**.
-1. On the **Add role assignment** page, set the values as follows:
- 1. On the **Role** tab - Select **API Management Service Reader**.
- 1. On the **Members** tab, in **Assign access to** - Select **Managed identity** > **+ Select members**.
- 1. On the **Select managed identities** page - Select the system-assigned managed identity of your API center that you added in the previous section. Click **Select**.
- 1. Select **Review + assign**.
-
-#### [Azure CLI](#tab/cli)
-
-1. Get the principal ID of the identity. For a system-assigned identity, use the [az apic show](/cli/azure/apic#az-apic-show) command.
-
- ```azurecli
- #! /bin/bash
- apicObjID=$(az apic show --name <api-center-name> \
- --resource-group <resource-group-name> \
- --query "identity.principalId" --output tsv)
- ```
-
- ```azurecli
- # PowerShell syntax
- $apicObjID=$(az apic show --name <api-center-name> `
- --resource-group <resource-group-name> `
- --query "identity.principalId" --output tsv)
- ```
-
-1. Get the resource ID of your API Management instance using the [az apim show](/cli/azure/apim#az-apim-show) command.
-
- ```azurecli
- #! /bin/bash
- apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query "id" --output tsv)
- ```
-
- ```azurecli
- # PowerShell syntax
- $apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query "id" --output tsv)
- ```
-
-1. Assign the managed identity the **API Management Service Reader** role in your API Management instance using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command.
-
- ```azurecli
- #! /bin/bash
- scope="${apimID:1}"
-
- az role assignment create \
- --role "API Management Service Reader Role" \
- --assignee-object-id $apicObjID \
- --assignee-principal-type ServicePrincipal \
- --scope $scope
- ```
-
- ```azurecli
- #! PowerShell syntax
- $scope=$apimID.substring(1)
-
- az role assignment create `
- --role "API Management Service Reader Role" `
- --assignee-object-id $apicObjID `
- --assignee-principal-type ServicePrincipal `
- --scope $scope
- ### Import APIs from API Management
az apic import-from-apim --service-name <api-center-name> --resource-group <reso
```

```azurecli
-# PowerShell syntax
+# Formatted for PowerShell
az apic import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> `
    --apim-name <api-management-name> --apim-resource-group <api-management-resource-group-name> `
    --apim-apis '*'
az apic import-from-apim --service-name <api-center-name> --resource-group <reso
```azurecli
-# PowerShell syntax
+# Formatted for PowerShell
az apic import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> `
    --apim-name <api-management-name> --apim-resource-group <api-management-resource-group-name> `
    --apim-apis 'petstore-api'
api-center Synchronize Api Management Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/synchronize-api-management-apis.md
+
+ Title: Synchronize APIs from Azure API Management instance
+description: Link an API Management instance to Azure API Center for automatic synchronization of APIs to the inventory.
+++ Last updated : 10/30/2024++
+# Customer intent: As an API program manager, I want to integrate my Azure API Management instance with my API center and synchronize API Management APIs to my inventory.
++
+# Synchronize APIs from an API Management instance
+
+This article shows how to create a link (preview) to an API Management instance so that the instance's APIs are continuously kept up to date in your [API center](overview.md) inventory.
+
+## About linking an API Management instance
+
+Although you can use the Azure CLI to [import](import-api-management-apis.md) APIs on demand from Azure API Management to Azure API Center, linking an API Management instance enables continuous synchronization so that the API inventory stays up to date.
+
+When you link an API Management instance as an API source, the following happens:
+
+1. All APIs, and optionally API definitions (specs), from the API Management instance are added to the API center inventory.
+1. You configure an [environment](key-concepts.md#environment) of type *Azure API Management* in the API center.
+1. An associated [deployment](key-concepts.md#deployment) is created for each synchronized API definition from API Management.
+
+API Management APIs automatically synchronize to the API center whenever existing APIs' settings change (for example, new versions are added), new APIs are created, or APIs are deleted. This synchronization is one-way from API Management to your Azure API center, meaning API updates in the API center aren't synchronized back to the API Management instance.
+
+> [!NOTE]
+> * API updates in API Management can take a few minutes to synchronize to your API center.
+> * There are [limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/api-center/toc.json&bc=/azure/api-center/breadcrumb/toc.json#api-center-limits) for the number of linked API Management instances (API sources).
+
+### Entities synchronized from API Management
+
+You can add or update metadata properties and documentation in your API center to help stakeholders discover, understand, and consume the synchronized APIs. Learn more about Azure API Center's [built-in and custom metadata properties](add-metadata-properties.md).
+
+The following table shows entity properties that can be modified in Azure API Center and properties that are determined based on their values in a linked Azure API Management instance. Also, entities' resource or system identifiers in Azure API Center are generated automatically and can't be modified.
+
+| Entity | Properties configurable in API Center | Properties determined in API Management |
+|--|--|--|
+| API | summary<br/>lifecycleStage<br/>termsOfService<br/>license<br/>externalDocumentation<br/>customProperties | title<br/>description<br/>kind |
+| API version | lifecycleStage | title |
+| Environment | title<br/>description<br/>kind<br/>server.managementPortalUri<br/>onboarding<br/>customProperties | server.type |
+| Deployment | title<br/>description<br/>server<br/>state<br/>customProperties | server.runtimeUri |
+
+For property details, see the [Azure API Center REST API reference](/rest/api/apicenter).
++
+## Prerequisites
+
+* An API center in your Azure subscription. If you haven't created one, see [Quickstart: Create your API center](set-up-api-center.md).
+
+* An Azure API Management instance, in the same or a different subscription. The instance must be in the same directory (Microsoft Entra tenant) as your API center.
+
+* For Azure CLI:
+ [!INCLUDE [include](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
+
+ [!INCLUDE [install-apic-extension](includes/install-apic-extension.md)]
+
+ > [!NOTE]
+ > Azure CLI command examples in this article can run in PowerShell or a bash shell. Where needed because of different variable syntax, separate command examples are provided for the two shells.
++
+## Add a managed identity in your API center
++
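For quick reference, a minimal sketch that mirrors the `az apic update` command shown earlier in this digest (from the import article); the API center and resource group names are placeholders:

```azurecli
# Minimal sketch: enable a system-assigned managed identity on the API center
az apic update --name <api-center-name> --resource-group <resource-group-name> \
    --identity '{"type": "SystemAssigned"}'
```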
+## Assign the managed identity the API Management Service Reader role
++
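Similarly, a hedged sketch of the role assignment, reusing the `az apic show`, `az apim show`, and `az role assignment create` commands that appear in the import article above (bash syntax; names are placeholders):

```azurecli
#! /bin/bash
# Get the principal ID of the API center's system-assigned managed identity
apicObjID=$(az apic show --name <api-center-name> --resource-group <resource-group-name> \
    --query "identity.principalId" --output tsv)

# Get the resource ID of the API Management instance
apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query "id" --output tsv)

# Strip the leading "/" from the resource ID, as the import article does, then assign the role
scope="${apimID:1}"
az role assignment create --role "API Management Service Reader Role" \
    --assignee-object-id $apicObjID --assignee-principal-type ServicePrincipal \
    --scope $scope
```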
+## Link an API Management instance
+
+You can link an API Management instance using the portal.
+
+1. In the [portal](https://portal.azure.com), navigate to your API center.
+1. Under **Assets**, select **Environments**.
+1. Select **Links (preview)** > **+ New link**.
+1. In the **Link your Azure API Management Service** page:
+ 1. Select the **Subscription**, **Resource group**, and **Azure API Management service** that you want to link.
+ 1. In **Link details**, enter an identifier.
+ 1. In **Environment details**, enter an **Environment title** (name), **Environment type**, and optional **Environment description**.
+ 1. In **API details**, select a **Lifecycle stage** for the synchronized APIs. (You can update this value for your APIs after they're added to your API center.) Also, select whether to synchronize API definitions.
+1. Select **Create**.
++
+The environment is added in your API center. The API Management APIs are imported to the API center inventory.
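To confirm what was imported, one option is to list the inventory with the same `apic` CLI extension used elsewhere in this digest; a hedged sketch, assuming `az apic api list` takes the same `--resource-group` and `--service-name` parameters as the other `az apic` commands shown above:

```azurecli
# List the APIs now present in the API center inventory (placeholder names)
az apic api list --resource-group <resource-group-name> --service-name <api-center-name> --output table
```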
+++
+## Delete a link
+
+While an API Management instance is linked, you can't delete synchronized APIs from your API center. If you need to, you can delete the link. When you delete a link:
+
+* The synchronized API Management APIs in your API center inventory are deleted
+* The environment and deployments associated with the API Management instance are deleted
+
+To delete an API Management link:
+
+1. In the [portal](https://portal.azure.com), navigate to your API center.
+1. Under **Assets**, select **Environments** > **Links (preview)**.
+1. Select the link, and then select **Delete** (trash can icon).
+
+## Related content
+
+* [Manage API inventory with Azure CLI commands](manage-apis-azure-cli.md)
+* [Import APIs from API Management to your Azure API center](import-api-management-apis.md)
+* [Azure API Management documentation](../api-management/index.yml)
api-management Import And Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-and-publish.md
Previously updated : 06/15/2023 Last updated : 10/29/2024
[!INCLUDE [api-management-availability-all-tiers](../../includes/api-management-availability-all-tiers.md)]
-This tutorial shows how to import an OpenAPI specification backend API in JSON format into Azure API Management. Microsoft provides the backend API used in this example, and hosts it on Azure at `https://conferenceapi.azurewebsites.net`.
+This tutorial shows how to import an OpenAPI specification backend API in JSON format into Azure API Management. For this example, you import the open source [Petstore API](https://petstore3.swagger.io/).
Once you import the backend API into API Management, your API Management API becomes a façade for the backend API. You can customize the façade to your needs in API Management without touching the backend API. For more information, see [Transform and protect your API](transform-api.md).
In this tutorial, you learn how to:
After import, you can manage the API in the Azure portal. ## Prerequisites
This section shows how to import and publish an OpenAPI specification backend AP
|Setting|Value|Description|
|-|--|--|
- |**OpenAPI specification**|*https:\//conferenceapi.azurewebsites.net?format=json*|Specifies the backend service implementing the API and the operations that the API supports. <br/><br/>The backend service URL appears later as the **Web service URL** on the API's **Settings** page.<br/><br/>After import, you can add, edit, rename, or delete operations in the specification. |
+ |**OpenAPI specification**|*https:\//petstore3.swagger.io/api/v3/openapi.json*|Specifies the backend service implementing the API and the operations that the API supports. <br/><br/>The backend service URL appears later as the **Web service URL** on the API's **Settings** page.<br/><br/>After import, you can add, edit, rename, or delete operations in the specification. |
| **Include query parameters in operation templates** | Selected (default) | Specifies whether to import required query parameters in the specification as template parameters in API Management. |
|**Display name**|After you enter the OpenAPI specification URL, API Management fills out this field based on the JSON.|The name displayed in the [developer portal](api-management-howto-developer-portal.md).|
|**Name**|After you enter the OpenAPI specification URL, API Management fills out this field based on the JSON.|A unique name for the API.|
|**Description**|After you enter the OpenAPI specification URL, API Management fills out this field based on the JSON.|An optional description of the API.|
|**URL scheme**|**HTTPS**|Which protocols can access the API.|
- |**API URL suffix**|*conference*|The suffix appended to the base URL for the API Management service. API Management distinguishes APIs by their suffix, so the suffix must be unique for every API for a given publisher.|
+ |**API URL suffix**|*petstore*|The suffix appended to the base URL for the API Management service. API Management distinguishes APIs by their suffix, so the suffix must be unique for every API for a given publisher.|
|**Tags**| |Tags for organizing APIs for searching, grouping, or filtering.|
- |**Products**|**Unlimited**|Association of one or more APIs. Each API Management instance comes with two sample products: **Starter** and **Unlimited**. You publish an API by associating the API with a product, **Unlimited** in this example.<br/><br/> You can include several APIs in a product and offer product [subscriptions](api-management-subscriptions.md) to developers through the developer portal. To add this API to another product, type or select the product name. Repeat this step to add the API to multiple products. You can also add APIs to products later from the **Settings** page.<br/><br/> For more information about products, see [Create and publish a product](api-management-howto-add-products.md).|
+ |**Products**|**Unlimited**|Association of one or more APIs. In certain tiers, an API Management instance comes with two sample products: **Starter** and **Unlimited**. You publish an API in the developer portal by associating the API with a product.<br/><br/> You can include several APIs in a product and offer product [subscriptions](api-management-subscriptions.md) to developers through the developer portal. To add this API to another product, type or select the product name. Repeat this step to add the API to multiple products. You can also add APIs to products later from the **Settings** page.<br/><br/> For more information about products, see [Create and publish a product](api-management-howto-add-products.md).|
|**Gateways**|**Managed**|API gateway(s) that expose the API. This field is available only in **Developer** and **Premium** tier services.<br/><br/>**Managed** indicates the gateway built into the API Management service and hosted by Microsoft in Azure. [Self-hosted gateways](self-hosted-gateway-overview.md) are available only in the Premium and Developer service tiers. You can deploy them on-premises or in other clouds.<br/><br/> If no gateways are selected, the API won't be available and your API requests won't succeed.|
|**Version this API?**|Select or deselect|For more information, see [Publish multiple versions of your API](api-management-get-started-publish-versions.md).|
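If you prefer scripting the import over the portal wizard described above, a hedged sketch using `az apim api import` with the same Petstore values from the table; the instance and resource group names, and the `swagger-petstore` API ID, are placeholders:

```azurecli
# Import the Petstore OpenAPI definition into an existing API Management instance
az apim api import --resource-group <resource-group-name> --service-name <apim-name> \
    --api-id swagger-petstore --path petstore \
    --specification-url https://petstore3.swagger.io/api/v3/openapi.json \
    --specification-format OpenApiJson
```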
- > [!NOTE]
- > To publish the API to API consumers, you must associate it with a product.
- 1. Select **Create** to create your API. If you have problems importing an API definition, see the [list of known issues and restrictions](api-management-api-import-restrictions.md).
If you have problems importing an API definition, see the [list of known issues
You can call API operations directly from the Azure portal, which provides a convenient way to view and test the operations. In the portal's test console, by default, APIs are called by using a key from the built-in all-access subscription. You can also test API calls by using a subscription key scoped to a product.
-1. In the left navigation of your API Management instance, select **APIs** > **Demo Conference API**.
-1. Select the **Test** tab, and then select **GetSpeakers**. The page shows **Query parameters** and **Headers**, if any.
+1. In the left navigation of your API Management instance, select **APIs** > **Swagger Petstore**.
+1. Select the **Test** tab, and then select **Finds Pets by status**. The page shows the *status* **Query parameter**. Select one of the available values, such as *pending*. You can also add query parameters and headers here.
In the **HTTP request** section, the **Ocp-Apim-Subscription-Key** header is filled in automatically for you, which you can see if you select the "eye" icon. 1. Select **Send**.
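To test the same operation outside the portal, a hedged sketch of the equivalent request; the gateway hostname assumes the default `*.azure-api.net` pattern, and the subscription key is a placeholder:

```bash
# Call the imported Petstore API through the API Management gateway
curl "https://<apim-name>.azure-api.net/petstore/pet/findByStatus?status=pending" \
    -H "Ocp-Apim-Subscription-Key: <subscription-key>"
```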
api-management V2 Service Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md
The v2 tiers are supported in API Management API version **2023-05-01-preview**
### Supported regions

The v2 tiers are available in the following regions:
+* East US
* East US 2
* South Central US
* North Central US
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Because subnet size can't be changed after assignment, use a subnet that's large
With multi plan subnet join (MPSJ), you can join multiple App Service plans into the same subnet. All App Service plans must be in the same subscription, but the virtual network/subnet can be in a different subscription. Each instance in each App Service plan requires an IP address from the subnet, and to use MPSJ a minimum subnet size of `/26` is required. If you plan to join many or large-scale plans, you should plan for larger subnet ranges.
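For reference, a hedged sketch of connecting an app (and therefore its App Service plan) to a subnet with `az webapp vnet-integration add`; all names are placeholders:

```azurecli
# Join an app to a subnet; with MPSJ, apps from other plans can join the same subnet
az webapp vnet-integration add --name <app-name> --resource-group <resource-group-name> \
    --vnet <vnet-name> --subnet <subnet-name>
```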
->[!NOTE]
-> Multi plan subnet join is currently in public preview. During preview the following known limitations should be observed:
->
-> * The minimum requirement for subnet size of `/26` is currently not enforced, but will be enforced at GA. If you have joined multiple plans to a smaller subnet during preview they will still work, but you cannot connect additional plans and if you disconnect you will not be able to connect again.
-> * There is currently no validation if the subnet has available IPs, so you might be able to join N+1 plan, but the instances will not get an IP. You can view available IPs in the Virtual network integration page in Azure portal in apps that are already connected to the subnet.
- ### Windows Containers specific limits Windows Containers uses an extra IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If you have, for example, 10 Windows Container App Service plan instances with four apps running, you need 50 IP addresses and extra addresses to support horizontal (in/out) scale.
automation Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/variables.md
from automationassets import AutomationAssetNotFound
# get a variable value = automationassets.get_automation_variable("test-variable")
-print value
+print(value)
# set a variable (value can be int/bool/string) automationassets.set_automation_variable("test-variable", True)
automationassets.set_automation_variable("test-variable", "test-string")
try: value = automationassets.get_automation_variable("nonexisting variable") except AutomationAssetNotFound:
- print ("variable not found")
+ print("variable not found")
```
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
New-AzRedisEnterpriseCache -Name "Cache2" -ResourceGroupName "myResourceGroup" -
As before, you need to list both _Cache1_ and _Cache2_ using the `-LinkedDatabase` parameter.
+## Scaling instances in a geo-replication group
+It's possible to scale instances that are configured to use active geo-replication. However, a geo-replication group with a mix of different cache sizes can introduce problems. To prevent these issues, all caches in a geo-replication group must be the same size and capacity.
+
+Because it's difficult to scale all instances in a geo-replication group simultaneously, Azure Cache for Redis has a locking mechanism. If you scale one instance in a geo-replication group, the underlying VM is scaled, but the available memory is capped at the original size until the other instances are scaled up as well. Any other scaling operations for the remaining instances are locked until they match the configuration of the first cache to be scaled.
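If you scale with the CLI rather than the portal, the operation might look like the following sketch. This assumes the `az redisenterprise update` command and its `--cluster-name` and `--sku` parameters; verify against the current CLI reference before relying on it:

```azurecli
# Scale one member of the geo-replication group; repeat for the others to release the scaling lock
az redisenterprise update --cluster-name Cache1 --resource-group myResourceGroup --sku Enterprise_E20
```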
+
+### Scaling example
+For example, you may have three instances in your geo-replication group, all Enterprise E10 instances:
+
+| Instance Name | Redis00 | Redis01 | Redis02 |
+|--|:--:|:--:|:--:|
+| Type | Enterprise E10 | Enterprise E10 | Enterprise E10 |
+
+Let's say you want to scale up each instance in this geo-replication group to an Enterprise E20 instance. You would first scale one of the caches up to an E20:
+
+| Instance Name | Redis00 | Redis01 | Redis02 |
+|--|:--:|:--:|:--:|
+| Type | Enterprise E20 | Enterprise E10 | Enterprise E10 |
+
+At this point, the `Redis01` and `Redis02` instances can only scale up to an Enterprise E20 instance. All other scaling operations are blocked.
+>[!NOTE]
+> The `Redis00` instance is not blocked from scaling further at this point. But it will be blocked once either `Redis01` or `Redis02` is scaled to be an Enterprise E20.
+>
+
+Once each instance has been scaled to the same tier and size, all scaling locks are removed:
+
+| Instance Name | Redis00 | Redis01 | Redis02 |
+|--|:--:|:--:|:--:|
+| Type | Enterprise E20 | Enterprise E20 | Enterprise E20 |
+ ## Flush operation Due to the potential for inadvertent data loss, you can't use the `FLUSHALL` and `FLUSHDB` Redis commands with any cache instance residing in a geo-replication group. Instead, use the **Flush Cache(s)** button located at the top of the **Active geo-replication** working pane.
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
There are fundamentally two ways to scale an Azure Cache for Redis Instance:
|Tier | Basic and Standard | Premium | Enterprise and Enterprise Flash |
|---|---|---|---|
-|Scale Up | Yes | Yes | Yes (preview) |
+|Scale Up | Yes | Yes | Yes |
|Scale Down | Yes | Yes | No |
-|Scale Out | No | Yes | Yes (preview) |
+|Scale Out | No | Yes | Yes |
|Scale In | No | Yes | No |

## When to scale
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/cache/v1_0/) |No|Yes|Yes|Yes|Yes|
| Data encryption in transit |Yes|Yes|Yes|Yes|Yes|
| [Network isolation](cache-private-link.md) |Yes|Yes|Yes|Yes|Yes|
-| [Scaling](cache-how-to-scale.md) |Yes|Yes|Yes|Preview|Preview|
+| [Scaling](cache-how-to-scale.md) |Yes|Yes|Yes|Yes|Yes|
| OSS clustering |No|No|Yes|Yes|Yes|
| [Data persistence](cache-how-to-premium-persistence.md) |No|No|Yes|Preview|Preview|
| [Zone redundancy](cache-how-to-zone-redundancy.md) |No|Preview|Preview|Available|Available|
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
> [!NOTE] > The Enterprise Flash tier currently supports only the RediSearch module (in preview) and the RedisJSON module.
+> [!NOTE]
+> The Enterprise and Enterprise Flash tiers currently support only scaling up and scaling out. Scaling down and scaling in aren't yet supported.
+ ### Choosing the right tier Consider the following options when choosing an Azure Cache for Redis tier:
azure-cache-for-redis Cache Remove Tls 10 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-remove-tls-10-11.md
ms.devlang: csharp
# Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
-To meet the industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later, Azure Cache for Redis is moving toward requiring the use of the TLS 1.2 in November 2024. TLS versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses.
+To meet the industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later, Azure Cache for Redis is moving toward requiring the use of TLS 1.2 in March 2025. TLS versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses.
TLS versions 1.0 and 1.1 also don't support the modern encryption methods and cipher suites recommended by Payment Card Industry (PCI) compliance standards. This [TLS security blog](https://www.acunetix.com/blog/articles/tls-vulnerabilities-attacks-final-part/) explains some of these vulnerabilities in more detail. > [!IMPORTANT]
-> Starting November 1, 2024, the TLS 1.2 requirement will be enforced.
+> Starting March 1, 2025, the TLS 1.2 requirement will be enforced.
> >
TLS versions 1.0 and 1.1 also don't support the modern encryption methods and ci
As a part of this effort, you can expect the following changes to Azure Cache for Redis: - _Phase 1_: Azure Cache for Redis stops offering TLS 1.0/1.1 as an option for _MinimumTLSVersion_ setting for new cache creates. Existing cache instances won't be updated at this point. You can't set the _MinimumTLSVersion_ to 1.0 or 1.1 for your existing cache.-- _Phase 2_: Azure Cache for Redis stops supporting TLS 1.1 and TLS 1.0 starting November 1, 2024. After this change, your application must use TLS 1.2 or later to communicate with your cache. The Azure Cache for Redis service remains available while we update the _MinimumTLSVerion_ for all caches to 1.2.
+- _Phase 2_: Azure Cache for Redis stops supporting TLS 1.1 and TLS 1.0 starting March 1, 2025. After this change, your application must use TLS 1.2 or later to communicate with your cache. The Azure Cache for Redis service remains available while we update the _MinimumTLSVersion_ for all caches to 1.2.
| Date | Description |
| - | - |
| September 2023 | TLS 1.0/1.1 retirement announcement |
| March 1, 2024 | Beginning March 1, 2024, you can't create new caches with the Minimum TLS version set to 1.0 or 1.1 and you can't set the _MinimumTLSVersion_ to 1.0 or 1.1 for your existing cache. The minimum TLS version won't be updated automatically for existing caches at this point. |
| October 31, 2024 | Ensure that all your applications are connecting to Azure Cache for Redis using TLS 1.2 and Minimum TLS version on your cache settings is set to 1.2. |
-| Starting November 1, 2024 | Minimum TLS version for all cache instances is updated to 1.2. This means Azure Cache for Redis instances reject connections using TLS 1.0 or 1.1 at this point. |
+| Starting March 1, 2025 | Minimum TLS version for all cache instances is updated to 1.2. This means Azure Cache for Redis instances reject connections using TLS 1.0 or 1.1 at this point. |
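To get ahead of the enforcement date, you can raise the minimum TLS version on a cache yourself; a hedged sketch using the generic `--set` argument of `az redis update`, with placeholder names:

```azurecli
# Set the cache's minimum TLS version to 1.2 ahead of the March 1, 2025 enforcement
az redis update --name <cache-name> --resource-group <resource-group-name> --set minimumTlsVersion="1.2"
```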
> [!IMPORTANT] > The content in this article does not apply to Azure Cache for Redis Enterprise/Enterprise Flash because the Enterprise tiers only support TLS 1.2.
azure-functions Functions Container Apps Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md
Keep in mind the following considerations when deploying your function app conta
## Next steps

+ [Hosting and scale](./functions-scale.md)
++ [Create your first containerized functions on Container Apps](./functions-deploy-container-apps.md)
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
Last updated 02/27/2024
# Azure Functions deployment slots
-Azure Functions deployment slots allow your function app to run different instances called _slots_. Slots are different environments exposed via a publicly available endpoint. One app instance is always mapped to the production slot, and you can swap instances assigned to a slot on demand. Function apps running in a [Consumption plan](./consumption-plan.md) have a single extra slot for staging. You can obtain more staging slots by running your app in a [Premium plan](./functions-premium-plan.md) or [Dedicated (App Service) plan](./dedicated-plan.md). For more information, see [Service limits](./functions-scale.md#service-limits).
+Azure Functions deployment slots allow your function app to run different instances called _slots_. Slots are different environments exposed via a publicly available endpoint. One app instance is always mapped to the production slot, and you can swap instances assigned to a slot on demand.
+
+The number of available slots depends on your specific hosting option:
+
+| Hosting option | Slots (including production) |
+| - | - |
+| [Consumption plan](consumption-plan.md) | 2 |
+| [Flex Consumption plan](flex-consumption-plan.md) | Not currently supported |
+| [Premium plan](functions-premium-plan.md) | 3 |
+| [Dedicated (App Service) plan](dedicated-plan.md) | [1-20](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) |
+| [Container Apps](functions-container-apps-hosting.md) | Uses [Revisions](../container-apps/revisions.md) |
+
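For reference, a hedged sketch of adding a staging slot on a plan that supports extra slots; the app, resource group, and slot names are placeholders:

```azurecli
# Create a staging deployment slot on a function app
az functionapp deployment slot create --name <app-name> --resource-group <resource-group-name> --slot staging
```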
The following reflect how functions are affected by swapping slots:
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md
Title: Deployment technologies in Azure Functions
description: Learn the different ways you can deploy code to Azure Functions. Previously updated : 03/29/2024 Last updated : 09/27/2024 # Deployment technologies in Azure Functions
Specific deployments should use the best technology based on the specific scenar
## Deployment technology availability The deployment method also depends on the hosting plan and operating system on which you run your function app.
-Currently, Functions offers three hosting plans:
+Currently, Functions offers five options for hosting your function apps:
+++ [Flex Consumption plan](flex-consumption-plan.md)
+ [Consumption](consumption-plan.md)
-+ [Premium](functions-premium-plan.md)
-+ [Dedicated (App Service)](dedicated-plan.md)
++ [Elastic Premium plan](functions-premium-plan.md)
++ [Dedicated (App Service) plan](dedicated-plan.md)
++ [Azure Container Apps](functions-container-apps-hosting.md)

Each plan has different behaviors. Not all deployment technologies are available for each hosting plan and operating system. This chart provides information on the supported deployment technologies:
-| Deployment technology | Windows Consumption | Windows Premium | Windows Dedicated | Linux Consumption | Linux Premium | Linux Dedicated |
-|--|:-:|:-:|:-:|:-:|:-:|:-:|
-| [External package URL](#external-package-url)<sup>1</sup> |✔|✔|✔|✔|✔|✔|
-| [Zip deploy](#zip-deploy) |✔|✔|✔|✔|✔|✔|
-| [Docker container](#docker-container) | | | | |✔|✔|
-| [Source control](#source-control) |✔|✔|✔| |✔|✔|
-| [Local Git](#local-git)<sup>1</sup> |✔|✔|✔| |✔|✔|
-| [FTPS](#ftps)<sup>1</sup> |✔|✔|✔| |✔|✔|
-| [In-portal editing](#portal-editing)<sup>2</sup> |✔|✔|✔|✔|✔|✔|
+| Deployment technology | Flex Consumption| Consumption | Elastic Premium | Dedicated | Container Apps |
+|--|:-:|:-:|:-:|:-:|:-:|
+| [OneDeploy](#one-deploy) |✔| | | | |
+| [Zip deploy](#zip-deploy) | |✔|✔|✔| |
+| [External package URL](#external-package-url)<sup>1</sup> | |✔|✔|✔| |
+| [Docker container](#docker-container) | | Linux-only | Linux-only | Linux-only |✔|
+| [Source control](#source-control) | | Windows-only |✔|✔| |
+| [Local Git](#local-git)<sup>1</sup> | |Windows-only |✔|✔| |
+| [FTPS](#ftps)<sup>1</sup> | |Windows-only |✔|✔| |
+| [In-portal editing](#portal-editing)<sup>2</sup> | |✔|✔|✔| |
<sup>1</sup> Deployment technologies that require you to [manually sync triggers](#trigger-syncing) aren't recommended. <sup>2</sup> In-portal editing is disabled when code is deployed to your function app from outside the portal. For more information, including language support details for in-portal editing, see [Language support details](supported-languages.md#language-support-details).
You must manually sync triggers when using these deployment options:
+ [Local Git](#local-git)
+ [FTPS](#ftps)
-You can sync triggers in one of three ways:
+You can sync triggers in one of these ways:
+ Restart your function app in the Azure portal.
-+ Send an HTTP POST request to `https://{functionappname}.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>` using the [master key](function-keys-how-to.md).
-+ Send an HTTP POST request to `https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01`. Replace the placeholders with your subscription ID, resource group name, and the name of your function app. This request requires an [access token](/rest/api/azure/#acquire-an-access-token) in the [`Authorization` request header](/rest/api/azure/#request-header).
++ Use the [`az rest`](/cli/azure/reference-index#az-rest) command to send an HTTP POST request that calls the `syncfunctiontriggers` API, as in this example:
+ ```azurecli
+ az rest --method post --url https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Web/sites/<APP_NAME>/syncfunctiontriggers?api-version=2016-08-01
+ ```
When you deploy an updated version of the deployment package and maintain the same external package URL, you need to manually restart your function app. This indicates to the host that it should synchronize and redeploy your updates from the same package URL. The Functions host also performs a background trigger sync after the application has started. However, for the Consumption and Elastic Premium hosting plans you should also [manually sync triggers](#trigger-syncing) in these scenarios:

+ Deployments using an external package URL with either ARM Templates or Terraform.
+ When updating the deployment package at the same external package URL.
+### Remote build
+You can request Azure Functions to perform a remote build of your code project during deployment. In these scenarios, you should request a remote build instead of building locally:
-### Remote build
++ You're deploying an app to a Linux-based function app that was developed on a Windows computer. This is commonly the case for Python app development. You can end up with incorrect libraries being used when building the deployment package locally on Windows.
++ Your project has dependencies on a [custom package index](functions-reference-python.md#remote-build-with-extra-index-url).
++ You want to reduce the size of your deployment package.
-Azure Functions can automatically perform builds on the code it receives after zip deployments. These builds differ depending on whether your app is running on Windows or Linux.
+How you request a remote build depends on whether your app runs in Azure on Windows or Linux.
#### [Windows](#tab/windows)
When an app is deployed to Windows, language-specific commands, like `dotnet res
#### [Linux](#tab/linux)
-To enable remote build on Linux, you must set these application settings:
+To enable remote build on Linux Consumption, Elastic Premium, and App Service plans, you must set these application settings:
+ [`ENABLE_ORYX_BUILD=true`](functions-app-settings.md#enable_oryx_build)
+ [`SCM_DO_BUILD_DURING_DEPLOYMENT=true`](functions-app-settings.md#scm_do_build_during_deployment)
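One way to apply these settings is with `az functionapp config appsettings set`; a minimal sketch with placeholder names:

```azurecli
# Turn on remote (Oryx) builds for a Linux function app
az functionapp config appsettings set --name <app-name> --resource-group <resource-group-name> \
    --settings ENABLE_ORYX_BUILD=true SCM_DO_BUILD_DURING_DEPLOYMENT=true
```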
By default, both [Azure Functions Core Tools](functions-run-local.md) and the [A
When apps are built remotely on Linux, they [run from the deployment package](run-functions-from-deployment-package.md).
+When deploying to the Flex Consumption plan, you don't need to set any application settings to request a remote build. You instead pass a remote build parameter when you start deployment. How you pass this parameter depends on the deployment tool you are using. For Core Tools and Visual Studio Code, a remote build is always requested when deploying a Python app.
+ The following considerations apply when using remote builds during deployment: + Remote builds are supported for function apps running on Linux in the Consumption plan. However, deployment options are limited for these apps because they don't have an `scm` (Kudu) site.
-+ Function apps running on Linux a [Premium plan](functions-premium-plan.md) or in a [Dedicated (App Service) plan](dedicated-plan.md) do have an `scm` (Kudu) site, but it's limited compared to Windows.
++ Function apps running on Linux in a [Premium plan](functions-premium-plan.md) or in a [Dedicated (App Service) plan](dedicated-plan.md) do have an `scm` (Kudu) site, but it's limited compared to Windows. + Remote builds aren't performed when an app is using [run-from-package](run-functions-from-deployment-package.md). To learn how to use remote build in these cases, see [Zip deploy](#zip-deploy). + You may have issues with remote build when your app was created before the feature was made available (August 1, 2019). For older apps, either create a new function app or run `az functionapp update --resource-group <RESOURCE_GROUP_NAME> --name <APP_NAME>` to update your function app. This command might take two tries to succeed. ### App content storage
-Several deployment methods store the deployed or built application payload on the storage account associated with the function app. Functions tries to use the Azure Files content share when configured, but some methods instead store the payload in the blob storage instance associated with the `AzureWebJobsStorage` connection. See the details in the _Where app content is stored_ paragraphs of each deployment technology covered in the next section.
+Package-based deployment methods store the package in the storage account associated with the function app, which is defined in the [AzureWebJobsStorage](functions-app-settings.md#azurewebjobsstorage) setting. When available, Consumption and Elastic Premium plan apps try to use the Azure Files content share from this account, but you can also maintain the package in another location. Flex Consumption plan apps instead use a storage container in the default storage account, unless you [configure a different storage account to use for deployment](flex-consumption-how-to.md#configure-deployment-settings). For more information, review the details in **Where app content is stored** in each deployment technology covered in the next section.
[!INCLUDE [functions-storage-access-note](../../includes/functions-storage-access-note.md)]
Several deployment methods store the deployed or built application payload on th
The following deployment methods are available in Azure Functions.
-### External package URL
-
-You can use an external package URL to reference a remote package (.zip) file that contains your function app. The file is downloaded from the provided URL, and the app runs in [Run From Package](run-functions-from-deployment-package.md) mode.
+### One deploy
+One deploy is the only deployment technology supported for apps on the Flex Consumption plan. The end result is a ready-to-run .zip package that your function app runs on.
->__How to use it:__ Add [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) to your application settings. The value of this setting should be a URL (the location of the specific package file you want to run). You can add settings either [in the portal](functions-how-to-use-azure-function-app-settings.md#settings) or [by using the Azure CLI](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set).
+>__How to use it:__ Deploy with the [Visual Studio Code](functions-develop-vs-code.md#publish-to-azure) publish feature, or from the command line using [Azure Functions Core Tools](functions-run-local.md#project-file-deployment) or the [Azure CLI](/cli/azure/functionapp/deployment/source#az-functionapp-deployment-source-config-zip). Our [Azure DevOps task](functions-how-to-azure-devops.md#deploy-your-app-1) and [GitHub Action](functions-how-to-github-actions.md) similarly use one deploy when they detect that the deployment target is a Flex Consumption app.
>
->If you use Azure Blob storage, use a private container with a [shared access signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) to give Functions access to the package. Any time the application restarts, it fetches a copy of the content. Your reference must be valid for the lifetime of the application.
+> When you create a Flex Consumption app, you need to specify a deployment storage (blob) container as well as an authentication method for it. By default, the same storage account as the `AzureWebJobsStorage` connection is used, with a connection string as the authentication method. Your [deployment settings](flex-consumption-how-to.md#configure-deployment-settings) are therefore configured when you create the app, without any need for application settings.
->__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan, if the user doesn't want a [remote build](#remote-build) to occur. Whenever you deploy the package file that a function app references, you must [manually sync triggers](#trigger-syncing), including the initial deployment. When you change the contents of the package file and not the URL itself, you must also restart your function app to sync triggers.
+>__When to use it:__ One deploy is the only deployment technology available for function apps running on the Flex Consumption plan.
->__Where app content is stored:__ App content is stored at the URL specified. This could be on Azure Blobs, possibly in the storage account specified by the `AzureWebJobsStorage` connection. Some client tools may default to deploying to a blob in this account. For example, for Linux Consumption apps, the Azure CLI will attempt to deploy through a package stored in a blob on the account specified by `AzureWebJobsStorage`.
+>__Where app content is stored:__ When you create a Flex Consumption function app, you specify a [deployment storage container](functions-infrastructure-as-code.md?pivots=flex-consumption-plan#deployment-sources). This is a blob container where the platform will upload the app content you deployed. To change the location, you can visit the Deployment Settings blade in the Azure portal or use the [Azure CLI](flex-consumption-how-to.md#configure-deployment-settings).
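As a concrete example of the command-line path mentioned above, a hedged sketch using Azure Functions Core Tools, which packages the project and performs the one-deploy upload; the app name is a placeholder:

```bash
# Build, package, and deploy the current project to a Flex Consumption function app
func azure functionapp publish <app-name>
```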
### Zip deploy
-Use zip deploy to push a .zip file that contains your function app to Azure. Optionally, you can set your app to start [running from package](run-functions-from-deployment-package.md), or specify that a [remote build](#remote-build) occurs.
+Zip deploy is the default and recommended deployment technology for function apps on the Consumption, Elastic Premium, and App Service (Dedicated) plans. The end result is a ready-to-run .zip package that your function app runs on. It differs from [external package URL](#external-package-url) in that the platform is responsible for remotely building and storing your app content.
->__How to use it:__ Deploy by using your favorite client tool: [Visual Studio Code](functions-develop-vs-code.md#publish-to-azure), [Visual Studio](functions-develop-vs.md#publish-to-azure), or from the command line using the [Azure Functions Core Tools](functions-run-local.md#project-file-deployment). By default, these tools use zip deployment and [run from package](run-functions-from-deployment-package.md). Core Tools and the Visual Studio Code extension both enable [remote build](#remote-build) when deploying to Linux. To manually deploy a .zip file to your function app, follow the instructions in [Deploy from a .zip file or URL](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file-or-url).
+>__How to use it:__ Deploy by using your favorite client tool: [Visual Studio Code](functions-develop-vs-code.md#publish-to-azure), [Visual Studio](functions-develop-vs.md#publish-to-azure), or from the command line using [Azure Functions Core Tools](functions-run-local.md#project-file-deployment) or the [Azure CLI](/cli/azure/functionapp/deployment/source#az-functionapp-deployment-source-config-zip). Our [Azure DevOps task](functions-how-to-azure-devops.md#deploy-your-app-1) and [GitHub Action](functions-how-to-github-actions.md) similarly use zip deploy.
>When you deploy by using zip deploy, you can set your app to [run from package](run-functions-from-deployment-package.md). To run from package, set the [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) application setting value to `1`. We recommend zip deployment. It yields faster loading times for your applications, and it's the default for VS Code, Visual Studio, and the Azure CLI.
->__When to use it:__ Zip deploy is the recommended deployment technology for Azure Functions.
+>__When to use it:__ Zip deploy is the default and recommended deployment technology for function apps on the Windows Consumption, Windows and Linux Elastic Premium, and Windows and Linux App Service (Dedicated) plans.
+
+>__Where app content is stored:__ App content from a zip deploy by default is stored on the file system, which may be backed by Azure Files from the storage account specified when the function app was created. In Linux Consumption, the app content is instead persisted on a blob in the storage account specified by the `AzureWebJobsStorage` app setting, and the app setting `WEBSITE_RUN_FROM_PACKAGE` will take on the value of the blob URL.
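For a command-line example of the zip deploy path referenced above, a hedged sketch using `az functionapp deployment source config-zip`; the package path and names are placeholders:

```azurecli
# Push a prebuilt .zip package to the function app by using zip deploy
az functionapp deployment source config-zip --name <app-name> --resource-group <resource-group-name> \
    --src ./functionapp.zip
```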
+
+### External package URL
+
+External package URL is an option if you want to manually control how deployments are performed. You take responsibility for uploading a ready-to-run .zip package containing your built app content to blob storage and referencing this external URL as an application setting on your function app. Whenever your app restarts, it fetches the package, mounts it, and runs in [Run From Package](run-functions-from-deployment-package.md) mode.
+
+>__How to use it:__ Add [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package) to your application settings. The value of this setting should be a blob URL pointing to the location of the specific package you want your app to run. You can add settings either [in the portal](functions-how-to-use-azure-function-app-settings.md#settings) or [by using the Azure CLI](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set).
+>
+>If you use Azure Blob Storage, your Function app can access the container either by using a managed identity-based connection or with a [shared access signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer). The option you choose affects what kind of URL you use as the value for WEBSITE_RUN_FROM_PACKAGE. Managed identity is recommended for overall security and because SAS tokens expire and must be manually maintained.
+>
+>Whenever you deploy the package file that a function app references, you must [manually sync triggers](#trigger-syncing), including the initial deployment. When you change the contents of the package file and not the URL itself, you must also restart your function app to sync triggers. Refer to our [how-to guide](run-functions-from-deployment-package.md#use-website_run_from_package--url) on configuring this deployment technology.
+
+>__When to use it:__ External package URL is the only supported deployment method for apps running on the Linux Consumption plan when you don't want a [remote build](#remote-build) to occur. This method is also the recommended deployment technology when you [create your app without Azure Files](storage-considerations.md#create-an-app-without-azure-files). For scalable apps running on Linux, you should instead consider [Flex Consumption plan](flex-consumption-plan.md) hosting.
->__Where app content is stored:__ App content from a zip deploy by default is stored on the file system, which may be backed by Azure Files from the storage account specified when the function app was created. In Linux Consumption, the app content instead is persisted on a blob in the storage account specified by the `AzureWebJobsStorage` connection.
+>__Where app content is stored:__ You are responsible for uploading your app content to blob storage. You may use any blob storage account, though Azure Blob Storage is recommended.
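A hedged sketch of wiring up the setting described above; the blob URL is a placeholder, and access (SAS token or managed identity) must be configured separately:

```azurecli
# Point the function app at an externally hosted, ready-to-run package
az functionapp config appsettings set --name <app-name> --resource-group <resource-group-name> \
    --settings WEBSITE_RUN_FROM_PACKAGE="https://<storage-account>.blob.core.windows.net/<container>/functionapp.zip"
```

Remember to [manually sync triggers](#trigger-syncing) after you update the package that the URL points to.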
### Docker container
azure-functions Functions How To Azure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-azure-devops.md
Title: Continuously update function app code using Azure Pipelines
description: Learn how to use Azure Pipelines to set up a pipeline that builds and deploys apps to Azure Functions. Previously updated : 04/03/2024 Last updated : 09/27/2024 ms.devlang: azurecli
steps:
You'll deploy with the [Azure Function App Deploy](/azure/devops/pipelines/tasks/deploy/azure-function-app) task. This task requires an [Azure service connection](/azure/devops/pipelines/library/service-endpoints) as an input. An Azure service connection stores the credentials to connect from Azure Pipelines to Azure.
-To deploy to Azure Functions, add the following snippet at the end of your `azure-pipelines.yml` file. The default `appType` is Windows. You can specify Linux by setting the `appType` to `functionAppLinux`.
+To deploy to Azure Functions, add the following snippet at the end of your `azure-pipelines.yml` file. The default `appType` is Windows. You can specify Linux by setting the `appType` to `functionAppLinux`. Deploying to a Flex Consumption app is not supported with @v1 of the AzureFunctionApp task.
```yaml trigger:
You'll deploy with the [Azure Function App Deploy v2](/azure/devops/pipelines/ta
The v2 version of the task includes support for newer applications stacks for .NET, Python, and Node. The task includes networking predeployment checks. When there are predeployment issues, deployment stops.
-To deploy to Azure Functions, add the following snippet at the end of your `azure-pipelines.yml` file. The default `appType` is Windows. You can specify Linux by setting the `appType` to `functionAppLinux`.
+To deploy to Azure Functions, add the following snippet at the end of your `azure-pipelines.yml` file. The default `appType` is Windows. You can specify Linux by setting the `appType` to `functionAppLinux`. Deploying to a Flex Consumption app requires you to set both `appType: functionAppLinux` and `isFlexConsumption: true`.
+### [Windows App](#tab/windows)
```yaml
trigger:
- main

variables:
  # Azure service connection established during pipeline creation
- azureSubscription: <Name of your Azure subscription>
- appName: <Name of the function app>
+ azureSubscription: <SUBSCRIPTION_NAME>
+ appName: <APP_NAME>
+ # Agent VM image name
+ vmImageName: 'windows-latest'
+
+- task: AzureFunctionApp@2 # Add this at the end of your file
+ inputs:
+ azureSubscription: <AZURE_SERVICE_CONNECTION>
+ appType: functionApp # this specifies a Windows-based function app
+ appName: $(appName)
+ package: $(System.ArtifactsDirectory)/**/*.zip
+ deploymentMethod: 'auto' # 'auto' | 'zipDeploy' | 'runFromPackage'. Required. Deployment method. Default: auto.
+ #Uncomment the next lines to deploy to a deployment slot
+ #Note that deployment slots is not supported for Linux Dynamic SKU
+ #deployToSlotOrASE: true
+ #resourceGroupName: '<RESOURCE_GROUP>'
+ #slotName: '<SLOT_NAME>'
+```
+
+### [Linux App](#tab/linux)
+```yaml
+trigger:
+- main
+
+variables:
+ # Azure service connection established during pipeline creation
+ azureSubscription: <SUBSCRIPTION_NAME>
+ appName: <APP_NAME>
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

- task: AzureFunctionApp@2 # Add this at the end of your file
  inputs:
- azureSubscription: <Azure service connection>
- appType: functionAppLinux # default is functionApp
+ azureSubscription: <AZURE_SERVICE_CONNECTION>
+ appType: functionAppLinux # This specifies a Linux-based function app
+ #isFlexConsumption: true # Uncomment this line if you are deploying to a Flex Consumption app
    appName: $(appName)
    package: $(System.ArtifactsDirectory)/**/*.zip
    deploymentMethod: 'auto' # 'auto' | 'zipDeploy' | 'runFromPackage'. Required. Deployment method. Default: auto.
    #Uncomment the next lines to deploy to a deployment slot
    #Note that deployment slots is not supported for Linux Dynamic SKU
    #deployToSlotOrASE: true
- #resourceGroupName: '<Resource Group Name>'
- #slotName: '<Slot name>'
+ #resourceGroupName: '<RESOURCE_GROUP>'
+ #slotName: '<SLOT_NAME>'
```

The snippet assumes that the build steps in your YAML file produce the zip archive in the `$(System.ArtifactsDirectory)` folder on your agent.
+If you opted to deploy to a [deployment slot](functions-deployment-slots.md), you can add the following step to perform a slot swap. Deployment slots are not yet available for the Flex Consumption SKU.
+```yaml
+- task: AzureAppServiceManage@0
+ inputs:
+ azureSubscription: <AZURE_SERVICE_CONNECTION>
+ WebAppName: <APP_NAME>
+ ResourceGroupName: <RESOURCE_GROUP>
+ SourceSlot: <SLOT_NAME>
+ SwapWithProduction: true
+```
+ ## Deploy a container You can automatically deploy your code to Azure Functions as a custom container after every successful build. To learn more about containers, see [Working with containers and Azure Functions](./functions-how-to-custom-container.md) .
trigger:
variables:
  # Container registry service connection established during pipeline creation
- dockerRegistryServiceConnection: <Docker registry service connection>
- imageRepository: <Name of your image repository>
- containerRegistry: <Name of the Azure container registry>
+ dockerRegistryServiceConnection: <DOCKER_REGISTRY_SERVICE_CONNECTION>
+ imageRepository: <IMAGE_REPOSITORY_NAME>
+ containerRegistry: <AZURE_CONTAINER_REGISTRY_NAME>
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'
variables:
- task: AzureFunctionAppContainer@1 # Add this at the end of your file
  inputs:
- azureSubscription: '<Azure service connection>'
- appName: '<Name of the function app>'
+ azureSubscription: '<AZURE_SERVICE_CONNECTION>'
+ appName: '<APP_NAME>'
    imageName: $(containerRegistry)/$(imageRepository):$(tag)
```

The snippet pushes the Docker image to your Azure Container Registry. The **Azure Function App on Container Deploy** task pulls the appropriate Docker image corresponding to the `BuildId` from the repository specified, and then deploys the image.
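
The build-and-push stage of the container pipeline isn't reproduced in full here. As a rough sketch, assuming the built-in `Docker@2` task is used with the variables defined earlier (your pipeline's actual build steps may differ):

```yaml
- task: Docker@2
  displayName: Build and push the function image
  inputs:
    command: buildAndPush
    repository: $(imageRepository)
    dockerfile: $(dockerfilePath)
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      $(tag)
```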
-## Deploy to a slot
-
-You can configure your function app to have multiple slots. Slots allow you to safely deploy your app and test it before making it available to your customers.
-
-The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
-
-```yaml
-- task: AzureFunctionApp@2
- inputs:
- azureSubscription: <Azure service connection>
- appType: functionAppLinux
- appName: <Name of the Function app>
- package: $(System.ArtifactsDirectory)/**/*.zip
- deploymentMethod: 'auto'
- deployToSlotOrASE: true
- resourceGroupName: <Name of the resource group>
- slotName: staging
--- task: AzureAppServiceManage@0
- inputs:
- azureSubscription: <Azure service connection>
- WebAppName: <name of the Function app>
- ResourceGroupName: <name of resource group>
- SourceSlot: staging
- SwapWithProduction: true
-```
-
## Create a pipeline with Azure CLI

To create a build pipeline in Azure, use the `az functionapp devops-pipeline create` [command](/cli/azure/functionapp/devops-pipeline#az-functionapp-devops-pipeline-create). The build pipeline is created to build and release any code changes that are made in your repo. The command generates a new YAML file that defines the build and release pipeline and then commits it to your repo. The prerequisites for this command depend on the location of your code.
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
Title: Use GitHub Actions to make code updates in Azure Functions description: Learn how to use GitHub Actions to define a workflow to build and deploy Azure Functions projects in GitHub. Previously updated : 03/16/2024 Last updated : 09/27/2024 zone_pivot_groups: github-actions-deployment-options
The Azure Functions action (`Azure/azure-functions`) defines how your code is pu
### Parameters
-The following parameters are most commonly used with this action:
+The following parameters are required for all function app plans:
|Parameter |Explanation |
|---------|------------|
-|_**app-name**_ | (Mandatory) The name of your function app. |
-|_**slot-name**_ | (Optional) The name of a specific [deployment slot](functions-deployment-slots.md) you want to deploy to. The slot must already exist in your function app. When not specified, the code is deployed to the active slot. |
-|_**publish-profile**_ | (Optional) The name of the GitHub secret that contains your publish profile. |
+|_**app-name**_ | The name of your function app. |
+|***package*** | This is the location in your project to be published. By default, this value is set to `.`, which means all files and folders in the GitHub repository will be deployed.|
-The following parameters are also supported, but are used only in specific cases:
+The following parameters are required for the Flex Consumption plan:
|Parameter |Explanation |
|---------|------------|
-| _**package**_ | (Optional) Sets a subpath in your repository from which to publish. By default, this value is set to `.`, which means all files and folders in the GitHub repository are deployed. |
-| _**respect-pom-xml**_ | (Optional) Used only for Java functions. Whether it's required for your app's deployment artifact to be derived from the pom.xml file. When deploying Java function apps, you should set this parameter to `true` and set `package` to `.`. By default, this parameter is set to `false`, which means that the `package` parameter must point to your app's artifact location, such as `./target/azure-functions/` |
-| _**respect-funcignore**_ | (Optional) Whether GitHub Actions honors your .funcignore file to exclude files and folders defined in it. Set this value to `true` when your repository has a .funcignore file and you want to use it exclude paths and files, such as text editor configurations, .vscode/, or a Python virtual environment (.venv/). The default setting is `false`. |
-| _**scm-do-build-during-deployment**_ | (Optional) Whether the App Service deployment site (Kudu) performs predeployment operations. The deployment site for your function app can be found at `https://<APP_NAME>.scm.azurewebsites.net/`. Change this setting to `true` when you need to control the deployments in Kudu rather than resolving the dependencies in the GitHub Actions workflow. The default value is `false`. For more information, see the [SCM_DO_BUILD_DURING_DEPLOYMENT](./functions-app-settings.md#scm_do_build_during_deployment) setting. |
-| _**enable-oryx-build**_ |(Optional) Whether the Kudu deployment site resolves your project dependencies by using Oryx. Set to `true` when you want to use Oryx to resolve your project dependencies by using a remote build instead of the GitHub Actions workflow. When `true`, you should also set `scm-do-build-during-deployment` to `true`. The default value is `false`.|
+|_**sku**_ | Set this to `flexconsumption` when authenticating with publish-profile. When using RBAC credentials or deploying to a non-Flex Consumption plan, the Action can resolve the value, so the parameter does not need to be included. |
+|_**remote-build**_ | Set this to `true` to enable a build action from Kudu when the package is deployed to a Flex Consumption app. Oryx build is always performed during a remote build in Flex Consumption; do not set **scm-do-build-during-deployment** or **enable-oryx-build**. By default, this parameter is set to `false`. |
+
+The following parameters are specific to the Consumption, Elastic Premium, and App Service (Dedicated) plans:
+
+|Parameter |Explanation |
+|---------|------------|
+|_**scm-do-build-during-deployment**_ | (Optional) Allow the Kudu site (e.g. `https://<APP_NAME>.scm.azurewebsites.net/`) to perform pre-deployment operations, such as [remote builds](functions-deployment-technologies.md#remote-build). By default, this is set to `false`. Set this to `true` when you do want to control deployment behaviors using Kudu instead of resolving dependencies in your GitHub workflow. For more information, see the [`SCM_DO_BUILD_DURING_DEPLOYMENT`](./functions-app-settings.md#scm_do_build_during_deployment) setting.|
+|_**enable-oryx-build**_ | (Optional) Allow Kudu site to resolve your project dependencies with Oryx. By default, this is set to `false`. If you want to use [Oryx](https://github.com/Microsoft/Oryx) to resolve your dependencies instead of the GitHub Workflow, set both **scm-do-build-during-deployment** and **enable-oryx-build** to `true`.|
+
+Optional parameters for all function app plans:
+
+|Parameter |Explanation |
+|---------|------------|
+| ***slot-name*** | The name of the [deployment slot](functions-deployment-slots.md) to deploy to. By default, this value is empty, which means the GitHub Action deploys to your production site. When this setting points to a non-production slot, ensure the **publish-profile** parameter contains the credentials for the slot instead of the production site. _Currently not supported in Flex Consumption_. |
+|***publish-profile*** | The name of the GitHub secret that contains your publish profile.|
+| _**respect-pom-xml**_ | Used only for Java functions. Whether it's required for your app's deployment artifact to be derived from the pom.xml file. When deploying Java function apps, you should set this parameter to `true` and set `package` to `.`. By default, this parameter is set to `false`, which means that the `package` parameter must point to your app's artifact location, such as `./target/azure-functions/` |
+| _**respect-funcignore**_ | Whether GitHub Actions honors your .funcignore file to exclude files and folders defined in it. Set this value to `true` when your repository has a .funcignore file and you want to use it to exclude paths and files, such as text editor configurations, .vscode/, or a Python virtual environment (.venv/). The default setting is `false`. |
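
To illustrate how these parameters fit together, here's a minimal deployment step, assuming the action is referenced as `Azure/functions-action@v1` and that a publish profile is stored in a repository secret named `AZURE_FUNCTIONAPP_PUBLISH_PROFILE`; the app name is a placeholder, and the build steps that normally precede deployment are omitted:

```yaml
- name: Deploy to Azure Functions
  uses: Azure/functions-action@v1
  with:
    app-name: <APP_NAME>
    package: '.'
    publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
    respect-funcignore: true
```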
### Considerations
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
Azure Maps is a portfolio of geospatial service APIs integrated into Azure, enab
Azure Maps REST APIs support languages like Python and R for geospatial data analysis and machine learning, offering robust [routing APIs] for calculating routes based on conditions such as vehicle type or reachable area.
-This tutorial guides users through routing electric vehicles using Azure Maps APIs along with Azure Notebooks and Python to find the closest charging station when the battery is low.
+This tutorial guides users through routing electric vehicles using Azure Maps APIs along with [Jupyter Notebooks in VS Code] and Python to find the closest charging station when the battery is low.
In this tutorial, you will:

> [!div class="checklist"]
>
-> * Create and run a Jupyter Notebook file on [Azure Notebooks] in the cloud.
+> * Create and run a [Jupyter Notebook in VS Code].
> * Call Azure Maps REST APIs in Python.
> * Search for a reachable range based on the electric vehicle's consumption model.
> * Search for electric vehicle charging stations within the reachable range, or [isochrone].
In this tutorial, you will:
* An [Azure Maps account]
* A [subscription key]
+* [Visual Studio Code]
+* A working knowledge of [Jupyter Notebooks in VS Code]
+* Environment set up to work with Python in Jupyter Notebooks. For more information, see [Setting up your environment].
> [!NOTE]
> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
-## Create an Azure Notebooks project
-
-To proceed with this tutorial, it's necessary to create an Azure Notebooks project and download and execute the Jupyter Notebook file. This file contains Python code that demonstrates the scenario presented in this tutorial.
-
-Follow these steps to create an Azure Notebooks project and upload the Jupyter Notebook document:
-
-1. Go to [Azure Notebooks] and sign in.
-1. At the top of your public profile page, select **My Projects**.
-
- ![The My Projects button](./media/tutorial-ev-routing/myproject.png)
-
-1. On the **My Projects** page, select **New Project**.
-
- ![The New Project button](./media/tutorial-ev-routing/create-project.png)
-
-1. In the **Create New Project** pane, enter a project name and project ID.
-
- ![The Create New Project pane](./media/tutorial-ev-routing/create-project-window.png)
-
-1. Select **Create**.
-
-1. After your project is created, download this [Jupyter Notebook document file] from the [Azure Maps Jupyter Notebook repository].
-
-1. In the projects list on the **My Projects** page, select your project, and then select **Upload** to upload the Jupyter Notebook document file.
-
- ![upload Jupyter Notebook](./media/tutorial-ev-routing/upload-notebook.png)
-
-1. Upload the file from your computer, and then select **Done**.
+## Install project level packages
-1. Once uploaded successfully, your file is displayed on your project page. Double-click on the file to open it as a Jupyter Notebook.
+The _EV Routing and Reachable Range_ project has dependencies on the [aiohttp] and [IPython] Python libraries. You can install these in the Visual Studio Code terminal using pip:
-Familiarize yourself with the functionality implemented in the Jupyter Notebook file. Execute the code within the Jupyter Notebook one cell at a time by selecting the **Run** button located at the top of the Jupyter Notebook application.
+```python
+pip install aiohttp
+pip install ipython
+```
- ![The Run button](./media/tutorial-ev-routing/run.png)
+## Open Jupyter Notebook in Visual Studio Code
-## Install project level packages
+Download and then open the Notebook used in this tutorial:
-To run the code in Jupyter Notebook, install packages at the project level by following these steps:
+1. Open the file [EVrouting.ipynb] in the [AzureMapsJupyterSamples] repository in GitHub.
+1. Select the **Download raw file** button in the upper-right corner of the screen to save the file locally.
-1. Download the [*requirements.txt*] file from the [Azure Maps Jupyter Notebook repository], and then upload it to your project.
-1. On the project dashboard, select **Project Settings**.
-1. In the **Project Settings** pane, select the **Environment** tab, and then select **Add**.
-1. Under **Environment Setup Steps**, do the following:
- a. In the first drop-down list, select **Requirements.txt**.
- b. In the second drop-down list, select your *requirements.txt* file.
- c. In the third drop-down list, select the version of Python. Version 3.11 was used when creating this tutorial.
-1. Select **Save**.
+   :::image type="content" source="./media/tutorial-ev-routing/download-notebook.png" alt-text="A screenshot showing how to download the Notebook file named EVrouting.ipynb from the GitHub repository.":::
- ![Install packages](./media/tutorial-ev-routing/install-packages.png)
+1. Open the downloaded Notebook in Visual Studio Code by right-clicking on the file then selecting **Open with > Visual Studio Code**, or through the VS Code File Explorer.
## Load the required modules and frameworks
+Once your code is added, you can run a cell by selecting the **Run** icon to the left of the cell; the output is displayed below the code cell.
+
Run the following script to load all the required modules and frameworks.

```Python
import urllib.parse
from IPython.display import Image, display
```

+

## Request the reachable range boundary

A package delivery company operates a fleet that includes some electric vehicles. These vehicles need to be recharged during the day without returning to the warehouse. When the remaining charge drops below an hour, a search is conducted to find charging stations within a reachable range. The boundary information for the range of these charging stations is then obtained.
-The requested routeType is eco to balance economy and speed. The following script calls the [Get Route Range] API of the Azure Maps routing service, using parameters related to the vehicle's consumption model. The script then parses the response to create a polygon object in GeoJSON format, representing the car's maximum reachable range.
+The requested `routeType` is _eco_ to balance economy and speed. The following script calls the [Get Route Range] API of the Azure Maps routing service, using parameters related to the vehicle's consumption model. The script then parses the response to create a polygon object in GeoJSON format, representing the car's maximum reachable range.
```python
subscriptionKey = "Your Azure Maps key"
poiRangeMap = await staticMapResponse.content.read()
display(Image(poiRangeMap))
```
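
The full request isn't reproduced in this excerpt. A minimal sketch of the Get Route Range call described above is shown next; the coordinates, consumption-model values, and response parsing are illustrative assumptions rather than the tutorial's exact code:

```python
import urllib.parse
import aiohttp

async def get_reachable_range(session, subscription_key, lat, lon):
    # Build a Get Route Range request for an electric vehicle using an eco route type.
    params = urllib.parse.urlencode({
        "api-version": "1.0",
        "subscription-key": subscription_key,
        "query": f"{lat},{lon}",
        "routeType": "eco",
        "travelMode": "car",
        "vehicleEngineType": "electric",
        "constantSpeedConsumptionInkWhPerHundredkm": "50,8.2:130,21.3",
        "currentChargeInkWh": 43,
        "maxChargeInkWh": 80,
        "energyBudgetInkWh": 20,
    })
    url = f"https://atlas.microsoft.com/route/range/json?{params}"
    async with session.get(url) as response:
        data = await response.json()

    # Convert the boundary points into a closed GeoJSON polygon ring ([lon, lat] order).
    boundary = data["reachableRange"]["boundary"]
    ring = [[point["longitude"], point["latitude"]] for point in boundary]
    ring.append(ring[0])
    return {"type": "Polygon", "coordinates": [ring]}
```

In a notebook cell, you would typically call this with `await` inside an open `aiohttp.ClientSession`.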
-![A map showing the location range](./media/tutorial-ev-routing/location-range.png)
## Find the optimal charging station
await session.close()
display(Image(staticMapImage))
```
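
The matrix-routing step that picks the station isn't shown in this excerpt. A rough sketch of computing travel times with the Post Route Matrix API and selecting the closest station follows; the variable names are assumptions, and the payload shape follows the public API's GeoJSON MultiPoint format:

```python
import aiohttp

async def find_closest_station(session, subscription_key, current_location, station_locations):
    # current_location and station_locations hold (latitude, longitude) pairs.
    url = (
        "https://atlas.microsoft.com/route/matrix/sync/json"
        f"?api-version=1.0&routeType=shortest&travelMode=car&subscription-key={subscription_key}"
    )
    body = {
        "origins": {
            "type": "MultiPoint",
            "coordinates": [[current_location[1], current_location[0]]],  # GeoJSON is [lon, lat]
        },
        "destinations": {
            "type": "MultiPoint",
            "coordinates": [[lon, lat] for lat, lon in station_locations],
        },
    }
    async with session.post(url, json=body) as response:
        result = await response.json()

    # One origin row; choose the destination with the smallest travel time.
    times = [cell["response"]["routeSummary"]["travelTimeInSeconds"] for cell in result["matrix"][0]]
    best = times.index(min(times))
    return station_locations[best], times[best]
```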
-![A map showing the route](./media/tutorial-ev-routing/route.png)
In this tutorial, you learned how to call Azure Maps REST APIs directly and visualize Azure Maps data by using Python.
-To explore the Azure Maps APIs that are used in this tutorial, see:
+For more information on the Azure Maps APIs used in this tutorial, see:
+* [Get Route Directions]
* [Get Route Range]
+* [Post Route Matrix]
* [Post Search Inside Geometry]
* [Render - Get Map Image]
-* [Post Route Matrix]
-* [Get Route Directions]
-* [Azure Maps REST APIs]
-## Next steps
+For a complete list of Azure Maps REST APIs, see [Azure Maps REST APIs].
-To learn more about Azure Notebooks, see
+## Next steps
> [!div class="nextstepaction"]
-> [Azure Notebooks]
+> [Learn more about all the notebooks experiences from Microsoft and GitHub](https://visualstudio.microsoft.com/vs/features/notebooks-at-microsoft)
+[aiohttp]: https://pypi.org/project/aiohttp/
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Azure Maps Jupyter Notebook repository]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook
[Azure Maps REST APIs]: /rest/api/maps
-[Azure Notebooks]: https://notebooks.azure.com
-[Get Map Image]: /rest/api/maps/render/get-map-static-image
+[AzureMapsJupyterSamples]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook
+[EVrouting.ipynb]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/EVrouting.ipynb
[Get Map Image service]: /rest/api/maps/render/get-map-static-image
+[Get Map Image]: /rest/api/maps/render/get-map-static-image
[Get Route Directions]: /rest/api/maps/route/getroutedirections
[Get Route Range]: /rest/api/maps/route/getrouterange
+[IPython]: https://ipython.readthedocs.io/en/stable/index.html
[isochrone]: glossary.md#isochrone
-[Jupyter Notebook document file]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/EVrouting.ipynb
+[Jupyter Notebook in VS Code]: https://code.visualstudio.com/docs/datascience/jupyter-notebooks
+[Jupyter Notebooks in VS Code]: https://code.visualstudio.com/docs/datascience/jupyter-notebooks
[manage authentication in Azure Maps]: how-to-manage-authentication.md
[Matrix Routing]: /rest/api/maps/route/postroutematrix
[Post Route Matrix]: /rest/api/maps/route/postroutematrix
[Post Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true
[Render - Get Map Image]: /rest/api/maps/render/get-map-static-image
-[*requirements.txt*]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/requirements.txt
[routing APIs]: /rest/api/maps/route
+[Setting up your environment]: https://code.visualstudio.com/docs/datascience/jupyter-notebooks#_setting-up-your-environment
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Visual Studio Code]: https://code.visualstudio.com/
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md
description: Tutorial on how to join sensor data with weather forecast data from Microsoft Azure Maps Weather services using Azure Notebooks(Python). Previously updated : 10/28/2021 Last updated : 10/28/2024
In this tutorial, you will:
> [!div class="checklist"]
>
-> * Work with data files in [Azure Notebooks] in the cloud.
+> * Create and run a [Jupyter Notebook in VS Code].
> * Load demo data from file.
> * Call Azure Maps REST APIs in Python.
> * Render location data on the map.
> * Enrich the demo data with Azure Maps [Daily Forecast] weather data.
> * Plot forecast data in graphs.
+> [!NOTE]
+> The Jupyter notebook file for this project can be downloaded from the [Weather Maps Jupyter Notebook repository].
+
## Prerequisites

If you don't have an Azure subscription, create a [free account] before you begin.

* An [Azure Maps account]
* A [subscription key]
+* [Visual Studio Code]
+* A working knowledge of [Jupyter Notebooks in VS Code]
+* Environment set up to work with Python in Jupyter Notebooks. For more information, see [Setting up your environment].
> [!NOTE]
> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
-To get familiar with Azure notebooks and to know how to get started, follow the instructions [Create an Azure Notebook].
+## Install project level packages
-> [!NOTE]
-> The Jupyter notebook file for this project can be downloaded from the [Weather Maps Jupyter Notebook repository].
+This project has dependencies on the [aiohttp], [IPython], and pandas Python libraries. You can install them in the Visual Studio Code terminal using pip:
+
+```python
+pip install aiohttp
+pip install ipython
+pip install pandas
+```
+
+## Open Jupyter Notebook in Visual Studio Code
+
+Download and then open the Notebook used in this tutorial:
+
+1. Open the file [weatherDataMaps.ipynb] in the [AzureMapsJupyterSamples] repository in GitHub.
+1. Select the **Download raw file** button in the upper-right corner of the screen to save the file locally.
+
+   :::image type="content" source="./media/weather-service-tutorial/download-notebook.png" alt-text="A screenshot showing how to download the Notebook file named weatherDataMaps.ipynb from the GitHub repository.":::
+
+1. Open the downloaded Notebook in Visual Studio Code by right-clicking on the file then selecting **Open with > Visual Studio Code**, or through the VS Code File Explorer.
## Load the required modules and frameworks
-To load all the required modules and frameworks, run the following script:
+Once your code is added, you can run a cell by selecting the **Run** icon to the left of the cell; the output is displayed below the code cell.
-```python
+Run the following script to load all the required modules and frameworks.
+
+```Python
+import aiohttp
import pandas as pd
import datetime
from IPython.display import Image, display
-!pip install aiohttp
-import aiohttp
```

+

## Import weather data

This tutorial uses weather data readings from sensors installed at four different wind turbines. The sample data consists of 30 days of weather readings. These readings are gathered from weather data centers near each turbine location. The demo data contains data readings for temperature, wind speed, and direction. You can download the demo data contained in [weather_dataset_demo.csv] from GitHub. The script below imports the demo data into the notebook.
df = pd.read_csv("./data/weather_dataset_demo.csv")
## Request daily forecast data
-In our scenario, we would like to request daily forecast for each sensor location. The following script calls the [Daily Forecast API] of the Azure Maps Weather services. This API returns weather forecast for each wind turbine, for the next 15 days from the current date.
+In our scenario, we would like to request daily forecast for each sensor location. The following script calls the [Daily Forecast] API of the Azure Maps Weather services. This API returns weather forecast for each wind turbine, for the next 15 days from the current date.
```python
subscription_key = "Your Azure Maps key"
await session.close()
display(Image(poi_range_map))
```
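
Only a fragment of the forecast request appears in this excerpt. A minimal sketch of a single Daily Forecast call is shown below; the coordinate arguments and the response fields picked out are illustrative assumptions rather than the tutorial's exact code:

```python
import aiohttp

async def get_daily_forecast(session, subscription_key, lat, lon, days=15):
    # Request a multi-day forecast for one sensor location from the Azure Maps Weather service.
    url = (
        "https://atlas.microsoft.com/weather/forecast/daily/json"
        f"?api-version=1.0&query={lat},{lon}&duration={days}&subscription-key={subscription_key}"
    )
    async with session.get(url) as response:
        data = await response.json()

    # Keep the date plus daytime wind speed and direction for each forecast day.
    return [
        {
            "date": day["date"],
            "wind_speed": day["day"]["wind"]["speed"]["value"],
            "wind_direction": day["day"]["wind"]["direction"]["degrees"],
        }
        for day in data["forecasts"]
    ]
```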
-![Turbine locations](./media/weather-service-tutorial/location-map.png)
Group the forecast data with the demo data based on the station ID. The station ID identifies the weather data center near each turbine. This grouping augments the demo data with the forecast data.
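
For example, the grouping can be a straightforward pandas merge on the shared station identifier; the data frame and column names here are assumptions about the demo and forecast data:

```python
# Join the demo readings with the forecast rows that share the same station ID.
merged_df = pd.merge(df, forecast_df, on="StationID", how="inner")
```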
windsPlot.set_ylabel("Wind direction")
The following graphs visualize the forecast data: the left graph shows the change in wind speed, and the right graph shows the change in wind direction. This data is a prediction for the next 15 days from the day the data is requested.
-![Wind speed plot](./media/weather-service-tutorial/speed-date-plot.png) ![Wind direction plot](./media/weather-service-tutorial/direction-date-plot.png)
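
A compact way to produce such side-by-side plots from the merged data is sketched below; the column names are assumptions:

```python
import matplotlib.pyplot as plt

fig, (speed_ax, direction_ax) = plt.subplots(1, 2, figsize=(12, 4))

# Left graph: forecast wind speed for the next 15 days.
merged_df.plot(x="Date", y="WindSpeed", ax=speed_ax, legend=False)
speed_ax.set_ylabel("Wind speed")

# Right graph: forecast wind direction over the same period.
merged_df.plot(x="Date", y="WindDirection", ax=direction_ax, legend=False)
direction_ax.set_ylabel("Wind direction")

plt.show()
```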
-In this tutorial, you learned how to call Azure Maps REST APIs to get weather forecast data. You also learned how to visualize the data on graphs.
-To learn more about how to call Azure Maps REST APIs inside Azure Notebooks, see [EV routing using Azure Notebooks].
+In this tutorial, you learned how to call Azure Maps REST APIs to get weather forecast data. You also learned how to visualize the data on graphs.
To explore the Azure Maps APIs that are used in this tutorial, see:
For a complete list of Azure Maps REST APIs, see [Azure Maps REST APIs].
-## Clean up resources
-
-There are no resources that require cleanup.
- ## Next steps
-To learn more about Azure Notebooks, see
- > [!div class="nextstepaction"]
-> [Azure Notebooks]
+> [Learn more about all the notebooks experiences from Microsoft and GitHub](https://visualstudio.microsoft.com/vs/features/notebooks-at-microsoft)
+[aiohttp]: https://pypi.org/project/aiohttp/
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Azure Maps REST APIs]: consumption-model.md
-[Azure Notebooks]: https://notebooks.azure.com
-[Create an Azure Notebook]: tutorial-ev-routing.md#create-an-azure-notebooks-project
-[Daily Forecast API]: /rest/api/maps/weather/getdailyforecast
+[Azure Maps REST APIs]: /rest/api/maps
+[AzureMapsJupyterSamples]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook
[Daily Forecast]: /rest/api/maps/weather/getdailyforecast
-[EV routing using Azure Notebooks]: tutorial-ev-routing.md
[free account]: https://azure.microsoft.com/free/
[Get Map Image service]: /rest/api/maps/render/get-map-static-image
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[IPython]: https://ipython.readthedocs.io/en/stable/index.html
+[Jupyter Notebook in VS Code]: https://code.visualstudio.com/docs/datascience/jupyter-notebooks
+[Jupyter Notebooks in VS Code]: https://code.visualstudio.com/docs/datascience/jupyter-notebooks
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
[Render - Get Map Image]: /rest/api/maps/render/get-map-static-image
+[Setting up your environment]: https://code.visualstudio.com/docs/datascience/jupyter-notebooks#_setting-up-your-environment
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Visual Studio Code]: https://code.visualstudio.com/
[Weather Maps Jupyter Notebook repository]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data
[weather_dataset_demo.csv]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data/data
+[weatherDataMaps.ipynb]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data/weatherDataMaps.ipynb
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
When you use a manual QoS capacity pool with, for example, an SAP HANA system, a
## Large volumes
-Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 50 GiB and 102,400 GiB.
+Azure NetApp Files allows you to create [large volumes](large-volumes.md) up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 50 GiB and 102,400 GiB.
For more information, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md).
For more information, see [Requirements and considerations for large volumes](la
- [Performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md) - [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md) - [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md)
+- [Understand large volumes](large-volumes.md)
- [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
If this is your first time using large volumes, register the feature with the [l
## Next steps
+* [Understand large volumes](large-volumes.md)
* [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) * [Create an NFS volume](azure-netapp-files-create-volumes.md)
azure-netapp-files Large Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes.md
+
+ Title: Understand large volumes in Azure NetApp Files
+description: Learn about the benefits, use cases, and requirements for using large volumes in Azure NetApp Files.
+++++ Last updated : 10/29/2024++
+# Understand large volumes in Azure NetApp Files
+
+Volumes in Azure NetApp Files are the way you present high performance, cost-effective storage to your network attached storage (NAS) clients in the Azure cloud. Volumes act as independent file systems with their own capacity, file counts, ACLs, snapshots, and file system IDs. These qualities provide a way to separate datasets into individual secure tenants.
++
+All resources in Azure NetApp Files have [limits](azure-netapp-files-resource-limits.md). _Regular_ volumes have the following limits:
+
+| Limit type | Limits |
+| - | - |
+| Capacity | <ul><li>50 GiB minimum</li><li>100 TiB maximum</li></ul> |
+| File count | 2,147,483,632 |
+| Performance | <ul><li>Standard: 1,600</li><li>Premium: 1,600</li><li>Ultra: 4,500</li></ul> |
+
+Large volumes have the following limits:
+
+| Limit type | Values |
+| - | - |
+| Capacity | <ul><li>50 TiB minimum</li><li>1 PiB maximum (or [2 PiB by special request](azure-netapp-files-resource-limits.md#request-limit-increase))</li></ul> |
+| File count | 15,938,355,048 |
+| Performance | <ul><li>Standard: 1,600</li><li>Premium: 6,400</li><li>Ultra: 12,800</li></ul> |
++
+## Large volumes effect on performance
+
+In many cases, a regular volume can handle the performance needs for a production workload, particularly when dealing with database workloads, general file shares, and Azure VMware Service or virtual desktop infrastructure (VDI) workloads. When workloads are metadata heavy or require scale beyond what a regular volume can handle, a large volume can meet those performance needs with minimal cost impact.
+
+For instance, the following graphs show that a large volume can deliver two to three times the performance at scale of a regular volume.
+
+For more information about performance tests, see [Large volume performance benchmarks for Linux](performance-large-volumes-linux.md) and [Regular volume performance benchmarks for Linux](performance-benchmarks-linux.md).
+
+For example, in benchmark tests using Flexible I/O Tester (FIO), a large volume achieved higher IOPS and throughput than a regular volume.
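
As a rough illustration, benchmarks of this kind are typically driven by an FIO job along these lines; the mount path, block size, and job counts below are placeholders, not the parameters used in the published results:

```bash
# Random-read test against an NFS mount backed by an Azure NetApp Files volume.
fio --name=randread-test \
    --directory=/mnt/anf-volume \
    --rw=randread --direct=1 --bs=8k \
    --size=10G --numjobs=8 --iodepth=64 \
    --time_based --runtime=120 --group_reporting
```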
+++
+## Workload types and use cases
+
+Regular volumes can handle most workloads. Once capacity, file count, performance, or scale limits are reached, new volumes must be created. This condition adds unnecessary complexity to a solution.
+
+Large volumes allow workloads to extend beyond the current limitations of regular volumes. The following table shows some examples of use cases for each volume type.
+
+| Volume type | Primary use cases |
+| - | -- |
+| Regular volumes | <ul><li>General file shares</li><li>SAP HANA and databases (Oracle, SQL Server, Db2, and others)</li><li>VDI/Azure VMware Service</li><li>Capacities less than 50 TiB</li></ul> |
+| Large volumes | <ul><li>General file shares</li><li>High file count or high metadata workloads (such as electronic design automation, software development, FSI)</li><li>High capacity workloads (such as AI/ML/LLP, oil & gas, media, healthcare images, backup, and archives)</li><li>Large-scale workloads (many client connections such as FSLogix profiles)</li><li>High performance workloads</li><li>Capacity quotas between 50 TiB and 1 PiB</li></ul> |
+
+## More information
+
+* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
+* [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
+* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | scheduledQueryRules | resource group | 1-260 | Can't use:<br>`*<>%{}&:\\?/#|` or control characters <br><br>Can't end with space or period. | > | metricAlerts | resource group | 1-260 | Can't use:<br>`*#&+:<>?@%{}\/|` or control characters <br><br>Can't end with space or period. | > | activityLogAlerts | resource group | 1-260 | Can't use:<br>`<>*%{}&:\\?+/#|` or control characters <br><br>Can't end with space or period. |
-> | PrometheusAlerts | resource group | 1-260 | Can't use:<br>`<>*%{}&:\\?+/#|` or control characters <br><br>Can't end with space or period. |
+
+## Microsoft.AlertsManagement
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | prometheusRuleGroups | resource group | 1-260 | Can't use:<br>`<>*%{}&:\\?+/#|` or control characters <br><br>Can't end with space or period. |
++
## Microsoft.IoTCentral
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | availabilitySets | Yes | Yes |
-> | capacityReservationGroups | Yes | Yes |
-> | capacityReservationGroups / capacityReservations | Yes | Yes |
+> | capacityReservationGroups | No | No |
+> | capacityReservationGroups / capacityReservations | No | No |
> | cloudServices | Yes | Yes | > | cloudServices / networkInterfaces | No | No | > | cloudServices / publicIPAddresses | No | No |
azure-web-pubsub Socket Io Serverless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-serverless-overview.md
Socket.IO is a library that enables real-time, bidirectional, and event-based co
With the increasing adoption of serverless computing, we're introducing a new mode: Socket.IO Serverless mode. This mode allows Socket.IO to function in a serverless environment, handling communication logic through RESTful APIs or webhooks, offering a scalable, cost-effective, and maintenance-free solution.

## Differences Between Default Mode and Serverless Mode

++

| Feature | Default Mode | Serverless Mode |
|---------|--------------|-----------------|
|Architecture|Uses persistent connections for both servers and clients | Clients use persistent connections, but servers use RESTful APIs and webhook event handlers in a stateless manner|
bastion Kerberos Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md
This article shows you how to configure Azure Bastion to use Kerberos authentica
* VMs migrated from on-premises to Azure aren't currently supported for Kerberos.  * Cross-realm authentication isn't currently supported for Kerberos. * The Domain controller must be an Azure Hosted VM within the same VNET that bastion is deployed.
-* Changes to DNS server aren't currently supported for Kerberos. After making any changes to DNS server, you'll need to delete and re-create the Bastion resource.
+* Changes to DNS servers don't propagate to Bastion automatically. After making any changes to the DNS server, you'll need to delete and re-create the Bastion resource so that the DNS information propagates properly.
* If additional DCs (domain controllers) are added, Bastion will only recognize the first DC.
* If additional DCs are added for different domains, the added domains can't successfully authenticate with Kerberos.
cdn Edgio Retirement Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/edgio-retirement-faq.md
+
+ Title: Azure CDN from Edgio retirement FAQ
+
+description: Common questions about the retirement of Azure CDN Standard from Edgio.
++++ Last updated : 10/30/2024+++
+# Azure CDN from Edgio retirement FAQ
+
+Azure CDN from Edgio will be retired on November 4, 2025. You must migrate your workload to Azure Front Door before this date to avoid service disruption. This article provides answers to common questions about the retirement of Azure CDN from Edgio.
+
+## Frequently asked questions
+
+### I see Edgio filed for Chapter 11 bankruptcy. Can Microsoft guarantee Azure CDN from Edgio's availability and services until November 4, 2025?
+
+Edgio informed us that its services will remain operational until at least November 4, 2025. However, we can't guarantee that Edgio won't unexpectedly cease operations before this date due to circumstances beyond our control.
+
+### How does Microsoft assist me with migrating my workloads from Azure CDN from Edgio?
+
+You're encouraged to migrate your workloads to Azure Front Door. You need to determine if Azure Front Door is suitable for your workloads and set up a test environment to validate compatibility. For a feature comparison, see [Azure Front Door and Azure CDN features](../frontdoor/front-door-cdn-comparison.md).
+
+If Azure Front Door isn't compatible with your workload, we offer a service called [Routing Preference Unmetered](../virtual-network/ip-services/routing-preference-unmetered.md) also known as *CDN Interconnect*. This service routes traffic from your Azure resources to another CDN. You can choose to continue working with Edgio directly to minimize interruptions and keep your origins on Azure. For further information, you can contact Microsoft support or reach out to [Edgio](https://edg.io/contact-us/).
+
+### Does Microsoft validate my workloads work on Azure Front Door?
+
+You need to determine if Azure Front Door suits your workloads. We recommend setting up a test environment to validate that your services are compatible with Azure Front Door.
+
+### What alternative solutions does Microsoft offer?
+
+We encourage you to consider migrating your workloads to Azure Front Door. For a feature comparison between Azure Front Door and Azure CDN from Edgio, see [Azure CDN Features](cdn-features.md). For a pricing comparison, visit [Azure CDN Pricing](https://azure.microsoft.com/pricing/details/cdn/).
+
+### If I determined my workload isn't a match for Azure Front Door, what are my options?
+
+If you find that Azure Front Door isn't suitable for your workload, we offer an alternative service called [Routing Preference Unmetered](../virtual-network/ip-services/routing-preference-unmetered.md), also known as "CDN Interconnect." This service might allow free data transfer for traffic egressing from your Azure resources to another CDN of your choice.
+
+Additionally, you can choose to continue working directly with Edgio to minimize interruptions, keeping your origins on Azure while utilizing Edgio's services. For further information, contact Microsoft Support or reach out to [Edgio](https://edg.io/contact-us/).
+
+### If I find Azure Front Door isn't compatible with my workload, can I transfer my services to Edgio and have them bill me directly?
+
+Edgio informed Microsoft that they strive to facilitate seamless transitions for users who contact them directly. However, Microsoft can't guarantee the success of these transitions.
+
+### What are Azure Front Door and Microsoft's media delivery capabilities?
+
+Azure Front Door supports live and on-demand video streaming for small to medium-sized businesses. Edgio enabled Microsoft to deliver large-scale streaming workloads, such as major live events and over-the-top (OTT) services. While Azure Front Door is exploring the capability to deliver streaming services for large-scale enterprises, there's currently no estimated time of arrival (ETA) for this feature.
+
+### What will happen if I don't take action before November 4, 2025?
+
+If no action is taken before November 4, 2025, Azure CDN from Edgio profiles and associated data will be removed from Edgio systems. It's imperative that users migrate their workloads before this date to avoid any service disruptions and data loss.
+
+### Is Microsoft publishing a self-service guide to manually migrate my Azure Front Door-compatible workloads from Azure CDN from Edgio to Azure Front Door?
+
+Yes, you can migrate your workloads manually by following the steps in the [Azure CDN to Azure Front Door migration guide](../frontdoor/migrate-cdn-to-front-door.md). This guide provides detailed instructions on how to set up an Azure Front Door profile, test functionality, and migrate your workloads from Azure CDN from Edgio to Azure Front Door with the help of Azure Traffic Manager.
+
+### How is Microsoft communicating the retirement of Azure CDN from Edgio, and how often are reminders sent?
+
+We communicate the retirement of Azure CDN from Edgio through multiple channels, including email notifications and in-portal messages. Reminders are sent at least monthly to all users with active Edgio profiles to ensure they're aware of the upcoming changes and necessary actions.
+
+## Next steps
+
+Migrate your Azure CDN from Edgio workloads to Azure Front Door by following the steps in the [Azure CDN to Azure Front Door migration guide](../frontdoor/migrate-cdn-to-front-door.md).
data-factory Connector Troubleshoot Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-oracle.md
description: Learn how to troubleshoot issues with the Oracle connector in Azure
Previously updated : 07/02/2024 Last updated : 10/23/2024
This article provides suggestions to troubleshoot common problems with the Oracl
- **Cause**: The secure algorithm is not added to your Oracle server. -- **Recommendation**: Update your Oracle server settings to add these secure algorithms:
+- **Recommendation**: Update your Oracle server settings to add these secure algorithms if they are not already included:
- - The following algorithms are deemed as secure by OpenSSL, and will be sent along to the server for OAS (Oracle Advanced Security) encryption.
+ - For **SQLNET.ENCRYPTION_TYPES_SERVER**, you need to add the following algorithms, which are deemed secure by OpenSSL and are used for OAS (Oracle Advanced Security) encryption.
- AES256 - AES192 - 3DES168
This article provides suggestions to troubleshoot common problems with the Oracl
- 3DES112 - DES
- - The following algorithms are deemed as secure by OpenSSL, and will be sent along to the server for OAS (Oracle Advanced Security) data integrity.
+ - For **SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER**, you need to add the following algorithms, which are deemed secure by OpenSSL and are used for OAS (Oracle Advanced Security) data integrity.
- SHA256 - SHA384 - SHA512
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
Previously updated : 08/12/2024 Last updated : 09/11/2024
No. Azure supports a single reverse DNS record for each Azure Cloud Service or P
### Can I configure reverse DNS for IPv6 PublicIpAddress resources?
-Yes. See [Azure support for reverse DNS](/azure/dns/dns-reverse-dns-overview#azure-support-for-reverse-dns).
+No. Azure DNS does not currently support reverse DNS (PTR records) for public IPv6 addresses.
### Can I send emails to external domains from my Azure Compute services?
dns Dns Reverse Dns Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-hosting.md
description: Learn how to use Azure DNS to host the reverse DNS lookup zones for
Previously updated : 06/07/2024 Last updated : 09/12/2024 ms.devlang: azurecli
dns Dns Reverse Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md
Previously updated : 06/10/2024 Last updated : 09/12/2024
dns Dns Zones Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md
ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897
Previously updated : 11/21/2023 Last updated : 10/30/2024
The DNS standards permit a single TXT record to contain multiple strings, each o
When calling the Azure DNS REST API, you need to specify each TXT string separately. When you use the Azure portal, PowerShell, or CLI interfaces, you should specify a single string per record. This string is automatically divided into 255-character segments if necessary.
-The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 4096 characters`*` in each TXT record set (across all records combined).
-
-`*` 4096 character support is currently only available in the Azure Public Cloud. National clouds are limited to 1024 characters until 4k support rollout is complete.
+The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 4096 characters in each TXT record set (across all records combined).
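
For example, a single logical string can be added to a TXT record set with the Azure CLI, and it's split into 255-character segments automatically when needed; the resource group, zone, and value below are placeholders:

```azurecli
az network dns record-set txt add-record \
  --resource-group MyResourceGroup \
  --zone-name contoso.com \
  --record-set-name demo \
  --value "a-single-logical-string-up-to-the-supported-length"
```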
## Tags and metadata
dns Dnssec How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dnssec-how-to.md
+
+ Title: How to sign your Azure Public DNS zone with DNSSEC (Preview)
+description: Learn how to sign your Azure public DNS zone with DNSSEC.
+++ Last updated : 10/30/2024+++
+# How to sign your Azure Public DNS zone with DNSSEC (Preview)
+
+This article shows you how to sign your DNS zone with [Domain Name System Security Extensions (DNSSEC)](dnssec.md).
+
+To remove DNSSEC signing from a zone, see [How to unsign your Azure Public DNS zone](dnssec-unsign.md).
+
+> [!NOTE]
+> DNSSEC zone signing is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.<br>
+> This DNSSEC preview is offered without a requirement to enroll in a preview. You can use Cloud Shell to sign or unsign a zone with Azure PowerShell or Azure CLI. Signing a zone by using the Azure portal is available in the next portal update.
+
+## Prerequisites
+
+* The DNS zone must be hosted by Azure Public DNS. For more information, see [Manage DNS zones](/azure/dns/dns-operations-dnszones-portal).
+* The parent DNS zone must be signed with DNSSEC. Most major top level domains (.com, .net, .org) are already signed.
+
+## Sign a zone with DNSSEC
+
+To protect your DNS zone with DNSSEC, you must first sign the zone. The zone signing process creates a delegation signer (DS) record that must then be added to the parent zone.
+
+## [Azure portal](#tab/sign-portal)
+
+To sign your zone with DNSSEC using the Azure portal:
+
+1. On the Azure portal Home page, search for and select **DNS zones**.
+2. Select your DNS zone, and then from the zone's **Overview** page, select **DNSSEC**. You can select **DNSSEC** from the menu at the top, or under **DNS Management**.
+
+ [ ![Screenshot of how to select DNSSEC.](./media/dnssec-how-to/select-dnssec.png) ](./media/dnssec-how-to/select-dnssec.png#lightbox)
+
+3. Select the **Enable DNSSEC** checkbox.
+
+ ![Screenshot of selecting the DNSSEC checkbox.](./media/dnssec-how-to/sign-dnssec.png)
+
+4. When you are prompted to confirm that you wish to enable DNSSEC, select **OK**.<br>
+
+ ![Screenshot of confirming DNSSEC signing.](./media/dnssec-how-to/confirm-dnssec.png)
+
+5. Wait for zone signing to complete. After the zone is signed, review the **DNSSEC delegation information** that is displayed. Notice that the status is: **Signed but not delegated**.
+
+ [ ![Screenshot of a signed zone with DS record missing.](./media/dnssec-how-to/ds-missing.png) ](./media/dnssec-how-to/ds-missing.png#lightbox)
+
+6. Copy the delegation information and use it to create a DS record in the parent zone.
+
+ 1. If the parent zone is a top level domain (for example: `.com`), you must add the DS record at your registrar. Each registrar has its own process. The registrar might ask for values such as the Key Tag, Algorithm, Digest Type, and Key Digest. In the example shown here, these values are:
+
+ **Key Tag**: 4535<br>
+ **Algorithm**: 13<br>
+ **Digest Type**: 2<br>
+ **Digest**: 7A1C9811A965C46319D94D1D4BC6321762B632133F196F876C65802EC5089001
+
+ When you provide the DS record to your registrar, the registrar adds the DS record to the parent zone, such as the Top Level Domain (TLD) zone.
+
+ 2. If you own the parent zone, you can add a DS record directly to the parent yourself. The following example shows how to add a DS record to the DNS zone **adatum.com** for the child zone **secure.adatum.com** when both zones are hosted using Azure Public DNS:
+
+ [ ![Screenshot of adding a DS record to the parent zone.](./media/dnssec-how-to/ds-add.png) ](./media/dnssec-how-to/ds-add.png#lightbox)
+ [ ![Screenshot of a DS record in the parent zone.](./media/dnssec-how-to/ds-added.png) ](./media/dnssec-how-to/ds-added.png#lightbox)
+
+ 3. If you don't own the parent zone, send the DS record to the owner of the parent zone with instructions to add it into their zone.
+
+7. When the DS record has been uploaded to the parent zone, select the DNSSEC information page for your zone and verify that **Signed and delegation established** is displayed. Your DNS zone is now fully DNSSEC signed.
+
+ [ ![Screenshot of a fully signed and delegated zone.](./media/dnssec-how-to/delegated.png) ](./media/dnssec-how-to/delegated.png#lightbox)
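
As an optional check that isn't part of the portal steps above, you can query the zone through a DNSSEC-validating resolver to confirm that signed responses are returned (the zone name is this article's example):

```Cmd
dig secure.adatum.com SOA +dnssec
```

When the zone is signed and delegated, the response includes RRSIG records, and a validating resolver sets the `ad` (authenticated data) flag.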
+
+## [Azure CLI](#tab/sign-cli)
+
+1. Sign a zone using the Azure CLI:
+
+```azurecli-interactive
+# Ensure you are logged in to your Azure account
+az login
+
+# Select the appropriate subscription
+az account set --subscription "your-subscription-id"
+
+# Enable DNSSEC for the DNS zone
+az network dns dnssec-config create --resource-group "your-resource-group" --zone-name "adatum.com"
+
+# Verify the DNSSEC configuration
+az network dns dnssec-config show --resource-group "your-resource-group" --zone-name "adatum.com"
+```
+
+2. Obtain the delegation information and use it to create a DS record in the parent zone.
+
+You can use the following Azure CLI command to display the DS record information:
+
+```azurecli-interactive
+az network dns zone show --name "adatum.com" --resource-group "your-resource-group" | jq '.signingKeys[] | select(.delegationSignerInfo != null) | .delegationSignerInfo'
+```
+Sample output:
+
+```
+ {
+ "digestAlgorithmType": 2,
+ "digestValue": "0B9E68FC1711B4AC4EC0FCE5E673EDB0AFDC18F27EA94861CDF08C7100EA776C",
+ "record": "26767 13 2 0B9E68FC1711B4AC4EC0FCE5E673EDB0AFDC18F27EA94861CDF08C7100EA776C"
+ }
+```
+
+Alternatively, you can also obtain DS information by using dig.exe on the command line:
+
+```Cmd
+dig adatum.com DS +dnssec
+```
+
+Sample output:
+
+```Cmd
+;; ANSWER SECTION:
+adatum.com. 86400 IN DS 26767 13 2 0B9E68FC1711B4AC4EC0FCE5E673EDB0AFDC18F27EA94861CDF08C71 00EA776C
+```
+In these examples, the DS values are:
+- Key Tag: 26767
+- Algorithm: 13
+- Digest Type: 2
+- Digest: 0B9E68FC1711B4AC4EC0FCE5E673EDB0AFDC18F27EA94861CDF08C7100EA776C
++
+3. If the parent zone is a top level domain (for example: `.com`), you must add the DS record at your registrar. Each registrar has its own process.
+
+4. If you own the parent zone, you can add a DS record directly to the parent yourself. The following example shows how to add a DS record to the DNS zone **adatum.com** for the child zone **secure.adatum.com** when both zones are signed and hosted using Azure Public DNS:
+
+```azurecli-interactive
+az network dns record-set ds add-record --resource-group "your-resource-group" --zone-name "adatum.com" --record-set-name "secure" --key-tag <key-tag> --algorithm <algorithm> --digest <digest> --digest-type <digest-type>
+```
+
+5. If you don't own the parent zone, send the DS record to the owner of the parent zone with instructions to add it into their zone.
+
+## [PowerShell](#tab/sign-powershell)
+
+1. Sign and verify your zone using PowerShell:
+
+```PowerShell
+# Connect to your Azure account (if not already connected)
+Connect-AzAccount
+
+# Select the appropriate subscription
+Select-AzSubscription -SubscriptionId "your-subscription-id"
+
+# Enable DNSSEC for the DNS zone
+New-AzDnsDnssecConfig -ResourceGroupName "your-resource-group" -ZoneName "adatum.com"
+
+# Verify the DNSSEC configuration
+Get-AzDnsDnssecConfig -ResourceGroupName "your-resource-group" -ZoneName "adatum.com"
+```
+
+2. Obtain the delegation information and use it to create a DS record in the parent zone.
+
+```PowerShell
+Get-AzDnsDnssecConfig -ResourceGroupName "dns-rg" -ZoneName "adatum.com" | Select-Object -ExpandProperty SigningKey | Select-Object -ExpandProperty delegationSignerInfo
+```
+
+Example output:
+
+```PowerShell
+DigestAlgorithmType DigestValue Record
+- --
+ 2 0B9E68FC1711B4AC4EC0FCE5E673EDB0AFDC18F27EA94861CDF08C7100EA776C 26767 13 2 0B9E68FC1711B4AC4EC0FCE5E673EDB0AFDC18F27EA94861CDF08C7100EA776C
+```
+
+In these examples, the DS values are:
+- Key Tag: 26767
+- Algorithm: 13
+- Digest Type: 2
+- Digest: 0B9E68FC1711B4AC4EC0FCE5E673EDB0AFDC18F27EA94861CDF08C7100EA776C
+
+3. If the parent zone is a top level domain (for example: `.com`), you must add the DS record at your registrar. Each registrar has its own process.
+
+4. If you own the parent zone, you can add a DS record directly to the parent yourself. The following example shows how to add a DS record to the DNS zone **adatum.com** for the child zone **secure.adatum.com** when both zones are signed and hosted using Azure Public DNS. Replace \<key-tag\>, \<algorithm\>, \<digest\>, and \<digest-type\> with the appropriate values from the DS record you queried previously.
+
+```PowerShell
+$dsRecord = New-AzDnsRecordConfig -DnsRecordType DS -KeyTag <key-tag> -Algorithm <algorithm> -Digest <digest> -DigestType <digest-type>
+New-AzDnsRecordSet -ResourceGroupName "dns-rg" -ZoneName "adatum.com" -Name "secure" -RecordType DS -Ttl 3600 -DnsRecords $dsRecord
+```
+5. If you don't own the parent zone, send the DS record to the owner of the parent zone with instructions to add it into their zone.
++
+## Next steps
+
+- Learn how to [unsign a DNS zone](dnssec-unsign.md).
+- Learn how to [host the reverse lookup zone for your ISP-assigned IP range in Azure DNS](dns-reverse-dns-hosting.md).
+- Learn how to [manage reverse DNS records for your Azure services](dns-reverse-dns-for-azure-services.md).
dns Dnssec Unsign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dnssec-unsign.md
+
+ Title: How to unsign your Azure Public DNS zone (Preview)
+description: Learn how to remove DNSSEC from your Azure public DNS zone.
+++ Last updated : 10/30/2024+++
+# How to unsign your Azure Public DNS zone (Preview)
+
+This article shows you how to remove [Domain Name System Security Extensions (DNSSEC)](dnssec.md) from your Azure Public DNS zone.
+
+To sign a zone with DNSSEC, see [How to sign your Azure Public DNS zone with DNSSEC](dnssec-how-to.md).
+
+> [!NOTE]
+> DNSSEC zone signing is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.<br>
+> This DNSSEC preview is offered without a requirement to enroll in a preview. You can use Cloud Shell to sign or unsign a zone with Azure PowerShell or Azure CLI. Signing a zone by using the Azure portal is available in the next portal update.
+
+## Prerequisites
+
+* The DNS zone must be hosted by Azure Public DNS. For more information, see [Manage DNS zones](/azure/dns/dns-operations-dnszones-portal).
+* You must have permission to delete a DS record from the parent DNS zone. Most top level domains (.com, .net, .org) allow you to do this using your registrar.
+
+## Unsign a zone
+
+> [!IMPORTANT]
+> Removing DNSSEC from your DNS zone requires that you first remove the delegation signer (DS) record from the parent zone, and wait for the time-to-live (TTL) of the DS record to expire. After the DS record TTL has expired, you can safely unsign the zone.
+
+## [Azure portal](#tab/sign-portal)
+
+To unsign a zone using the Azure portal:
+
+1. On the Azure portal Home page, search for and select **DNS zones**.
+2. Select your DNS zone, and then from the zone's **Overview** page, select **DNSSEC**. You can select **DNSSEC** from the menu at the top, or under **DNS Management**.
+3. If you have successfully removed the DS record at your registrar for this zone, you see that the DNSSEC status is **Signed but not delegated**. Do not proceed until you see this status.
+
+ ![Screenshot of confirming to disable DNSSEC.](./media/dnssec-how-to/ds-removed.png)
+
+4. Clear the **Enable DNSSEC** checkbox and select **OK** in the popup dialog box confirming that you wish to disable DNSSEC.
+
+ ![Screenshot of DNSSEC status.](./media/dnssec-how-to/disable-dnssec.png)
+
+5. In the **Disable DNSSEC** pane, type the name of your domain and then select **Disable**.
+
+ ![Screenshot of the disable DNSSEC pane.](./media/dnssec-how-to/disable-pane.png)
+
+6. The zone is now unsigned.
+
+## [Azure CLI](#tab/sign-cli)
+
+Unsign a DNSSEC-signed zone using the Azure CLI:
+
+1. To unsign a signed zone, issue the following commands. Replace the values for subscription ID, resource group, and zone name with your values.
+
+```azurecli-interactive
+# Ensure you are logged in to your Azure account
+az login
+
+# Select the appropriate subscription
+az account set --subscription "your-subscription-id"
+
+# Disable DNSSEC for the DNS zone
+az network dns dnssec-config delete --resource-group "your-resource-group" --zone-name "adatum.com"
+
+# Verify the DNSSEC configuration has been removed
+az network dns dnssec-config show --resource-group "your-resource-group" --zone-name "adatum.com"
+```
+
+2. Confirm that **(NotFound) DNSSEC is not enabled for DNS zone 'adatum.com'** is displayed after the last command. The zone is now unsigned.
+
+## [PowerShell](#tab/sign-powershell)
+
+1. Use the following commands to remove DNSSEC signing from your zone and view the zone status using PowerShell. Replace the values for subscription ID, resource group, and zone name with your values.
+
+```PowerShell
+# Connect to your Azure account (if not already connected)
+Connect-AzAccount
+
+# Select the appropriate subscription
+Select-AzSubscription -SubscriptionId "your-subscription-id"
+
+# Disable DNSSEC for the DNS zone
+Remove-AzDnsDnssecConfig -ResourceGroupName "your-resource-group" -ZoneName "adatum.com"
+
+# View the DNSSEC configuration
+Get-AzDnsDnssecConfig -ResourceGroupName "your-resource-group" -ZoneName "adatum.com"
+```
+
+2. Confirm that **DNSSEC is not enabled for DNS zone 'adatum.com'** is displayed after the last command. The zone is now unsigned.
+++
+## Next steps
+
+- Learn how to [sign a DNS zone with DNSSEC](dnssec-how-to.md).
+- Learn how to [host the reverse lookup zone for your ISP-assigned IP range in Azure DNS](dns-reverse-dns-for-azure-services.md).
+- Learn how to [manage reverse DNS records for your Azure services](dns-reverse-dns-for-azure-services.md).
dns Dnssec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dnssec.md
+
+ Title: Overview of DNSSEC - Azure Public DNS (Preview)
+description: Learn about DNSSEC zone signing for Azure Public DNS.
++++ Last updated : 10/22/2024+++
+# DNSSEC overview (Preview)
+
+This article provides an overview of Domain Name System Security Extensions (DNSSEC) and includes an introduction to [DNSSEC terminology](#dnssec-terminology). Benefits of DNSSEC zone signing are described and examples are provided for viewing DNSSEC-related resource records. When you are ready to sign your Azure public DNS zone, see the following how-to guides:
+
+- [How to sign your Azure Public DNS zone with DNSSEC (Preview)](dnssec-how-to.md).
+- [How to unsign your Azure Public DNS zone (Preview)](dnssec-unsign.md)
+
+> [!NOTE]
+> DNSSEC zone signing is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## What is DNSSEC?
+
+DNSSEC is a suite of extensions that add security to the Domain Name System (DNS) protocol by enabling DNS responses to be validated as genuine. DNSSEC provides origin authority, data integrity, and authenticated denial of existence. With DNSSEC, the DNS protocol is much less susceptible to certain types of attacks, particularly DNS spoofing attacks.
+
+The core DNSSEC extensions are specified in the following Request for Comments (RFCs):
+
+* [RFC 4033](https://datatracker.ietf.org/doc/html/rfc4033): "DNS Security Introduction and Requirements"
+* [RFC 4034](https://datatracker.ietf.org/doc/html/rfc4034): "Resource Records for the DNS Security Extensions"
+* [RFC 4035](https://datatracker.ietf.org/doc/html/rfc4035): "Protocol Modifications for the DNS Security Extensions"
+
+For a summary of DNSSEC RFCs, see [RFC9364](https://www.rfc-editor.org/rfc/rfc9364): DNS Security Extensions (DNSSEC).
+
+## How DNSSEC works
+
+DNS zones are secured with DNSSEC using a process called zone signing. Signing a zone with DNSSEC adds validation support without changing the basic mechanism of a DNS query and response. To sign a zone with DNSSEC, the zone's primary authoritative DNS server must support DNSSEC.
+
+Resource Record Signatures (RRSIGs) and other cryptographic records are added to the zone when it's signed. The following figure shows DNS resource records in the zone contoso.com before and after zone signing.
+
+ ![A diagram showing how RRSIG records are added to a zone when it's signed with DNSSEC.](media/dnssec/rrsig-records.png)
+
+[DNSSEC validation](#dnssec-validation) of DNS responses occurs by using these digital signatures with an unbroken [chain of trust](#chain-of-trust).
+
+> [!NOTE]
+> DNSSEC-related resource records aren't displayed in the Azure portal. For more information, see [View DNSSEC-related resource records](#view-dnssec-related-resource-records).
+
+## Why sign a zone with DNSSEC?
+
+Signing a zone with DNSSEC is required for compliance with some security guidelines, such as SC-20: Secure Name/Address Resolution Service.
+
+DNSSEC validation of DNS responses can prevent common types of DNS hijacking attacks, also known as DNS redirection. DNS hijacking occurs when a client device is redirected to a malicious server by using incorrect (spoofed) DNS responses. DNS cache poisoning is a common method used to spoof DNS responses.
+
+An example of how DNS hijacking works is shown in the following figure.
+
+ ![A diagram showing how DNS hijacking works.](media/dnssec/dns-hijacking.png)
+
+**Normal DNS resolution**:
+1. A client device sends a DNS query for **contoso.com** to a DNS server.
+2. The DNS server responds with a DNS resource record for **contoso.com**.
+3. The client device requests a response from **contoso.com**.
+4. The contoso.com app or web server returns a response to the client.
+
+**DNS hijacking**:
+1. A client device sends a DNS query for **contoso.com** to a hijacked DNS server.
+2. The DNS server responds with an invalid (spoofed) DNS resource record for **contoso.com**.
+3. The client device requests a response for **contoso.com** from the malicious server.
+4. The malicious server returns a spoofed response to the client.
+
+The type of DNS resource record that is spoofed depends on the type of DNS hijacking attack. An MX record might be spoofed to redirect client emails, or a spoofed A record might send clients to a malicious web server.
+
+DNSSEC works to prevent DNS hijacking by performing validation on DNS responses. In the DNS hijacking scenario pictured here, the client device can reject non-validated DNS responses if the contoso.com domain is signed with DNSSEC. To reject non-validated DNS responses, the client device must enforce [DNSSEC validation](#dnssec-validation) for contoso.com.
+
+DNSSEC also includes Next Secure 3 (NSEC3) to prevent zone enumeration. Zone enumeration, also known as zone walking, is an attack whereby the attacker establishes a list of all names in a zone, including child zones.
+
+Before you sign a zone with DNSSEC, be sure to understand [how DNSSEC works](#how-dnssec-works). When you are ready to sign a zone, see [How to sign your Azure Public DNS zone with DNSSEC](dnssec-how-to.md).
+
+## DNSSEC validation
+
+If a DNS server is DNSSEC-aware, it can set the DNSSEC OK (DO) flag in a DNS query to a value of `1`. This value tells the responding DNS server to include DNSSEC-related resource records with the response. These DNSSEC records are Resource Record Signature (RRSIG) records that are used to validate that the DNS response is genuine.
+
+A recursive (non-authoritative) DNS server performs DNSSEC validation on RRSIG records using a trust anchor (DNSKEY). The server uses a DNSKEY to decrypt digital signatures in RRSIG records (and other DNSSEC-related records), and then computes and compares hash values. If hash values are the same, it provides a reply to the DNS client with the DNS data that it requested, such as a host address (A) record. See the following diagram:
+
+ ![A diagram showing how DNSSEC validation works.](media/dnssec/dnssec-validation.png)
+
+If hash values aren't the same, the recursive DNS server replies with a SERVFAIL message. In this way, DNSSEC-capable resolving (or forwarding) DNS servers with a valid trust anchor installed can protect against DNS hijacking in the path between the recursive server and the authoritative server. This protection doesn't require DNS client devices to be DNSSEC-aware or to enforce DNS response validation, provided the local (last hop) recursive DNS server is itself secure from hijacking.
+
+Windows 10 and Windows 11 client devices are [nonvalidating security-aware stub resolvers](#dnssec-terminology). These client devices don't perform validation themselves, but they can be configured to enforce DNSSEC validation by using Group Policy. [The NRPT](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn593632(v=ws.11)) can be used to create and enforce namespace-based DNSSEC validation policy.
+
+### Trust anchors and DNSSEC validation
+
+> [!NOTE]
+> DNSSEC response validation is not performed by the default Azure-provided resolver. The information in this section is helpful if you are setting up your own recursive DNS servers for DNSSEC validation or troubleshooting validation issues.
+
+Trust anchors operate based on the DNS namespace hierarchy. A recursive DNS server can have any number of trust anchors, or no trust anchors. Trust anchors can be added for a single child DNS zone, or any parent zone. If a recursive DNS server has a root (.) trust anchor, then it can perform DNSSEC validation on any DNS zone.
+
+The DNSSEC validation process works with trust anchors as follows:
+ - If a recursive DNS server doesn't have a DNSSEC trust anchor for a zone or the zone's parent hierarchical namespace, it doesn't perform DNSSEC validation on that zone.
+ - If a recursive DNS server has a DNSSEC trust anchor for a zone's parent namespace and it receives a query for the child zone, it checks to see whether a DS record for the child zone is present in the parent zone.
+ - If the DS record is found, the recursive DNS server performs DNSSEC validation.
+ - If the recursive DNS server determines that the parent zone doesn't have a DS record for the child zone, it assumes the child zone is insecure and doesn't perform DNSSEC validation.
+ - If multiple recursive DNS servers are involved in a DNS response (including forwarders), each server must be able to perform DNSSEC validation on the response so that there is an unbroken chain of trust.
+ - Recursive servers that have DNSSEC validation disabled or aren't DNSSEC-aware don't perform validation.
+
+## Chain of trust
+
+ A chain of trust occurs when all the DNS servers involved in sending a response for a DNS query are able to validate that the response wasn't modified during transit. In order for DNSSEC validation to work end-to-end, the chain of trust must be unbroken. This chain of trust applies to both authoritative and non-authoritative (recursive) servers.
+
+### Authoritative servers
+
+Authoritative DNS servers maintain a chain of trust through the use of delegation signer (DS) records. DS records are used to verify the authenticity of child zones in the DNS hierarchy.
+ - In order for DNSSEC validation to occur on a signed zone, the parent of the signed zone must also be signed. The parent zone also must have a DS record for the child zone.
+ - During the validation process, a zone's parent is queried for the DS record. If the DS record is not present, or the DS record data in the parent does not match the DNSKEY data in the child zone, the chain of trust is broken and validation fails.
+
+### Recursive servers
+
+Recursive DNS servers (also called resolving or caching DNS servers) maintain a chain of trust through the use of DNSSEC trust anchors.
+- The trust anchor is a DNSKEY record, or DS record containing a [hash](/dotnet/standard/security/ensuring-data-integrity-with-hash-codes) of a DNSKEY record. The DNSKEY record is created on an authoritative server when a zone is signed, and removed from the zone if the zone is unsigned.
+- Trust anchors must be manually installed on recursive DNS servers.
+- If a trust anchor for a parent zone is present, a recursive server can validate all child zones in the hierarchical namespace. This includes forwarded queries. To support DNSSEC validation of all DNSSEC-signed DNS zones, you can install a trust anchor for the root (.) zone.
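+
+For example, a minimal sketch for installing the root trust anchor, assuming your recursive resolver runs the Windows Server DNS role (the cmdlet comes from the DnsServer PowerShell module; other resolvers have their own mechanisms):
+
+```PowerShell
+# Retrieve the root (.) zone trust anchor from the trust anchor URL configured on the server
+# (by default, the IANA root trust anchor) and install it on this recursive DNS server
+Add-DnsServerTrustAnchor -Root
+```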
+
+## Key rollover
+
+The zone signing key (ZSK) in a DNSSEC-signed zone is periodically rolled over (replaced) automatically by Azure. It should not be necessary to replace your key signing key (KSK), but this option is available by contacting Microsoft support. Replacing the KSK requires that you also update your DS record in the parent zone.
+
+## Zone signing algorithm
+
+Zones are DNSSEC signed using Elliptic Curve Digital Signature Algorithm (ECDSAP256SHA256).
+
+## DNSSEC-related resource records
+
+The following table provides a short description of DNSSEC-related records. For more detailed information, see [RFC 4034: Resource Records for the DNS Security Extensions](https://datatracker.ietf.org/doc/html/rfc4034) and [RFC 7344: Automating DNSSEC Delegation Trust Maintenance](https://datatracker.ietf.org/doc/html/rfc7344).
+
+| Record | Description |
+| | |
+| Resource record signature (RRSIG) | A DNSSEC resource record type that is used to hold a signature, which covers a set of DNS records for a particular name and type. |
+| DNSKEY | A DNSSEC resource record type that is used to store a public key. |
+| Delegation signer (DS) | A DNSSEC resource record type that is used to secure a delegation. |
+| Next secure (NSEC) | A DNSSEC resource record type that is used to prove nonexistence of a DNS name. |
+| Next secure 3 (NSEC3) | The NSEC3 resource record that provides hashed, authenticated denial of existence for DNS resource record sets. |
+| Next secure 3 parameters (NSEC3PARAM) | Specifies parameters for NSEC3 records. |
+| Child delegation signer (CDS) | This record is optional. If present, the CDS record can be used by a child zone to specify the desired contents of the DS record in a parent zone. |
+| Child DNSKEY (CDNSKEY) | This record is optional. If the CDNSKEY record is present in a child zone, it can be used to generate a DS record from a DNSKEY record. |
+
+### View DNSSEC-related resource records
+
+DNSSEC-related records are not displayed in the Azure portal. To view DNSSEC-related records, use command-line tools such as Resolve-DnsName or dig.exe. These tools are available using Cloud Shell, or locally if installed on your device. Be sure to set the DO flag in your query by using the `-dnssecok` option in Resolve-DnsName or the `+dnssec` option in dig.exe.
+
+> [!IMPORTANT]
+> Don't use the nslookup.exe command-line tool to query for DNSSEC-related records. The nslookup.exe tool uses an internal DNS client that isn't DNSSEC-aware.
+
+See the following examples:
+
+```PowerShell
+PS C:\> resolve-dnsname server1.contoso.com -dnssecok
+
+Name                Type TTL  Section IPAddress
+----                ---- ---  ------- ---------
+server1.contoso.com A    3600 Answer  203.0.113.1
+
+Name : server1.contoso.com
+QueryType : RRSIG
+TTL : 3600
+Section : Answer
+TypeCovered : A
+Algorithm : 13
+LabelCount : 3
+OriginalTtl : 3600
+Expiration : 9/20/2024 11:25:54 PM
+Signed : 9/18/2024 9:25:54 PM
+Signer : contoso.com
+Signature : {193, 20, 122, 196…}
+```
+
+```Cmd
+C:\>dig server1.contoso.com +dnssec
+
+; <<>> DiG 9.9.2-P1 <<>> server1.contoso.com +dnssec
+;; global options: +cmd
+;; Got answer:
+;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61065
+;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags: do; udp: 512
+;; QUESTION SECTION:
+;server1.contoso.com. IN A
+
+;; ANSWER SECTION:
+server1.contoso.com. 3600 IN A 203.0.113.1
+server1.contoso.com. 3600 IN RRSIG A 13 3 3600 20240920232359 20240918212359 11530 contoso.com. GmxeQhNk1nJZiep7nuCS2qmOQ+Ffs78Z2eoOgIYP3j417yqwS1DasfA5 e1UZ4HuujDk2G6GIbs0ji3RiM9ZpGQ==
+
+;; Query time: 153 msec
+;; SERVER: 192.168.1.1#53(192.168.1.1)
+;; WHEN: Thu Sep 19 15:23:45 2024
+;; MSG SIZE rcvd: 179
+```
+
+```PowerShell
+PS C:\> resolve-dnsname contoso.com -Type dnskey -dnssecok
+
+Name        Type   TTL  Section Flags Protocol Algorithm Key
+----        ----   ---  ------- ----- -------- --------- ---
+contoso.com DNSKEY 3600 Answer  256   DNSSEC   13        {115, 117, 214, 165…}
+contoso.com DNSKEY 3600 Answer  256   DNSSEC   13        {149, 166, 55, 78…}
+contoso.com DNSKEY 3600 Answer  257   DNSSEC   13        {45, 176, 217, 2…}
+
+Name : contoso.com
+QueryType : RRSIG
+TTL : 3600
+Section : Answer
+TypeCovered : DNSKEY
+Algorithm : 13
+LabelCount : 2
+OriginalTtl : 3600
+Expiration : 11/17/2024 9:00:15 PM
+Signed : 9/18/2024 9:00:15 PM
+Signer : contoso.com
+Signature : {241, 147, 134, 121…}
+```
+
+```Cmd
+C:\>dig contoso.com dnskey
+
+; <<>> DiG 9.9.2-P1 <<>> contoso.com dnskey
+;; global options: +cmd
+;; Got answer:
+;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46254
+;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 512
+;; QUESTION SECTION:
+;contoso.com. IN DNSKEY
+
+;; ANSWER SECTION:
+contoso.com. 3600 IN DNSKEY 256 3 13 laY3Toc/VTyjupgp/+WgD05N+euB6Qe1iaM/253k7bkaA0Dx+gSDhbH2 5wXTt+uLQgPljL9OusKTneLdhU+1iA==
+contoso.com. 3600 IN DNSKEY 257 3 13 LbDZAtjG8E9Ftih+LC8CqQrSZIJFFJMtP6hmN3qBRqLbtAj4JWtr2cVE ufXM5Pd/yW+Ca36augQDucd5n4SgTg==
+contoso.com. 3600 IN DNSKEY 256 3 13 c3XWpTqZ0q9IO+YqMEtOBHZSzGGeyFKq0+3xzs6tifvD1rey1Obhrkz4 DJlEIxy2m84VsG1Ij9VYdtGxxeVHIQ==
+
+;; Query time: 182 msec
+;; SERVER: 192.168.1.1#53(192.168.1.1)
+;; WHEN: Thu Sep 19 16:35:10 2024
+;; MSG SIZE rcvd: 284
+```
+
+## DNSSEC terminology
+
+This list is provided to help you understand some of the common terms used when discussing DNSSEC. Also see: [DNSSEC-related resource records](#dnssec-related-resource-records)
+
+| Term | Description |
+| | |
+| Authenticated data (AD) bit | A data bit that indicates in a response that all data included in the answer and authority sections of the response has been authenticated by the DNS server according to the policies of that server. |
+| Authentication chain | A chain of signed and validated DNS records that extends from a preconfigured trust anchor to some child zone in the DNS tree. |
+| DNS Extension (EDNS0) | A DNS record that carries extended DNS header information, such as the **DO bit** and maximum UDP packet size. |
+| DNS Security Extensions (DNSSEC) | Extensions to the DNS service that provide mechanisms for signing and for securely resolving DNS data. |
+| DNSSEC OK (DO) bit | A bit in the EDNS0 portion of a DNS request that signals that the client is DNSSEC-aware. |
+| DNSSEC validation | DNSSEC validation is the process of verifying the origin and integrity of DNS data using public cryptographic keys. |
+| Island of security | A signed zone that doesn't have an authentication chain from its delegating parent zone. |
+| Key signing key (KSK) | An authentication key that corresponds to a private key that is used to sign one or more other signing keys for a given zone. Typically, the private key that corresponds to a KSK signs a zone signing key (ZSK), which in turn has a corresponding private key that signs other zone data. Local policy can require that the ZSK be changed frequently, while the KSK can have a longer validity period in order to provide a more stable, secure entry point into the zone. Designating an authentication key as a KSK is purely an operational issue: DNSSEC validation doesn't distinguish between KSKs and other DNSSEC authentication keys. It's possible to use a single key as both a KSK and a ZSK. |
+| Nonvalidating security-aware stub resolver | A security-aware stub resolver that trusts one or more security-aware DNS servers to perform DNSSEC validation on its behalf. |
+| secure entry point (SEP) key | A subset of public keys within the DNSKEY RRSet. A SEP key is used either to generate a DS RR or is distributed to resolvers that use the key as a trust anchor. |
+| Security-aware DNS server | A DNS server that implements the DNS security extensions as defined in RFCs 4033 [5], 4034 [6], and 4035 [7]. In particular, a security-aware DNS server is an entity that receives DNS queries, sends DNS responses, supports the EDNS0 [3] message size extension and the DO bit, and supports the DNSSEC record types and message header bits. |
+| Signed zone | A zone whose records are signed as defined by RFC 4035 [7] Section 2. A signed zone can contain DNSKEY, NSEC, NSEC3, NSEC3PARAM, RRSIG, and DS resource records. These resource records enable DNS data to be validated by resolvers. |
+| Trust anchor | A preconfigured public key that is associated with a particular zone. A trust anchor enables a DNS resolver to validate signed DNSSEC resource records for that zone and to build authentication chains to child zones. |
+| Unsigned zone | Any DNS zone that has not been signed as defined by RFC 4035 [7] Section 2. |
+| Zone signing | Zone signing is the process of creating and adding DNSSEC-related resource records to a zone, making it compatible with DNSSEC validation. |
+| Zone unsigning | Zone unsigning is the process of removing DNSSEC-related resource records from a zone, restoring it to an unsigned status. |
+| Zone signing key (ZSK) | An authentication key that corresponds to a private key that is used to sign a zone. Typically, a zone signing key is part of the same DNSKEY RRSet as the key signing key whose corresponding private key signs this DNSKEY RRSet, but the zone signing key is used for a slightly different purpose and can differ from the key signing key in other ways, such as in validity lifetime. Designating an authentication key as a zone signing key is purely an operational issue; DNSSEC validation doesn't distinguish between zone signing keys and other DNSSEC authentication keys. It's possible to use a single key as both a key signing key and a zone signing key. |
+
+## Next steps
+
+- Learn how to [sign a DNS zone with DNSSEC](dnssec-how-to.md).
+- Learn how to [unsign a DNS zone](dnssec-unsign.md).
+- Learn how to [host the reverse lookup zone for your ISP-assigned IP range in Azure DNS](dns-reverse-dns-for-azure-services.md).
+- Learn how to [manage reverse DNS records for your Azure services](dns-reverse-dns-for-azure-services.md).
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
If you're using a dual-stack circuit, there's a maximum of 100 IPv6 prefixes on
The connection between the ExpressRoute circuit and the gateway disconnects including peered virtual network using gateway transit. Connectivity re-establishes when the prefix limit is no longer exceeded.
+### How can I adjust the number of prefixes advertised to the gateway so that it stays within the maximum limit?
+
+ExpressRoute supports up to 11,000 routes, covering virtual network address spaces, on-premises networks, and virtual network peering connections. If the ExpressRoute gateway exceeds this limit, reduce the advertised prefixes so that the total stays within the allowed range.
+
+To make this change in the Azure portal:
+1. Go to the Advisor resource and select the **Performance** pillar.
+2. Select the recommendation for **Max prefix reached for ExpressRoute Gateway**.
+3. Select the gateway with this recommendation.
+4. In the gateway resource, select the **Virtual network** that the gateway is attached to.
+5. In the virtual network resource, select the **Address space** blade under **Settings** in the left menu.
+6. Reduce the advertised address space to within the limit.
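+
+A hedged Azure PowerShell sketch of the same change (the virtual network name, resource group, and prefix values are placeholders; remove only prefixes that are no longer needed):
+
+```PowerShell
+# View the address space currently advertised by the virtual network attached to the gateway
+$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVNet"
+$vnet.AddressSpace.AddressPrefixes
+
+# Remove an unneeded prefix (example value), then apply the change
+$null = $vnet.AddressSpace.AddressPrefixes.Remove("10.1.0.0/16")
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```
+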
+ ### Can routes from the on-premises network get filtered? The only way to filter or include routes is on the on-premises edge router. User-defined routes can be added in the VNet to affect specific routing, but is only static and not part of the BGP advertisement.
expressroute Traffic Collector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/traffic-collector.md
ExpressRoute Traffic Collector supports both Provider-managed circuits and Expre
| Dot1qVlanId | int | Dot1q VlanId. | | DstAsn | int | Destination Autonomous System Number (ASN). | | DstMask | int | Mask of destination subnet. |
-| DstSubnet | string | Destination subnet of destination IP. |
+| DstSubnet | string | Destination virtual network of destination IP. |
| ExRCircuitDirectPortId | string | Azure resource ID of Express Route Circuit's direct port. | | ExRCircuitId | string | Azure resource ID of Express Route Circuit. | | ExRCircuitServiceKey | string | Service key of Express Route Circuit. |
ExpressRoute Traffic Collector supports both Provider-managed circuits and Expre
| SourceSystem | string | | | SrcAsn | int | Source Autonomous System Number (ASN). | | SrcMask | int | Mask of source subnet. |
-| SrcSubnet | string | Source subnet of source IP. |
+| SrcSubnet | string | Source virtual network of source IP. |
| \_SubscriptionId | string | A unique identifier for the subscription that the record is associated with | | TcpFlag | int | TCP flag as defined in the TCP header. | | TenantId | string | |
firewall Choose Firewall Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/choose-firewall-sku.md
Take a closer look at the features across the three Azure Firewall versions:
:::image type="content" source="media/choose-firewall-sku/azure-firewall-sku-table.png" alt-text="Table of Azure Firewall version features." lightbox="media/choose-firewall-sku/azure-firewall-sku-table-large.png":::
+## Flow chart
+
+You can use the following flow chart to help you choose the Azure Firewall version that best fits your needs.
+
+<!-- Art Library Source# ConceptArt-0-000-011 -->
+ ## Next steps - [Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md)
firewall Firewall Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-copilot.md
Microsoft Copilot for Security is a generative AI-powered security solution that helps increase the efficiency and capabilities of security personnel to improve security outcomes at machine speed and scale. It provides a natural language, assistive copilot experience helping support security professionals in end-to-end scenarios such as incident response, threat hunting, intelligence gathering, and posture management. For more information about what it can do, see [What is Microsoft Copilot for Security?](/copilot/security/microsoft-security-copilot)
-## Copilot for Security integrates with Azure Firewall
+## Know before you begin
+
+If you're new to Microsoft Copilot for Security, you should familiarize yourself with it by reading these articles:
+- [What is Microsoft Copilot for Security?](/security-copilot/microsoft-security-copilot)
+- [Microsoft Copilot for Security experiences](/security-copilot/experiences-security-copilot)
+- [Get started with Microsoft Copilot for Security](/security-copilot/get-started-security-copilot)
+- [Understand authentication in Microsoft Copilot for Security](/security-copilot/authentication)
+- [Prompting in Microsoft Copilot for Security](/security-copilot/prompting-security-copilot)
+
+## Microsoft Copilot for Security integration in Azure Firewall
Azure Firewall is a cloud-native and intelligent network firewall security service that provides best of breed threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability.
-The Azure Firewall integration helps analysts perform detailed investigations of the malicious traffic intercepted by the IDPS feature of their firewalls across their entire fleet using natural language questions in the Copilot for Security standalone experience.
+The Azure Firewall integration helps analysts perform detailed investigations of the malicious traffic intercepted by the IDPS and/or threat intelligence features of their firewalls across their entire fleet using natural language questions in the Microsoft Copilot for Security standalone experience.
This article introduces you to Copilot and includes sample prompts that can help Azure Firewall users.
-## Know before you begin
+You can use the Azure Firewall integration in Microsoft Copilot for Security in the [Microsoft Copilot for Security portal](https://securitycopilot.microsoft.com). For more information, see [Microsoft Copilot for Security experiences](/copilot/security/experiences-security-copilot).
-- You can use the Azure Firewall integration in Copilot for Security in the [Copilot for Security portal](https://securitycopilot.microsoft.com). For more information, see [Microsoft Copilot for Security experiences](/copilot/security/experiences-security-copilot).-- Be clear and specific with your prompts. You might get better results if you include specific time frames, resources, and threats in your prompts. It might also help if you add **Azure Firewall** to your prompt.
+## Key features
+Microsoft Copilot for Security has built-in system features that can get data from the different plugins that are turned on.
-- Use the example prompts in this article to help guide your interactions with Copilot. -- Experiment with different prompts and variations to see what works best for your use case. Chat AI models vary, so iterate and refine your prompts based on the results you receive.-- Copilot for Security saves your prompt sessions. To see the previous sessions, from the Copilot [Home menu](/copilot/security/navigating-security-copilot#home-menu), go to **My sessions**.
+To view the list of built-in system capabilities for Azure Firewall, use the following procedure:
- :::image type="content" source="media/firewall-copilot/copilot-my-sessions.png" alt-text="Partial screenshot of the Microsoft Copilot for Security Home menu with My sessions highlighted.":::
-
- > [!NOTE]
- > For a Copilot walkthrough, including the pin and share feature, see [Navigate Microsoft Copilot for Security](/copilot/security/navigating-security-copilot).
+1. In the prompt bar, select the **Prompts** icon.
-
-For more information about writing effective Copilot for Security prompts, see [Create effective prompts](/copilot/security/prompting-tips).
+ :::image type="content" source="media/firewall-copilot/copilot-prompts-bar-prompts.png" alt-text="Screenshot of the prompt bar in Microsoft Copilot for Security with the Prompts icon highlighted.":::
+
+2. Select **See all system capabilities**. The **Azure Firewall** section lists all the available capabilities that you can use.
-## Using the Azure Firewall integration in the Copilot for Security standalone portal
+
+## Enable the Azure Firewall integration in Microsoft Copilot for Security
1. Ensure your Azure Firewall is configured correctly:
- - [Azure Structured Firewall Logs](firewall-structured-logs.md#resource-specific-mode) ΓÇô the Azure Firewalls to be used with Copilot for Security must be configured with resource specific structured logs for IDPS and these logs must be sent to a Log Analytics workspace.
- - [Role Based Access Control for Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/role-based-access-control-for-azure-firewall/ba-p/2245598) ΓÇô the users using the Azure Firewall plugin in Copilot for Security must have the appropriate Azure RBAC roles to access the Firewall and associated Log Analytics workspace(s).
+ - [Azure Structured Firewall Logs](firewall-structured-logs.md#resource-specific-mode) ΓÇô the Azure Firewalls to be used with Microsoft Copilot for Security must be configured with resource specific structured logs for IDPS and these logs must be sent to a Log Analytics workspace.
+ - [Role Based Access Control for Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/role-based-access-control-for-azure-firewall/ba-p/2245598) ΓÇô the users using the Azure Firewall plugin in Microsoft Copilot for Security must have the appropriate Azure RBAC roles to access the Firewall and associated Log Analytics workspace(s).
2. Go to [Microsoft Copilot for Security](https://go.microsoft.com/fwlink/?linkid=2247989) and sign in with your credentials.
-1. In the prompt bar, select the **Sources** icon.
+3. Ensure that the Azure Firewall plugin is turned on. In the prompt bar, select the **Sources** icon.
- :::image type="content" source="media/firewall-copilot/copilot-prompts-bar-sources.png" alt-text="Screenshot of the prompt bar in Microsoft Copilot for Security with the Sources icon highlighted.":::
+ :::image type="content" source="media/firewall-copilot/copilot-prompts-bar-sources.png" alt-text="Screenshot of the prompt bar in Microsoft Copilot for Security with the Sources icon highlighted.":::
+
+ In the **Manage sources** pop-up window that appears, confirm that the **Azure Firewall** toggle is turned on, then close the window.
- In the **Manage sources** pop-up window that appears, confirm that the **Azure Firewall** toggle is turned on, then close the window. No additional configuration is necessary, as long as structured logs are being sent to a Log Analytics workspace and you have the right RBAC permissions, Copilot will find the data it needs to answer your questions.
-
- :::image type="content" source="media/firewall-copilot/azure-firewall-plugin.png" alt-text="Screenshot showing the Azure Firewall plugin.":::
+ :::image type="content" source="media/firewall-copilot/azure-firewall-plugin.png" alt-text="Screenshot showing the Azure Firewall plugin.":::
- > [!NOTE]
- > Some roles can turn the toggle on or off for plugins like Azure Firewall. For more information, see [Manage plugins in Microsoft Copilot for Security](/copilot/security/manage-plugins?tabs=securitycopilotplugin).
+ > [!NOTE]
+ > Some roles can turn the toggle on or off for plugins like Azure Firewall. For more information, see [Manage plugins in Microsoft Copilot for Security](/copilot/security/manage-plugins?tabs=securitycopilotplugin).
4. Enter your prompt in the prompt bar.
-## Built-in system features
-
-Copilot for Security has built-in system features that can get data from the different plugins that are turned on.
-
-To view the list of built-in system capabilities for Azure Firewall, use the following procedure:
-
-1. In the prompt bar, select the **Prompts** icon.
-
- :::image type="content" source="media/firewall-copilot/copilot-prompts-bar-prompts.png" alt-text="Screenshot of the prompt bar in Microsoft Copilot for Security with the Prompts icon highlighted.":::
-
-2. Select **See all system capabilities**. The **Azure Firewall** section lists all the available capabilities that you can use.
-
-## Sample prompts for Azure Firewall
+## Sample Azure Firewall prompts
There are many prompts you can use to get information from Azure Firewall. This section lists the ones that work best today. They're continuously updated as new capabilities are launched.
Get **additional details** to enrich the threat information/profile of an IDPS s
- I see that the third signature ID is associated with CVE _\<CVE number\>_, tell me more about this CVE. > [!NOTE]
-> The Microsoft Threat Intelligence plugin is another source that Copilot for Security may use to provide threat intelligence for IDPS signatures.
+> The Microsoft Threat Intelligence plugin is another source that Microsoft Copilot for Security may use to provide threat intelligence for IDPS signatures.
++ ### Look for a given IDPS signature across your tenant, subscription, or resource group Perform a **fleet-wide search** (over any scope) for a threat across all your Firewalls instead of searching for the threat manually.
Get **information from documentation** about using Azure Firewall's IDPS feature
- What is the difference in risk between alert only and alert and block modes for IDPS? > [!NOTE]
->Copilot for Security may also use the *Ask Microsoft Documentation* capability to provide information on how to use Azure Firewall's IDPS feature to secure your environment.
+>Microsoft Copilot for Security may also use the *Ask Microsoft Documentation* capability to provide information on how to use Azure Firewall's IDPS feature to secure your environment.
## Provide feedback
Your feedback is vital to guide the current and planned development of the produ
For each feedback option, you can provide more information in the next dialog box that appears. Whenever possible, and especially when the result is **Needs improvement**, write a few words explaining what can be done to improve the outcome. If you entered prompts specific to Azure Firewall and the results aren't related, then include that information.
-## Data processing and privacy
+## Privacy and data security in Microsoft Copilot for Security
-When you interact with Copilot for Security to get Azure Firewall data, Copilot pulls that data from Azure Firewall. The prompts, the data retrieved, and the output shown in the prompt results are processed and stored within the Copilot service. For more information, see [Privacy and data security in Microsoft Copilot for Security](/copilot/security/privacy-data-security).
+When you interact with Microsoft Copilot for Security to get Azure Firewall data, Copilot pulls that data from Azure Firewall. The prompts, the data retrieved, and the output shown in the prompt results are processed and stored within the Copilot service. For more information, see [Privacy and data security in Microsoft Copilot for Security](/copilot/security/privacy-data-security).
## Related content -- [What is Microsoft Copilot for Security?](/copilot/security/microsoft-security-copilot)
+- [What is Microsoft Copilot for Security?](/copilot/security/microsoft-security-copilot)
frontdoor Migrate Cdn To Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-cdn-to-front-door.md
+
+ Title: Migrate Azure CDN from Edgio to Azure Front Door
+
+description: Learn how to migrate your workloads from Azure CDN from Edgio to Azure Front Door using Azure Traffic Manager.
++++ Last updated : 10/30/2024+++
+# Migrate Azure CDN from Edgio to Azure Front Door
+
+Azure CDN from Edgio will be retired on November 4, 2025. You must migrate your workload to Azure Front Door before this date to avoid service disruption. This article provides guidance on how to migrate your workloads from Azure CDN from Edgio to Azure Front Door using Azure Traffic Manager. The migration process in this article can also be used to migrate workloads from a legacy CDN to Azure Front Door.
+
+Azure Traffic Manager initially routes all traffic to Azure CDN from Edgio. After you set up Azure Front Door, you can update the Traffic Manager profile to incrementally route traffic to Azure Front Door. This approach allows you to validate that Azure Front Door is compatible with your workloads before fully migrating.
+
+We recommend that you plan this migration well in advance and test the functionality over the course of a few days to ensure a smooth transition.
+
+## Prerequisites
+
+- Review the [feature differences](front-door-cdn-comparison.md) between Azure CDN and Azure Front Door to determine if there are any compatibility gaps.
+- You need access to a VM connected to the internet that can run Wget on Linux or Invoke-WebRequest on Windows using PowerShell.
+- You need access to a monitoring tool such as CatchPoint or ThousandEyes to verify the availability of your URLs before and after the migration. These tools are ideal because they can monitor the availability of your URLs from different locations around the world. `webpagetest.org` is another option, but it only provides a limited view of your URLs from a few locations.
+
+## Migrate your workloads
+
+The following steps assume you're using an Azure Blob Storage account as your origin. If you're using a different origin, adjust the steps accordingly.
++
+### Gather information
+
+1. Collect the following information from your Azure CDN from Edgio profile:
+
+ - Endpoints
+ - Origin configurations
+ - Custom domains
+ - Caching settings
+ - Compression settings
+ - Web application firewall (WAF) settings
+ - Custom rules settings
+
+1. Determine which tier of Azure Front Door is suitable for your workloads. For more information, see [Azure Front Door comparison](../frontdoor/front-door-cdn-comparison.md).
+
+1. Review the origin settings in your Azure CDN from Edgio profile.
+
+1. Determine a test URL on your Azure CDN from Edgio profile and perform a `wget` or `Invoke-WebRequest` against it to obtain the HTTP header information (see the example after this list).
+
+1. Enter the URL into the monitoring tool to understand the geographic availability of your URL.
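+
+For example, the following sketch captures the response headers for a test asset; the URL is a placeholder, so substitute an asset served by your Azure CDN from Edgio endpoint:
+
+```PowerShell
+# Request only the headers for a test asset on the Edgio endpoint
+$response = Invoke-WebRequest -Uri "https://yourdomain.azureedge.net/path/to/test-asset" -Method Head
+$response.Headers
+
+# On Linux, the equivalent with Wget is: wget -S --spider https://yourdomain.azureedge.net/path/to/test-asset
+```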
+
+### Set up Azure Front Door
+
+1. From the Azure portal, select **+ Create a resource**, then search for **Front Door**.
+
+1. Select **Front Door and CDN profiles** and then select **Create**.
+
+1. On the *Compare offerings* page, select **Azure Front Door** and then select **Custom create**.
+
+1. Select **Continue to create a Front Door**.
+
+1. Select the subscription and resource group. Enter a name for the Azure Front Door profile. Then select the tier that best suits your workloads and select the **Endpoint** tab.
+
+1. Select **Add an endpoint**. Enter a name for the endpoint, then select **Add**. The endpoint name will look like `<endpointname>-<hash>.xxx.azurefd.net`.
+
+1. Select **+ Add a route**. Enter a name for the route and note the **Domain** selected. Leave the **Patterns to match** and **Accepted protocols** as the default settings.
+
+ > [!NOTE]
+ > A CDN profile can have multiple endpoints, so you may need to create multiple routes.
+
+1. Select **Add a new origin group**. Enter a name for the origin group and select the **+ Add an origin** button. Enter the origin name and select the origin type. This example uses Azure Blob Storage, so select **Storage** as the origin type. Select the hostname of the Azure Blob Storage account and leave the rest of the settings as default. Select **Add**.
+
+ :::image type="content" source="./media/migrate-cdn-to-front-door/add-origin.png" alt-text="Screenshot of adding an Azure Blob Storage as an origin to Azure Front Door.":::
+
+1. Leave the rest of the settings as default and select **Add**.
+
+1. If caching was enabled in your Azure CDN from Edgio profile, select **Enable caching** and set the caching rules.
+
+ > [!NOTE]
+ > Azure CDN from Edgio *Standard-cache* is equivalent to Azure Front Door *Ignore query string* caching.
+
+1. Select **Enable compression** if you have compression enabled in your Azure CDN from Edgio profile. Ensure the origin path matches the path in your Azure CDN from Edgio profile. If this isn't set correctly, the origin won't be able to serve the content and will return a 4xx error.
+
+1. Select **Add** to create the route.
+
+1. Select **+ Add a policy** to set up web application firewall (WAF) settings and set up custom rules you determined in the previous steps.
+
+1. Select **Review + create** and then select **Create**.
+
+1. Set up the custom domain for the Azure Front Door profile. For more information, see [Custom domains](front-door-custom-domain.md). You may have multiple custom domains in your Azure CDN from Edgio profile. Ensure you add all custom domains to the Azure Front Door profile and associate them with the correct routes.
+
+### Set up Traffic Manager
+
+The steps in this section need to be repeated for each endpoint in your Azure CDN from Edgio profile. It's critical that the health check is set up correctly so that the Traffic Manager profile routes traffic to the intended endpoint, whether that's Azure CDN from Edgio or Azure Front Door.
+
+1. From the Azure portal, select **+ Create a resource**, then search for **Traffic Manager profile**.
+
+1. Enter a name for the Traffic Manager profile.
+
+1. Select the routing method **Weighted**.
+
+1. Select the same subscription and resource group as the Azure Front Door profile then select **Create**.
+
+1. Select **Endpoints** from the left-hand menu, and then select **+ Add**.
+
+1. For the **Type**, select **External endpoint**.
+
+1. Enter a name for the endpoint and leave the **Enable Endpoint** checked.
+
+1. Enter the **Fully-qualified domain name (FQDN)** of the Azure CDN from Edgio profile. For example, `yourdomain.azureedge.net`.
+
+1. Set the **Weight** to 100.
+
+1. For *Health check*, select **Always serve traffic**. This setting disables the health check and always routes traffic to the endpoint.
+
+ :::image type="content" source="./media/migrate-cdn-to-front-door/cdn-endpoint.png" alt-text="Screenshot of adding the Azure CDN from Edgio as an endpoint in Azure Traffic Manager.":::
+
+1. Add another endpoint for the Azure Front Door profile and select **External endpoint**.
+
+1. Enter a name for the endpoint and uncheck the **Enable Endpoint** setting.
+
+1. Enter the **Fully-qualified domain name (FQDN)** of the Azure Front Door profile. For example, `your-new-endpoint-name.azurefd.net`.
+
+1. Set the **Weight** to 1.
+
+1. Since the endpoint is disabled, the **Health check** setting isn't relevant.
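+
+If you prefer to script the endpoint setup, the following Azure PowerShell sketch adds both external endpoints with the weights and status described above. The profile, endpoint, and target names are placeholders, and the **Always serve traffic** health-check behavior still needs to be set as described in the portal steps:
+
+```PowerShell
+# Azure CDN from Edgio endpoint: enabled, weight 100 (receives all traffic initially)
+New-AzTrafficManagerEndpoint -Name "edgio-endpoint" -ProfileName "your-profile" `
+    -ResourceGroupName "your-resource-group" -Type ExternalEndpoints `
+    -Target "yourdomain.azureedge.net" -EndpointStatus Enabled -Weight 100
+
+# Azure Front Door endpoint: disabled for now, weight 1
+New-AzTrafficManagerEndpoint -Name "afd-endpoint" -ProfileName "your-profile" `
+    -ResourceGroupName "your-resource-group" -Type ExternalEndpoints `
+    -Target "your-new-endpoint-name.azurefd.net" -EndpointStatus Disabled -Weight 1
+```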
+
+### Internal testing of Traffic Manager profile
+
+1. Perform a DNS dig to test the Traffic Manager profile: `dig your-profile.trafficmanager.net`. The dig command should always return the CNAME of the Azure CDN from Edgio profile: `yourdomain.azureedge.net`.
+
+1. Test the Azure Front Door profile by manually adding a DNS entry in your local hosts file pointing to the Azure Front Door profile:
+
+ 1. Get the IP address of the Azure Front Door profile by performing a DNS dig.
+
+ 1. Add a new line to your hosts file with the IP address followed by a space and then `your-new-endpoint-name.azurefd.net`. For example, `203.0.113.254 your-new-endpoint-name.azurefd.net`.
+
+ 1. For Windows, the hosts file is located at `C:\Windows\System32\drivers\etc\hosts`.
+
+ 1. For Linux, the hosts file is located at `/etc/hosts`.
+
+ 1. Test the functionality of the Azure Front Door profile locally and ensure everything is working as expected.
+
+ 1. Remove the entry from the hosts file when testing is complete.
+
+### Configure Traffic Manager with CNAME
+
+We only recommend this step after you have fully tested the Azure Front Door profile and are confident that it is working as expected.
+
+1. Sign into your DNS provider and locate the CNAME record for the Azure CDN from Edgio profile.
+
+1. Locate the custom domain you want to migrate to Azure Front Door and set the time-to-live (TTL) to 600 seconds (10 minutes).
+
+1. Update the CNAME record to point to the Traffic Manager profile: `your-profile.trafficmanager.net`.
+
+1. In the Azure portal, navigate to the Traffic Manager profile and select **Endpoints**.
+
+1. Enable the Azure Front Door endpoint and select **Always serve traffic** for the health check.
+
+1. Use a tool like dig or nslookup to verify that the DNS change propagated and pointed to the correct Traffic Manager profile.
+
+1. Verify that the Azure CDN from Edgio profile is working properly by checking the monitoring tool you set up earlier.
+
+### Gradual traffic shift
+
+The initial traffic distribution starts by routing a small percentage of traffic to the Azure Front Door profile. Monitor the performance of the Azure Front Door profile and gradually increase the traffic percentage until all traffic is routed to the Azure Front Door profile.
+
+1. Start by routing 10% of the traffic to the Azure Front Door profile and the rest to the Azure CDN from Edgio profile. (A scripted way to adjust the endpoint weights is shown after this list.)
+
+1. Monitor the performance of the Azure Front Door profile and the Azure CDN from Edgio profile using the monitoring tool you set up earlier. Review your internal applications and systems logs to ensure that the Azure Front Door profile is working as expected. Look at metrics and logs to observe for 4xx/5xx errors, cache/byte hit ratios, and origin health.
+
+ > [!NOTE]
+ > If you don't have access to a third-party tool, you can use [Webpagetest](https://webpagetest.org) to verify the availability of your endpoint from a remote location. However, this tool only provides a limited view of your URLs from a few locations around the world, so you may not see any changes until you have fully shifted traffic to the Azure Front Door profile.
+
+1. Gradually increase the traffic percentage to the Azure Front Door profile by 10% increments until all traffic is routed to the Azure Front Door profile. Ensure that you're testing and monitoring the performance of the Azure Front Door profile at each increment.
+
+1. Once you're confident that the Azure Front Door profile is working as expected, update the Traffic Manager profile to route all traffic to the Azure Front Door profile.
+
+ 1. Ensure the Azure Front Door endpoint is enabled, Weight is set to 100, and the health check is set to **Always serve traffic**.
+
+ 1. Ensure the Azure CDN from Edgio endpoint is disabled.
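+
+A minimal Azure PowerShell sketch for adjusting the weights at each increment, assuming the placeholder endpoint names "afd-endpoint" and "edgio-endpoint" from the earlier setup:
+
+```PowerShell
+# Raise the Azure Front Door endpoint weight (for example, to shift roughly 10% of traffic)
+$afd = Get-AzTrafficManagerEndpoint -Name "afd-endpoint" -ProfileName "your-profile" `
+    -ResourceGroupName "your-resource-group" -Type ExternalEndpoints
+$afd.Weight = 10
+Set-AzTrafficManagerEndpoint -TrafficManagerEndpoint $afd
+
+# When all traffic is on Azure Front Door, disable the Azure CDN from Edgio endpoint
+Disable-AzTrafficManagerEndpoint -Name "edgio-endpoint" -ProfileName "your-profile" `
+    -ResourceGroupName "your-resource-group" -Type ExternalEndpoints -Force
+```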
+
+### Remove Azure Traffic Manager
+
+1. Sign in to your DNS provider. Change the CNAME record from the Traffic Manager profile to the Azure Front Door profile: `<endpointname>-<hash>.xxx.azurefd.net`.
+
+1. Over the next few hours, test with dig and watch your monitoring tool to confirm that the DNS change has propagated correctly around the world.
+
+1. Set the DNS TTL back to the original value (60 minutes).
+
+At this stage, you have fully migrated all traffic from Azure CDN from Edgio to Azure Front Door.
+
+## Next steps
+
+Learn about [best practices](best-practices.md) for Azure Front Door.
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Cis Azure 1 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Cis Azure 2 0 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-2-0-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 2.0.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 2.0.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 (Azure Government) description: Details of the NIST SP 800-171 R2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 (Azure Government) description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Gov Soc 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-soc-2.md
Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 (Azure Government) description: Details of the System and Organization Controls (SOC) 2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Mcfs Baseline Confidential https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-confidential.md
Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Confidential Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Confidential Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Mcfs Baseline Global https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-global.md
Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Global Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Global Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Nl Bio Cloud Theme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nl-bio-cloud-theme.md
Title: Regulatory Compliance details for NL BIO Cloud Theme description: Details of the NL BIO Cloud Theme Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Pci Dss 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md
Title: Regulatory Compliance details for PCI DSS v4.0 description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Rbi Itf Banks 2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md
Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Rbi Itf Nbfc 2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md
Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Soc 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/soc-2.md
Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 description: Details of the System and Organization Controls (SOC) 2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Spain Ens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/spain-ens.md
Title: Regulatory Compliance details for Spain ENS description: Details of the Spain ENS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Swift Csp Cscf 2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2021.md
Title: Regulatory Compliance details for SWIFT CSP-CSCF v2021 description: Details of the SWIFT CSP-CSCF v2021 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Swift Csp Cscf 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2022.md
Title: Regulatory Compliance details for SWIFT CSP-CSCF v2022 description: Details of the SWIFT CSP-CSCF v2022 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 10/21/2024 Last updated : 10/30/2024
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
We support the following matching types.
| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This range is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to, and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
| Exact Match | All supported attributes | `{attributeID}={value1}` |
| Fuzzy Match | `PatientName`, `ReferringPhysicianName` | Matches any component of the name that starts with the value |
+| UID List Match | `StudyInstanceUID` | Matches studies identified by the values provided in the list. Supports a comma (,) or a backslash (\\) as a valid separator. For example, `{attributeID}=1.2.3,5.6.7,8.9.0` returns details for all the listed studies, provided they exist. |
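For illustration, here's a hedged sketch of what a range search and a UID list search might look like as QIDO-RS requests. The service URL format, the `v2` path segment, and the `$TOKEN` variable are assumptions for this example; substitute your own workspace name, DICOM service name, and access token.

```bash
# Hypothetical QIDO-RS searches; the host name and token are placeholders.

# Range match: studies with StudyDate in January 2024 (inclusive on both ends).
curl --silent "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2/studies?StudyDate=20240101-20240131" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Accept: application/dicom+json"

# UID list match: studies identified by a comma-separated list of StudyInstanceUID values.
curl --silent "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2/studies?StudyInstanceUID=1.2.3,5.6.7,8.9.0" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Accept: application/dicom+json"
```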
#### Attribute ID
The query API returns one of the following status codes in the response.
| `400 (Bad Request)` | The server was unable to perform the query because the query component was invalid. The response body contains details of the failure. |
| `401 (Unauthorized)` | The client isn't authenticated. |
| `403 (Forbidden)` | The user isn't authorized. |
+| `414 (URI Too Long)` | URI exceeded maximum supported length of 8192 characters. |
| `424 (Failed Dependency)` | The DICOM service can't access a resource it depends on to complete this request. An example is failure to access the connected Data Lake store, or the key vault for supporting customer-managed key encryption. |
| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
iot-operations Tutorial Mqtt Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/tutorial-mqtt-bridge.md
spec:
command: ["sh", "-c"] args: ["apk add mosquitto-clients mqttui && sleep infinity"] volumeMounts:
- - name: mq-sat
+ - name: broker-sat
mountPath: /var/run/secrets/tokens - name: trust-bundle mountPath: /var/run/certs volumes:
- - name: mq-sat
+ - name: broker-sat
projected: sources: - serviceAccountToken:
- path: mq-sat
+ path: broker-sat
audience: aio-internal # Must match audience in BrokerAuthentication expirationSeconds: 86400 - name: trust-bundle
mosquitto_sub --host aio-broker --port 18883 \
-t "tutorial/#" \ --debug --cafile /var/run/certs/ca.crt \ -D CONNECT authentication-method 'K8S-SAT' \
- -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+ -D CONNECT authentication-data $(cat /var/run/secrets/tokens/broker-sat)
``` Leave the command running and open a new terminal window.
mosquitto_pub -h aio-broker -p 18883 \
--repeat 5 --repeat-delay 1 -d \ --debug --cafile /var/run/certs/ca.crt \ -D CONNECT authentication-method 'K8S-SAT' \
- -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+ -D CONNECT authentication-data $(cat /var/run/secrets/tokens/broker-sat)
``` ## View the messages in the subscriber
iot-operations Concept Opcua Message Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/concept-opcua-message-format.md
The connector for OPC UA publishes messages from OPC UA servers to the MQTT brok
The payload of an OPC UA message is a JSON object that contains the telemetry data from the OPC UA server. The following example shows the payload of a message from the sample thermostat asset used in the quickstarts. Use the following command to subscribe to messages in the `azure-iot-operations/data` topic: ```console
-mosquitto_sub --host aio-broker --port 18883 --topic "azure-iot-operations/data/#" -v --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+mosquitto_sub --host aio-broker --port 18883 --topic "azure-iot-operations/data/#" -v --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/broker-sat)
``` The output from the previous command looks like the following example:
Client $server-generated/05a22b94-c5a2-4666-9c62-837431ca6f7e received PUBLISH (
The headers in the messages published by the connector for OPC UA are based on the [CloudEvents specification for OPC UA](https://github.com/cloudevents/spec/blob/main/cloudevents/extensions/opcua.md). The headers from an OPC UA message become user properties in a message published to the MQTT broker. The following example shows the user properties of a message from the sample thermostat asset used in the quickstarts. Use the following command to subscribe to messages in the `azure-iot-operations/data` topic: ```console
-mosquitto_sub --host aio-broker --port 18883 --topic "azure-iot-operations/data/#" -V mqttv5 -F %P --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+mosquitto_sub --host aio-broker --port 18883 --topic "azure-iot-operations/data/#" -V mqttv5 -F %P --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/broker-sat)
``` The output from the previous command looks like the following example:
iot-operations Howto Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-authentication.md
kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh
Inside the pod's shell, run the following command to publish a message to the broker: ```bash
-mosquitto_pub --host aio-broker --port 18883 --message "hello" --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+mosquitto_pub --host aio-broker --port 18883 --message "hello" --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/broker-sat)
``` The output should look similar to the following:
Client (null) sending PUBLISH (d0, q0, r0, m1, 'world', ... (5 bytes))
Client (null) sending DISCONNECT ```
-The mosquitto client uses the service account token mounted at `/var/run/secrets/tokens/mq-sat` to authenticate with the broker. The token is valid for 24 hours. The client also uses the default root CA cert mounted at `/var/run/certs/ca.crt` to verify the broker's TLS certificate chain.
+The mosquitto client uses the service account token mounted at `/var/run/secrets/tokens/broker-sat` to authenticate with the broker. The token is valid for 24 hours. The client also uses the default root CA cert mounted at `/var/run/certs/ca.crt` to verify the broker's TLS certificate chain.
### Refresh service account tokens
iot-operations Howto Configure Availability Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md
For a list of the available settings, see the [Broker](/rest/api/iotoperationsmq
To configure the scaling settings MQTT broker, you need to specify the `cardinality` fields in the specification of the *Broker* custom resource. For more information on setting the mode and cardinality settings using Azure CLI, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
-The `cardinality` field is a nested field that has these subfields:
+### Automatic deployment cardinality
+
+To automatically determine the initial cardinality during deployment, omit the `cardinality` field in the *Broker* resource. The MQTT broker operator automatically deploys the appropriate number of pods based on the number of available nodes at the time of the deployment. This approach is useful for nonproduction scenarios where you don't need high availability or scale.
+
+However, this is *not* auto-scaling. The operator doesn't automatically scale the number of pods based on the load. The operator only determines the initial number of pods to deploy based on the cluster hardware. As noted above, the cardinality can only be set at initial deployment time, and a new deployment is required if the cardinality settings need to be changed.
+
+### Configure cardinality directly
+
+To configure the cardinality settings directly, specify the `cardinality` field. The `cardinality` field is a nested field that has these subfields:
- `frontend`: This subfield defines the settings for the frontend pods, such as:
- - `replicas`: The number of frontend pods to deploy. This subfield is required if the `mode` field is set to `distributed`.
- - `workers`: The number of workers to deploy per frontend, currently it must be set to `1`. This subfield is required if the `mode` field is set to `distributed`.
+ - `replicas`: The number of frontend pods to deploy. Increasing the number of frontend replicas provides high availability in case one of the frontend pods fails.
+ - `workers`: The number of logical frontend workers per replica. Increasing the number of workers per frontend replica improves CPU core utilization because each worker can use only one CPU core at most. For example, if your cluster has 3 nodes, each with 8 CPU cores, then set the number of replicas to match the number of nodes (3) and increase the number of workers up to 8 per replica as you need more frontend throughput. This way, each frontend replica can use all the CPU cores on the node without workers competing for CPU resources.
- `backendChain`: This subfield defines the settings for the backend chains, such as:
- - `redundancyFactor`: The number of data copies in each backend chain. This subfield is required if the `mode` field is set to `distributed`.
- - `partitions`: The number of partitions to deploy. This subfield is required if the `mode` field is set to `distributed`.
- - `workers`: The number of workers to deploy per backend, currently it must be set to `1`. This subfield is required if the `mode` field is set to `distributed`.
-
-If `cardinality` field is omitted, cardinality is determined by MQTT broker operator automatically deploys the appropriate number of pods based on the cluster hardware.
+ - `partitions`: The number of partitions to deploy. Increasing the number of partitions increases the number of messages that the broker can handle. Through a process called *sharding*, each partition is responsible for a portion of the messages, divided by topic ID and session ID. The frontend pods distribute message traffic across the partitions.
+ - `redundancyFactor`: The number of backend pods to deploy per partition. Increasing the redundancy factor increases the number of data copies to provide resiliency against node failures in the cluster.
+ - `workers`: The number of workers to deploy per backend replica. The workers together take care of storing and delivering messages to clients. Increasing the number of workers per backend replica increases the number of messages that the backend pod can handle. Each worker can consume up to two CPU cores, so be careful that increasing the number of workers per replica doesn't exceed the number of CPU cores in the cluster.
-To configure the scaling settings MQTT broker, you need to specify the `mode` and `cardinality` fields in the specification of the *Broker* custom resource. For more information on setting the mode and cardinality settings using Azure CLI, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
+When you increase these values, the broker's capacity to handle more connections and messages improves, and it enhances high availability in case of pod or node failures. However, this also leads to higher resource consumption. So, when adjusting cardinality values, consider the memory profile settings and balance these factors to optimize the broker's resource usage.
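To make the shape of these settings concrete, here's a minimal sketch of a *Broker* spec with explicit cardinality values. The resource name, apiVersion, and the numbers shown are illustrative assumptions; check the field layout against your installed version, and remember that cardinality is applied at deployment time rather than patched onto a running broker.

```bash
# A hedged sketch of a Broker spec with explicit cardinality settings.
# The apiVersion mirrors the BrokerListener examples in this article; verify it
# against your installed CRDs. All numbers are examples only.
cat <<'EOF' > broker-cardinality.yaml
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
kind: Broker
metadata:
  name: default
  namespace: azure-iot-operations
spec:
  cardinality:
    frontend:
      replicas: 3   # for example, one frontend replica per node
      workers: 2    # logical workers per frontend replica
    backendChain:
      partitions: 2         # shards that divide message traffic
      redundancyFactor: 2   # data copies per partition
      workers: 1            # workers per backend replica
EOF
```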
## Configure memory profile
iot-operations Howto Configure Tls Auto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-tls-auto.md
The `--cafile` argument enables TLS on the mosquitto client and specifies that t
Replace `$HOST` with the appropriate host: -- If connecting from [within the same cluster](howto-test-connection.md#connect-from-a-pod-within-the-cluster-with-default-configuration), replace with the service name given (`my-new-tls-listener` in example) or the service `CLUSTER-IP`.
+- If connecting from [within the same cluster](howto-test-connection.md#connect-to-the-default-listener-inside-the-cluster), replace with the service name given (`my-new-tls-listener` in example) or the service `CLUSTER-IP`.
- If connecting from outside the cluster, the service `EXTERNAL-IP`. Remember to specify authentication methods if needed.
iot-operations Howto Test Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-test-connection.md
By default, MQTT broker:
Before you begin, [install or configure IoT Operations](../get-started-end-to-end-sample/quickstart-deploy.md). Use the following options to test connectivity to MQTT broker with MQTT clients in a nonproduction environment.
-## Connect from a pod within the cluster with default configuration
-
-The first option is to connect from within the cluster. This option uses the default configuration and requires no extra updates. The following examples show how to connect from within the cluster using plain Alpine Linux and a commonly used MQTT client, using the service account and default root CA cert.
-
-1. Create a file named `client.yaml` with the following configuration:
-
- ```yaml
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: mqtt-client
- namespace: azure-iot-operations
-
- apiVersion: v1
- kind: Pod
- metadata:
- name: mqtt-client
- # Namespace must match MQTT broker BrokerListener's namespace
- # Otherwise use the long hostname: aio-broker.azure-iot-operations.svc.cluster.local
- namespace: azure-iot-operations
- spec:
- # Use the "mqtt-client" service account created from above
- # Otherwise create it with `kubectl create serviceaccount mqtt-client -n azure-iot-operations`
- serviceAccountName: mqtt-client
- containers:
- # Mosquitto and mqttui on Alpine
- - image: alpine
- name: mqtt-client
- command: ["sh", "-c"]
- args: ["apk add mosquitto-clients mqttui && sleep infinity"]
- volumeMounts:
- - name: mq-sat
- mountPath: /var/run/secrets/tokens
- - name: trust-bundle
- mountPath: /var/run/certs
- volumes:
- - name: mq-sat
- projected:
- sources:
- - serviceAccountToken:
- path: mq-sat
- audience: aio-internal # Must match audience in BrokerAuthentication
- expirationSeconds: 86400
- - name: trust-bundle
- configMap:
- name: azure-iot-operations-aio-ca-trust-bundle # Default root CA cert
- ```
-
-1. Use `kubectl apply -f client.yaml` to deploy the configuration. It should only take a few seconds to start.
+## Connect to the default listener inside the cluster
-1. Once the pod is running, use `kubectl exec` to run commands inside the pod.
+The first option is to connect from within the cluster. This option uses the default configuration and requires no extra updates. The following examples show how to connect from within the cluster with plain Alpine Linux and a commonly used MQTT client, using the service account and the default root CA certificate.
- For example, to publish a message to the broker, open a shell inside the pod:
+First, create a file named `client.yaml` with the following configuration:
- ```bash
- kubectl exec --stdin --tty mqtt-client --namespace azure-iot-operations -- sh
- ```
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: mqtt-client
+ namespace: azure-iot-operations
+
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mqtt-client
+ # Namespace must match MQTT broker BrokerListener's namespace
+ # Otherwise use the long hostname: aio-broker.azure-iot-operations.svc.cluster.local
+ namespace: azure-iot-operations
+spec:
+ # Use the "mqtt-client" service account created from above
+ # Otherwise create it with `kubectl create serviceaccount mqtt-client -n azure-iot-operations`
+ serviceAccountName: mqtt-client
+ containers:
+ # Mosquitto and mqttui on Alpine
+ - image: alpine
+ name: mqtt-client
+ command: ["sh", "-c"]
+ args: ["apk add mosquitto-clients mqttui && sleep infinity"]
+ volumeMounts:
+ - name: broker-sat
+ mountPath: /var/run/secrets/tokens
+ - name: trust-bundle
+ mountPath: /var/run/certs
+ volumes:
+ - name: broker-sat
+ projected:
+ sources:
+ - serviceAccountToken:
+ path: broker-sat
+ audience: aio-internal # Must match audience in BrokerAuthentication
+ expirationSeconds: 86400
+ - name: trust-bundle
+ configMap:
+ name: azure-iot-operations-aio-ca-trust-bundle # Default root CA cert
+```
-1. Inside the pod's shell, run the following command to publish a message to the broker:
+Then, use `kubectl` to deploy the configuration. It should only take a few seconds to start.
- ```console
- mosquitto_pub --host aio-broker --port 18883 --message "hello" --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
- ```
+```bash
+kubectl apply -f client.yaml
+```
- The output should look similar to the following:
+Once the pod is running, use `kubectl exec` to run commands inside the pod.
- ```Output
- Client (null) sending CONNECT
- Client (null) received CONNACK (0)
- Client (null) sending PUBLISH (d0, q0, r0, m1, 'world', ... (5 bytes))
- Client (null) sending DISCONNECT
- ```
+For example, to publish a message to the broker, open a shell inside the pod:
- The mosquitto client uses the service account token mounted at `/var/run/secrets/tokens/mq-sat` to authenticate with the broker. The token is valid for 24 hours. The client also uses the default root CA cert mounted at `/var/run/certs/ca.crt` to verify the broker's TLS certificate chain.
+```bash
+kubectl exec --stdin --tty mqtt-client --namespace azure-iot-operations -- sh
+```
-1. To subscribe to the topic, run the following command:
+Inside the pod's shell, run the following command to publish a message to the broker:
- ```console
- mosquitto_sub --host aio-broker --port 18883 --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
- ```
+```console
+mosquitto_pub --host aio-broker --port 18883 --message "hello" --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/broker-sat)
+```
- The output should look similar to the following:
+The output should look similar to the following:
- ```Output
- Client (null) sending CONNECT
- Client (null) received CONNACK (0)
- Client (null) sending SUBSCRIBE (Mid: 1, Topic: world, QoS: 0, Options: 0x00)
- Client (null) received SUBACK
- Subscribed (mid: 1): 0
- ```
+```Output
+Client (null) sending CONNECT
+Client (null) received CONNACK (0)
+Client (null) sending PUBLISH (d0, q0, r0, m1, 'world', ... (5 bytes))
+Client (null) sending DISCONNECT
+```
- The mosquitto client uses the same service account token and root CA cert to authenticate with the broker and subscribe to the topic.
+The mosquitto client uses the service account token mounted at `/var/run/secrets/tokens/broker-sat` to authenticate with the broker. The token is valid for 24 hours. The client also uses the default root CA cert mounted at `/var/run/certs/ca.crt` to verify the broker's TLS certificate chain.
-1. To remove the pod, run `kubectl delete pod mqtt-client -n azure-iot-operations`.
+> [!TIP]
+> You can use `kubectl` to download the default root CA certificate to use with other clients. For example, to download the default root CA certificate to a file named `ca.crt`:
+>
+> ```bash
+> kubectl get configmap azure-iot-operations-aio-ca-trust-bundle -n azure-iot-operations -o jsonpath='{.data.ca\.crt}' > ca.crt
+> ```
-## Connect clients from outside the cluster to default the TLS port
-### TLS trust chain
+To subscribe to the topic, run the following command:
-Since the broker uses TLS, the client must trust the broker's TLS certificate chain. You need to configure the client to trust the root CA certificate used by the broker.
+```console
+mosquitto_sub --host aio-broker --port 18883 --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/broker-sat)
+```
-To use the default root CA certificate, download it from the `azure-iot-operations-aio-ca-trust-bundle` ConfigMap:
+The output should look similar to the following:
-```bash
-kubectl get configmap azure-iot-operations-aio-ca-trust-bundle -n azure-iot-operations -o jsonpath='{.data.ca\.crt}' > ca.crt
+```Output
+Client (null) sending CONNECT
+Client (null) received CONNACK (0)
+Client (null) sending SUBSCRIBE (Mid: 1, Topic: world, QoS: 0, Options: 0x00)
+Client (null) received SUBACK
+Subscribed (mid: 1): 0
```
-Use the downloaded `ca.crt` file to configure your client to trust the broker's TLS certificate chain.
+The mosquitto client uses the same service account token and root CA cert to authenticate with the broker and subscribe to the topic.
+
+To remove the pod, run `kubectl delete pod mqtt-client -n azure-iot-operations`.
-If you are connecting to the broker from a different namespace, you must use the full service hostname `aio-broker.azure-iot-operations.svc.cluster.local`. You must also add the DNS name to the server certificate by including a subject alternative name (SAN) DNS field to the *BrokerListener* resource. For more information, see [Configure server certificate parameters](howto-configure-tls-auto.md#optional-configure-server-certificate-parameters).
+## Connect clients from outside the cluster
-### Authenticate with the broker
+Since the [default broker listener](howto-configure-brokerlistener.md#default-brokerlistener) is set to *ClusterIp* service type, you can't connect to the broker from outside the cluster directly. To prevent unintentional disruption to communication between internal AIO components, we recommend keeping the default listener unmodified and dedicated for AIO internal communication. While it's possible to create a separate Kubernetes *LoadBalancer* service to expose the cluster IP service, it's better to create a separate listener with different settings, such as the more common MQTT ports 1883 and 8883, to avoid confusion and potential security risks.
-By default, MQTT broker only accepts Kubernetes service accounts for authentication for connections from within the cluster. To connect from outside the cluster, you must configure a different authentication method like X.509. For more information, see [Configure authentication](howto-configure-authentication.md).
+<!-- TODO: consider moving to the main listener article and just link from here? -->
+### Node port
-#### Turn off authentication is for testing purposes only
+The easiest way to test connectivity is to use the *NodePort* service type in the listener. With that, you can use `<nodeExternalIP>:<NodePort>` to connect, as described in the [Kubernetes documentation](https://kubernetes.io/docs/tutorials/services/connect-applications-service/#exposing-the-service).
-To turn off authentication for testing purposes, edit the `BrokerListener` resource and set the `authenticationEnabled` field to `false`:
+For example, to create a new BrokerListener with *NodePort* service type and port 1883, create a file named `broker-nodeport.yaml` with configuration like the following, replacing placeholders with your own values, including your own authentication and TLS settings.
> [!CAUTION]
-> Turning off authentication should only be used for testing purposes with a test cluster that's not accessible from the internet.
+> Removing the `authenticationRef` and `tls` settings from the configuration turns off authentication and TLS. [Only do so for testing.](#only-turn-off-tls-and-authentication-for-testing)
+
+<!-- TODO: Bicep and portal -->
+
+```yaml
+apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
+kind: BrokerListener
+metadata:
+ name: broker-nodeport
+ namespace: azure-iot-operations
+spec:
+ brokerRef: default
+ serviceType: NodePort
+ serviceName: broker-nodeport
+ ports:
+ - port: 1883
+ nodePort: 31883 # Must be in the range 30000-32767
+ authenticationRef: # Add BrokerAuthentication reference
+ tls:
+ # Add TLS settings
+```
+
+Then, use `kubectl` to deploy the configuration:
```bash
-kubectl patch brokerlistener listener -n azure-iot-operations --type='json' -p='[{"op": "replace", "path": "/spec/authenticationEnabled", "value": false}]'
+kubectl apply -f broker-nodeport.yaml
```
-### Port connectivity
+Next, get the node's external IP address:
-Some Kubernetes distributions can [expose](https://k3d.io/v5.1.0/usage/exposing_services/) MQTT broker to a port on the host system (localhost). You should use this approach because it makes it easier for clients on the same host to access MQTT broker.
+```bash
+kubectl get nodes -o yaml | grep ExternalIP -C 1
+```
-For example, to create a K3d cluster with mapping the MQTT broker's default MQTT port 18883 to localhost:18883:
+The output should look similar to the following:
+
+```output
+ - address: 104.197.41.11
+ type: ExternalIP
+ allocatable:
+--
+ - address: 23.251.152.56
+ type: ExternalIP
+ allocatable:
+...
+```
+
+Use the external IP address and the node port to connect to the broker. For example, to publish a message to the broker:
```bash
-k3d cluster create --port '18883:18883@loadbalancer'
+mosquitto_pub --host <EXTERNAL_IP> --port 31883 --message "hello" --topic "world" --debug # Add authentication and TLS options matching listener settings
```
-But for this method to work with MQTT broker, you must configure it to use a load balancer instead of cluster IP. There are two ways to do this: create a load balancer or patch the existing default BrokerListener resource service type to load balancer.
-
-#### Option 1: Create a load balancer
-
-1. Create a file named `loadbalancer.yaml` with the following configuration:
-
- ```yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: iotmq-public-svc
- spec:
- type: LoadBalancer
- ports:
- - name: mqtt1
- port: 18883
- targetPort: 18883
- selector:
- app: broker
- app.kubernetes.io/instance: broker
- app.kubernetes.io/managed-by: dmqtt-operator
- app.kubernetes.io/name: dmqtt
- tier: frontend
- ```
+If there's no external IP in the output, you might be using a Kubernetes setup that doesn't expose the node's external IP address by default, like many setups of k3s, k3d, or minikube. In that case, you can access the broker with the internal IP along with the node port from machines on the same network. For example, to get the internal IP address of the node:
-1. Apply the configuration to create a load balancer service:
+```bash
+kubectl get nodes -o yaml | grep InternalIP -C 1
+```
- ```bash
- kubectl apply -f loadbalancer.yaml
- ```
+The output should look similar to the following:
-#### Option 2: Patch the default load balancer
+```output
+ - address: 172.19.0.2
+ type: InternalIP
+ allocatable:
+```
-1. Edit the `BrokerListener` resource and change the `serviceType` field to `loadBalancer`.
+Then, use the internal IP address and the node port to connect to the broker from a machine on the same network. If Kubernetes is running on a local machine, like with single-node k3s, you can often use `localhost` instead of the internal IP address. If Kubernetes is running in a Docker container, like with k3d, the internal IP address corresponds to the container's IP address and should be reachable from the host machine.
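For example, using the internal IP address from the sample output above together with the node port configured earlier (a quick sketch; add authentication and TLS options to match your listener settings):

```bash
mosquitto_pub --host 172.19.0.2 --port 31883 --message "hello" --topic "world" --debug # Add authentication and TLS options matching listener settings
```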
- ```bash
- kubectl patch brokerlistener listener --namespace azure-iot-operations --type='json' --patch='[{"op": "replace", "path": "/spec/serviceType", "value": "loadBalancer"}]'
- ```
+### Load balancer
-1. Wait for the service to be updated.
+Another way to expose the broker to the internet is to use the *LoadBalancer* service type. This method is more complex and might require additional configuration, like setting up port forwarding. For example, to create a new BrokerListener with *LoadBalancer* service type and port 1883, create a file named `broker-loadbalancer.yaml` with configuration like the following, replacing placeholders with your own values, including your own authentication and TLS settings.
- ```console
- kubectl get service aio-broker --namespace azure-iot-operations
- ```
+> [!CAUTION]
+> Removing the `authenticationRef` and `tls` settings from the configuration turns off authentication and TLS. [Only do so for testing.](#only-turn-off-tls-and-authentication-for-testing)
+
+```yaml
+apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
+kind: BrokerListener
+metadata:
+ name: broker-loadbalancer
+ namespace: azure-iot-operations
+spec:
+ brokerRef: default
+ serviceType: LoadBalancer
+ serviceName: broker-loadbalancer
+ ports:
+ - port: 1883
+ authenticationRef: # Add BrokerAuthentication reference
+ tls:
+ # Add TLS settings
+```
- Output should look similar to the following:
+Then, use `kubectl` to deploy the configuration:
- ```Output
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- aio-broker LoadBalancer 10.43.107.11 XXX.XX.X.X 18883:30366/TCP 14h
- ```
+```bash
+kubectl apply -f broker-loadbalancer.yaml
+```
-1. You can use the external IP address to connect to MQTT broker over the internet. Make sure to use the external IP address instead of `localhost`.
+Next, get the external IP address for the broker's service:
- ```bash
- mosquitto_pub --qos 1 --debug -h XXX.XX.X.X --message hello --topic world --username client1 --pw password --cafile ca.crt
- ```
+```bash
+kubectl get service broker-loadbalancer --namespace azure-iot-operations
+```
-> [!TIP]
-> You can use the external IP address to connect to MQTT broker from outside the cluster. If you used the K3d command with port forwarding option, you can use `localhost` to connect to MQTT broker. For example, to connect with mosquitto client:
->
-> ```bash
-> mosquitto_pub --qos 1 --debug -h localhost --message hello --topic world --username client1 --pw password --cafile ca.crt --insecure
-> ```
->
-> In this example, the mosquitto client uses username and password to authenticate with the broker along with the root CA cert to verify the broker's TLS certificate chain. Here, the `--insecure` flag is required because the default TLS certificate issued to the load balancer is only valid for the load balancer's default service name (aio-broker) and assigned IPs, not localhost.
->
-> Never expose MQTT broker port to the internet without authentication and TLS. Doing so is dangerous and can lead to unauthorized access to your IoT devices and bring unsolicited traffic to your cluster.
->
-> For information on how to add localhost to the certificate's subject alternative name (SAN) to avoid using the insecure flag, see [Configure server certificate parameters](howto-configure-tls-auto.md#optional-configure-server-certificate-parameters).
+Check whether the output looks similar to the following:
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+broker-loadbalancer LoadBalancer 10.43.213.246 172.19.0.2 1883:30382/TCP 83s
+```
+
+Output like this means that an external IP address has been assigned to the load balancer service, and you can use that address and port to connect to the broker. For example, to publish a message to the broker:
+
+```bash
+mosquitto_pub --host <EXTERNAL_IP> --port 1883 --message "hello" --topic "world" --debug # Add authentication and TLS options matching listener settings
+```
+
+If an external IP isn't assigned, you might need to use port forwarding or a virtual switch to access the broker.
#### Use port forwarding
With [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8
1. Use 127.0.0.1 to connect to the broker at port 18883 with the same authentication and TLS configuration as the example without port forwarding.
-Port forwarding is also useful for testing MQTT broker locally on your development machine without having to modify the broker's configuration.
For more information about minikube, see [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) #### Port forwarding on AKS Edge Essentials
-For Azure Kubernetes Services Edge Essentials, you need to perform a few additional steps. For more information about port forwarding, see [Expose Kubernetes services to external devices](/azure/aks/hybrid/aks-edge-howto-expose-service).
-1. Assume that the broker's service is exposed to an external IP using a load balancer. For example if you patched the default load balancer `aio-broker`, get the external IP address for the service.
+
+For Azure Kubernetes Service Edge Essentials, you need to perform a few additional steps. With AKS Edge Essentials, getting the external IP address might not be enough to connect to the broker. You might also need to set up port forwarding and open the port on the firewall to allow traffic to the broker's service.
+
+1. First, get the external IP address of the broker's load balancer listener:
+
```bash
- kubectl get service aio-broker --namespace azure-iot-operations
+ kubectl get service broker-loadbalancer --namespace azure-iot-operations
``` Output should look similar to the following: ```Output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- aio-broker LoadBalancer 10.43.107.11 192.168.0.4 18883:30366/TCP 14h
+ broker-loadbalancer LoadBalancer 10.43.107.11 192.168.0.4 1883:30366/TCP 14h
```
-1. Set up port forwarding to the `aio-broker` service on the external IP address `192.168.0.4` and port `18883`:
+1. Set up port forwarding to the `broker-loadbalancer` service on the external IP address `192.168.0.4` and port `1883`:
+ ```bash
- netsh interface portproxy add v4tov4 listenport=18883 connectport=18883 connectaddress=192.168.0.4
+ netsh interface portproxy add v4tov4 listenport=1883 connectport=1883 connectaddress=192.168.0.4
``` 1. Open the port on the firewall to allow traffic to the broker's service: ```bash
- New-NetFirewallRule -DisplayName "AIO MQTT Broker" -Direction Inbound -Protocol TCP -LocalPort 18883 -Action Allow
+ New-NetFirewallRule -DisplayName "AIO MQTT Broker" -Direction Inbound -Protocol TCP -LocalPort 1883 -Action Allow
```
-1. Use the host's public IP address to connect to the MQTT broker.
-
-## No TLS and no authentication
-The reason that MQTT broker uses TLS and service accounts authentication by default is to provide a secure-by-default experience that minimizes inadvertent exposure of your IoT solution to attackers. You shouldn't turn off TLS and authentication in production.
-
-> [!CAUTION]
-> Don't use in production. Exposing MQTT broker to the internet without authentication and TLS can lead to unauthorized access and even DDOS attacks.
-
-If you understand the risks and need to use an insecure port in a well-controlled environment, you can turn off TLS and authentication for testing purposes following these steps:
-
-1. Create a new `BrokerListener` resource without TLS settings:
-
- ```yaml
- apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
- kind: BrokerListener
- metadata:
- name: non-tls-listener
- namespace: azure-iot-operations
- spec:
- brokerRef: default
- serviceType: loadBalancer
- serviceName: my-unique-service-name
- authenticationEnabled: false
- authorizationEnabled: false
- port: 1883
- ```
+1. Use the host's public IP address to connect to the MQTT broker.
- The `authenticationEnabled` and `authorizationEnabled` fields are set to `false` to turn off authentication and authorization. The `port` field is set to `1883` to use common MQTT port.
+For more information about port forwarding, see [Expose Kubernetes services to external devices](/azure/aks/hybrid/aks-edge-howto-expose-service).
-1. Wait for the service to be updated.
+#### Access through localhost
- ```console
- kubectl get service my-unique-service-name --namespace azure-iot-operations
- ```
+Some Kubernetes distributions can [expose](https://k3d.io/v5.1.0/usage/exposing_services/) MQTT broker to a port on the host system (localhost) as part of cluster configuration. Use this approach to make it easier for clients on the same host to access MQTT broker.
- Output should look similar to the following:
+For example, to create a K3d cluster that maps the broker's MQTT port 1883 to `localhost:1883`:
- ```Output
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- my-unique-service-name LoadBalancer 10.43.144.182 XXX.XX.X.X 1883:31001/TCP 5m11s
- ```
+```bash
+k3d cluster create --port '1883:1883@loadbalancer'
+```
- The new port 1883 is available.
+Or to update an existing cluster:
-1. Use mosquitto client to connect to the broker:
+```bash
+k3d cluster edit <CLUSTER_NAME> --port-add '1883:1883@loadbalancer'
+```
- ```console
- mosquitto_pub --qos 1 --debug -h localhost --message hello --topic world
- ```
+Then, use `localhost` and the port to connect to the broker. For example, to publish a message to the broker:
- The output should look similar to the following:
+```bash
+mosquitto_pub --host localhost --port 1883 --message "hello" --topic "world" --debug # Add authentication and TLS options matching listener settings
+```
- ```Output
- Client mosq-7JGM4INbc5N1RaRxbW sending CONNECT
- Client mosq-7JGM4INbc5N1RaRxbW received CONNACK (0)
- Client mosq-7JGM4INbc5N1RaRxbW sending PUBLISH (d0, q1, r0, m1, 'world', ... (5 bytes))
- Client mosq-7JGM4INbc5N1RaRxbW received PUBACK (Mid: 1, RC:0)
- Client mosq-7JGM4INbc5N1RaRxbW sending DISCONNECT
- ```
+## Only turn off TLS and authentication for testing
+
+MQTT broker uses TLS and service account authentication by default to provide a secure-by-default experience that minimizes inadvertent exposure of your IoT solution to attackers. You shouldn't turn off TLS and authentication in production. Exposing MQTT broker to the internet without authentication and TLS can lead to unauthorized access and even DDoS attacks.
+
+If you understand the risks and need to use an insecure port in a well-controlled environment, you can turn off TLS and authentication for testing purposes by removing the `tls` and `authenticationRef` settings from the listener configuration.
+
+```yaml
+apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
+kind: BrokerListener
+metadata:
+ name: <NAME>
+ namespace: azure-iot-operations
+spec:
+ brokerRef: default
+ serviceType: <SERVICE_TYPE> # LoadBalancer or NodePort
+ serviceName: <NAME>
+ ports:
+ - port: 1883
+ nodePort: 31883 # If using NodePort
+ # Omitting authenticationRef and tls for testing only
+```
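Once such a listener is deployed and reachable, a quick way to verify it is a plain mosquitto call with no TLS or credential options. This sketch assumes a *LoadBalancer* listener on port 1883; if you chose *NodePort*, use the node IP address and the node port instead.

```bash
# Connects with no TLS and no authentication; only use this against a test
# listener that has both features turned off.
mosquitto_pub --host <EXTERNAL_IP> --port 1883 --message "hello" --topic "world" --debug
```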
## Related content
load-balancer Load Balancer Manage Health Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-manage-health-status.md
+
+ Title: Manage Azure Load Balancer health status
+
+description: Learn how to manage Azure Load Balancer health status to get detailed health information about the backend instances in your Azure Load Balancer backend pool.
+ Last updated : 10/30/2024
+# Manage Azure Load Balancer Health Status
+
+Health status is an Azure Load Balancer feature that gives detailed health information about the backend instances in your backend pool. Linked to your load balancing rule, this status provides insight into the health state of these backend instances and the reasons behind it.
+
+## State of backend instances
+
+Health status exposes the state of your backend instances. There are two state values:
+
+| **State** | **Description** |
+|--|--|
+| Up | This state value represents a healthy backend instance. |
+| Down | This state value represents an unhealthy backend instance. |
+
+## Reason codes
+
+Health status also exposes reason codes, categorized into User Triggered Reason Codes and Platform Triggered Reason Codes. These codes help you understand the precise reason why your backend instances are probed Up or Down.
+
+### User Triggered Reason Codes
+
+User triggered reason codes result from how you configured your load balancer; these can be addressed by you, the user. The following tables describe the success and failure reason codes, along with the reason displayed in the portal.
+
+#### Success reason codes
+
+The following table describes the success reason codes where the backend state is equal to **Up**:
+
+| **Reason Code** | **Portal displayed reason** | **Description** |
+|-|-|-|
+| **Up_Probe_Success** | The backend instance is responding to health probe successfully. | Your backend instance is responding to the health probe successfully. |
+| **Up_Probe_AllDownIsUp** | The backend instance is considered healthy due to enablement of *NoHealthyBackendsBehavior*. | Health probe state of the backend instance is ignored because *NoHealthyBackendsBehavior* is enabled. The backend instance is considered healthy and can receive traffic. |
+| **Up_Probe_ApproachingUnhealthyThreshold** | Health probe is approaching an unhealthy threshold but backend instance remains healthy based on last response. | The most recent probe didn't get a response, but the backend instance remains healthy based on earlier responses. |
+| **Up_Admin**| The backend instance is healthy due to Admin State set to *Up*. | Health probe state of the backend instance is ignored because the Admin State is set to *UP*. The backend instance is considered healthy and can receive traffic. |
+
+#### Failure reason codes
+
+The following table describes the failure reason codes where the backend state is equal to **Down**:
+
+| **Reason Code** | **Portal displayed reason** | **Description** |
+|-|-|-|
+| **Down_Probe_ApproachingHealthyThreshold** | Health probe is approaching a healthy threshold but backend instance remains unhealthy based on last response. | The most recent probe outcome is positive, but it doesn't meet the required number of responses in the healthy threshold, so the backend instance remains unhealthy. |
+| **Down_Probe_HttpStatusCodeError** | A non-200 HTTP status code received; meaning there's an issue with the application listening on the port. | Your backend instance is returning a non-200 HTTP status code indicating an issue with the application listening on the port. |
+| **Down_Probe_HttpEndpointUnreachable** | HTTP endpoint unreachable; meaning either an NSG rule blocking port or unhealthy app listening on port. | The health probe was able to establish a TCP handshake with your backend instance, but the HTTP session was rejected, which indicates two possibilities: an NSG rule blocking the port, or no healthy application listening on the port. |
+| **Down_Probe_TcpProbeTimeout** | TCP probe timeout; meaning either unhealthy backend instance, blocked health probe port, or unhealthy app listening on port. | Your backend instance has sent back no TCP response within the probe interval. This indicates three possibilities: An unhealthy Backend Instance, blocked health probe port, or unhealthy application listening on the port. |
+| **Down_Probe_NoHealthyBackend** | No healthy backend instances behind the regional load balancer. | Your regional load balancer that is associated with a Global Load Balancer has no healthy backend instances behind it. |
+| **Down_Admin** | The backend instance is unhealthy due to *Admin State* set to *Down*. | Health probe state of the backend instance is ignored because the *Admin State* is set to *Down*. The backend instance is considered unhealthy and can't receive new traffic. |
+| **Down_Probe_HttpNoResponse** | Application isn't returning a response. | The health probe was able to establish an HTTP session but the application isn't returning a response. This indicates an unhealthy application listening on the port. |
+
+> [!NOTE]
+> In rare cases, **NA** shows as a reason code. This code appears when the health probe hasn't probed your backend instance yet, so there's no reason code to display.
+
+### Platform-triggered reason codes
+
+Platform-triggered reason codes result from the Azure Load Balancer platform; you, the user, can't address these codes. The following table describes each reason code:
+
+| **Reason Code** | **Portal displayed reason** | **Description** |
+|-|-|-|
+| **Up_Platform** | The backend instance is responding to the health probe successfully, but there may be an infrastructure related issue. The Azure service team is alerted and will resolve the issue.| The backend instance is responding to the health probe successfully, but there can be an infrastructure related issue. The Azure service team is alerted and will resolve the issue. |
+| **Down_Platform** | The backend instance is unhealthy due to an infrastructure related issue. The Azure service team is alerted and will resolve the issue. | The backend instance is unhealthy due to an infrastructure related issue. The Azure service team is alerted and will resolve the issue. |
+
+## How to retrieve health status
+
+Health status can be retrieved on a per load-balancing-rule basis. This capability is supported via the Azure portal and REST API.
+
+# [Azure portal](#tab/azure-portal)
+
+1. Sign in to the Azure portal.
+2. In the search bar, enter **Load Balancers** and select **Load Balancers** from the search results.
+3. On the **Load Balancers** page, select your load balancer from the list.
+4. In your load balancer's **Settings** section, select **Load balancing rules**.
+5. In the **Load balancing rules** page, select **View details** under the **Health status** column for the rule you want to view.
+
+ :::image type="content" source="media/load-balancer-manage-health-status/load-balancing-rules-list-small.png" alt-text="Screenshot of list of load balancing rules with health status link." lightbox="media/load-balancer-manage-health-status/load-balancing-rules-list.png":::
+
+6. Review the health status of your backend instances in the **Load balancing rule health status** window.
+7. To retrieve the latest health status, select **Refresh**.
+
+ :::image type="content" source="media/load-balancer-manage-health-status/load-balancing-rule-health-status.png" alt-text="Screenshot of health status for load balancing rule.":::
+
+ > [!IMPORTANT]
+ > The Load balancing rule health status window may take a few minutes to load the health status of your backend instances.
+
+8. Select **Close** to exit the **Load balancing rule health status** window.
+
+# [REST API](#tab/rest-api)
+
+To retrieve the health status information via the REST API, you need to make two requests.
+
+> [!NOTE]
+> Using the REST API method requires a **Bearer access token** for authorization. For help with retrieving the access token, see [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken).
+
+1. Use the following POST request to obtain the Location URI from the response headers.
+
+ ```rest
+    POST https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/loadBalancers/<loadBalancerName>/loadBalancingRules/<loadBalancingRulesName>/health?api-version=2024-03-01
+ Authorization: Bearer <access token>
+ ```
+
+1. Copy the Location URI from the response headers. The Location URI should follow this schema:
+
+ ```rest
+    https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Network/locations/<locationName>/operationResults/<operationResultsId>?api-version=2024-03-01
+ ```
+
+1. Use the copied Location URI to make a GET request.
+
+ ```rest
+    GET https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Network/locations/<locationName>/operationResults/<operationResultsId>?api-version=2024-03-01
+
+ Authorization: Bearer <access token>
+ ```
+
+1. A status code of 200 is returned, and the health status information is displayed in the response body, similar to this example response:
+
+ ```JSON
+ {
+ "up": 2,
+ "down": 0,
+ "loadBalancerBackendAddresses": [
+ {
+ "ipAddress": "10.0.2.5",
+ "networkInterfaceIPConfigurationId": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/networkInterfaces/<networkInterfaceName>/ipConfigurations/<ipConfigurationName>",
+ "state": "Up",
+ "reason": "Up_Admin"
+ },
+ {
+ "ipAddress": "10.0.2.4",
+ "networkInterfaceIPConfigurationId": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/networkInterfaces/<networkInterfaceName>/ipConfigurations/<ipConfigurationName>",
+ "state": "Up",
+ "reason": "Up_Probe_Success"
+ }
+ ]
+ }
+
+ ```
+
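+If you prefer to script these two requests, the generic `az rest` command in the Azure CLI can issue them and attach the bearer token for you. The following is a minimal sketch only; the subscription, resource group, load balancer, rule, location, and operation result values are placeholders that you substitute with your own.
+
+```azurecli
+# Request 1: start the health status operation. The Location response header
+# (visible in the --debug output) contains the operation results URI.
+az rest --method post \
+    --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/loadBalancers/<loadBalancerName>/loadBalancingRules/<loadBalancingRulesName>/health?api-version=2024-03-01" \
+    --debug
+
+# Request 2: retrieve the health status by using the Location URI copied from the previous response.
+az rest --method get \
+    --url "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Network/locations/<locationName>/operationResults/<operationResultsId>?api-version=2024-03-01"
+```
+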
+## Design considerations
+
+When you use the health status feature, keep the following points in mind:
+
+- If the virtual machine instance in the backend pool is turned off, health status returns empty values because the health status isn't retrievable.
+- If you're using a global load balancer, health status displays the reason as *Down_Platform* when a regional load balancer's backend pool contains an IP-based backend address that isn't associated with a virtual machine instance.
+- The Azure portal and REST API are the only supported methods for retrieving health status.
+
+## Limitations
+
+When using the health status feature, consider the following limitations:
+
+- Health status isn't supported for nonprobed load balancing rules.
+- Health status isn't supported for Gateway Load Balancer.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Create a public load balancer with an IP-based backend using the Azure portal](tutorial-load-balancer-ip-backend-portal.md)
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
Title: What is Azure Load Balancer?
-description: Overview of Azure Load Balancer features, architecture, and implementation. Learn how the Load Balancer works and how to use it in the cloud.
+description: Get an overview of Azure Load Balancer features, architecture, and implementation. Learn how the service works and how to use it in the cloud.
*Load balancing* refers to efficiently distributing incoming network traffic across a group of backend servers or resources.
-Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model. It's the single point of contact for clients. Load balancer distributes inbound flows that arrive at the load balancer's front end to backend pool instances. These flows are according to configured load-balancing rules and health probes. The backend pool instances can be Azure Virtual Machines or instances in a Virtual Machine Scale Set.
+Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model. It's the single point of contact for clients. The service distributes inbound flows that arrive at the load balancer's frontend to backend pool instances. These flows are distributed according to configured load-balancing rules and health probes. The backend pool instances can be Azure virtual machines (VMs) or virtual machine scale sets.
-A **[public load balancer](./components.md#frontend-ip-configurations)** can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance internet traffic to your VMs.
+A [public load balancer](./components.md#frontend-ip-configurations) can provide both inbound and outbound connectivity for the VMs inside your virtual network. For inbound traffic scenarios, Azure Load Balancer can load balance internet traffic to your VMs. For outbound traffic scenarios, the service can translate the VMs' private IP addresses to public IP addresses for any outbound connections that originate from your VMs.
-An **[internal (or private) load balancer](./components.md#frontend-ip-configurations)** is used in scenarios where private IPs are needed at the frontend only. Internal load balancers are used to load balance traffic inside a virtual network. A load balancer frontend can be accessed from an on-premises network in a hybrid scenario.
+Alternatively, an [internal (or private) load balancer](./components.md#frontend-ip-configurations) can provide inbound connectivity to your VMs in private network connectivity scenarios, such as accessing a load balancer frontend from an on-premises network in a hybrid scenario. Internal load balancers are used to load balance traffic inside a virtual network.
-For more information on the individual load balancer components, see [Azure Load Balancer components](./components.md).
+For more information on the service's individual components, see [Azure Load Balancer components](./components.md).
## Why use Azure Load Balancer?
-With Azure Load Balancer, you can scale your applications and create highly available services.
-Load balancer supports both inbound and outbound scenarios. Load balancer provides low latency and high throughput, and scales up to millions of flows for all TCP and UDP applications.
-Key scenarios that you can accomplish using Azure Standard Load Balancer include:
+With Azure Load Balancer, you can scale your applications and create highly available services.
-- Load balance **[internal](./quickstart-load-balancer-standard-internal-portal.md)** and **[external](./quickstart-load-balancer-standard-public-portal.md)** traffic to Azure virtual machines.
+The service supports both inbound and outbound scenarios. It provides low latency and high throughput, and it scales up to millions of flows for all TCP and UDP applications.
-- Pass-through load balancing which results in ultra-low latency.
+Key scenarios that you can accomplish by using Azure Standard Load Balancer include:
-- Increase availability by distributing resources **[within](./tutorial-load-balancer-standard-public-zonal-portal.md)** and **[across](./quickstart-load-balancer-standard-public-portal.md)** zones.
+- Load balance [internal](./quickstart-load-balancer-standard-internal-portal.md) and [external](./quickstart-load-balancer-standard-public-portal.md) traffic to Azure virtual machines.
-- Configure **[outbound connectivity](./load-balancer-outbound-connections.md)** for Azure virtual machines.
+- Use pass-through load balancing, which results in ultralow latency.
-- Use **[health probes](./load-balancer-custom-probe-overview.md)** to monitor load-balanced resources.
+- Increase availability by distributing resources [within](./tutorial-load-balancer-standard-public-zonal-portal.md) and [across](./quickstart-load-balancer-standard-public-portal.md) zones.
-- Employ **[port forwarding](./tutorial-load-balancer-port-forwarding-portal.md)** to access virtual machines in a virtual network by public IP address and port.
+- Configure [outbound connectivity](./load-balancer-outbound-connections.md) for Azure virtual machines.
-- Enable support for **[load-balancing](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)** of **[IPv6](../virtual-network/ip-services/ipv6-overview.md)**.
+- Use [health probes](./load-balancer-custom-probe-overview.md) to monitor load-balanced resources.
-- Standard load balancer provides multi-dimensional metrics through [Azure Monitor](/azure/azure-monitor/overview). These metrics can be filtered, grouped, and broken out for a given dimension. They provide current and historic insights into performance and health of your service. [Insights for Azure Load Balancer](./load-balancer-insights.md) offers a preconfigured dashboard with useful visualizations for these metrics. Resource Health is also supported. Review **[Standard load balancer diagnostics](load-balancer-standard-diagnostics.md)** for more details.
+- Employ [port forwarding](./tutorial-load-balancer-port-forwarding-portal.md) to access virtual machines in a virtual network by public IP address and port.
-- Load balance services on **[multiple ports, multiple IP addresses, or both](./load-balancer-multivip-overview.md)**.
+- Enable support for [load balancing](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md) of [IPv6](../virtual-network/ip-services/ipv6-overview.md).
-- Move **[internal](./move-across-regions-internal-load-balancer-portal.md)** and **[external](./move-across-regions-external-load-balancer-portal.md)** load balancer resources across Azure regions.
+- Use multidimensional metrics through [Azure Monitor](/azure/azure-monitor/overview). You can filter, group, and break out these metrics for a particular dimension. They provide current and historic insights into performance and health of your service.
-- Load balance TCP and UDP flow on all ports simultaneously using **[HA ports](./load-balancer-ha-ports-overview.md)**.
+ [Insights for Azure Load Balancer](./load-balancer-insights.md) offer a preconfigured dashboard with useful visualizations for these metrics. Resource Health is also supported. For more details, review [Standard load balancer diagnostics](load-balancer-standard-diagnostics.md).
+
+- Load balance services on [multiple ports, multiple IP addresses, or both](./load-balancer-multivip-overview.md).
+
+- Move [internal](./move-across-regions-internal-load-balancer-portal.md) and [external](./move-across-regions-external-load-balancer-portal.md) load balancer resources across Azure regions.
+
+- Load balance TCP and UDP flow on all ports simultaneously by using [high-availability ports](./load-balancer-ha-ports-overview.md).
- Chain Standard Load Balancer and [Gateway Load Balancer](./tutorial-gateway-portal.md).
-### <a name="securebydefault"></a>Secure by default
+### <a name="securebydefault"></a>Security by default
-* Standard load balancer is built on the zero trust network security model.
+- Standard Load Balancer is built on the Zero Trust network security model.
-* Standard Load Balancer is secure by default and part of your virtual network. The virtual network is a private and isolated network.
+- Standard Load Balancer is part of your virtual network, which is private and isolated for security.
-* Standard load balancers and standard public IP addresses are closed to inbound connections unless opened by Network Security Groups. NSGs are used to explicitly permit allowed traffic. If you don't have an NSG on a subnet or NIC of your virtual machine resource, traffic isn't allowed to reach this resource. To learn about NSGs and how to apply them to your scenario, see [Network Security Groups](../virtual-network/network-security-groups-overview.md).
+- Standard load balancers and standard public IP addresses are closed to inbound connections, unless network security groups (NSGs) open them. You use NSGs to explicitly permit allowed traffic. If you don't have an NSG on a subnet or network interface card (NIC) of your virtual machine resource, traffic isn't allowed to reach the resource. To learn about NSGs and how to apply them to your scenario, see [Network security groups](../virtual-network/network-security-groups-overview.md).
-* Basic load balancer is open to the internet by default.
+- Basic Load Balancer is open to the internet by default.
-* Load balancer doesn't store customer data.
+- Azure Load Balancer doesn't store customer data.
## Pricing and SLA
-For standard load balancer pricing information, see [Load balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/).
-Basic load balancer is offered at no charge.
-See [SLA for load balancer](https://aka.ms/lbsla). Basic load balancer has no SLA.
+For Standard Load Balancer pricing information, see [Load Balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/). For service-level agreements (SLAs), see the [Microsoft licensing information for online services](https://aka.ms/lbsla).
+
+Basic Load Balancer is offered at no charge and has no SLA.
## What's new? Subscribe to the RSS feed and view the latest Azure Load Balancer updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=load%20balancer) page.
-## Next steps
+## Related content
-> [!div class="nextstepaction"]
-> [Create a public standard load balancer](quickstart-load-balancer-standard-public-portal.md)
-> [Azure Load Balancer components](./components.md)
+- [Create a public load balancer](quickstart-load-balancer-standard-public-portal.md)
+- [Azure Load Balancer components](./components.md)
load-balancer Load Balancer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot.md
Title: Troubleshoot common issues Azure Load Balancer
-description: Learn how to troubleshoot common issues with Azure Load Balancer.
+ Title: Troubleshoot common problems with Azure Load Balancer
+description: Learn how to troubleshoot common problems with Azure Load Balancer.
# Troubleshoot Azure Load Balancer
-This page provides troubleshooting information for Basic and Standard common Azure Load Balancer questions. For more information about Standard Load Balancer, see [Standard Load Balancer overview](load-balancer-standard-diagnostics.md).
+This article provides troubleshooting information for common questions about Azure Load Balancer (Basic and Standard tiers). For more information about Standard Load Balancer, see the [Standard Load Balancer overview](load-balancer-standard-diagnostics.md).
-When the Load Balancer connectivity is unavailable, the most common symptoms are as follows:
+When a load balancer's connectivity is unavailable, the most common symptoms are:
-- VMs behind the Load Balancer aren't responding to health probes -- VMs behind the Load Balancer aren't responding to the traffic on the configured port
+- Virtual machines (VMs) behind the load balancer aren't responding to health probes.
+- VMs behind the load balancer aren't responding to the traffic on the configured port.
-When the external clients to the backend VMs go through the load balancer, the IP address of the clients is used for the communication. Make sure the IP address of the clients are added into the NSG allowlist.
+When the external clients to the backend VMs go through the load balancer, the IP address of the clients is used for the communication. Make sure the IP address of the clients is added to the network security group (NSG) allowlist.
-## Problem: No outbound connectivity from Standard internal Load Balancers (ILB)
+## Problem: No outbound connectivity from Standard internal load balancers
-### Validation and Resolution
+### Validation and resolution
-Standard ILBs are **secure by default**. Basic ILBs allowed connecting to the internet via a *hidden* Public IP address called the default outbound access IP. This isn't recommended for production workloads as the IP address isn't static or locked down via network security groups that you own. If you recently moved from a Basic ILB to a Standard ILB, you should create a Public IP explicitly via [Outbound only](egress-only.md) configuration, which locks down the IP via network security groups. You can also use a [NAT Gateway](../virtual-network/nat-gateway/nat-overview.md) on your subnet. NAT Gateway is the recommended solution for outbound.
+Standard internal load balancers (ILBs) have default security features. Basic ILBs allow connecting to the internet via a hidden public IP address called the *default outbound access IP*. We don't recommend connecting via default outbound access IP for production workloads, because the IP address isn't static or locked down via network security groups that you own.
-## Problem: No inbound connectivity to Standard external Load Balancers (ELB)
+If you recently moved from a Basic ILB to a Standard ILB and need outbound connectivity to the internet from your VMs, you can configure [Azure NAT Gateway](../virtual-network/nat-gateway/nat-overview.md) on your subnet. We recommend NAT Gateway for all outbound access in production scenarios.
+
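+As a minimal sketch of that configuration, the following Azure CLI commands create a NAT gateway and attach it to a subnet. The resource names and idle timeout are illustrative assumptions, not values from your environment.
+
+```azurecli
+# Create a standard public IP for the NAT gateway (names are assumed for illustration).
+az network public-ip create --resource-group myResourceGroup --name myNatPublicIP --sku Standard
+
+# Create the NAT gateway and attach the public IP.
+az network nat gateway create --resource-group myResourceGroup --name myNatGateway \
+    --public-ip-addresses myNatPublicIP --idle-timeout 4
+
+# Associate the NAT gateway with the backend subnet so outbound flows from its VMs use it.
+az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet \
+    --name myBackendSubnet --nat-gateway myNatGateway
+```
+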
+## Problem: No inbound connectivity to Standard external load balancers
### Cause
-Standard load balancers and standard public IP addresses are closed to inbound connections unless opened by Network Security Groups. NSGs are used to explicitly permit allowed traffic. If you don't have an NSG on a subnet or NIC of your virtual machine resource, traffic isn't allowed to reach this resource.
+
+Standard load balancers and standard public IP addresses are closed to inbound connections unless network security groups open them. You must configure your NSGs to explicitly permit allowed traffic. If you don't have an NSG on a subnet or network interface card (NIC) of your VM resource, traffic isn't allowed to reach the resource.
### Resolution
-In order to allow the ingress traffic, [add a Network Security Group](../virtual-network/manage-network-security-group.md) to the Subnet or interface for your virtual resource.
-## Problem: Can't change backend port for existing LB rule of a load balancer that has Virtual Machine Scale Set deployed in the backend pool.
+To allow ingress traffic, [add a network security group](../virtual-network/manage-network-security-group.md) to the subnet or interface for your virtual resource.
+
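+For example, the following Azure CLI sketch creates an NSG, adds an inbound allow rule for the load-balanced port, and associates the NSG with the backend subnet. The names, priority, and port are assumptions for illustration.
+
+```azurecli
+# Create an NSG (names, priority, and port are assumed for illustration).
+az network nsg create --resource-group myResourceGroup --name myNSG
+
+# Allow inbound traffic on the load-balanced port (80 in this sketch).
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
+    --name AllowHTTPInbound --priority 300 --direction Inbound --access Allow \
+    --protocol Tcp --destination-port-ranges 80
+
+# Associate the NSG with the backend subnet.
+az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet \
+    --name myBackendSubnet --network-security-group myNSG
+```
+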
+## Problem: Can't change the backend port for an existing load-balancing rule of a load balancer that has a virtual machine scale set deployed in the backend pool
### Cause
-The backend port can't be modified for a load balancing rule that's used by a health probe for load balancer referenced by Virtual Machine Scale Set
+
+When a load balancer is configured with a virtual machine scale set, you can't modify the backend port of a load-balancing rule while it's associated with a health probe.
### Resolution
-In order to change the port, you can remove the health probe by updating the Virtual Machine Scale Set, update the port and then configure the health probe again.
-## Problem: Small traffic is still going through load balancer after removing VMs from backend pool of the load balancer.
+To change the port, you can remove the health probe. Update the virtual machine scale set, update the port, and then configure the health probe again.
+
+## Problem: Small traffic still going through the load balancer after removal of VMs from the backend pool
-### Cause
-VMs removed from backend pool should no longer receive traffic. The small amount of network traffic could be related to storage, DNS, and other functions within Azure.
+### Cause
+
+VMs removed from the load balancer's backend pool should no longer receive traffic. The small amount of network traffic could be related to storage, Domain Name System (DNS), and other functions within Azure.
### Resolution
-To verify, you can conduct a network trace. The Fully Qualified Domain Name (FQDN) used for your blob storage account is listed within the properties of each storage account. From a virtual machine within your Azure subscription, you can perform `nslookup` to determine the Azure IP assigned to that storage account.
-## Problem: Load Balancer in failed state
+To verify, you can conduct a network trace. The properties of each storage account list the fully qualified domain name (FQDN) for your blob storage account. From a virtual machine within your Azure subscription, you can perform `nslookup` to determine the Azure IP assigned to that storage account.
+
+## Problem: Load Balancer in a failed state
### Resolution-- Once you identify the resource that is in a failed state, go to [Azure Resource Explorer](https://resources.azure.com/) and identify the resource in this state.-- Update the toggle on the right-hand top corner to **Read/Write**.-- Select **Edit** for the resource in failed state.-- Select **PUT** followed by **GET** to ensure the provisioning state was updated to Succeeded.-- You can then proceed with other actions as the resource is out of failed state.
-## Network captures needed for troubleshooting and support cases
+1. Go to [Azure Resource Explorer](https://resources.azure.com/) and identify the resource that's in a failed state.
+1. Update the toggle in the upper-right corner to **Read/Write**.
+1. Select **Edit** for the resource in failed state.
+1. Select **PUT** followed by **GET** to ensure that the provisioning state changed to **Succeeded**.
+
+You can then proceed with other actions, because the resource is out of a failed state.
+
+## Network captures for support tickets
-If you decide to open a support case, collect the following information for a quicker resolution. Choose a single backend VM to perform the following tests:
+If the preceding resolutions don't resolve the problem, open a [support ticket](https://azure.microsoft.com/support/options/).
-- Use `ps ping` from one of the backend VMs within the virtual network to test the probe port response (example: ps ping 10.0.0.4:3389) and record results. -- If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM and the virtual network test VM while you run PsPing then stop the Netsh trace.--
-## Next steps
+If you decide to open a support ticket, collect network captures for a quicker resolution. Choose a single backend VM to perform the following tests:
-If the preceding steps don't resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
+- Use `ps ping` from one of the backend VMs in the virtual network to test the probe port response (example: `ps ping 10.0.0.4:3389`) and record results.
+- If you don't receive a response in ping tests, run a simultaneous Netsh trace on the backend VM and the virtual network test VM while you run PsPing. Then stop the Netsh trace.
load-balancer Troubleshoot Rhc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-rhc.md
Title: Troubleshoot Azure Load Balancer resource health, frontend, and backend availability issues
-description: Use the available metrics to diagnose your degraded or unavailable Azure Standard Load Balancer.
+ Title: Troubleshoot Azure Load Balancer resource health, frontend, and backend availability problems
+description: Use the available metrics to diagnose your degraded or unavailable Azure Standard Load Balancer deployment.
-# Troubleshoot resource health, and inbound availability issues
+# Troubleshoot resource health and inbound availability problems
-This article is a guide to investigate issues impacting the availability of your load balancer frontend IP and backend resources.
+This article can help you investigate problems that affect the availability of your load balancer's frontend IP and backend resources.
-The Resource Health Check (RHC) for Azure Load Balancer is used to determine the health of your load balancer. It analyzes the Data Path Availability metric to determine whether the load balancing endpoints, the frontend IP and frontend ports combinations with load balancing rules, are available.
+You can use the *resource health* feature in Azure Load Balancer to determine the health of your load balancer. The feature analyzes the Data Path Availability metric to determine whether the load-balancing endpoints (the combinations of frontend IP and frontend port that have load-balancing rules) are available.
-> Note: RHC is not supported for Basic SKU Load Balancer
+> [!NOTE]
+> Basic Load Balancer doesn't support the resource health feature.
-The below table describes the RHC logic used to determine the health status of your load balancer.
+The following table describes the logic for determining the health status of your load balancer.
| Resource health status | Description | | | |
-| Available | Your load balancer resource is healthy and available. |
-| Degraded | Your load balancer has platform or user initiated events impacting performance. The Data Path Availability metric reported as less than 90% but greater than 25% health for at least two minutes. You may be experiencing moderate to severe performance degradation.
-| Unavailable | Your load balancer resource isn't healthy. The Data Path Availability metric reported less than 25% health for at least two minutes. You may be experiencing significant performance degradation or a lack of availability for inbound connectivity. There can be user or platform events causing unavailability. |
-| Unknown | Resource health status for your load balancer resource hasn't updated or received Data Path Availability information in the last 10 minutes. This state may be transient or your load balancer might not support RHC. |
+| **Available** | Your load balancer resource is healthy and available. |
+| **Degraded** | Your load balancer has platform or user-initiated events that affect performance. The Data Path Availability metric reported less than 90% but greater than 25% health for at least two minutes. You might be experiencing moderate to severe performance degradation. |
+| **Unavailable** | Your load balancer resource isn't healthy. The Data Path Availability metric reported less than 25% health for at least two minutes. You might be experiencing significant performance degradation or a lack of availability for inbound connectivity. User or platform events might be causing unavailability. |
+| **Unknown** | Resource health status for your load balancer resource hasn't updated or received Data Path Availability information in the last 10 minutes. This state might be transient, or your load balancer might not support the resource health feature. |
+## Monitor your load balancer's availability
-## Monitoring your load balancer availability
-The two metrics to be used are *Data Path Availability* and *Health Probe Status* and it's important to understand their meaning to derive correct insights.
+The two metrics that Azure Load Balancer uses to check resource health are *Data Path Availability* and *Health Probe Status*. It's important to understand their meaning to derive correct insights.
-## Data Path Availability
-The Data Path Availability metric is generated by a TCP ping every 25 seconds on all frontend ports that have load-balancing rules configured. This TCP ping is routed to any of the healthy (probed up) backend instances. The metric is an aggregated percentage success rate of TCP pings on each frontend IP:port combination for each of your load balancing rules, across a sample period of time.
+### Data Path Availability
-## Health Probe Status
-The Health Probe Status metric is generated by a ping of the protocol defined in the health probe. This ping is sent to each instance in the backend pool and on the port defined in the health probe. For HTTP and HTTPS probes, a successful ping requires an HTTP 200 OK response whereas with TCP probes any response is considered successful. The health of each backend instance is determined when the probe has reached the number of consecutive successes or failures necessary, based on your configuration of the probe threshold property. The health status of each backend instance determines whether or not the backend instance is allowed to receive traffic. Similar to the Data Path Availability metric, the Health Probe Status metric aggregates the average successful/total pings during the sampling interval. The Health Probe Status value indicates the backend health in isolation from your load balancer by probing your backend instances without sending traffic through the frontend.
+A TCP ping generates the Data Path Availability metric every 25 seconds on all frontend ports where you configured load-balancing rules. This TCP ping is routed to any of the healthy (probed up) backend instances. The metric is an aggregated percentage success rate of TCP pings on each frontend IP/port combination for each of your load-balancing rules, across a sample period of time.
->[!IMPORTANT]
->Health Probe Status is sampled on a one minute basis. This can lead to minor fluctuations in an otherwise steady value. For example, in Active/Passive scenarios where there are two backend instances, one probed up and one probed down, the health probe service may capture 7 samples for the healthy instance and 6 for the unhealthy instance. This will lead to a previously steady value of 50 showing as 46.15 for a one minute interval.
+### Health Probe Status
+
+A ping of the protocol defined in the health probe generates the Health Probe Status metric. This ping is sent to each instance in the backend pool and on the port defined in the health probe. For HTTP and HTTPS probes, a successful ping requires an `HTTP 200 OK` response. With TCP probes, any response is considered successful.
+
+Azure Load Balancer determines the health of each backend instance when the probe reaches the number of consecutive successes or failures that you configured for the probe threshold property. The health status of each backend instance determines whether or not the backend instance is allowed to receive traffic.
+
+Like the Data Path Availability metric, the Health Probe Status metric aggregates the average successful and total pings during the sampling interval. The Health Probe Status value indicates the backend health in isolation from your load balancer by probing your backend instances without sending traffic through the frontend.
+
+> [!IMPORTANT]
+> Health Probe Status is sampled on a one-minute basis. This sampling can lead to minor fluctuations in an otherwise steady value.
+>
+> For example, consider active/passive scenarios where there are two backend instances, one probed up and one probed down. The health probe service might capture seven samples for the healthy instance and six for the unhealthy instance. This situation leads to a previously steady value of 50 showing as 46.15 for a one-minute interval.
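+
+You can also query both metrics programmatically. The following is a minimal sketch that assumes the platform metric names `VipAvailability` (Data Path Availability) and `DipAvailability` (Health Probe Status) and uses placeholder resource names.
+
+```azurecli
+# Minimal sketch: query both availability metrics for a load balancer.
+# Assumes the metric names VipAvailability (Data Path Availability) and
+# DipAvailability (Health Probe Status); resource names are placeholders.
+lb_id=$(az network lb show --resource-group myResourceGroup --name myLoadBalancer \
+    --query id --output tsv)
+
+az monitor metrics list --resource "$lb_id" \
+    --metric "VipAvailability" "DipAvailability" \
+    --interval PT1M --aggregation Average --output table
+```
+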
## Diagnose degraded and unavailable load balancers
-As outlined in the [resource health article](load-balancer-standard-diagnostics.md#resource-health-status), a degraded load balancer is one that shows between 25% and 90% data path availability. An unavailable load balancer is one with less than 25% data path availability, over a two-minute period. The same steps can be taken to investigate the failure you see in any Health Probe Status or Data Path Availability alerts you've configured. We explore the case where we've checked our resource health and found our load balancer to be unavailable with a Data Path Availability of 0% - our service is down.
+As outlined in [this article about resource health](load-balancer-standard-diagnostics.md#resource-health-status), a degraded load balancer shows between 25% and 90% for Data Path Availability. An unavailable load balancer is one with less than 25% for Data Path Availability over a two-minute period.
-First, we go to the detailed metrics view of our load balancer insights page in the Azure portal. Access the view from your load balancer resource page or the link in your resource health message. Next we navigate to the Frontend and Backend availability tab and review a thirty-minute window of the time period when the degraded or unavailable state occurred. If we see our data path availability is 0%, we know there's an issue preventing traffic for all of our load-balancing rules, and we can see how long this issue has lasted.
+You can take the same steps to investigate the failure that you see in any Health Probe Status or Data Path Availability alerts that you configured. The following steps explore what to do if you check your resource health and find your load balancer to be unavailable with a Data Path Availability value of 0%. Your service is down.
-The next place we need to look is our Health Probe Status metric to determine whether our data path is unavailable is because we have no healthy backend instances to serve traffic. If we have at least one healthy backend instance for all of our load-balancing and inbound rules, we know it isn't our configuration causing our data paths to be unavailable. This scenario indicates an Azure platform issue. While platform issues are rare, an automated alert is sent to our team to rapidly resolve all platform issues.
+1. In the Azure portal, go to the detailed metrics view of the page for your load balancer insights. Access the view from the page for your load balancer resource or from the link in your resource health message.
+
+1. Go to the tab for frontend and backend availability, and review a 30-minute window of the time period when the degraded or unavailable state occurred. If the Data Path Availability value is 0%, you know that something is preventing traffic for all of your load-balancing rules. You can also see how long this problem has lasted.
+
+1. Check your Health Probe Status metric to determine whether your data path is unavailable because you have no healthy backend instances to serve traffic. If you have at least one healthy backend instance for all of your load-balancing and inbound rules, you know that your configuration isn't what's causing your data paths to be unavailable. This scenario indicates an Azure platform problem. Although platform problems are rare, they trigger an automated alert to our team for rapid resolution.
## Diagnose health probe failures
-If your Health Probe Status metric is reflecting that your backend instances are unhealthy, we recommend following the below checklist to rule out common configuration errors:
-* Check the CPU utilization for your resources to determine if they are under high load.
- * You can check this by viewing the resource's Percentage CPU metric via the Metrics page. Learn how to [Troubleshoot high-CPU issues for Azure virtual machines](/troubleshoot/azure/virtual-machines/troubleshoot-high-cpu-issues-azure-windows-vm).
-* If using an HTTP or HTTPS probe check if the application is healthy and responsive.
- * Validate your application is functional by directly accessing the applications through the private IP address or instance-level public IP address associated with your backend instance.
-* Review the Network Security Groups applied to our backend resources. Ensure that there are no rules of a higher priority than *AllowAzureLoadBalancerInBound* that blocks the health probe.
- * You can do this by visiting the Networking settings of your backend VMs or Virtual Machine Scale Sets.
- * If you find this NSG issue is the case, move the existing Allow rule or create a new high priority rule to allow AzureLoadBalancer traffic.
-* Check your OS. Ensure your VMs are listening on the probe port and review their OS firewall rules to ensure they aren't blocking the probe traffic originating from IP address `168.63.129.16`.
- * You can check listening ports by running `netstat -a` from a Windows command prompt or `netstat -l` from a Linux terminal.
-* Ensure you're using the right protocol. For example, a probe using HTTP to probe a port listening for a non-HTTP application fails.
-* Azure Firewall shouldn't be placed in the backend pool of load balancers. See [Integrate Azure Firewall with Azure Standard Load Balancer](../firewall/integrate-lb.md) to properly integrate Azure Firewall with load balancer.
-
-## Next steps
-
-* [Learn more about the Azure Load Balancer health probe](load-balancer-custom-probe-overview.md)
+
+If your Health Probe Status metric indicates that your backend instances are unhealthy, we recommend using the following checklist to rule out common configuration errors:
+
+* Check the CPU utilization for your resources to determine if they're under high load.
+
+ You can check by viewing the resource's Percentage CPU metric via the **Metrics** page. For more information, see [Troubleshoot high-CPU issues for Azure Windows virtual machines](/troubleshoot/azure/virtual-machines/troubleshoot-high-cpu-issues-azure-windows-vm).
+* If you're using an HTTP or HTTPS probe, check if the application is healthy and responsive.
+
+ Validate that your application is functional by directly accessing it through the private IP address or instance-level public IP address that's associated with your backend instance.
+* Review the network security groups (NSGs) applied to your backend resources. Ensure that no rules with a higher priority than `AllowAzureLoadBalancerInBound` block the health probe.
+
+  You can do this task by visiting the network settings of your backend VMs or virtual machine scale sets. If you find that this NSG problem is the case, move the existing `Allow` rule or create a new high-priority rule to allow Azure Load Balancer traffic, as shown in the sketch after this list.
+* Check your OS. Ensure that your VMs are listening on the probe port. Also review the OS firewall rules for the VMs to ensure that they aren't blocking the probe traffic originating from IP address `168.63.129.16`.
+
+ You can check listening ports by running `netstat -a` from a Windows command prompt or `netstat -l` from a Linux terminal.
+* Ensure that you're using the right protocol. For example, a probe that uses HTTP to probe a port listening for a non-HTTP application fails.
+* Don't place Azure Firewall in the backend pool of load balancers. For more information, see [Integrate Azure Firewall with Azure Standard Load Balancer](../firewall/integrate-lb.md).
+
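+The following Azure CLI sketch shows one way to add such a rule. The NSG name and priority are assumptions; pick a priority number lower than any rule that currently blocks the probe.
+
+```azurecli
+# Minimal sketch (assumed NSG name and priority): allow health probe traffic from the
+# AzureLoadBalancer service tag ahead of any rule that blocks it.
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
+    --name AllowAzureLoadBalancerProbe --priority 100 --direction Inbound --access Allow \
+    --protocol '*' --source-address-prefixes AzureLoadBalancer --destination-port-ranges '*'
+```
+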
+## Related content
+
+* [Learn more about Azure Load Balancer health probes](load-balancer-custom-probe-overview.md)
* [Learn more about Azure Load Balancer metrics](load-balancer-standard-diagnostics.md)
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
az role assignment create --assignee-object-id "<assignee>" --assignee-principal
Example: assigning permission for an Azure Managed Grafana instance to access an Application Insights resource using a managed identity. ```azurecli
-az role assignment create --assignee-object-id "abcdef01-2345-6789-0abc-def012345678" --assignee-principal-type "ServicePrincipal" \
+az role assignment create --assignee-object-id "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" --assignee-principal-type "ServicePrincipal" \
--role "Monitoring Reader" \scope "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourcegroups/my-rg/providers/microsoft.insights/components/myappinsights/"
+--scope "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/my-rg/providers/microsoft.insights/components/myappinsights/"
``` For more information about assigning Azure roles using the Azure CLI, refer to the [Role based access control documentation](../role-based-access-control/role-assignments-cli.md).
managed-grafana How To Share Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md
Example:
```azurecli az role assignment create --assignee "name@contoso.com" \ --role "Grafana Admin" \scope "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourcegroups/my-rg/providers/Microsoft.Dashboard/grafana/my-grafana"
+--scope "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/my-rg/providers/Microsoft.Dashboard/grafana/my-grafana"
``` For more information about assigning Azure roles using the Azure CLI, refer to the [Role based access control documentation](../role-based-access-control/role-assignments-cli.md).
migrate Concepts Business Case Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md
ms. Previously updated : 07/03/2024 Last updated : 10/29/2024
This article provides an overview of assessments in the [Azure Migrate: Discover
The Business case capability helps you build a business proposal to understand how Azure can bring the most value to your business. It highlights: - On-premises vs Azure total cost of ownership.-- (Optional) Current on-premises vs On-premises with Arc total cost of ownership.-- (Optional) the cost savings and other benefits of using Azure security (Microsoft Defender for Cloud) and management (Azure Monitor and Update Management) via Arc, as well ESUs enabled by Arc for your on-premises servers.
+- Current on-premises vs On-premises with Arc total cost of ownership.
+- Cost savings and other benefits of using Azure security (Microsoft Defender for Cloud) and management (Azure Monitor and Update Management) via Arc, as well as ESUs enabled by Arc for your on-premises servers.
- Year on year cashflow analysis. - Resource utilization based insights to identify servers and workloads that are ideal for cloud. - Quick wins for migration and modernization including end of support Windows OS and SQL versions.
Cost components for running on-premises servers. For TCO calculations, an annual
| Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | Facilities cost isn't applicable for Azure cost. | | Labor | Labor | IT admin | DC admin cost = ((Number of virtual machines) / (Avg. # of virtual machines that can be managed by a full-time administrator)) * 730 * 12 | | Management | Azure Management Services | Azure Monitor, Azure Backup and Azure Update Manager | Azure Monitor costs for each server as per listed price in the region assuming collection of logs ingestion for the guest operating system and one custom application is enabled for the server, totaling logs data of 3GB/month. <br/><br/> Azure Backup cost for each server/month is dynamically estimated based on the [Azure Backup Pricing](/azure/backup/azure-backup-pricing), which includes a protected instance fee, snapshot storage and recovery services vault storage. <br/><br/> Azure Update Manager is free for Azure servers. |
+| Azure Arc setting | | |For your on-premises servers, this setting assumes that you have Arc-enabled all your servers at the beginning of the migration journey and will migrate them to Azure over time. Azure Arc helps you manage your Azure estate and remaining on-premises estate through a single pane during migration and post-migration. |
#### On-premises with Azure Arc cost
network-watcher Diagnose Network Security Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-network-security-rules.md
Previously updated : 09/18/2024 Last updated : 10/29/2024
You can use [network security groups](../virtual-network/network-security-groups-overview.md) to filter and control inbound and outbound network traffic to and from your Azure resources. You can also use [Azure Virtual Network Manager](../virtual-network-manager/overview.md) to apply admin security rules to your Azure resources to control network traffic.
-In this article, you learn how to use Azure Network Watcher [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md) to check and troubleshoot security rules applied to your Azure traffic. NSG diagnostics checks if the traffic is allowed or denied by applied security rules.
+In this article, you learn how to use Azure Network Watcher [NSG diagnostics](nsg-diagnostics-overview.md) to check and troubleshoot security rules applied to your Azure traffic. NSG diagnostics checks if the traffic is allowed or denied by applied security rules.
The example in this article shows you how a misconfigured network security group can prevent you from using Azure Bastion to connect to a virtual machine.
The example in this article shows you how a misconfigured network security group
# [**Portal**](#tab/portal) -- An Azure account with an active subscription. [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Sign in to the [Azure portal](https://portal.azure.com/?WT.mc_id=A261C142F) with your Azure account. # [**PowerShell**](#tab/powershell) -- An Azure account with an active subscription. [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure Cloud Shell or Azure PowerShell.
The example in this article shows you how a misconfigured network security group
# [**Azure CLI**](#tab/cli) -- An Azure account with an active subscription. [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure Cloud Shell or Azure CLI.
az group delete --name 'myResourceGroup' --yes --no-wait
-## Next steps
+## Related content
- To learn about other Network Watcher tools, see [What is Azure Network Watcher?](network-watcher-overview.md)-- To learn how to troubleshoot virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md).
+- To learn how to troubleshoot virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md).
network-watcher Ip Flow Verify Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/ip-flow-verify-overview.md
IP flow verify returns **Access denied** or **Access allowed**, the name of the
- You must have a Network Watcher instance in the Azure subscription and region of the virtual machine. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md). - You must have the necessary permissions to access the feature. For more information, see [RBAC permissions required to use Network Watcher capabilities](required-rbac-permissions.md).-- IP flow verify only tests TCP and UDP rules. To test ICMP traffic rules, use [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md).-- IP flow verify only tests security and admin rules applied to a virtual machine's network interface. To test rules applied to virtual machine scale sets, use [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md).
+- IP flow verify only tests TCP and UDP rules. To test ICMP traffic rules, use [NSG diagnostics](nsg-diagnostics-overview.md).
+- IP flow verify only tests security and admin rules applied to a virtual machine's network interface. To test rules applied to virtual machine scale sets, use [NSG diagnostics](nsg-diagnostics-overview.md).
## Next step
network-watcher Network Watcher Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-overview.md
Network Watcher offers seven network diagnostic tools that help troubleshoot and
### NSG diagnostics
-**NSG diagnostics** allows you to detect traffic filtering issues at a virtual machine, virtual machine scale set, or application gateway level. It checks if a packet is allowed or denied to or from an IP address, IP prefix, or a service tag. It tells you which security rule allowed or denied the traffic. It also allows you to add a new security rule with a higher priority to allow or deny the traffic. For more information, see [NSG diagnostics overview](network-watcher-network-configuration-diagnostics-overview.md) and [Diagnose network security rules](diagnose-network-security-rules.md).
+**NSG diagnostics** allows you to detect traffic filtering issues at a virtual machine, virtual machine scale set, or application gateway level. It checks if a packet is allowed or denied to or from an IP address, IP prefix, or a service tag. It tells you which security rule allowed or denied the traffic. It also allows you to add a new security rule with a higher priority to allow or deny the traffic. For more information, see [NSG diagnostics overview](nsg-diagnostics-overview.md) and [Diagnose network security rules](diagnose-network-security-rules.md).
### Next hop
network-watcher Nsg Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-diagnostics-overview.md
+
+ Title: NSG diagnostics overview
+
+description: Learn about the NSG diagnostics tool in Azure Network Watcher and how it can help you troubleshoot traffic issues.
++++ Last updated : 10/29/2024++
+# NSG diagnostics overview
+
+NSG diagnostics is an Azure Network Watcher tool that helps you understand which network traffic is allowed or denied in your Azure virtual network, along with detailed information for debugging. NSG diagnostics can help you verify that your network security group rules are set up properly.
+
+## Background
+
+- Your resources in Azure are connected via [virtual networks](../virtual-network/virtual-networks-overview.md) and subnets. The security of these virtual networks and subnets can be managed using [network security groups](../virtual-network/network-security-groups-overview.md).
+- A network security group contains a list of [security rules](../virtual-network/network-security-groups-overview.md#security-rules) that allow or deny network traffic to resources it's connected to. A network security group can be associated to a virtual network subnet or individual network interface (NIC) attached to a virtual machine (VM).
+- All traffic flows in your network are evaluated using the rules in the applicable network security group.
+- Rules are evaluated based on priority number from lowest to highest.
+
+## How does NSG diagnostics work?
+
+The NSG diagnostics tool can simulate a given flow based on the source and destination you provide. It returns whether the flow is allowed or denied with detailed information about the security rule allowing or denying the flow.
+
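+For example, a flow check can be run from the Azure CLI with `az network watcher run-configuration-diagnostic`. The following minimal sketch assumes a VM named `myVM` and uses illustrative IP addresses and port; substitute your own values.
+
+```azurecli
+# Minimal sketch (assumed VM name, addresses, and port): check whether inbound TCP traffic
+# to port 443 on the VM is allowed, and which security rule makes that decision.
+vm_id=$(az vm show --resource-group myResourceGroup --name myVM --query id --output tsv)
+
+az network watcher run-configuration-diagnostic --resource "$vm_id" \
+    --direction Inbound --protocol TCP \
+    --source 20.10.10.10 --destination 10.0.0.4 --port 443
+```
+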
+## Next step
+
+To learn how to use NSG diagnostics, continue to:
+
+> [!div class="nextstepaction"]
+> [Diagnose network security rules](diagnose-network-security-rules.md)
operator-nexus Concepts Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-compute.md
Previously updated : 05/22/2023 Last updated : 10/25/2024
The following properties reflect the operational state of a BMM:
## Form-factor-specific information Azure Operator Nexus offers a group of on-premises cloud solutions that cater to both [near-edge](reference-near-edge-compute.md) and far-edge environments.+
+### Operator Nexus Network Cloud SKUs
+
+For Stock Keeping Unit (SKU) information, see [Operator Nexus Network Cloud SKUs](./reference-operator-nexus-network-cloud-skus-us.md).
operator-nexus Concepts Nexus Kubernetes Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-kubernetes-placement.md
following sorting rules:
"bin packs" the extra-large VMs in order to reduce fragmentation of the available compute resources.
+1. The "bin packing" rule mentioned above also applies to smaller VMs in addition to
+ large VMs.This helps to "pack" smaller VMs from different clusters onto the same
+ baremetal machines, increasing the overall placement efficiency.
+ For example control plane nodes & small-SKU Nodes (agent pool) from different
+ clusters affine together.
+ ## Example placement scenarios The following sections highlight behavior that Nexus users should expect
operator-nexus Howto Configure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md
Previously updated : 02/08/2024 Last updated : 10/29/2024
az networkcloud cluster create --name "$CLUSTER_NAME" --location "$LOCATION" \
| LAW_ID | Log Analytics Workspace ID for the Cluster | | CLUSTER_LOCATION | The local name of the Cluster | | AGGR_RACK_RESOURCE_ID | RackID for Aggregator Rack |
-| AGGR_RACK_SKU | Rack SKU for Aggregator Rack |
+| AGGR_RACK_SKU | Rack SKU for Aggregator Rack. See [Operator Nexus Network Cloud SKUs](./reference-operator-nexus-network-cloud-skus-us.md) |
| AGGR_RACK_SN | Rack Serial Number for Aggregator Rack | | AGGR_RACK_LOCATION | Rack physical location for Aggregator Rack | | AGGR_RACK_BMM | Used for single rack deployment only, empty for multi-rack |
az networkcloud cluster create --name "$CLUSTER_NAME" --location "$LOCATION" \
| SA_USER | Storage Appliance admin user | | SA_SN | Storage Appliance Serial Number | | COMPX_RACK_RESOURCE_ID | RackID for CompX Rack; repeat for each rack in compute-rack-definitions |
-| COMPX_RACK_SKU | Rack SKU for CompX Rack; repeat for each rack in compute-rack-definitions |
+| COMPX_RACK_SKU | Rack SKU for CompX Rack; repeat for each rack in compute-rack-definitions. See [Operator Nexus Network Cloud SKUs](./reference-operator-nexus-network-cloud-skus-us.md) |
| COMPX_RACK_SN | Rack Serial Number for CompX Rack; repeat for each rack in compute-rack-definitions | | COMPX_RACK_LOCATION | Rack physical location for CompX Rack; repeat for each rack in compute-rack-definitions | | COMPX_SVRY_BMC_PASS | CompX Rack ServerY Baseboard Management Controller (BMC) password; repeat for each rack in compute-rack-definitions and for each server in rack |
You can find examples for an 8-Rack 2M16C SKU cluster using these two files:
### Cluster validation
-A successful Operator Nexus Cluster creation results in the creation of an Azure Kubernetes Service (AKS) cluster
+A successful Operator Nexus Cluster creation results in the creation of an Azure resource
inside your subscription. The cluster ID, cluster provisioning state, and deployment state are returned as a result of a successful `cluster create`.
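
As a sketch, you can read those values back after creation with a query similar to the following. The flattened property names (`provisioningState`, `detailedStatus`) and the `$CLUSTER_RG` variable are assumptions for illustration:

```azurecli
az networkcloud cluster show --name "$CLUSTER_NAME" --resource-group "$CLUSTER_RG" \
  --query "{id:id, provisioningState:provisioningState, detailedStatus:detailedStatus}" \
  --output table
```
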
metal machines that failed the hardware validation (for example, `COMP0_SVR0_SER
``` See the article [Tracking Asynchronous Operations Using Azure CLI](./howto-track-async-operations-cli.md) for another example.
+See the article [Troubleshoot BMM provisioning](./troubleshoot-bare-metal-machine-provisioning.md) for more information that may be helpful when specific machines fail validation or deployment.
## Cluster deployment validation
operator-nexus Reference Nexus Kubernetes Cluster Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md
We can easily upgrade from any small update in one Kubernetes version to any sma
Note the following important changes to make before you upgrade to any of the available minor versions:
-| Kubernetes Version | Version Bundle | Components | OS components | Breaking Changes | Notes |
-|--|-|--|||--|
-| 1.29.6 | 1 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.29.4 | 1 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.28.11 | 1 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.28.9 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.28.9 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Extended Available patches: 1.28.0-5 |
-| 1.27.9 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.27.9 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.27.3 | 6 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.27.3 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Extended Available patches: 1.27.1-8 |
-| 1.26.12 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.12 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.6 | 6 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.6 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Extended Available patches: 1.26.3-8 |
-| 1.25.11 | 6 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.12.0-86<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.11 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.11 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.6 | 8 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4-hotfix<br>etcd v3.5.14<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.6 | 7 |Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Extended Available patches: 1.25.4-6 |
+| Kubernetes Version | Version Bundle | Components | OS Components | Breaking Changes | Notes |
+| --- | --- | --- | --- | --- | --- |
+| 1.30.3 | 1 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 3.0.20240824](https://github.com/microsoft/azurelinux/releases/tag/3.0.20240824-3.0) | No breaking changes | |
+| 1.29.7 | 3 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | Extended Available patches 1.29.4-1 |
+| 1.29.7 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
+| 1.29.6 | 4 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
+| 1.29.6 | 3 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
+| 1.28.12 | 3 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | Extended Available patches 1.28.9-2, 1.28.0-6 |
+| 1.28.12 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
+| 1.28.11 | 4 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
+| 1.28.11 | 3 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
+| 1.27.13 | 3 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | Extended Available patches 1.27.3-7, 1.27.1-8 |
+| 1.27.13 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.27.9 | 5 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.27.9 | 4 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.26.12 | 4 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.26.12 | 3 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.26.6 | 6 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | Extended Available patches: 1.26.3-8 |
+| 1.26.6 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.5.1<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.25.11 | 6 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.25.11 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.25.6 | 8 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.5.1<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | Extended Available patches 1.25.5-5 |
+| 1.25.6 | 7 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.5.1<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
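+
+Once you've chosen a target version from the table above, the upgrade itself is typically issued through the Nexus Kubernetes cluster CLI. The following is only a sketch; the cluster and resource group names are placeholders, and the exact command and the `--kubernetes-version` value format (with or without a version bundle suffix) should be confirmed against the Nexus Kubernetes cluster upgrade guide for your environment:
+
+```azurecli
+# Sketch: move an existing Nexus Kubernetes cluster to one of the supported versions listed above.
+az networkcloud kubernetescluster update \
+  --name myNexusK8sCluster \
+  --resource-group myResourceGroup \
+  --kubernetes-version v1.29.7
+```
+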
### Version bundle features
operator-nexus Reference Operator Nexus Network Cloud Skus Us https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-operator-nexus-network-cloud-skus-us.md
+
+ Title: Azure Operator Nexus Network Cloud SKUs
+description: SKU options for Azure Operator Nexus Network Cloud
+ Last updated : 10/24/2024+++++
+# Azure Operator Nexus Network Cloud Stock Keeping Units (SKUs)
+
+Operator Nexus Network Cloud SKUs for Azure Operator Nexus are meticulously designed to streamline the procurement and deployment processes, offering standardized bill of materials (BOM), topologies, wiring, and workflows. Microsoft crafts and prevalidates each SKU in collaboration with OEM vendors, ensuring seamless integration and optimal performance for operators.
+
+Operator Nexus Network Cloud SKUs offer a comprehensive range of options, allowing operators to tailor their deployments according to their specific requirements. With prevalidated configurations and standardized BOMs, the procurement and deployment processes are streamlined, ensuring efficiency and performance across the board.
+
+The following table outlines the various configurations of Operator Nexus Network Cloud SKUs, catering to different use-cases and functionalities required by operators.
+
+| Version | Use-Case | Network Cloud SKU ID | Description | BOM Components |
+| --- | --- | --- | --- | --- |
+| 1.7.3 | Multi Rack Near-Edge Aggregation (Agg) Rack | VNearEdge1_Aggregator_x70r3_9 | Aggregation Rack with Pure x70r3 | - Pair of Customer Edge Devices required for SKU.<br> - Two Management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cable and optics. |
+| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge1_Compute_DellR750_4C2M | Supports up to eight Compute Racks where each rack can support four compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to four Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge1_Compute_DellR750_8C2M | Supports up to eight Compute Racks where each rack can support eight compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to eight Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge1_Compute_DellR750_12C2M | Supports up to eight Compute Racks where each rack can support 12 compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to 12 Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge1_Compute_DellR750_16C2M | Supports up to eight Compute Racks where each rack can support 16 compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to 16 Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge2_Compute_DellR650_4C2M | 100G Fabric; supports up to eight Compute Racks where each rack can support four compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to four Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge2_Compute_DellR650_8C2M | 100G Fabric; supports up to eight Compute Racks where each rack can support eight compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to eight Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge2_Compute_DellR650_12C2M | 100G Fabric; supports up to eight Compute Racks where each rack can support 12 compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to 12 Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge2_Compute_DellR650_16C2M | 100G Fabric; supports up to eight Compute Racks where each rack can support 16 compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to 16 Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 2.0.0 | Multi Rack Near-Edge Compute | VNearEdge4_Compute_DellR760_4C2M | Supports up to eight Compute Racks where each rack can support four compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to four Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 2.0.0 | Multi Rack Near-Edge Compute | VNearEdge4_Compute_DellR760_8C2M | Supports up to eight Compute Racks where each rack can support eight compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to eight Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 2.0.0 | Multi Rack Near-Edge Compute | VNearEdge4_Compute_DellR760_12C2M | Supports up to eight Compute Racks where each rack can support 12 compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to 12 Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 2.0.0 | Multi Rack Near-Edge Compute | VNearEdge4_Compute_DellR760_16C2M | Supports up to eight Compute Racks where each rack can support 16 compute servers. | - Pair of Top of Rack switches per rack deployed.<br> - One Management switch per compute rack deployed.<br> - Two Management servers per compute rack deployed.<br> - Up to 16 Compute servers per compute rack deployed.<br> - Cable and optics. |
+| 2.0.0 | Multi Rack Near-Edge Agg | VNearEdge4_Aggregator_x70r4 | Aggregation Rack with Pure x70r4. | - Pair of Customer Edge Devices required for SKU.<br> - Two Management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cable and optics. |
+| 2.0.0 | Multi Rack Near-Edge Agg | VNearEdge4_Aggregator_x70r3 | Aggregation Rack with Pure x70r3. | - Pair of Customer Edge Devices required for SKU.<br> - Two Management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cable and optics. |
+| 2.0.0 | Multi Rack Near-Edge Agg | VNearEdge4_Aggregator_x20r4 | Aggregation Rack with Pure x70r4. | - Pair of Customer Edge Devices required for SKU.<br> - Two Management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cable and optics. |
+| 2.0.0 | Multi Rack Near-Edge Agg | VNearEdge4_Aggregator_x20r3 | Aggregation Rack with Pure x70r3. | - Pair of Customer Edge Devices required for SKU.<br> - Two Management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cable and optics. |
+
+**Notes:**
+- Bill of materials (BOM) adheres to Nexus Network Cloud specifications.
+- All subscribed customers have the privilege to request BOM details.
operator-nexus Troubleshoot Accepted Cluster Hydration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-accepted-cluster-hydration.md
+
+ Title: "Azure Operator Nexus: Accepted Cluster"
+description: Troubleshoot accepted Cluster resource.
+++++ Last updated : 10/30/2024
+#
++
+# Troubleshoot accepted Cluster resources
+
+Operator Nexus relies on mirroring, or hydrating, resources from the on-premises cluster to Azure. When this process is interrupted, the Cluster resource can move to the `Accepted` state.
+
+## Diagnosis
+
+You can view the Cluster status in the Azure portal or by using the Azure CLI:
+
+```bash
+az networkcloud cluster show --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME>
+```
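+
+To focus on just the relevant fields, you can add a query. The property names used here (`provisioningState`, `detailedStatus`, `detailedStatusMessage`) are assumptions about the flattened CLI output, so adjust them to match what `cluster show` returns in your environment:
+
+```bash
+az networkcloud cluster show --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME> \
+  --query "{provisioningState:provisioningState, detailedStatus:detailedStatus, detailedStatusMessage:detailedStatusMessage}" \
+  --output table
+```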
+
+## Mitigation steps
+
+### Triggering the resource sync
++
+1. From the Cluster resource page in the Azure portal, add a tag to the Cluster resource.
+2. After the tag is applied, the resource moves out of the `Accepted` state.
+
+Alternatively, you can apply the tag by using the Azure CLI:
+
+```bash
+az login
+az account set --subscription <SUBSCRIPTION>
+az resource tag --tags exampleTag=exampleValue --name <CLUSTER> --resource-group <CLUSTER_RG> --resource-type "Microsoft.ContainerService/managedClusters"
+```
+
+## Verification
+
+After the tag is applied, the Cluster moves to `Running` state.
+
+```bash
+az networkcloud cluster show --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME>
+```
+
+If the Cluster resource still shows the `Accepted` state after a short period (typically less than 5 minutes), contact Microsoft support.
+
+## Further information
+
+ Learn more about how resources are hydrated with [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview).
operator-nexus Troubleshoot Reboot Reimage Replace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-reboot-reimage-replace.md
When you're performing the following physical repairs, a replace action is requi
## Summary Restarting, reimaging, and replacing are effective troubleshooting methods that you can use to address technical problems. However, it's important to have a systematic approach and to consider other factors before you try any drastic measures.
+For more details about the BMM actions, see the [BMM actions](howto-baremetal-functions.md) article.
If you still have questions, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/).
operator-nexus Troubleshoot Vm Error After Reboot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-vm-error-after-reboot.md
+
+ Title: Troubleshoot VM problems after cordoning off and restarting bare-metal machines for Azure Operator Nexus
+description: Learn what to do when you get VM errors on the Azure portal after you cordon off and restart bare-metal machines.
+++ Last updated : 06/13/2023+++
+# Troubleshoot VM problems after cordoning off and restarting bare-metal machines
+
+Follow this troubleshooting guide after you cordon off and restart bare metal machines (BMMs) for Azure Operator Nexus if:
+
+- You encounter virtual machines (VMs) with an error status on the Azure portal after an upgrade.
+- Traditional methods such as powering off and restarting the VMs don't work.
+
+## Prerequisites
+
+- Install the latest version of the
+  [appropriate Azure CLI extensions](./howto-install-cli-extensions.md). A short installation sketch follows this list.
+- Familiarize yourself with the capabilities referenced in this article by reviewing the [BMM actions](howto-baremetal-functions.md).
+- Gather the following information:
+ - Subscription ID
+ - Cluster name and resource group
+ - Virtual machine name
+- Make sure that the virtual machine has a provisioning state of **Succeeded** and a power state of **On**.
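+
+As a minimal sketch, installing or updating the CLI extension mentioned in the prerequisites might look like the following commands. The extension name `networkcloud` is an assumption here; the linked how-to article is the authoritative reference for the required extensions and versions.
+
+```azurecli
+# Install (or upgrade) the Azure CLI extension used for Operator Nexus resources.
+az extension add --name networkcloud --upgrade
+
+# Confirm the installed extension version.
+az extension show --name networkcloud --query version --output tsv
+```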
+
+## Symptoms
+
+- During BMM restart or upgrade testing, the VM is in an error state.
+- After the restart, or after powering off and powering back on, the BMM is no longer cordoned off.
+- Although the virtual network function (VNF) successfully came up, established its BGP sessions, and started routing traffic, the VM status in the portal consistently shows an error. Despite this discrepancy, the application remains healthy and continues to function properly.
+- The portal actions and Azure CLI APIs for the NC VM resource itself are no longer achieving the intent. For example:
+ - Selecting **Power Off** (or using the Azure CLI to power off) doesn't actually power off the VM anymore.
+ - Selecting **Restart** (or using the Azure CLI to restart) doesn't actually restart the VM anymore.
+ - The platform has lost the ability to manage this VM resource.
++
+## Troubleshooting steps
+
+1. Gather the VM details and validate the VM status in the portal. Ensure that the VM isn't connected and is powered off.
+1. Validate the status of the virtual machine before and after restart or upgrade.
+1. Check the BGP session and traffic flow before and after restart or upgrade of the VNF.
+
+For more troubleshooting, see [Troubleshoot Azure Operator Nexus server problems](troubleshoot-reboot-reimage-replace.md).
+
+## Procedure
+
+There's a problem with the status update on the VM after the upgrade.
+Although the upgrade and the VM itself are fine, the status is being reported incorrectly, leading to actions being ignored.
+
+Perform the following Azure CLI update on any affected VMs, using placeholder tag values (for example, `tag1` and `value1`):
+
+~~~bash
+ az networkcloud virtualmachine update --ids <VMresourceId> --tags tag1=value1
+~~~
+
+This process restores the VM to an online state.
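+
+After the update, you can confirm that the platform reports the VM correctly again. This is only a sketch; the queried property names (`detailedStatus`, `powerState`) are assumptions, so adjust them to match the actual `virtualmachine show` output:
+
+~~~bash
+ az networkcloud virtualmachine show --ids <VMresourceId> --query "{detailedStatus:detailedStatus, powerState:powerState}" --output table
+~~~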
++
+If you still have questions, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/).
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Title: Overview of Azure Native ISV Services
-description: Introduction to the Azure Native ISV Services.
+description: Learn about the Azure Native ISV Services' features and benefits, including unified operations and integrations.
Previously updated : 04/08/2024 Last updated : 10/30/2024 # Azure Native ISV Services overview
-Azure Native ISV Services enable you to easily provision, manage, and tightly integrate *independent software vendor (ISV)* software and services on Azure. Azure Native ISV Services is developed and managed by Microsoft and the ISV. Currently, several services are publicly available across these areas: observability, data, networking, and storage. For a list of all our current ISV partner services, see [Extend Azure with Azure Native ISV Services](partners.md).
+Easily provision, manage, and tightly integrate *independent software vendor (ISV)* software and services on Azure with Azure Native ISV Services. Microsoft and the ISV work together to develop and manage the service. Currently, several services are publicly available across these areas: observability, data, networking, and storage.
+
+For a list of all our current ISV partner services, see [Extend Azure with Azure Native ISV Services](partners.md).
## Features of Azure Native ISV Services
reliability Reliability Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-fabric.md
- references_regions - build-2023 - ignite-2023 Previously updated : 10/28/2024 Last updated : 10/29/2024 # Reliability in Microsoft Fabric
Fabric makes commercially reasonable efforts to provide availability zone suppor
| South Central US | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | West US 2 | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | West US 3 | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
-|**Europe** | | | | | | |
+|**Europe** | | | | | | |
| France Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | Germany West Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| Italy North | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
| North Europe | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| Norway East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
+| Poland Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
| UK South | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | West Europe | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
-| Norway East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
| **Middle East** | | | | | | | | Qatar Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | | |
+| Israel Central | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: |
| **Africa** | | | | | | | | South Africa North | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | **Asia Pacific** | | | | | | |
sap Disaster Recovery Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/disaster-recovery-sap-hana.md
Title: Add HSR third site to HANA Pacemaker cluster
-description: Learn how to extend a highly available SAP HANA solution with a third site for disaster recovery.
+ Title: Add additional secondary sites to HANA Pacemaker cluster
+description: Learn how to extend a highly available SAP HANA solution with additional sites for disaster recovery.
Previously updated : 01/16/2024 Last updated : 10/29/2024
-# Add an HSR third site to a HANA Pacemaker cluster
+# Add additional secondary sites to a HANA Pacemaker cluster
-This article describes requirements and setup of a third HANA replication site to complement an existing Pacemaker cluster. Both SUSE Linux Enterprise Server (SLES) and RedHat Enterprise Linux (RHEL) specifics are covered.
+This article describes the requirements and setup for configuring additional secondary HANA replication sites to complement an existing Pacemaker cluster. Both SUSE Linux Enterprise Server (SLES) and RedHat Enterprise Linux (RHEL) specifics are covered.
## Overview
-SAP HANA supports system replication (HSR) with more than two sites connected. You can add a third site to an existing HSR pair, managed by Pacemaker in a highly available setup. You can deploy the third site in a second Azure region for disaster recovery (DR) purposes.
+SAP HANA supports system replication (HSR) with more than two connected sites. You can add additional sites to an existing HSR pair that Pacemaker manages in a highly available setup. For example, you can deploy these additional sites in a second Azure region for disaster recovery (DR) purposes.
-Pacemaker and the HANA cluster resource agent manage the first two sites. The Pacemaker cluster doesn't control the third site.
+Pacemaker and the HANA cluster resource agent manage only the first two sites in HSR. The additional sites aren't controlled by the Pacemaker cluster.
-SAP HANA supports a third system replication site in two modes:
+SAP HANA supports additional secondary system replication sites in two modes:
-- [Multitarget](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ba457510958241889a459e606bbcf3d3.html) replicates data changes from primary to more than one target system. The third site is connected to primary replication in a star topology.-- [Multitier](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/f730f308fede4040bcb5ccea6751e74d.html) is a two-tier replication. A cascading, or chained, setup of three different HANA tiers. The third site connects to the secondary.
+- [Multitarget](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ba457510958241889a459e606bbcf3d3.html) replicates data changes from primary to more than one target system. The additional sites are connected to primary replication in a star topology.
+- [Multitier](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/f730f308fede4040bcb5ccea6751e74d.html) is a cascading, or chained, setup of HANA system replication. The third site connects to the secondary.
For more conceptual details about HANA HSR within one region and across different regions, see [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md#combine-availability-within-one-region-and-across-regions). ## Prerequisites for SLES
-Requirements for a third HSR site are different for HANA scale-up and HANA scale-out.
+Requirements for additional HSR sites are different for HANA scale-up and HANA scale-out.
> [!NOTE]
-> Requirements in this article are only valid for a Pacemaker-enabled landscape. Without Pacemaker, SAP HANA version requirements apply to the chosen replication mode.
-> Pacemaker and the HANA cluster resource agent manage only two sites. The third HSR site isn't controlled by the Pacemaker cluster.
+>
+> - Requirements in this article are only valid for a Pacemaker-enabled landscape. Without Pacemaker, SAP HANA version requirements apply to the chosen replication mode.
+> - Pacemaker and the HANA cluster resource agent manage only two sites. The additional HSR site isn't controlled by the Pacemaker cluster.
+- SUSE supports a maximum of one additional system replication site to an SAP HANA database outside the Pacemaker cluster. - **Both scale-up and scale-out**: SAP HANA SPS 04 or newer is required to use multitarget HSR with a Pacemaker cluster. - **Both scale-up and scale-out**: Maximum of one SAP HANA system replication connected from outside the Linux cluster. - **HANA scale-out only**: SLES 15 SP1 or higher.
- **Both scale-up and scale-out**: SAP HANA SPS 04 or newer is required to use multitarget HSR with a Pacemaker cluster. - **Both scale-up and scale-out**: Maximum of one SAP HANA system replication connected from outside the Linux cluster. - **HANA scale-out only**: SLES 15 SP1 or higher.
Requirements for a third HSR site are different for HANA scale-up and HANA scale
## Prerequisites for RHEL
-Requirements for a third HSR site are different for HANA scale-up and HANA scale-out.
+Requirements for additional HSR sites are different for HANA scale-up and HANA scale-out.
> [!NOTE]
-> Requirements in this article are only valid for a Pacemaker-enabled landscape. Without Pacemaker, SAP HANA version requirements apply for the chosen replication mode.
-> Pacemaker and the HANA cluster resource agent manage only two sites. The third HSR site isn't controlled by the Pacemaker cluster.
+>
+> - Requirements in this article are only valid for a Pacemaker-enabled landscape. Without Pacemaker, SAP HANA version requirements apply for the chosen replication mode.
+> - Pacemaker and the HANA cluster resource agent manage only two sites. The additional HSR sites aren't controlled by the Pacemaker cluster.
+- RedHat supports one or more additional system replication sites to an SAP HANA database outside the Pacemaker cluster.
- **HANA scale-up only**: See RedHat [support policies for RHEL HA clusters](https://access.redhat.com/articles/3397471) for details on the minimum OS, SAP HANA, and cluster resource agents version. - **HANA scale-out only**: HANA multitarget replication isn't supported on Azure with a Pacemaker cluster.
+> [!Tip]
+> The configuration illustrates how to set up a third site outside the Pacemaker cluster. On RHEL, if you have more than one additional site outside the Pacemaker cluster, extend the setup to those other sites as well.
+ ## HANA scale-up: Add HANA multitarget system replication for DR purposes
-With SAP HANA HA hooks SAPHanaSR/susHanaSR for [SLES](./sap-hana-high-availability.md#implement-hana-resource-agents) and [RHEL](./sap-hana-high-availability-rhel.md#implement-the-python-system-replication-hook-saphanasr), you can add a third node for DR purposes. The Pacemaker environment is aware of a HANA multitarget DR setup.
+With SAP HANA HA hooks SAPHanaSR/susHanaSR for [SLES](./sap-hana-high-availability.md#implement-hana-resource-agents) and [RHEL](./sap-hana-high-availability-rhel.md#implement-the-python-system-replication-hook-saphanasr), you can add additional sites to HANA system replication. The Pacemaker environment is aware of a HANA multitarget setup.
-Failure of the third node won't trigger any cluster action. The cluster detects the replication status of connected sites and the monitored attribute for the third site can change between `SOK` and `SFAIL` states. Any takeover tests to the third/DR site or executing your DR exercise process should first place the cluster resources into maintenance mode to prevent any undesired cluster action.
+Failure of additional sites doesn't trigger any cluster action. The cluster detects the replication status of connected sites and the monitored attribute for the third site can change between `SOK` and `SFAIL` states. Any takeover tests to the additional site or executing your DR exercise process should first place the cluster resources into maintenance mode to prevent any undesired cluster action.
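+
+As a minimal sketch, placing the cluster into maintenance mode before such a takeover test might look like the following commands, run on one cluster node (use the variant that matches your distribution):
+
+```bash
+# SLES: put the Pacemaker cluster into maintenance mode before the DR takeover test.
+sudo crm configure property maintenance-mode=true
+
+# RHEL: equivalent setting with pcs.
+sudo pcs property set maintenance-mode=true
+
+# Revert after the test:
+#   SLES: sudo crm configure property maintenance-mode=false
+#   RHEL: sudo pcs property set maintenance-mode=false
+```
+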
The following example shows a multitarget system replication system. For more information, see [SAP documentation](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/2e6c71ab55f147e19b832565311a8e4e.html). ![Diagram that shows an example of a HANA scale-up multitarget system replication system.](./media/sap-hana-high-availability/sap-hana-high-availability-scale-up-hsr-multi-target.png)
The following example shows a multitarget system replication system. For more in
With the SAP HANA HA provider [SAPHanaSrMultiTarget](./sap-hana-high-availability-scale-out-hsr-suse.md#implement-hana-ha-hooks-saphanasrmultitarget-and-suschksrv), you can add a third HANA scale-out site. This third site is often used for DR in another Azure region. The Pacemaker environment is aware of a HANA multitarget DR setup. This section applies to systems running Pacemaker on SUSE only. See the "Prerequisites" section in this document for details.
-Failure of the third node won't trigger any cluster action. The cluster detects the replication status of connected sites and the monitored attribute for the third site can change between the `SOK` and `SFAIL` states. Any takeover tests to the third/DR site or executing your DR exercise process should first place the cluster resources into maintenance mode to prevent any undesired cluster action.
+Failure of the third node doesn't trigger any cluster action. The cluster detects the replication status of connected sites and the monitored attribute for the third site can change between the `SOK` and `SFAIL` states. Any takeover tests to the third/DR site or executing your DR exercise process should first place the cluster resources into maintenance mode to prevent any undesired cluster action.
The following example shows a multitarget system replication system. For more information, see [SAP documentation](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/2e6c71ab55f147e19b832565311a8e4e.html). ![Diagram that shows an example of a HANA scale-out multitarget system replication system.](./media/sap-hana-high-availability/sap-hana-high-availability-scale-out-hsr-multi-target.png)
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Previously updated : 10/28/2024 Last updated : 10/29/2024
In the SAP workload documentation space, you can find the following areas:
- **Azure Monitor for SAP solutions**: Microsoft developed monitoring solutions specifically for SAP supported OS and DBMS, as well as S/4HANA and NetWeaver. This section documents the deployment and usage of the service ## Change Log-
+- October 29, 2024: Some changes to disk caching and smaller updates in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md), plus fixes for some typos in the HANA storage configuration documents.
- October 28, 2024: Added information on RedHat support and the configuration of Azure fence agents for VMs in the Azure Government cloud to the document [Set up Pacemaker on Red Hat Enterprise Linux in Azure](./high-availability-guide-rhel-pacemaker.md). - October 25, 2024: Adding documentation link to [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms-guide-sqlserver.md) that describes how to disable SMT to be able to use some Mv3 SKUs where SQL Server would have a problem with too large NUMA nodes. - October 16, 2024: Included ordering constraints in [High availability of SAP HANA scale-up with Azure NetApp Files on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) to ensure SAP resources on a node stop before any of the NFS mounts.
sap Hana Vm Premium Ssd V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
Previously updated : 10/14/2024 Last updated : 10/29/2024
Configuration for SAP **/hana/data** volume:
| M416(d)s_8_v2 | 7,600 | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting | | M416(d)s_8_v3 | 7,600 | 4,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting | | M416ms_v2 | 11,400 GiB | 2,000 MBps | 4 x P50 | 1,000 MBps | no bursting | 30,000 | no bursting |
-| M624(d)s_12_v3, M832s_12_v3 | 11,400 GiB | 4,000 MBps | 4 x P50 | 1,000 MBps | no bursting | 30,000 | no bursting |
+| M624(d)s_12_v3, M832(d)s_12_v3 | 11,400 GiB | 4,000 MBps | 4 x P50 | 1,000 MBps | no bursting | 30,000 | no bursting |
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 4 x P60<sup>2</sup> | 2,000 MBps | no bursting | 64,000 | no bursting |
-| M832s_16_v3 | 15,200 GiB | 8,000 Mbps | 4 x P60<sup>2</sup> | 2,000 MBps | no bursting | 64,000 | no bursting |
+| M832i(d)s_16_v3 | 15,200 GiB | 8,000 Mbps | 4 x P60<sup>2</sup> | 2,000 MBps | no bursting | 64,000 | no bursting |
| M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps | 4 x P60<sup>2</sup> | 2,000 MBps | no bursting | 64,000 | no bursting | | M896ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 4 x P60<sup>2</sup> | 2,000 MBps | no bursting | 64,000 | no bursting | | M1792ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 6 x P60<sup>2</sup> | 2,000 MBps | no bursting | 64,000 | no bursting |
For the **/hana/log** volume. the configuration would look like:
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | | M416(d)s_8_v3 | 7,600 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | | M416ms_v2 | 11,400 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M624s_12_v3, M832s_12_v3 | 11,400 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M624(d)s_12_v3, M832(d)s_12_v3 | 11,400 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
-| M832s_16_v3 | 15,200 GiB | 8,000 Mbps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
+| M832i(d)s_16_v3 | 15,200 GiB | 8,000 Mbps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
| M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 | | M896ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 | | M1792ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
For the other volumes, the configuration would look like:
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M416(d)s_8_v3 | 7,600 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M416ms_v2 | 11,400 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M624s_12_v3, M832s_12_v3 | 11,400 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M624(d)s_12_v3, M832(d)s_12_v3 | 11,400 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M832s_16_v3 | 15,200 GiB | 8,000 Mbps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M832i(d)s_16_v3 | 15,200 GiB | 8,000 Mbps | 1 x P30 | 1 x P10 | 1 x P6 |
| M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps |1 x P30 | 1 x P10 | 1 x P6 | | M896ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps |1 x P30 | 1 x P10 | 1 x P6 | | M1792ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps |1 x P30 | 1 x P10 | 1 x P6 |
sap Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage, Premium SSD v2'
Previously updated : 09/03/2024 Last updated : 10/29/2024
Configuration for SAP **/hana/data** volume:
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 80,000 | 9,120 GB | 1,250 MBps| 20,000 | | M416(d)s_8_v3 | 7,600 GiB | 4,000 MBps | 130,000 | 9,120 GB | 1,250 MBps| 30,000 | | M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 13,680 GB | 1,300 MBps| 25,000 |
-| M624s_12_v3, M832s_12_v3 | 11,400 GiB | 4,000 MBps | 130,000 | 13,680 GB | 1,300 MBps| 40,000 |
+| M624(d)s_12_v3, M832(d)s_12_v3 | 11,400 GiB | 4,000 MBps | 130,000 | 13,680 GB | 1,300 MBps| 40,000 |
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 80,000 | 19,200 GB | 2,000 MBps<sup>2</sup> | 40,000 |
-| M832s_16_v3 | 15,200 GiB | 8,000 Mbps | 130,000 | 19,200 GB | 4,000 MBps<sup>2</sup> | 60,000 |
+| M832i(d)s_16_v3 | 15,200 GiB | 8,000 Mbps | 130,000 | 19,200 GB | 4,000 MBps<sup>2</sup> | 60,000 |
| M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps | 80,000 | 28,400 GB | 2,000 MBps<sup>2</sup> | 60,000 | | M896ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 130,000/260,000<sup>3</sup> | 36,000 GB | 2,000 MBps<sup>2</sup> | 80,000 | | M1792ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 130,000/260,000<sup>3</sup> | 36,000 GB | 2,000 MBps<sup>2</sup> | 80,000 |
For the **/hana/log** volume. the configuration would look like:
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB | | M416(d)s_8_v3 | 7,600 GiB | 4,000 MBps | 130,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB | | M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
-| M624s_12_v3, M832s_12_v3 | 11,400 GiB | 4,000 MBps | 130,000 | 512 GB | 600 MBps | 6,000 | 1,024 GB |
+| M624(d)s_12_v3, M832(d)s_12_v3 | 11,400 GiB | 4,000 MBps | 130,000 | 512 GB | 600 MBps | 6,000 | 1,024 GB |
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB |
-| M832s_16_v3 | 15,200 GiB | 8,000 Mbps | 130,000 | 512 GB | 600 MBps | 10,000 | 1,024 GB |
+| M832i(d)s_16_v3 | 15,200 GiB | 8,000 Mbps | 130,000 | 512 GB | 600 MBps | 10,000 | 1,024 GB |
| M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB | | M896ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 130,000/260,000<sup>3</sup> | 600 MBps | 10,000 | 1,024 GB | | M1792ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 130,000/260,000<sup>3</sup> | 600 MBps | 10,000 | 1,024 GB |
sap Hana Vm Ultra Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-ultra-disk.md
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
Previously updated : 09/03/2024 Last updated : 10/29/2024
The recommendations are often exceeding the SAP minimum requirements as stated e
| M416s_8_v2 | 7,600 | 2,000 MBps | 9,500 GB | 1,250 MBps | 20,000 | 512 GB | 400 MBps | 4,000 | | M416(d)s_8_v3 | 7,600 GiB | 4,000 MBps | 1,250 MBps | 20,000 | 512 GB | 400 MBps | 4,000 | | M416ms_v2 | 11,400 GiB | 2,000 MBps | 14,400 GB | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000 |
-| M624s_12_v3, M832s_12_v3 | 11,400 GiB | 4,000 MBps | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000 |
+| M624(d)s_12_v3, M832(d)s_12_v3 | 11,400 GiB | 4,000 MBps | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000 |
| M832isx<sup>1</sup> | 14902 GiB | larger than 2,000 Mbps | 19,200 GB | 2,000 MBps<sup>2</sup> | 40,000 | 512 GB | 600 MBps | 9,000 |
-| M832s_16_v3 | 15,200 GiB | 8,000 Mbps | 4,000 MBps<sup>2</sup> | 60,000 | 512 GB | 600 MBps | 10,000 |
+| M832i(d)s_16_v3 | 15,200 GiB | 8,000 Mbps | 4,000 MBps<sup>2</sup> | 60,000 | 512 GB | 600 MBps | 10,000 |
| M832isx_v2<sup>1</sup> | 23088 GiB | larger than 2,000 Mbps | 28,400 GB | 2,000 MBps<sup>2</sup> | 60,000 | 512 GB | 600 MBps | 9,000 | | M896ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 2,000 MBps<sup>2</sup> | 60,000 | 512 GB | 600 MBps | 10,000 | | M1792ixds_32_v3<sup>1</sup> | 30,400 GiB | 8,000 Mbps | 2,000 MBps<sup>2</sup> | 60,000 | 512 GB | 600 MBps | 10,000 |
service-bus-messaging Batch Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/batch-delete.md
You can delete messages by calling [DeleteMessagesAsync](/dotnet/api/azure.messa
Additionally, you can call [PurgeMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.purgemessagesasync?view=azure-dotnet-preview) to purge all messages from entity.
+### Using Azure portal
+
+You can also purge messages from an entity by using the Service Bus Explorer available in the Azure portal. Follow these steps to purge messages:
+
+1. Navigate to the **Service Bus Explorer** blade of the entity you want to delete messages from.
+2. Choose **Receive mode** in the Service Bus Explorer dropdown.
+
+ :::image type="content" source="./media/batch-delete/choose-receive-mode-service-bus-explorer.png" alt-text="Screenshot of dropdown with Receive mode selected." lightbox="./media/batch-delete/choose-receive-mode-service-bus-explorer.png":::
+
+3. Select the **Purge messages** option as shown in the screenshot.
+
+ :::image type="content" source="./media/batch-delete/purge-messages.png" alt-text="Screenshot of Purge messages selected." lightbox="./media/batch-delete/purge-messages.png":::
+
+4. In the dialog box that appears, enter 'purge' to run the purge operation.
+
+ :::image type="content" source="./media/batch-delete/purge-messages-action.png" alt-text="Screenshot of entering Purge to confirm." lightbox="./media/batch-delete/purge-messages-action.png":::
+ When using Azure SDKs to perform these operations, the `beforeEnqueueTime` parameter defaults to the current UTC time (`DateTime.UtcNow`). It's important to ensure you provide the correct values to prevent unintended message deletion. >[!NOTE]
service-bus-messaging Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/explorer.md
After the lock has been abandoned, the message will be available for receive ope
After a message has been dead-lettered, it will be available from the **Dead-letter** subqueue.
+### Purge messages
+
+To purge messages, select the **Purge messages** button in the Service Bus Explorer.
+
+ :::image type="content" source="./media/service-bus-explorer/purge-messages.png" alt-text="Screenshot indicating the purge messages button." lightbox="./media/service-bus-explorer/purge-messages.png":::
+
+After you enter 'purge' to confirm the operation, the messages are purged from the respective Service Bus entity.
+ ## Send a message to a queue or topic To send a message to a **queue** or a **topic**, select the **Send messages** button of the Service Bus Explorer.
service-bus-messaging Service Bus Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-replication.md
This feature allows promoting any secondary region to primary, at any time. Prom
> - This feature is currently in public preview, and as such shouldn't be used in production scenarios. > - The below regions are currently supported in the public preview. >
-> | North America | Europe | APAC |
-> |--||-|
-> | Central US EUAP | Italy North | Australia Central |
-> | Canada Central | Spain Central | Australia East |
-> | Canada East | Norway East | |
->
+> | Region | Region | Region |
+> |--|--|--|
+> | AustraliaCentral | GermanyNorth | NorwayWest |
+> | AustraliaCentral2 | GermanyWestCentral | PolandCentral |
+> | AustraliaEast | IsraelCentral | SouthAfricaNorth |
+> | AustraliaSoutheast | ItalyNorth | SouthAfricaWest |
+> | BrazilSoutheast | JapanEast | SoutheastAsia |
+> | CanadaCentral | JapanWest | SouthIndia |
+> | CanadaEast | JioIndiaCentral | SpainCentral |
+> | CentralIndia | JioIndiaWest | SwedenCentral |
+> | CentralUS | KoreaCentral | SwitzerlandNorth |
+> | CentralUSEUAP | KoreaSouth | SwitzerlandWest |
+> | EastAsia | MexicoCentral | UAECentral |
+> | EastUS2 | NorthCentralUS | UAENorth |
+> | FranceCentral | NorthEurope | UKSouth |
+> | FranceSouth | NorwayEast | UKWest |
+>
> - This feature is currently available on new namespaces. If a namespace had this feature enabled before, it can be disabled (by removing the secondary regions), and re-enabled. > - The following features currently aren't supported. We're continuously working on bringing more features to the public preview, and will update this list with the latest status. > - Large message support.
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md
Here are some considerations when sending large messages on Azure Service Bus -
- While 100-MB message payloads are supported, we recommend that you keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace. - The max message size is enforced only for messages sent to the queue or topic. The size limit isn't enforced for the receive operation. It allows you to update the max message size for a given queue (or topic). - Batching isn't supported. -- Service Bus Explorer doesn't support sending or receiving large messages. [!INCLUDE [service-bus-amqp-support-retirement](../../includes/service-bus-amqp-support-retirement.md)]
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-portal.md
# Choose how to authorize access to blob data in the Azure portal
-When you access blob data using the [Azure portal](https://portal.azure.com), the portal makes requests to Azure Storage under the covers. A request to Azure Storage can be authorized using either your Microsoft Entra account or the storage account access key. The portal indicates which method you are using, and enables you to switch between the two if you have the appropriate permissions.
+When you access blob data using the [Azure portal](https://portal.azure.com), the portal makes requests to Azure Storage under the covers. A request to Azure Storage can be authorized using either your Microsoft Entra account or the storage account access key. The portal indicates which method you're using, and enables you to switch between the two if you have the appropriate permissions.
-You can also specify how to authorize an individual blob upload operation in the Azure portal. By default the portal uses whichever method you are already using to authorize a blob upload operation, but you have the option to change this setting when you upload a blob.
+You can also specify how to authorize an individual blob upload operation in the Azure portal. By default the portal uses whichever method you're already using to authorize a blob upload operation, but you have the option to change this setting when you upload a blob.
## Permissions needed to access blob data
-Depending on how you want to authorize access to blob data in the Azure portal, you'll need specific permissions. In most cases, these permissions are provided via Azure role-based access control (Azure RBAC). For more information about Azure RBAC, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
+Depending on how you want to authorize access to blob data in the Azure portal, you need specific permissions. In most cases, these permissions are provided via Azure role-based access control (Azure RBAC). For more information about Azure RBAC, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
### Use the account access key
-To access blob data with the account access key, you must have an Azure role assigned to you that includes the Azure RBAC action **Microsoft.Storage/storageAccounts/listkeys/action**. This Azure role may be a built-in or a custom role. Built-in roles that support **Microsoft.Storage/storageAccounts/listkeys/action** include the following, in order from least to greatest permissions:
+To access blob data with the account access key, you must have an Azure role assigned to you that includes the Azure RBAC action **Microsoft.Storage/storageAccounts/listkeys/action**. This Azure role can be a built-in or a custom role. Built-in roles that support **Microsoft.Storage/storageAccounts/listkeys/action** include the following, in order from least to greatest permissions:
- The [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access) role - The [Storage Account Contributor role](../../role-based-access-control/built-in-roles.md#storage-account-contributor) - The Azure Resource Manager [Contributor role](../../role-based-access-control/built-in-roles.md#contributor) - The Azure Resource Manager [Owner role](../../role-based-access-control/built-in-roles.md#owner)
-When you attempt to access blob data in the Azure portal, the portal first checks whether you have been assigned a role with **Microsoft.Storage/storageAccounts/listkeys/action**. If you have been assigned a role with this action, then the portal uses the account key for accessing blob data. If you have not been assigned a role with this action, then the portal attempts to access data using your Microsoft Entra account.
+When you attempt to access blob data in the Azure portal, the portal first checks whether you have been assigned a role with **Microsoft.Storage/storageAccounts/listkeys/action**. If you have been assigned a role with this action, then the portal uses the account key for accessing blob data. If you haven't been assigned a role with this action, then the portal attempts to access data using your Microsoft Entra account.
> [!IMPORTANT] > When a storage account is locked with an Azure Resource Manager **ReadOnly** lock, the [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is not permitted for that storage account. **List Keys** is a POST operation, and all POST operations are prevented when a **ReadOnly** lock is configured for the account. For this reason, when the account is locked with a **ReadOnly** lock, users must use Microsoft Entra credentials to access blob data in the portal. For information about accessing blob data in the portal with Microsoft Entra ID, see [Use your Microsoft Entra account](#use-your-azure-ad-account).
When you attempt to access blob data in the Azure portal, the portal first check
To access blob data from the Azure portal using your Microsoft Entra account, both of the following statements must be true for you: -- You have been assigned either a built-in or custom role that provides access to blob data.-- You have been assigned the Azure Resource Manager [Reader](../../role-based-access-control/built-in-roles.md#reader) role, at a minimum, scoped to the level of the storage account or higher. The **Reader** role grants the most restricted permissions, but another Azure Resource Manager role that grants access to storage account management resources is also acceptable.
+- You are assigned either a built-in or custom role that provides access to blob data.
+- You are assigned the Azure Resource Manager [Reader](../../role-based-access-control/built-in-roles.md#reader) role, at a minimum, scoped to the level of the storage account or higher. The **Reader** role grants the most restricted permissions, but another Azure Resource Manager role that grants access to storage account management resources is also acceptable.
-The Azure Resource Manager **Reader** role permits users to view storage account resources, but not modify them. It does not provide read permissions to data in Azure Storage, but only to account management resources. The **Reader** role is necessary so that users can navigate to blob containers in the Azure portal.
+The Azure Resource Manager **Reader** role permits users to view storage account resources, but not modify them. It doesn't provide read permissions to data in Azure Storage, but only to account management resources. The **Reader** role is necessary so that users can navigate to blob containers in the Azure portal.
For information about the built-in roles that support access to blob data, see [Authorize access to blobs using Microsoft Entra ID](authorize-access-azure-active-directory.md).
Custom roles can support different combinations of the same permissions provided
## Navigate to blobs in the Azure portal
-To view blob data in the portal, navigate to the **Overview** for your storage account, and click on the links for **Blobs**. Alternatively you can navigate to the **Containers** section in the menu.
+To view blob data in the portal, navigate to the **Overview** for your storage account, and select the links for **Blobs**. Alternatively, you can navigate to the **Containers** section in the menu.
:::image type="content" source="media/authorize-data-operations-portal/blob-access-portal.png" alt-text="Screenshot showing how to navigate to blob data in the Azure portal"::: ## Determine the current authentication method
-When you navigate to a container, the Azure portal indicates whether you are currently using the account access key or your Microsoft Entra account to authenticate.
+When you navigate to a container, the Azure portal indicates whether you're currently using the account access key or your Microsoft Entra account to authenticate.
### Authenticate with the account access key
-If you are authenticating using the account access key, you'll see **Access Key** specified as the authentication method in the portal:
+If you're authenticating using the account access key, you see **Access Key** specified as the authentication method in the portal:
:::image type="content" source="media/authorize-data-operations-portal/auth-method-access-key.png" alt-text="Screenshot showing user currently accessing containers with the account key":::
-To switch to using Microsoft Entra account, click the link highlighted in the image. If you have the appropriate permissions via the Azure roles that are assigned to you, you'll be able to proceed. However, if you lack the right permissions, you'll see an error message like the following one:
+To switch to using your Microsoft Entra account, select the link highlighted in the image. If you have the appropriate permissions via the Azure roles that are assigned to you, you're able to proceed. However, if you lack the right permissions, you see an error message like the following one:
:::image type="content" source="media/authorize-data-operations-portal/auth-error-azure-ad.png" alt-text="Error shown if Microsoft Entra account does not support access":::
-Notice that no blobs appear in the list if your Microsoft Entra account lacks permissions to view them. Click on the **Switch to access key** link to use the access key for authentication again.
+Notice that no blobs appear in the list if your Microsoft Entra account lacks permissions to view them. Select the **Switch to access key** link to use the access key for authentication again.
<a name='authenticate-with-your-azure-ad-account'></a> ### Authenticate with your Microsoft Entra account
-If you are authenticating using your Microsoft Entra account, you'll see **Microsoft Entra user Account** specified as the authentication method in the portal:
+If you're authenticating using your Microsoft Entra account, you see **Microsoft Entra user Account** specified as the authentication method in the portal:
:::image type="content" source="media/authorize-data-operations-portal/auth-method-azure-ad.png" alt-text="Screenshot showing user currently accessing containers with Microsoft Entra account":::
-To switch to using the account access key, click the link highlighted in the image. If you have access to the account key, then you'll be able to proceed. However, if you lack access to the account key, you'll see an error message like the following one:
+To switch to using the account access key, select the link highlighted in the image. If you have access to the account key, then you're able to proceed. However, if you lack access to the account key, you see an error message like the following one:
:::image type="content" source="media/authorize-data-operations-portal/auth-error-access-key.png" alt-text="Error shown if you do not have access to account key":::
-Notice that no blobs appear in the list if you do not have access to the account keys. Click on the **Switch to Microsoft Entra user Account** link to use your Microsoft Entra account for authentication again.
+Notice that no blobs appear in the list if you don't have access to the account keys. Select the **Switch to Microsoft Entra user Account** link to use your Microsoft Entra account for authentication again.
## Specify how to authorize a blob upload operation
To specify how to authorize a blob upload operation, follow these steps:
## Default to Microsoft Entra authorization in the Azure portal
-When you create a new storage account, you can specify that the Azure portal will default to authorization with Microsoft Entra ID when a user navigates to blob data. You can also configure this setting for an existing storage account. This setting specifies the default authorization method only, so keep in mind that a user can override this setting and choose to authorize data access with the account key.
+When you create a new storage account, you can specify that the Azure portal defaults to authorization with Microsoft Entra ID when a user navigates to blob data. You can also configure this setting for an existing storage account. This setting specifies the default authorization method only, so keep in mind that a user can override this setting and choose to authorize data access with the account key.
-To specify that the portal will use Microsoft Entra authorization by default for data access when you create a storage account, follow these steps:
+To specify that the portal should use Microsoft Entra authorization by default for data access when you create a storage account, follow these steps:
1. Create a new storage account, following the instructions in [Create a storage account](../common/storage-account-create.md). 1. On the **Advanced** tab, in the **Security** section, check the box next to **Default to Microsoft Entra authorization in the Azure portal**.
To update this setting for an existing storage account, follow these steps:
:::image type="content" source="media/authorize-data-operations-portal/default-auth-account-update-portal.png" alt-text="Screenshot showing how to configure default Microsoft Entra authorization in Azure portal for existing account":::
-The **defaultToOAuthAuthentication** property of a storage account is not set by default and does not return a value until you explicitly set it.
+The **defaultToOAuthAuthentication** property of a storage account isn't set by default and doesn't return a value until you explicitly set it.
## Next steps
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
Title: Create a blob container with JavaScript
+ Title: Create a blob container with JavaScript or TypeScript
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library. Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Create a blob container with JavaScript
+# Create a blob container with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-create-container](../../../includes/storage-dev-guides/storage-dev-guide-selector-create-container.md)]
Blobs in Azure Storage are organized into containers. Before you can upload a bl
## Create a container -
-To create a container, create a [BlobServiceClient](storage-blob-javascript-get-started.md#create-a-blobserviceclient-object) object or [ContainerClient](storage-blob-javascript-get-started.md#create-a-containerclient-object) object, then use one of the following create methods:
+To create a container, call the following method from the [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) class:
- [BlobServiceClient.createContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-createcontainer)+
+You can also create a container using either of the following methods from the [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) class:
+ - [ContainerClient.create](/javascript/api/@azure/storage-blob/containerclient?#@azure-storage-blob-containerclient-create) - [ContainerClient.createIfNotExists](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-createifnotexists) - Containers are created immediately beneath the storage account. It's not possible to nest one container beneath another. An exception is thrown if a container with the same name already exists.
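In addition to the article's tabbed snippets that follow, here's a minimal TypeScript sketch of creating a container with `BlobServiceClient.createContainer`. It assumes `DefaultAzureCredential` authentication, a placeholder account name, and a sample container name, none of which come from the article itself:

```typescript
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder account name - replace with your storage account
const accountName = "<storage-account-name>";

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);

async function createContainer(containerName: string): Promise<void> {
  // createContainer returns both the ContainerClient and the service response
  const { containerClient } = await blobServiceClient.createContainer(containerName);
  console.log(`Created container: ${containerClient.containerName}`);
}

createContainer("sample-container").catch(console.error);
```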
-The following example creates a container asynchronously from the BlobServiceClient:
+The following example creates a container asynchronously from a `BlobServiceClient` object:
-```javascript
-async function createContainer(blobServiceClient, containerName){
+### [JavaScript](#tab/javascript)
- // anonymous access at container level
- const options = {
- access: 'container'
- };
- // creating client also creates container
- const containerClient = await blobServiceClient.createContainer(containerName, options);
- console.log(`container ${containerName} created`);
+### [TypeScript](#tab/typescript)
- // do something with container
- // ...
- return containerClient;
-}
-```
++
+## Create the root container
-## Understand the root container
+A root container serves as a default container for your storage account. Each storage account can have one root container, which must be named *$root*. The root container must be explicitly created or deleted.
-A root container, with the specific name `$root`, enables you to reference a blob at the top level of the storage account hierarchy. For example, you can reference a blob _without using a container name in the URI_:
+You can reference a blob stored in the root container without including the root container name. The root container enables you to reference a blob at the top level of the storage account hierarchy. For example, you can reference a blob in the root container as follows:
-`https://myaccount.blob.core.windows.net/default.html`
+`https://accountname.blob.core.windows.net/default.html`
-The root container must be explicitly created or deleted. It isn't created by default as part of service creation. The same code displayed in the previous section can create the root. The container name is `$root`.
+To create the root container, call any create method and specify the container name as *$root*.
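As a rough sketch under the same assumptions as above (placeholder account name, `DefaultAzureCredential`), the root container can be created by passing the name *$root* to `createIfNotExists`:

```typescript
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder account name - replace with your storage account
const accountName = "<storage-account-name>";

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);

async function createRootContainer(): Promise<void> {
  // The root container must be named $root
  const containerClient = blobServiceClient.getContainerClient("$root");

  // createIfNotExists avoids an error if the root container already exists
  const response = await containerClient.createIfNotExists();
  console.log(`Root container created: ${response.succeeded}`);
}

createRootContainer().catch(console.error);
```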
## Resources To learn more about creating a container using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/create-container.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/container-create.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for creating a container use the following REST API operation: - [Create Container](/rest/api/storageservices/create-container) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/create-container.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)]+
storage Storage Blob Container Create Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-typescript.md
- Title: Create a blob container with TypeScript-
-description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library using TypeScript.
---- Previously updated : 08/05/2024----
-# Create a blob container with TypeScript
--
-Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. This article shows how to create containers with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to create a blob container. To learn more, see the authorization guidance for the following REST API operation:
- - [Create Container](/rest/api/storageservices/create-container#authorization)
--
-## Create a container
--
-To create a container, create a [BlobServiceClient](storage-blob-typescript-get-started.md#create-a-blobserviceclient-object) object or [ContainerClient](storage-blob-typescript-get-started.md#create-a-containerclient-object) object, then use one of the following create methods:
--- [BlobServiceClient.createContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-createcontainer)-- [ContainerClient.create](/javascript/api/@azure/storage-blob/containerclient?#@azure-storage-blob-containerclient-create)-- [ContainerClient.createIfNotExists](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-createifnotexists)--
-Containers are created immediately beneath the storage account. It's not possible to nest one container beneath another. An exception is thrown if a container with the same name already exists.
-
-The following example creates a container asynchronously from the BlobServiceClient:
---
-## Understand the root container
-
-A root container, with the specific name `$root`, enables you to reference a blob at the top level of the storage account hierarchy. For example, you can reference a blob _without using a container name in the URI_:
-
-`https://myaccount.blob.core.windows.net/default.html`
-
-The root container must be explicitly created or deleted. It isn't created by default as part of service creation. The same code displayed in the previous section can create the root. The container name is `$root`.
-
-## Resources
-
-To learn more about creating a container using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for creating a container use the following REST API operation:
--- [Create Container](/rest/api/storageservices/create-container) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/container-create.ts)
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
The following example uses a `BlobServiceClient` object to create a container as
## Create the root container
-A root container serves as a default container for your storage account. Each storage account may have one root container, which must be named *$root*. The root container must be explicitly created or deleted.
+A root container serves as a default container for your storage account. Each storage account can have one root container, which must be named *$root*. The root container must be explicitly created or deleted.
You can reference a blob stored in the root container without including the root container name. The root container enables you to reference a blob at the top level of the storage account hierarchy. For example, you can reference a blob that is in the root container in the following manner:
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
Title: Delete and restore a blob container with JavaScript
+ Title: Delete and restore a blob container with JavaScript or TypeScript
description: Learn how to delete and restore a blob container in your Azure Storage account using the JavaScript client library.
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Delete and restore a blob container with JavaScript
+# Delete and restore a blob container with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-delete-container](../../../includes/storage-dev-guides/storage-dev-guide-selector-delete-container.md)]
-This article shows how to delete containers with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers.
+This article shows how to delete containers with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers.
## Prerequisites
This article shows how to delete containers with the [Azure Storage client libra
## Delete a container
-To delete a container in JavaScript, create a [BlobServiceClient](storage-blob-javascript-get-started.md#create-a-blobserviceclient-object) or [ContainerClient](storage-blob-javascript-get-started.md#create-a-containerclient-object) then use one of the following methods:
+To delete a container, use the following method from the [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) class:
-- BlobServiceClient.[deleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-deletecontainer#@azure-storage-blob-blobserviceclient-deletecontainer)-- ContainerClient.[delete](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-deletecontainer)-- ContainerClient.[deleteIfExists](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-containerclient-deleteifexists)
+- [BlobServiceClient.deleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-deletecontainer)
-After you delete a container, you can't create a container with the same name for at *least* 30 seconds. Attempting to create a container with the same name will fail with HTTP error code 409 (Conflict). Any other operations on the container or the blobs it contains will fail with HTTP error code 404 (Not Found).
+You can also delete a container using the following method from the [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) class:
-## Delete container with BlobServiceClient
+- [ContainerClient.delete](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-delete)
+- [ContainerClient.deleteIfExists](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-deleteifexists)
-The following example deletes the specified container. Use the [BlobServiceClient](storage-blob-javascript-get-started.md#create-a-blobserviceclient-object) to delete a container:
+After you delete a container, you can't create a container with the same name for at *least* 30 seconds. Attempting to create a container with the same name fails with HTTP error code `409 (Conflict)`. Any other operations on the container or the blobs it contains fail with HTTP error code `404 (Not Found)`.
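Separate from the article's tabbed snippets that follow, here's a minimal TypeScript sketch of deleting a container with `ContainerClient.deleteIfExists`. The account name, credential setup, and container name are illustrative assumptions, not taken from the article:

```typescript
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder account name - replace with your storage account
const accountName = "<storage-account-name>";

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);

async function deleteContainer(containerName: string): Promise<void> {
  const containerClient = blobServiceClient.getContainerClient(containerName);

  // deleteIfExists doesn't throw if the container is already gone
  const response = await containerClient.deleteIfExists();
  console.log(`Container deleted: ${response.succeeded}`);
}

deleteContainer("sample-container").catch(console.error);
```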
-```javascript
-// delete container immediately on blobServiceClient
-async function deleteContainerImmediately(blobServiceClient, containerName) {
- const response = await blobServiceClient.deleteContainer(containerName);
+The following example uses a `BlobServiceClient` object to delete the specified container:
- if (!response.errorCode) {
- console.log(`deleted ${containerItem.name} container`);
- }
-}
-```
+## [JavaScript](#tab/javascript)
-## Delete container with ContainerClient
-The following example shows how to delete all of the containers whose name starts with a specified prefix using a [ContainerClient](storage-blob-javascript-get-started.md#create-a-containerclient-object).
+## [TypeScript](#tab/typescript)
-```javascript
-async function deleteContainersWithPrefix(blobServiceClient, blobNamePrefix){
- const containerOptions = {
- includeDeleted: false,
- includeMetadata: false,
- includeSystem: true,
- prefix: blobNamePrefix
- }
-
- for await (const containerItem of blobServiceClient.listContainers(containerOptions)) {
-
- const containerClient = blobServiceClient.getContainerClient(containerItem.name);
+
- const response = await containerClient.delete();
+The following example shows how to delete all containers that start with a specified prefix:
- if(!response.errorCode){
- console.log(`deleted ${containerItem.name} container`);
- }
- }
-}
-```
+## [JavaScript](#tab/javascript)
-## Restore a deleted container
-When container soft delete is enabled for a storage account, a container and its contents may be recovered after it has been deleted, within a retention period that you specify. You can restore a soft-deleted container using a [BlobServiceClient](storage-blob-javascript-get-started.md#create-a-blobserviceclient-object) object:
+## [TypeScript](#tab/typescript)
-- BlobServiceClient.[undeleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-deletecontainert#@azure-storage-blob-blobserviceclient-undeletecontainer)
-The following example finds a deleted container, gets the version ID of that deleted container, and then passes that ID into the **undeleteContainer** method to restore the container.
+
-```javascript
-// Undelete specific container - last version
-async function undeleteContainer(blobServiceClient, containerName) {
+## Restore a deleted container
- // version to undelete
- let containerVersion;
+When container soft delete is enabled for a storage account, a container and its contents can be recovered after it has been deleted, within a retention period that you specify. You can restore a soft-deleted container using a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object:
- const containerOptions = {
- includeDeleted: true,
- prefix: containerName
- }
+- [BlobServiceClient.undeleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-undeletecontainer)
- // container listing returns version (timestamp) in the ContainerItem
- for await (const containerItem of blobServiceClient.listContainers(containerOptions)) {
+The following example finds a deleted container, gets the version ID of that deleted container, and then passes that ID into the `undeleteContainer` method to restore the container.
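A minimal TypeScript sketch of that approach (in addition to the article's tabbed snippets below), assuming container soft delete is enabled, `DefaultAzureCredential` authentication, and a placeholder account name:

```typescript
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder account name - replace with your storage account
const accountName = "<storage-account-name>";

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);

async function restoreContainer(containerName: string): Promise<void> {
  let containerVersion: string | undefined;

  // List deleted containers matching the name to find the version to restore;
  // if multiple deleted versions exist, the most recent one is listed last
  for await (const item of blobServiceClient.listContainers({
    includeDeleted: true,
    prefix: containerName,
  })) {
    if (item.name === containerName && item.deleted) {
      containerVersion = item.version;
    }
  }

  if (containerVersion === undefined) {
    console.log(`No deleted container named ${containerName} was found`);
    return;
  }

  // Restore the container using its name and version ID
  await blobServiceClient.undeleteContainer(containerName, containerVersion);
  console.log(`${containerName} is restored`);
}

restoreContainer("sample-container").catch(console.error);
```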
- // if there are multiple deleted versions of the same container,
- // the versions are in asc time order
- // the last version is the most recent
- if (containerItem.name === containerName) {
- containerVersion = containerItem.version;
- }
- }
+## [JavaScript](#tab/javascript)
- const containerClient = await blobServiceClient.undeleteContainer(
- containerName,
- containerVersion,
- // optional/new container name - if unused, original container name is used
- //newContainerName
- );
+## [TypeScript](#tab/typescript)
- // undelete was successful
- console.log(`${containerName} is undeleted`);
- // do something with containerClient
- // ...
-}
-```
+ ## Resources To learn more about deleting a container using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/delete-containers.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/containers-delete.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for deleting or restoring a container use the following REST API operations:
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
- [Delete Container](/rest/api/storageservices/delete-container) (REST API) - [Restore Container](/rest/api/storageservices/restore-container) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/delete-containers.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)] ### See also - [Soft delete for containers](soft-delete-container-overview.md) - [Enable and manage soft delete for containers](soft-delete-container-enable.md)+
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
- Title: Delete and restore a blob container with TypeScript-
-description: Learn how to delete and restore a blob container in your Azure Storage account using the JavaScript client library using TypeScript.
---- Previously updated : 08/05/2024---
-# Delete and restore a blob container with TypeScript
--
-This article shows how to delete containers with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers.
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to delete a blob container, or to restore a soft-deleted container. To learn more, see the authorization guidance for the following REST API operations:
- - [Delete Container](/rest/api/storageservices/delete-container#authorization)
- - [Restore Container](/rest/api/storageservices/restore-container#authorization)
-
-## Delete a container
-
-To delete a container in TypeScript, create a BlobServiceClient or ContainerClient then use one of the following methods:
--- BlobServiceClient.[deleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-deletecontainer#@azure-storage-blob-blobserviceclient-deletecontainer)-- ContainerClient.[delete](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-deletecontainer)-- ContainerClient.[deleteIfExists](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-containerclient-deleteifexists)-
-After you delete a container, you can't create a container with the same name for at *least* 30 seconds. Attempting to create a container with the same name will fail with HTTP error code 409 (Conflict). Any other operations on the container or the blobs it contains will fail with HTTP error code 404 (Not Found).
-
-## Delete container with BlobServiceClient
-
-The following example deletes the specified container. Use the BlobServiceClient to delete a container:
--
-## Delete container with ContainerClient
-
-The following example shows how to delete all of the containers whose name starts with a specified prefix using a [ContainerClient](storage-blob-typescript-get-started.md#create-a-containerclient-object).
--
-## Restore a deleted container
-
-When container soft delete is enabled for a storage account, a container and its contents may be recovered after it has been deleted, within a retention period that you specify. You can restore a soft-deleted container using a [BlobServiceClient](storage-blob-typescript-get-started.md#create-a-blobserviceclient-object) object:
--- BlobServiceClient.[undeleteContainer](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-deletecontainert#@azure-storage-blob-blobserviceclient-undeletecontainer)-
-The following example finds a deleted container, gets the version ID of that deleted container, and then passes that ID into the **undeleteContainer** method to restore the container.
--
-## Resources
-
-To learn more about deleting a container using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for deleting or restoring a container use the following REST API operations:
--- [Delete Container](/rest/api/storageservices/delete-container) (REST API)-- [Restore Container](/rest/api/storageservices/restore-container) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/containers-delete.ts)--
-### See also
--- [Soft delete for containers](soft-delete-container-overview.md)-- [Enable and manage soft delete for containers](soft-delete-container-enable.md)
storage Storage Blob Container Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md
Title: Create and manage container leases with JavaScript
+ Title: Create and manage container leases with JavaScript or TypeScript
description: Learn how to manage a lock on a container in your Azure Storage account using the JavaScript client library.
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Create and manage container leases with JavaScript
+# Create and manage container leases with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-lease-container](../../../includes/storage-dev-guides/storage-dev-guide-selector-lease-container.md)]
To acquire a lease, create an instance of the [BlobLeaseClient](/javascript/api/
The following example acquires a 30-second lease for a container:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-container.js" id="Snippet_AcquireContainerLease":::
+## [TypeScript](#tab/typescript)
++++ ## Renew a lease
-You can renew a container lease if the lease ID specified on the request matches the lease ID associated with the container. The lease can be renewed even if it has expired, as long as the container hasn't been leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+You can renew a container lease if the lease ID specified on the request matches the lease ID associated with the container. The lease can be renewed even if it expires, as long as the container hasn't been leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
To renew a lease, use one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
To renew a lease, use one of the following methods on a [BlobLeaseClient](/javas
The following example renews a container lease:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-container.js" id="Snippet_RenewContainerLease":::
+## [TypeScript](#tab/typescript)
++++ ## Release a lease You can release a container lease if the lease ID specified on the request matches the lease ID associated with the container. Releasing a lease allows another client to acquire a lease for the container immediately after the release is complete.
You can release a lease using one of the following methods on a [BlobLeaseClient
The following example releases a lease on a container:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-container.js" id="Snippet_ReleaseContainerLease":::
+## [TypeScript](#tab/typescript)
++++ ## Break a lease You can break a container lease if the container has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired until the original lease expires or is released.
You can break a lease using one of the following methods on a [BlobLeaseClient](
The following example breaks a lease on a container:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-container.js" id="Snippet_BreakContainerLease":::
+## [TypeScript](#tab/typescript)
++++ [!INCLUDE [storage-dev-guide-container-lease](../../../includes/storage-dev-guides/storage-dev-guide-container-lease.md)] ## Resources To learn more about managing container leases using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-container.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/lease-container.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing container leases use the following REST API operation: - [Lease Container](/rest/api/storageservices/lease-container)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-container.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)] ### See also -- [Managing Concurrency in Blob storage](concurrency-manage.md)
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
+
storage Storage Blob Container Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md
- Title: Create and manage container leases with TypeScript-
-description: Learn how to manage a lock on a container in your Azure Storage account with TypeScript using the JavaScript client library.
------ Previously updated : 08/05/2024---
-# Create and manage container leases with TypeScript
--
-This article shows how to create and manage container leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break container leases.
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with a container lease. To learn more, see the authorization guidance for the following REST API operation:
- - [Lease Container](/rest/api/storageservices/lease-container#authorization)
-
-## About container leases
--
-Lease operations are handled by the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about blob leases using the client library, see [Create and manage blob leases with TypeScript](storage-blob-lease-typescript.md).
-
-## Acquire a lease
-
-When you acquire a container lease, you obtain a lease ID that your code can use to operate on the container. If the container already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
-
-To acquire a lease, create an instance of the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, and then use one of the following methods:
--- [acquireLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-acquirelease)-
-The following example acquires a 30-second lease for a container:
--
-## Renew a lease
-
-You can renew a container lease if the lease ID specified on the request matches the lease ID associated with the container. The lease can be renewed even if it has expired, as long as the container hasn't been leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
-
-To renew a lease, use one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
--- [renewLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-renewlease)-
-The following example renews a container lease:
--
-## Release a lease
-
-You can release a container lease if the lease ID specified on the request matches the lease ID associated with the container. Releasing a lease allows another client to acquire a lease for the container immediately after the release is complete.
-
-You can release a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
--- [releaseLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-releaselease)-
-The following example releases a lease on a container:
--
-## Break a lease
-
-You can break a container lease if the container has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired until the original lease expires or is released.
-
-You can break a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
--- [breakLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-breaklease)-
-The following example breaks a lease on a container:
---
-## Resources
-
-To learn more about managing container leases using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing container leases use the following REST API operation:
--- [Lease Container](/rest/api/storageservices/lease-container)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/lease-container.ts)--
-### See also
--- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
Title: Use JavaScript to manage properties and metadata for a blob container
+ Title: Use JavaScript or TypeScript to manage properties and metadata for a blob container
description: Learn how to set and retrieve system properties and store custom metadata on blob containers in your Azure Storage account using the JavaScript client library.
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Manage container properties and metadata with JavaScript
+# Manage container properties and metadata with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-manage-properties-container](../../../includes/storage-dev-guides/storage-dev-guide-selector-manage-properties-container.md)]
Blob containers support system properties and user-defined metadata, in addition
## Retrieve container properties
-To retrieve container properties, create a [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) object then use the following method:
+To retrieve container properties, use the following method:
-- ContainerClient.[getProperties](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-getproperties) (returns [ContainerProperties](/javascript/api/@azure/storage-blob/containerproperties))
+- [ContainerClient.getProperties](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-getproperties)
+
+The following code example fetches a container's properties and writes some property values to a console window:
-The following code example fetches a container's properties and writes the property values to a console window:
+## [JavaScript](#tab/javascript)
-```javascript
-async function getContainerProperties(containerClient) {
- // Get Properties including existing metadata
- const containerProperties = await containerClient.getProperties();
- if(!containerProperties.errorCode){
- console.log(containerProperties);
- }
-}
-```
+## [TypeScript](#tab/typescript)
+++ ## Set and retrieve metadata You can specify metadata as one or more name-value pairs on a container resource. To set metadata, create a [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) object, then use the following method: -- ContainerClient.[setMetadata](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-setmetadata)
+- [ContainerClient.setMetadata](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-setmetadata)
-The following code example sets metadata on a container.
+The following code example sets metadata on a container:
-```javascript
-/*
-const metadata = {
- // values must be strings
- lastFileReview: currentDate.toString(),
- reviewer: `johnh`
-}
-*/
-async function setContainerMetadata(containerClient, metadata) {
+## [JavaScript](#tab/javascript)
- await containerClient.setMetadata(metadata);
-}
-```
+## [TypeScript](#tab/typescript)
-To retrieve metadata, [get the container properties](#retrieve-container-properties) then use the returned **metadata** property.
-- [ContainerClient.getProperties](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-getproperties)++
+To retrieve metadata, [get the container properties](#retrieve-container-properties) then use the returned **metadata** property.
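Alongside the article's tabbed snippets, here's a rough TypeScript sketch that sets metadata on a container and reads it back via `getProperties`. The account name, credential setup, container name, and metadata values are illustrative assumptions:

```typescript
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder account name - replace with your storage account
const accountName = "<storage-account-name>";

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);

async function setAndReadMetadata(containerName: string): Promise<void> {
  const containerClient = blobServiceClient.getContainerClient(containerName);

  // Metadata values must be strings
  await containerClient.setMetadata({
    lastFileReview: new Date().toISOString(),
    reviewer: "johnh",
  });

  // getProperties returns system properties plus the metadata just set
  const properties = await containerClient.getProperties();
  console.log(`Last modified: ${properties.lastModified}`);
  console.log(`Metadata: ${JSON.stringify(properties.metadata)}`);
}

setAndReadMetadata("sample-container").catch(console.error);
```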
## Resources To learn more about setting and retrieving container properties and metadata using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/container-set-properties-and-metadata.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/container-set-properties-and-metadata.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for setting and retrieving properties and metadata use the following REST API operations:
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
The `getProperties` method retrieves container properties and metadata by calling both the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation and the [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) operation.
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/container-set-properties-and-metadata.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)]
storage Storage Blob Container Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md
- Title: Use TypeScript to manage properties and metadata for a blob container-
-description: Learn how to set and retrieve system properties and store custom metadata on blob containers in your Azure Storage account using the JavaScript client library using TypeScript.
------ Previously updated : 08/05/2024----
-# Manage container properties and metadata with TypeScript
--
-Blob containers support system properties and user-defined metadata, in addition to the data they contain. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with container properties or metadata. To learn more, see the authorization guidance for the following REST API operations:
- - [Get Container Properties](/rest/api/storageservices/get-container-properties#authorization)
- - [Set Container Metadata](/rest/api/storageservices/set-container-metadata#authorization)
- - [Get Container Metadata](/rest/api/storageservices/get-container-metadata#authorization)
-
-## About properties and metadata
--- **System properties**: System properties exist on each Blob storage resource. Some of them can be read or set, while others are read-only. Under the covers, some system properties correspond to certain standard HTTP headers. The Azure Storage client library for JavaScript maintains these properties for you.--- **User-defined metadata**: User-defined metadata consists of one or more name-value pairs that you specify for a Blob storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and don't affect how the resource behaves.-
- Metadata name/value pairs are valid HTTP headers and should adhere to all restrictions governing HTTP headers. For more information about metadata naming requirements, see [Metadata names](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#metadata-names).
-
-## Retrieve container properties
-
-To retrieve container properties, create a [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) object then use the following method:
--- ContainerClient.[getProperties](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-getproperties) (returns [ContainerProperties](/javascript/api/@azure/storage-blob/containerproperties))-
-The following code example fetches a container's properties and writes the property values to a console window:
--
-## Set and retrieve metadata
-
-You can specify metadata as one or more name-value pairs container resource. To set metadata, create a [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) object then use the following method:
--- ContainerClient.[setMetadata](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-setmetadata)-
-The following code example sets metadata on a container.
--
-To retrieve metadata, [get the container properties](#retrieve-container-properties) then use the returned **metadata** property.
--- [ContainerClient.getProperties](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-getproperties)-
-## Resources
-
-To learn more about setting and retrieving container properties and metadata using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for setting and retrieving properties and metadata use the following REST API operations:
--- [Get Container Properties](/rest/api/storageservices/get-container-properties) (REST API)-- [Set Container Metadata](/rest/api/storageservices/set-container-metadata) (REST API)-- [Get Container Metadata](/rest/api/storageservices/get-container-metadata) (REST API)-
-The `getProperties` method retrieves container properties and metadata by calling both the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation and the [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) operation.
-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/container-set-properties-and-metadata.js)-
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
Title: List blob containers with JavaScript
+ Title: List blob containers with JavaScript or TypeScript
description: Learn how to list blob containers in your Azure Storage account using the JavaScript client library.
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# List blob containers with JavaScript
+# List blob containers with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-list-container](../../../includes/storage-dev-guides/storage-dev-guide-selector-list-container.md)]
-When you list the containers in an Azure Storage account from your code, you can specify a number of options to manage how results are returned from Azure Storage. This article shows how to list containers using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
+When you list the containers in an Azure Storage account from your code, you can specify several options to manage how results are returned from Azure Storage. This article shows how to list containers using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
## Prerequisites
When you list the containers in an Azure Storage account from your code, you can
## About container listing options
-To list containers in your storage account, create a [BlobServiceClient](storage-blob-javascript-get-started.md#create-a-blobserviceclient-object) object then call the following method:
+When listing containers from your code, you can specify options to manage how results are returned from Azure Storage. You can limit the number of results in each set and then retrieve the subsequent sets, filter results by a prefix, and return container metadata with the results. These options are described in the following sections.
-- BlobServiceClient.[listContainers](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-listcontainers)
+To list containers in your storage account, call the following method:
-### List containers with optional prefix
+- [BlobServiceClient.listContainers](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-listcontainers)
-By default, a listing operation returns up to 5000 results at a time.
+This method returns a list of [ContainerItem](/javascript/api/@azure/storage-blob/containeritem) objects. Containers are ordered lexicographically by name.
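As a rough sketch, assuming you already have an authorized `BlobServiceClient` object (the helper name is illustrative), iterating the returned results might look like this:

```javascript
// Minimal sketch: assumes an authorized BlobServiceClient created elsewhere
async function listContainerNames(blobServiceClient) {
  // listContainers returns an async iterator of ContainerItem objects
  for await (const containerItem of blobServiceClient.listContainers()) {
    console.log(containerItem.name);
  }
}
```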
-The BlobServiceClient.[listContainers](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-listcontainers) returns a list of [ContainerItem](/javascript/api/@azure/storage-blob/containeritem) objects. Use the containerItem.name to create a [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) in order to get a more complete [ContainerProperties](/javascript/api/@azure/storage-blob/containerproperties) object.
+### Manage how many results are returned
-```javascript
-async function listContainers(blobServiceClient, containerNamePrefix) {
-
- const options = {
- includeDeleted: false,
- includeMetadata: true,
- includeSystem: true,
- prefix: containerNamePrefix
- }
-
- for await (const containerItem of blobServiceClient.listContainers(options)) {
-
- // ContainerItem
- console.log(`For-await list: ${containerItem.name}`);
-
- // ContainerClient
- const containerClient = blobServiceClient.getContainerClient(containerItem.name);
-
- // ... do something with container
- }
-}
-```
-
-## List containers with paging
-
-To return a smaller set of results, provide a nonzero value for the size of the page of results to return.
-
-If your storage account contains more than 5000 containers, or if you have specified a page size such that the listing operation returns a subset of containers in the storage account, then Azure Storage returns a *continuation token* with the list of containers. A continuation token is an opaque value that you can use to retrieve the next set of results from Azure Storage.
-
-In your code, check the value of the continuation token to determine whether it is empty. When the continuation token is empty, then the set of results is complete. If the continuation token is not empty, then call the listing method again, passing in the continuation token to retrieve the next set of results, until the continuation token is empty.
-
-```javascript
-async function listContainersWithPagingMarker(blobServiceClient) {
-
- // add prefix to filter list
- const containerNamePrefix = '';
-
- // page size
- const maxPageSize = 2;
-
- const options = {
- includeDeleted: false,
- includeMetadata: true,
- includeSystem: true,
- prefix: containerNamePrefix
- }
-
- let i = 1;
- let marker;
- let iterator = blobServiceClient.listContainers(options).byPage({ maxPageSize });
- let response = (await iterator.next()).value;
-
- // Prints 2 container names
- if (response.containerItems) {
- for (const container of response.containerItems) {
- console.log(`IteratorPaged: Container ${i++}: ${container.name}`);
- }
- }
-
- // Gets next marker
- marker = response.continuationToken;
-
- // Passing next marker as continuationToken
- iterator = blobServiceClient.listContainers().byPage({ continuationToken: marker, maxPageSize: maxPageSize * 2 });
- response = (await iterator.next()).value;
-
- // Print next 4 container names
- if (response.containerItems) {
- for (const container of response.containerItems) {
- console.log(`Container ${i++}: ${container.name}`);
- }
- }
-}
-```
-
-Use the options parameter to the **listContainers** method to filter results with a prefix.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
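One possible way to page through results is with `byPage`; this sketch assumes an authorized `BlobServiceClient`, and the page size value is illustrative:

```javascript
// Minimal sketch: assumes an authorized BlobServiceClient; the page size is illustrative
async function listContainersInPages(blobServiceClient, maxPageSize = 10) {
  // byPage yields one page of results at a time and follows the
  // continuation token between pages for you
  for await (const page of blobServiceClient.listContainers().byPage({ maxPageSize })) {
    for (const containerItem of page.containerItems ?? []) {
      console.log(containerItem.name);
    }
  }
}
```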
### Filter results with a prefix
-To filter the list of containers, specify a string for the **prefix** property. The prefix string can include one or more characters. Azure Storage then returns only the containers whose names start with that prefix.
+To filter the list of containers, specify a string for the `prefix` parameter in [ServiceListContainersOptions](/javascript/api/@azure/storage-blob/servicelistcontainersoptions). The prefix string can include one or more characters. Azure Storage then returns only the containers whose names start with that prefix.
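A minimal illustration of prefix filtering, assuming an authorized `BlobServiceClient` (the helper name and prefix value are illustrative):

```javascript
// Minimal sketch: assumes an authorized BlobServiceClient; the prefix is illustrative
async function listContainersByPrefix(blobServiceClient, containerNamePrefix) {
  const options = {
    prefix: containerNamePrefix // for example, 'sample-'
  };

  for await (const containerItem of blobServiceClient.listContainers(options)) {
    console.log(containerItem.name);
  }
}
```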
-```javascript
-async function listContainers(blobServiceClient, containerNamePrefix) {
+### Include container metadata
- const options = {
- includeDeleted: false,
- includeMetadata: true,
- includeSystem: true,
+To include container metadata with the results, set the `includeMetadata` parameter to `true` in [ServiceListContainersOptions](/javascript/api/@azure/storage-blob/servicelistcontainersoptions). Azure Storage includes metadata with each container returned, so you don't need to fetch the container metadata separately.
- // filter with prefix
- prefix: containerNamePrefix
- }
+### Include deleted containers
- for await (const containerItem of blobServiceClient.listContainers(options)) {
+To include soft-deleted containers with the results, set the `includeDeleted` parameter in [ServiceListContainersOptions](/javascript/api/@azure/storage-blob/servicelistcontainersoptions).
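The following sketch combines the metadata and soft-delete options from the preceding sections; it assumes an authorized `BlobServiceClient`, and the helper name is illustrative:

```javascript
// Minimal sketch: assumes an authorized BlobServiceClient; the helper name is illustrative
async function listContainersWithDetails(blobServiceClient) {
  const options = {
    includeMetadata: true, // return user-defined metadata with each container
    includeDeleted: true   // include soft-deleted containers in the results
  };

  for await (const containerItem of blobServiceClient.listContainers(options)) {
    // metadata is populated when includeMetadata is true;
    // deleted is true for soft-deleted containers
    console.log(containerItem.name, containerItem.deleted === true, containerItem.metadata);
  }
}
```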
- // do something with containerItem
+## Code example: List containers
- }
-}
-```
+The following example asynchronously lists the containers in a storage account that begin with a specified prefix, and returns the specified number of results per call to the listing operation. It then uses the continuation token to get the next segment of results. The example also returns container metadata with the results.
-### Include metadata in results
+## [JavaScript](#tab/javascript)
-To return container metadata with the results, specify the **metadata** value for the BlobContainerTraits enum. Azure Storage includes metadata with each container returned, so you do not need to fetch the container metadata as a separate operation.
-```javascript
-async function listContainers(blobServiceClient, containerNamePrefix) {
+## [TypeScript](#tab/typescript)
- const options = {
- includeDeleted: false,
- includeSystem: true,
- prefix: containerNamePrefix,
- // include metadata
- includeMetadata: true,
- }
-
- for await (const containerItem of blobServiceClient.listContainers(options)) {
-
- // do something with containerItem
-
- }
-}
-```
+ ## Resources
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/list-containers.js)
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/list-containers.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/list-containers.ts) code samples from this article (GitHub)
[!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)] ## See also - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)+
storage Storage Blob Containers List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md
- Title: List blob containers with TypeScript-
-description: Learn how to list blob containers with TypeScript in your Azure Storage account using the JavaScript client library using TypeScript.
------ Previously updated : 08/05/2024----
-# List blob containers with TypeScript
--
-When you list the containers in an Azure Storage account from your code, you can specify a number of options to manage how results are returned from Azure Storage. This article shows how to list containers using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to list blob containers. To learn more, see the authorization guidance for the following REST API operation:
- - [List Containers](/rest/api/storageservices/list-containers2#authorization)
-
-## About container listing options
-
-To list containers in your storage account, create a [BlobServiceClient](storage-blob-typescript-get-started.md#create-a-blobserviceclient-object) object then call the following method:
--- BlobServiceClient.[listContainers](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-listcontainers)-
-### List containers with optional prefix
-
-By default, a listing operation returns up to 5000 results at a time.
-
-The BlobServiceClient.[listContainers](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-listcontainers) returns a list of [ContainerItem](/javascript/api/@azure/storage-blob/containeritem) objects. Use the containerItem.name to create a [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) in order to get a more complete [ContainerProperties](/javascript/api/@azure/storage-blob/containerproperties) object.
--
-## List containers with paging
-
-To return a smaller set of results, provide a nonzero value for the size of the page of results to return.
-
-If your storage account contains more than 5000 containers, or if you have specified a page size such that the listing operation returns a subset of containers in the storage account, then Azure Storage returns a *continuation token* with the list of containers. A continuation token is an opaque value that you can use to retrieve the next set of results from Azure Storage.
-
-In your code, check the value of the continuation token to determine whether it is empty. When the continuation token is empty, then the set of results is complete. If the continuation token is not empty, then call the listing method again, passing in the continuation token to retrieve the next set of results, until the continuation token is empty.
--
-Use the options parameter to the **listContainers** method to filter results with a prefix.
-
-### Filter results with a prefix
-
-To filter the list of containers, specify a string for the **prefix** property. The prefix string can include one or more characters. Azure Storage then returns only the containers whose names start with that prefix.
-
-```typescript
-async function listContainers(
- blobServiceClient: BlobServiceClient,
- containerNamePrefix: string
-) {
-
- const options: ServiceListContainersOptions = {
- includeDeleted: false,
- includeMetadata: true,
- includeSystem: true,
-
- // filter by prefix
- prefix: containerNamePrefix
- };
-
- for await (const containerItem of blobServiceClient.listContainers(options)) {
--
- // do something with containerItem
-
- }
-}
-```
-
-### Include metadata in results
-
-To return container metadata with the results, specify the **metadata** value for the BlobContainerTraits enum. Azure Storage includes metadata with each container returned, so you do not need to fetch the container metadata as a separate operation.
-
-```typescript
-async function listContainers(
- blobServiceClient: BlobServiceClient,
- containerNamePrefix: string
-) {
-
- const options: ServiceListContainersOptions = {
- includeDeleted: false,
- includeSystem: true,
- prefix: containerNamePrefix,
-
- // include metadata
- includeMetadata: true,
- };
-
- for await (const containerItem of blobServiceClient.listContainers(options)) {
-
- // do something with containerItem
-
- }
-}
-```
-
-## Resources
-
-To learn more about listing containers using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for listing containers use the following REST API operation:
--- [List Containers](/rest/api/storageservices/list-containers2) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/containers-list.ts)--
-## See also
--- [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)
storage Storage Blob Copy Async Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-javascript.md
Title: Copy a blob with asynchronous scheduling using JavaScript
+ Title: Copy a blob with asynchronous scheduling using JavaScript or TypeScript
description: Learn how to copy a blob with asynchronous scheduling in Azure Storage by using the JavaScript client library. Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Copy a blob with asynchronous scheduling using JavaScript
+# Copy a blob with asynchronous scheduling using JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-copy-async](../../../includes/storage-dev-guides/storage-dev-guide-selector-copy-async.md)]
If the copy source is a blob in a different storage account, the operation can c
The following example shows a scenario for copying a source blob from a different storage account with asynchronous scheduling. In this example, we create a source blob URL with an appended user delegation SAS token. The example shows how to generate the SAS token using the client library, but you can also provide your own. The example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js" id="Snippet_copy_from_azure_async":::
+## [TypeScript](#tab/typescript)
++++ > [!NOTE] > User delegation SAS tokens offer greater security, as they're signed with Microsoft Entra credentials instead of an account key. To create a user delegation SAS token, the Microsoft Entra security principal needs appropriate permissions. For authorization requirements, see [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key#authorization).
The following example shows a scenario for copying a source blob from a differen
You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js" id="Snippet_copy_blob_external_source_async":::
+## [TypeScript](#tab/typescript)
++++ ## Check the status of a copy operation To check the status of an asynchronous `Copy Blob` operation, you can poll the [getProperties](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-getproperties) method and check the copy status. The following code example shows how to check the status of a pending copy operation:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js" id="Snippet_check_copy_status_async":::
+## [TypeScript](#tab/typescript)
++++ ## Abort a copy operation Aborting a pending `Copy Blob` operation results in a destination blob of zero length. However, the metadata for the destination blob has the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods.
To abort a pending copy operation, call the following operation:
This method wraps the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) REST API operation, which cancels a pending `Copy Blob` operation. The following code example shows how to abort a pending `Copy Blob` operation:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js" id="Snippet_abort_copy_async":::
+## [TypeScript](#tab/typescript)
++++ ## Resources To learn more about copying blobs with asynchronous scheduling using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/copy-blob.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods covered in this article use the following REST API operations:
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API) - [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)]+
storage Storage Blob Copy Async Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-typescript.md
- Title: Copy a blob with asynchronous scheduling using TypeScript-
-description: Learn how to copy a blob with asynchronous scheduling in Azure Storage by using the client library for JavaScript and TypeScript.
--- Previously updated : 08/05/2024-----
-# Copy a blob with asynchronous scheduling using TypeScript
--
-This article shows how to copy a blob with asynchronous scheduling using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL. You can also abort a pending copy operation.
-
-The client library methods covered in this article use the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and can be used when you want to perform a copy with asynchronous scheduling. For most copy scenarios where you want to move data into a storage account and have a URL for the source object, see [Copy a blob from a source object URL with TypeScript](storage-blob-copy-url-typescript.md).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to perform a copy operation, or to abort a pending copy. To learn more, see the authorization guidance for the following REST API operation:
- - [Copy Blob](/rest/api/storageservices/copy-blob#authorization)
- - [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization)
--
-## Copy a blob with asynchronous scheduling
-
-This section gives an overview of methods provided by the Azure Storage client library for JavaScript and TypeScript to perform a copy operation with asynchronous scheduling.
-
-The following methods wrap the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and begin an asynchronous copy of data from the source blob:
--- [BlobClient.beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl)-
-The `beginCopyFromURL` method returns a long running operation poller that allows you to wait indefinitely until the copy is completed.
-
-## Copy a blob from a source within Azure
-
-If you're copying a blob within the same storage account, the operation can complete synchronously. Access to the source blob can be authorized via Microsoft Entra ID, a shared access signature (SAS), or an account key. For an alterative synchronous copy operation, see [Copy a blob from a source object URL with TypeScript](storage-blob-copy-url-typescript.md).
-
-If the copy source is a blob in a different storage account, the operation can complete asynchronously. The source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
-
-The following example shows a scenario for copying a source blob from a different storage account with asynchronous scheduling. In this example, we create a source blob URL with an appended user delegation SAS token. The example shows how to generate the SAS token using the client library, but you can also provide your own. The example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
--
-> [!NOTE]
-> User delegation SAS tokens offer greater security, as they're signed with Microsoft Entra credentials instead of an account key. To create a user delegation SAS token, the Microsoft Entra security principal needs appropriate permissions. For authorization requirements, see [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key#authorization).
-
-## Copy a blob from a source outside of Azure
-
-You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
--
-## Check the status of a copy operation
-
-To check the status of an asynchronous `Copy Blob` operation, you can poll the [getProperties](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-getproperties) method and check the copy status.
-
-The following code example shows how to check the status of a pending copy operation:
--
-## Abort a copy operation
-
-Aborting a pending `Copy Blob` operation results in a destination blob of zero length. However, the metadata for the destination blob has the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods.
-
-To abort a pending copy operation, call the following operation:
--- [BlobClient.abortCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-abortcopyfromurl)-
-This method wraps the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) REST API operation, which cancels a pending `Copy Blob` operation. The following code example shows how to abort a pending `Copy Blob` operation:
--
-## Resources
-
-To learn more about copying blobs with asynchronous scheduling using the Azure Blob Storage client library for JavaScript and TypeScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript and TypeScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar language paradigms. The client library methods covered in this article use the following REST API operations:
--- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)-- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/copy-blob.ts)-
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
Title: Copy a blob with JavaScript
+ Title: Copy a blob with JavaScript or TypeScript
description: Learn how to copy a blob in Azure Storage by using the JavaScript client library. Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Copy a blob with JavaScript
+# Copy a blob with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-copy](../../../includes/storage-dev-guides/storage-dev-guide-selector-copy.md)]
Copy operations can be used to move data within a storage account, between stora
| REST API operation | When to use | Client library methods | Guidance | | | | | |
-| [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) | This operation is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. This operation completes synchronously. | [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) | [Copy a blob from a source object URL with JavaScript](storage-blob-copy-url-javascript.md) |
-| [Put Block From URL](/rest/api/storageservices/put-block-from-url) | For large objects, you can use [Put Block From URL](/rest/api/storageservices/put-block-from-url) to write individual blocks to Blob Storage, and then call [Put Block List](/rest/api/storageservices/put-block-list) to commit those blocks to a block blob. This operation completes synchronously. | [stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl) | [Copy a blob from a source object URL with JavaScript](storage-blob-copy-url-javascript.md) |
-| [Copy Blob](/rest/api/storageservices/copy-blob) | This operation can be used when you want asynchronous scheduling for a copy operation. | [beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl) | [Copy a blob with asynchronous scheduling using JavaScript](storage-blob-copy-async-javascript.md) |
+| [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) | This operation is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. This operation completes synchronously. | [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) | [Copy a blob from a source object URL with JavaScript or TypeScript](storage-blob-copy-url-javascript.md) |
+| [Put Block From URL](/rest/api/storageservices/put-block-from-url) | For large objects, you can use [Put Block From URL](/rest/api/storageservices/put-block-from-url) to write individual blocks to Blob Storage, and then call [Put Block List](/rest/api/storageservices/put-block-list) to commit those blocks to a block blob. This operation completes synchronously. | [stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl) | [Copy a blob from a source object URL with JavaScript or TypeScript](storage-blob-copy-url-javascript.md) |
+| [Copy Blob](/rest/api/storageservices/copy-blob) | This operation can be used when you want asynchronous scheduling for a copy operation. | [beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl) | [Copy a blob with asynchronous scheduling using JavaScript or TypeScript](storage-blob-copy-async-javascript.md) |
For append blobs, you can use the [Append Block From URL](/rest/api/storageservices/append-block-from-url) operation to commit a new block of data to the end of an existing append blob. The following client library method wraps this operation:
For page blobs, you can use the [Put Page From URL](/rest/api/storageservices/pu
- [Client library reference documentation](/javascript/api/@azure/storage-blob) - [Client library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob) - [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)+
storage Storage Blob Copy Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md
- Title: Copy a blob with TypeScript-
-description: Learn how to copy a blob with TypeScript in Azure Storage by using the JavaScript client library.
--- Previously updated : 08/05/2024-----
-# Copy a blob with TypeScript
--
-This article provides an overview of copy operations using the [Azure Storage client library for JavaScript and TypeScript](/javascript/api/overview/azure/storage-blob-readme).
-
-## About copy operations
-
-Copy operations can be used to move data within a storage account, between storage accounts, or into a storage account from a source outside of Azure. When using the Blob Storage client libraries to copy data resources, it's important to understand the REST API operations behind the client library methods. The following table lists REST API operations that can be used to copy data resources to a storage account. The table also includes links to detailed guidance about how to perform these operations using the [Azure Storage client library for JavaScript and TypeScript](/javascript/api/overview/azure/storage-blob-readme).
-
-| REST API operation | When to use | Client library methods | Guidance |
-| | | | |
-| [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) | This operation is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. This operation completes synchronously. | [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) | [Copy a blob from a source object URL with TypeScript](storage-blob-copy-url-typescript.md) |
-| [Put Block From URL](/rest/api/storageservices/put-block-from-url) | For large objects, you can use [Put Block From URL](/rest/api/storageservices/put-block-from-url) to write individual blocks to Blob Storage, and then call [Put Block List](/rest/api/storageservices/put-block-list) to commit those blocks to a block blob. This operation completes synchronously. | [stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl) | [Copy a blob from a source object URL with TypeScript](storage-blob-copy-url-typescript.md) |
-| [Copy Blob](/rest/api/storageservices/copy-blob) | This operation can be used when you want asynchronous scheduling for a copy operation. | [beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl) | [Copy a blob with asynchronous scheduling using TypeScript](storage-blob-copy-async-typescript.md) |
-
-For append blobs, you can use the [Append Block From URL](/rest/api/storageservices/append-block-from-url) operation to commit a new block of data to the end of an existing append blob. The following client library method wraps this operation:
--- [appendBlockFromURL](/javascript/api/@azure/storage-blob/appendblobclient#@azure-storage-blob-appendblobclient-appendblockfromurl)-
-For page blobs, you can use the [Put Page From URL](/rest/api/storageservices/put-page-from-url) operation to write a range of pages to a page blob where the contents are read from a URL. The following client library method wraps this operation:
--- [uploadPagesFromURL](/javascript/api/@azure/storage-blob/pageblobclient#@azure-storage-blob-pageblobclient-uploadpagesfromurl)-
-## Client library resources
--- [Client library reference documentation](/javascript/api/@azure/storage-blob)-- [Client library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob)-- [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)
storage Storage Blob Copy Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-javascript.md
description: Learn how to copy a blob from a source object URL in Azure Storage
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+ # Copy a blob from a source object URL with JavaScript
The following method wraps the [Put Blob From URL](/rest/api/storageservices/put
These methods are preferred for scenarios where you want to move data into a storage account and have a URL for the source object.
-For large objects, you may choose to work with individual blocks. The following method wraps the [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operation. This method creates a new block to be committed as part of a blob where the contents are read from a source URL:
+For large objects, you might choose to work with individual blocks. The following method wraps the [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operation. This method creates a new block to be committed as part of a blob where the contents are read from a source URL:
- [BlockBlobClient.stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl)
If you're copying a blob from a source within Azure, access to the source blob c
The following example shows a scenario for copying from a source blob within Azure:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-put-from-url.js" id="Snippet_copy_from_azure_put_blob_from_url":::
+## [TypeScript](#tab/typescript)
++++ The [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) method can also accept a [BlockBlobSyncUploadFromURLOptions](/javascript/api/@azure/storage-blob/blockblobsyncuploadfromurloptions) parameter to specify further options for the operation. ## Copy a blob from a source outside of Azure You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-put-from-url.js" id="Snippet_copy_from_external_source_put_blob_from_url":::
+## [TypeScript](#tab/typescript)
++++ ## Resources To learn more about copying blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-put-from-url.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/copy-blob-put-from-url.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods covered in this article use the following REST API operations:
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API) - [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-put-from-url.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)]+
storage Storage Blob Copy Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-typescript.md
- Title: Copy a blob from a source object URL with TypeScript-
-description: Learn how to copy a blob from a source object URL in Azure Storage by using the client library for JavaScript and TypeScript.
--- Previously updated : 08/05/2024-----
-# Copy a blob from a source object URL with TypeScript
--
-This article shows how to copy a blob from a source object URL using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL.
-
-The client library methods covered in this article use the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) and [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operations. These methods are preferred for copy scenarios where you want to move data into a storage account and have a URL for the source object. For copy operations where you want asynchronous scheduling, see [Copy a blob with asynchronous scheduling using TypeScript](storage-blob-copy-async-typescript.md).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operation:
- - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url#authorization)
- - [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization)
--
-## Copy a blob from a source object URL
-
-This section gives an overview of methods provided by the Azure Storage client library for JavaScript and TypeScript to perform a copy operation from a source object URL.
-
-The following method wraps the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) REST API operation, and creates a new block blob where the contents of the blob are read from a given URL:
--- [BlockBlobClient.syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl)-
-These methods are preferred for scenarios where you want to move data into a storage account and have a URL for the source object.
-
-For large objects, you may choose to work with individual blocks. The following method wraps the [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operation. This method creates a new block to be committed as part of a blob where the contents are read from a source URL:
--- [BlockBlobClient.stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl)-
-## Copy a blob from a source within Azure
-
-If you're copying a blob from a source within Azure, access to the source blob can be authorized via Microsoft Entra ID, a shared access signature (SAS), or an account key.
-
-The following example shows a scenario for copying from a source blob within Azure:
--
-The [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) method can also accept a [BlockBlobSyncUploadFromURLOptions](/javascript/api/@azure/storage-blob/blockblobsyncuploadfromurloptions) parameter to specify further options for the operation.
-
-## Copy a blob from a source outside of Azure
-
-You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
--
-## Resources
-
-To learn more about copying blobs using the Azure Blob Storage client library for JavaScript and TypeScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript and TypeScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar language paradigms. The client library methods covered in this article use the following REST API operations:
--- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)-- [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/copy-blob-put-from-url.ts)-
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
Because anyone with the SAS token can use it to access the container and blobs,
## Use the DefaultAzureCredential in Azure Cloud
-To authenticate to Azure, _without secrets_, set up **managed identity**. This allows your code to use [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential).
+To authenticate to Azure, _without secrets_, set up **managed identity**. This approach allows your code to use [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential).
To set up managed identity for the Azure cloud:
To set up managed identity for the Azure cloud:
* Set the appropriate [Storage roles](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac) for the identity * Configure your Azure environment to work with your managed identity
-When these two tasks are complete, use the DefaultAzureCredential instead of a connection string or account key. This allows all your environments to use the _exact same source code_ without the issue of using secrets in source code.
+When these two tasks are complete, use the DefaultAzureCredential instead of a connection string or account key. This approach allows all your environments to use the _exact same source code_ without the issue of using secrets in source code.
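As a minimal sketch of this approach (the account name placeholder is yours to supply), creating a `BlobServiceClient` with `DefaultAzureCredential` might look like this:

```javascript
const { BlobServiceClient } = require("@azure/storage-blob");
const { DefaultAzureCredential } = require("@azure/identity");

// Replace with your storage account name; no connection string or account key is needed
const accountName = "<storage-account-name>";

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);
```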
## Use the DefaultAzureCredential in local development
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
Title: Delete and restore a blob with JavaScript
+ Title: Delete and restore a blob with JavaScript or TypeScript
description: Learn how to delete and restore a blob in your Azure Storage account using the JavaScript client library Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Delete and restore a blob with JavaScript
+# Delete and restore a blob with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-delete-blob](../../../includes/storage-dev-guides/storage-dev-guide-selector-delete-blob.md)]
This article shows how to delete blobs with the [Azure Storage client library fo
[!INCLUDE [storage-dev-guide-delete-blob-note](../../../includes/storage-dev-guides/storage-dev-guide-delete-blob-note.md)]
-To delete a blob, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object) then call either of these methods:
+To delete a blob, call one of the following methods:
- [BlobClient.delete](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-delete) - [BlobClient.deleteIfExists](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-deleteifexists)
-The following example deletes a blob.
+If the blob has any associated snapshots, you must delete all of its snapshots to delete the blob. The following code example shows how to delete a blob and its snapshots:
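A minimal sketch, assuming an existing `ContainerClient` and an illustrative helper name, might look like the following; the `deleteSnapshots` option controls whether the base blob, its snapshots, or both are deleted:

```javascript
// Minimal sketch: assumes an existing ContainerClient; the helper name is illustrative
async function deleteBlobAndSnapshots(containerClient, blobName) {
  const blobClient = containerClient.getBlobClient(blobName);

  // 'include' deletes the base blob and its snapshots;
  // 'only' deletes just the snapshots
  await blobClient.delete({ deleteSnapshots: 'include' });
}
```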
-```javascript
-async function deleteBlob(containerClient, blobName){
+## [JavaScript](#tab/javascript)
- // include: Delete the base blob and all of its snapshots.
- // only: Delete only the blob's snapshots and not the blob itself.
- const options = {
- deleteSnapshots: 'include' // or 'only'
- }
- // Create blob client from container client
- const blockBlobClient = await containerClient.getBlockBlobClient(blobName);
+## [TypeScript](#tab/typescript)
- await blockBlobClient.delete(options);
- console.log(`deleted blob ${blobName}`);
-
-}
-```
-
-The following example deletes a blob if it exists.
-
-```javascript
-async function deleteBlobIfItExists(containerClient, blobName){
-
- // include: Delete the base blob and all of its snapshots.
- // only: Delete only the blob's snapshots and not the blob itself.
- const options = {
- deleteSnapshots: 'include' // or 'only'
- }
-
- // Create blob client from container client
- const blockBlobClient = await containerClient.getBlockBlobClient(blobName);
-
- await blockBlobClient.deleteIfExists(options);
-
- console.log(`deleted blob ${blobName}`);
-
-}
-```
+ ## Restore a deleted blob
-Blob soft delete protects an individual blob and its versions, snapshots, and metadata from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore the blob to its state at deletion. After the retention period has expired, the blob is permanently deleted. For more information about blob soft delete, see [Soft delete for blobs](soft-delete-blob-overview.md).
+Blob soft delete protects an individual blob and its versions, snapshots, and metadata from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore the blob to its state at deletion. After the retention period expires, the blob is permanently deleted. For more information about blob soft delete, see [Soft delete for blobs](soft-delete-blob-overview.md).
You can use the Azure Storage client libraries to restore a soft-deleted blob or snapshot. #### Restore soft-deleted objects when versioning is disabled
-To restore deleted blobs, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object) then call the following method:
+To restore soft-deleted blobs, call the following method:
- [BlobClient.undelete](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-undelete)
-This method restores soft-deleted blobs and any deleted snapshots associated with it. Calling this method for a blob that has not been deleted has no effect.
+This method restores a soft-deleted blob and any deleted snapshots associated with it. Calling this method for a blob that hasn't been deleted has no effect.
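A minimal sketch of restoring a soft-deleted blob, assuming an existing `ContainerClient` (the helper name is illustrative):

```javascript
// Minimal sketch: assumes an existing ContainerClient; the helper name is illustrative
async function restoreDeletedBlob(containerClient, blobName) {
  const blobClient = containerClient.getBlobClient(blobName);

  // Restores the soft-deleted blob and any associated deleted snapshots
  await blobClient.undelete();
}
```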
-```javascript
-async function undeleteBlob(containerClient, blobName){
+## [JavaScript](#tab/javascript)
- // Create blob client from container client
- const blockBlobClient = await containerClient.getBlockBlobClient(blobName);
- await blockBlobClient.undelete();
+## [TypeScript](#tab/typescript)
- console.log(`undeleted blob ${blobName}`);
-}
-```
+ ## Resources To learn more about how to delete blobs and restore deleted blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/delete-blob.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/delete-blob.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for deleting blobs and restoring deleted blobs use the following REST API operations:
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
- [Delete Blob](/rest/api/storageservices/delete-blob) (REST API) - [Undelete Blob](/rest/api/storageservices/undelete-blob) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/delete-blob.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)] ### See also - [Soft delete for blobs](soft-delete-blob-overview.md) - [Blob versioning](versioning-overview.md)+
storage Storage Blob Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md
- Title: Delete and restore a blob with TypeScript-
-description: Learn how to delete and restore a blob with TypeScript in your Azure Storage account using the JavaScript client library.
--- Previously updated : 08/12/2024-----
-# Delete and restore a blob with TypeScript
--
-This article shows how to delete blobs with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob), and how to restore [soft-deleted](soft-delete-blob-overview.md) blobs during the retention period.
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to delete a blob, or to restore a soft-deleted blob. To learn more, see the authorization guidance for the following REST API operations:
- - [Delete Blob](/rest/api/storageservices/delete-blob#authorization)
- - [Undelete Blob](/rest/api/storageservices/undelete-blob#authorization)
-
-## Delete a blob
--
-To delete a blob, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then call either of these methods:
--- [BlobClient.delete](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-delete)-- [BlobClient.deleteIfExists](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-deleteifexists)-
-The following example deletes a blob.
---
-The following example deletes a blob if it exists.
---
-## Restore a deleted blob
-
-Blob soft delete protects an individual blob and its versions, snapshots, and metadata from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore the blob to its state at deletion. After the retention period has expired, the blob is permanently deleted. For more information about blob soft delete, see [Soft delete for blobs](soft-delete-blob-overview.md).
-
-You can use the Azure Storage client libraries to restore a soft-deleted blob or snapshot.
-
-#### Restore soft-deleted objects when versioning is disabled
-
-To restore deleted blobs, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then call the following method:
--- [BlobClient.undelete](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-undelete)-
-This method restores soft-deleted blobs and any deleted snapshots associated with it. Calling this method for a blob that has not been deleted has no effect.
---
-## Resources
-
-To learn more about how to delete blobs and restore deleted blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for deleting blobs and restoring deleted blobs use the following REST API operations:
--- [Delete Blob](/rest/api/storageservices/delete-blob) (REST API)-- [Undelete Blob](/rest/api/storageservices/undelete-blob) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-delete.ts)--
-### See also
--- [Soft delete for blobs](soft-delete-blob-overview.md)-- [Blob versioning](versioning-overview.md)
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
Title: Download a blob with JavaScript
+ Title: Download a blob with JavaScript or TypeScript
description: Learn how to download a blob in Azure Storage by using the JavaScript client library. Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Download a blob with JavaScript
+# Download a blob with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-download](../../../includes/storage-dev-guides/storage-dev-guide-selector-download.md)]
You can use any of the following methods to download a blob:
The following example downloads a blob by using a file path with the [BlobClient.downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) method. This method is only available in the Node.js runtime:
-```javascript
-async function downloadBlobToFile(containerClient, blobName, fileNameWithPath) {
+## [JavaScript](#tab/javascript)
- const blobClient = containerClient.getBlobClient(blobName);
-
- await blobClient.downloadToFile(fileNameWithPath);
- console.log(`download of ${blobName} success`);
-}
-```
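A minimal JavaScript sketch of a file path download, assuming an authorized `containerClient`; `blobName` and `localFilePath` are placeholders:

```javascript
// Download blob contents to a local file (Node.js runtime only).
async function downloadBlobToFile(containerClient, blobName, localFilePath) {
  const blobClient = containerClient.getBlobClient(blobName);
  await blobClient.downloadToFile(localFilePath);
  console.log(`Downloaded ${blobName} to ${localFilePath}`);
}
```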
+
+## [TypeScript](#tab/typescript)
+++ ## Download as a stream The following example downloads a blob by creating a Node.js writable stream object and then piping to that stream with the [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download) method.
-```javascript
-async function downloadBlobAsStream(containerClient, blobName, writableStream) {
+## [JavaScript](#tab/javascript)
- const blobClient = containerClient.getBlobClient(blobName);
- const downloadResponse = await blobClient.download();
+## [TypeScript](#tab/typescript)
- downloadResponse.readableStreamBody.pipe(writableStream);
- console.log(`download of ${blobName} succeeded`);
-}
-```
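A brief JavaScript sketch of a streaming download, assuming an authorized `containerClient`; the destination path is a placeholder:

```javascript
const fs = require("fs");

// Pipe blob contents into a Node.js writable stream.
async function downloadBlobAsStream(containerClient, blobName, destinationPath) {
  const blobClient = containerClient.getBlobClient(blobName);
  const downloadResponse = await blobClient.download();
  const writableStream = fs.createWriteStream(destinationPath);
  downloadResponse.readableStreamBody.pipe(writableStream);
}
```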
++ ## Download to a string The following Node.js example downloads a blob to a string with the [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download) method. In Node.js, blob data is returned in a `readableStreamBody`.
-```javascript
-
-async function downloadBlobToString(containerClient, blobName) {
+## [JavaScript](#tab/javascript)
- const blobClient = containerClient.getBlobClient(blobName);
- const downloadResponse = await blobClient.download();
+## [TypeScript](#tab/typescript)
- const downloaded = await streamToBuffer(downloadResponse.readableStreamBody);
- console.log('Downloaded blob content:', downloaded.toString());
-}
-async function streamToBuffer(readableStream) {
- return new Promise((resolve, reject) => {
- const chunks = [];
- readableStream.on('data', (data) => {
- chunks.push(data instanceof Buffer ? data : Buffer.from(data));
- });
- readableStream.on('end', () => {
- resolve(Buffer.concat(chunks));
- });
- readableStream.on('error', reject);
- });
-}
-```
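A short JavaScript sketch of a download to a string, assuming an authorized `containerClient` and a Node.js runtime where the data arrives as `readableStreamBody`:

```javascript
// Read blob contents into a string by buffering the readable stream.
async function downloadBlobToString(containerClient, blobName) {
  const blobClient = containerClient.getBlobClient(blobName);
  const downloadResponse = await blobClient.download();

  const chunks = [];
  for await (const chunk of downloadResponse.readableStreamBody) {
    chunks.push(chunk instanceof Buffer ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks).toString();
}
```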
+ If you're working with JavaScript in the browser, blob data is returned in a promise, [blobBody](/javascript/api/@azure/storage-blob/blobdownloadresponseparsed#@azure-storage-blob-blobdownloadresponseparsed-blobbody). To learn more, see the example usage for browsers at [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download).
If you're working with JavaScript in the browser, blob data returns in a promise
To learn more about how to download blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+View code samples from this article (GitHub):
+
+- Download to file for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-file.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/src/download-blob-to-file.ts)
+- Download to stream for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-stream.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/src/download-blob-to-stream.ts)
+- Download to string for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-string.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/src/download-blob-to-string.ts)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for downloading blobs use the following REST API operation: - [Get Blob](/rest/api/storageservices/get-blob) (REST API)
-### Code samples
-
-View code samples from this article (GitHub):
-- [Download to file](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-file.js)-- [Download to stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-stream.js)-- [Download to string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-string.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)]+
storage Storage Blob Download Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md
- Title: Download a blob with TypeScript-
-description: Learn how to download a blob with TypeScript in Azure Storage by using the client library for JavaScript and TypeScript.
---- Previously updated : 08/05/2024-----
-# Download a blob with TypeScript
--
-This article shows how to download a blob using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can download blob data to various destinations, including a local file path, stream, or text string.
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to perform a download operation. To learn more, see the authorization guidance for the following REST API operation:
- - [Get Blob](/rest/api/storageservices/get-blob#authorization)
-
-## Download a blob
-
-You can use any of the following methods to download a blob:
--- [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download)-- [BlobClient.downloadToBuffer](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtobuffer-1) (only available in Node.js runtime)-- [BlobClient.downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) (only available in Node.js runtime)
-
-## Download to a file path
-
-The following example downloads a blob by using a file path with the [BlobClient.downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) method. This method is only available in the Node.js runtime:
---
-## Download as a stream
-
-The following example downloads a blob by creating a Node.js writable stream object and then piping to that stream with the [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download) method.
---
-## Download to a string
-
-The following Node.js example downloads a blob to a string with [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download) method. In Node.js, blob data returns in a `readableStreamBody`.
---
-If you're working with JavaScript in the browser, blob data returns in a promise [blobBody](/javascript/api/@azure/storage-blob/blobdownloadresponseparsed#@azure-storage-blob-blobdownloadresponseparsed-blobbody). To learn more, see the example usage for browsers at [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download).
-
-## Resources
-
-To learn more about how to download blobs using the Azure Blob Storage client library for JavaScript and TypeScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript and TypeScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar language paradigms. The client library methods for downloading blobs use the following REST API operation:
--- [Get Blob](/rest/api/storageservices/get-blob) (REST API)-
-### Code samples
-
-View code samples from this article (GitHub):
-- [Download to file](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-file.js)-- [Download to stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-stream.js)-- [Download to string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-string.js)-
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
Title: Get container and blob url with JavaScript
+ Title: Get container and blob URL with JavaScript or TypeScript
description: Learn how to get a container or blob URL in Azure Storage by using the JavaScript client library. Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Get URL for container or blob with JavaScript
+# Get a URL for a container or blob with JavaScript or TypeScript
You can get a container or blob URL by using the `url` property of the client object: -- ContainerClient.[url](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-url)-- BlobClient.[url](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-url)-- BlockBlobClient.[url](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-url)--
-The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files.
+- [ContainerClient.url](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-url)
+- [BlobClient.url](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-url)
+- [BlockBlobClient.url](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-url)
> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article.
+> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript or TypeScript](storage-blob-javascript-get-started.md) article.
-## Get URL for container and blob
+## Get a URL for a container or blob
The following example gets a container URL and a blob URL by accessing the client's **url** property:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/get-url.js" id="Snippet_GetUrl":::
+## [TypeScript](#tab/typescript)
++++ > [!TIP]
-> For loops, you must use the object's `name` property to create a client then get the URL with the client. Iterators don't return client objects, they return item objects.
+> When iterating over objects in a loop, use the object's `name` property to create a client, then get the URL with the client. Iterators don't return client objects; they return item objects.
+
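A small JavaScript sketch illustrating the tip, assuming an authorized `blobServiceClient` and a placeholder container name; each item returned by the iterator is used to create a client, and the URL is read from the client:

```javascript
// Print the container URL and the URL of each blob in the container.
async function printBlobUrls(blobServiceClient, containerName) {
  const containerClient = blobServiceClient.getContainerClient(containerName);
  console.log(`Container URL: ${containerClient.url}`);

  for await (const blobItem of containerClient.listBlobsFlat()) {
    // Iterators return item objects, so create a client from the item's name.
    const blobClient = containerClient.getBlobClient(blobItem.name);
    console.log(`Blob URL: ${blobClient.url}`);
  }
}
```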
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/get-url.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-get-url.ts) code samples from this article (GitHub)
## See also
storage Storage Blob Get Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-typescript.md
- Title: Get container and blob url with TypeScript-
-description: Learn how to get a container or blob URL with TypeScript in Azure Storage by using the JavaScript client library using TypeScript.
--- Previously updated : 08/05/2024-----
-# Get URL for container or blob with TypeScript
-
-You can get a container or blob URL by using the `url` property of the client object:
--- ContainerClient.[url](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-url)-- BlobClient.[url](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-url)-- BlockBlobClient.[url](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-url)--
-> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md) article.
-
-## Get URL for container and blob
-
-The following example gets a container URL and a blob URL by accessing the client's **url** property:
--
-> [!TIP]
-> For loops, you must use the object's `name` property to create a client then get the URL with the client. Iterators don't return client objects, they return item objects.
-
-### Code samples
-
-View code samples from this article (GitHub):
-- [Get URL for container and blob](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-get-url.ts)-
-## See also
--- [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md)-- [Get Blob](/rest/api/storageservices/get-blob) (REST API)
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
Title: Get started with Azure Blob Storage and JavaScript
+ Title: Get started with Azure Blob Storage and JavaScript or TypeScript
-description: Get started developing a JavaScript application that works with Azure Blob Storage. This article helps you set up a project and authorizes access to an Azure Blob Storage endpoint.
+description: Get started developing a JavaScript or TypeScript application that works with Azure Blob Storage. This article helps you set up a project and authorizes access to an Azure Blob Storage endpoint.
Previously updated : 08/05/2024- Last updated : 10/28/2024+
-# Get started with Azure Blob Storage and JavaScript
+# Get started with Azure Blob Storage and JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-getting-started](../../../includes/storage-dev-guides/storage-dev-guide-selector-getting-started.md)]
-This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library v12 for JavaScript. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
-
-The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files.
+This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for JavaScript. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
[API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-js/issues)
The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets
- Azure subscription - [create one for free](https://azure.microsoft.com/free/) - Azure storage account - [create a storage account](../common/storage-account-create.md) - [Node.js LTS](https://nodejs.org/)
+- [TypeScript](https://www.typescriptlang.org/download), if applicable
- For client (browser) applications, you need [bundling tools](https://github.com/Azure/azure-sdk-for-js/blob/main/documentation/Bundling.md).
## Set up your project
-1. Open a command prompt and change into your project folder. Change `YOUR-DIRECTORY` to your folder name:
-
- ```bash
- cd YOUR-DIRECTORY
- ```
-
-1. If you don't have a `package.json` file already in your directory, initialize the project to create the file:
-
- ```bash
- npm init -y
- ```
-
-1. Install the Azure Blob Storage client library for JavaScript:
+This section walks you through preparing a project to work with the Azure Blob Storage client library for JavaScript.
- ```bash
- npm install @azure/storage-blob
- ```
-
-1. If you want to use passwordless connections using Microsoft Entra ID, install the Azure Identity client library for JavaScript:
-
- ```bash
- npm install @azure/identity
- ```
-
-## Authorize access and connect to Blob Storage
+Open a command prompt and navigate to your project folder. Change `<project-directory>` to your folder name:
-Microsoft Entra ID provides the most secure connection by managing the connection identity ([**managed identity**](../../active-directory/managed-identities-azure-resources/overview.md)). This **passwordless** functionality allows you to develop an application that doesn't require any secrets (keys or connection strings) stored in the code.
-
-### Set up identity access to the Azure cloud
-
-To connect to Azure without passwords, you need to set up an Azure identity or use an existing identity. Once the identity is set up, make sure to assign the appropriate roles to the identity.
-
-To authorize passwordless access with Microsoft Entra ID, you'll need to use an Azure credential. Which type of credential you need depends on where your application runs. Use this table as a guide.
-
-|Environment|Method|
-|--|--|
-|Developer environment|[Visual Studio Code](/azure/developer/javascript/sdk/authentication/local-development-environment-developer-account?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)|
-|Developer environment|[Service principal](/azure/developer/javascript/sdk/authentication/local-development-environment-service-principal?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)|
-|Azure-hosted apps|[Azure-hosted apps setup](/azure/developer/javascript/sdk/authentication/azure-hosted-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)|
-|On-premises|[On-premises app setup](/azure/developer/javascript/sdk/authentication/on-premises-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)|
-
-### Set up storage account roles
-
-Your storage resource needs to have one or more of the following [Azure RBAC](../../role-based-access-control/built-in-roles.md) roles assigned to the identity resource you plan to connect with. [Setup the Azure Storage roles](assign-azure-role-data-access.md?tabs=portal) for each identity you created in the previous step: Azure cloud, local development, on-premises.
+```bash
+cd <project-directory>
+```
-After you complete the setup, each identity needs at least one of the appropriate roles:
+If you don't have a `package.json` file already in your directory, initialize the project to create the file:
-- A [data access](../common/authorize-data-access.md) role - such as:
- - **Storage Blob Data Reader**
- - **Storage Blob Data Contributor**
+```bash
+npm init -y
+```
-- A [resource](../common/authorization-resource-provider.md) role - such as:
- - **Reader**
- - **Contributor**
+From your project directory, install packages for the Azure Blob Storage and Azure Identity client libraries using the `npm install` or `yarn add` commands. The **@azure/identity** package is needed for passwordless connections to Azure services.
-## Build your application
+### [JavaScript](#tab/javascript)
-As you build your application, your code will primarily interact with three types of resources:
+```bash
+npm install @azure/storage-blob @azure/identity
+```
-- The storage account, which is the unique top-level namespace for your Azure Storage data.-- Containers, which organize the blob data in your storage account.-- Blobs, which store unstructured data like text and binary data.
+### [TypeScript](#tab/typescript)
-The following diagram shows the relationship between these resources.
+```bash
+npm install typescript @azure/storage-blob @azure/identity
+```
-![Diagram of Blob storage architecture](./media/storage-blobs-introduction/blob1.png)
+
-Each type of resource is represented by one or more associated JavaScript clients:
+## Authorize access and connect to Blob Storage
-| Class | Description |
-|||
-| [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) | Represents the Blob Storage endpoint for your storage account. |
-| [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) | Allows you to manipulate Azure Storage containers and their blobs. |
-| [BlobClient](/javascript/api/@azure/storage-blob/blobclient) | Allows you to manipulate Azure Storage blobs.|
+To connect an app to Blob Storage, create an instance of the [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) class. This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with.
-## Create a BlobServiceClient object
+To learn more about creating and managing client objects, including best practices, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
-The [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object is the top object in the SDK. This client allows you to manipulate the service, containers and blobs.
+You can authorize a `BlobServiceClient` object by using a Microsoft Entra authorization token, an account access key, or a shared access signature (SAS). For optimal security, Microsoft recommends using Microsoft Entra ID with managed identities to authorize requests against blob data. For more information, see [Authorize access to blobs using Microsoft Entra ID](authorize-access-azure-active-directory.md).
## [Microsoft Entra ID (recommended)](#tab/azure-ad)
-Once your Azure storage account identity roles and your local environment are set up, create a JavaScript file which includes the [``@azure/identity``](https://www.npmjs.com/package/@azure/identity) package. Create a credential, such as the [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential), to implement passwordless connections to Blob Storage. Use that credential to authenticate with a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object.
+To authorize with Microsoft Entra ID, you need to use a [security principal](../../active-directory/develop/app-objects-and-service-principals.md). Which type of security principal you need depends on where your app runs. Use the following table as a guide:
+| Where the app runs | Security principal | Guidance |
+| | | |
+| Local machine (developing and testing) | Service principal | To learn how to register the app, set up a Microsoft Entra group, assign roles, and configure environment variables, see [Authorize access using developer service principals](/azure/developer/javascript/sdk/authentication/local-development-environment-service-principal?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
+| Local machine (developing and testing) | User identity | To learn how to set up a Microsoft Entra group, assign roles, and sign in to Azure, see [Authorize access using developer credentials](/azure/developer/javascript/sdk/authentication/local-development-environment-developer-account?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
+| Hosted in Azure | Managed identity | To learn how to enable managed identity and assign roles, see [Authorize access from Azure-hosted apps using a managed identity](/azure/developer/javascript/sdk/authentication/azure-hosted-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
+| Hosted outside of Azure (for example, on-premises apps) | Service principal | To learn how to register the app, assign roles, and configure environment variables, see [Authorize access from on-premises apps using an application service principal](/azure/developer/javascript/sdk/authentication/on-premises-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
-The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control. If you use a local service principal as part of your DefaultAzureCredential set up, any security information for that credential will also go into the `.env` file.
+#### Authorize access using DefaultAzureCredential
-If you plan to deploy the application to servers and clients that run outside of Azure, create one of the [credentials](https://www.npmjs.com/package/@azure/identity#credential-classes) that meets your needs.
+An easy and secure way to authorize access and connect to Blob Storage is to obtain an OAuth token by creating a [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential) instance. You can then use that credential to create a `BlobServiceClient` object.
-## [Account key](#tab/account-key)
+The following example creates a `BlobServiceClient` object using `DefaultAzureCredential`:
-Create a [StorageSharedKeyCredential](/javascript/api/@azure/storage-blob/storagesharedkeycredential) from the storage account name and account key. Then pass the StorageSharedKeyCredential to the [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) class constructor to create a client.
-
-The `dotenv` package is used to read your storage account name and key from a `.env` file. This file should not be checked into source control.
+```javascript
+const accountName = "<account-name>";
+const accountURL = `https://${accountName}.blob.core.windows.net`;
+const blobServiceClient = new BlobServiceClient(
+ accountURL,
+ new DefaultAzureCredential()
+);
+```
-For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
-
-> [!IMPORTANT]
-> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
+This code example can be used for JavaScript or TypeScript projects.
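After the service client exists, you can derive child clients from it. A small sketch with placeholder container and blob names:

```javascript
// Create a container client and a blob client from the service client.
const containerClient = blobServiceClient.getContainerClient("sample-container");
const blobClient = containerClient.getBlobClient("sample-blob.txt");
```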
## [SAS token](#tab/sas-token)
-Create a Uri to your resource by using the blob service endpoint and SAS token. Then, create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) with the Uri. The SAS token is a series of name/value pairs in the querystring in the format such as:
+To use a shared access signature (SAS) token, append the token to the account URL string separated by a `?` delimiter. Then, create a `BlobServiceClient` object with the URL.
+```javascript
+const accountName = "<account-name>";
+const sasToken = "<sas-token>";
+const accountURL = `https://${accountName}.blob.core.windows.net?${sasToken}`;
+const blobServiceClient = new BlobServiceClient(accountURL);
```
-https://YOUR-RESOURCE-NAME.blob.core.windows.net?YOUR-SAS-TOKEN
-```
-
-Depending on which tool you use to generate your SAS token, the querystring `?` may already be added to the SAS token.
+This code example can be used for JavaScript or TypeScript projects.
-The `dotenv` package is used to read your storage account name and SAS token from a `.env` file. This file should not be checked into source control.
-
-To generate and manage SAS tokens, see any of these articles:
+To learn more about generating and managing SAS tokens, see the following articles:
- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)-- [Create a service SAS for a container or blob](sas-service-create.md)-
-> [!NOTE]
-> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key. To learn more, see [Create a user delegation SAS with JavaScript](storage-blob-create-user-delegation-sas-javascript.md).
---
-## Create a ContainerClient object
-
-You can create the [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) object either from the BlobServiceClient, or directly.
-
-### Create ContainerClient object from BlobServiceClient
-
-Create the [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) object from the BlobServiceClient.
--
-### Create ContainerClient directly
-
-#### [Microsoft Entra ID (recommended)](#tab/azure-ad)
---
-#### [Account key](#tab/account-key)
--
-> [!IMPORTANT]
-> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
-
-#### [SAS token](#tab/sas-token)
-
+- [Create an account SAS with JavaScript](storage-blob-account-delegation-sas-create-javascript.md)
+- [Create a service SAS with JavaScript](sas-service-create-javascript.md)
+- [Create a user delegation SAS with JavaScript](storage-blob-create-user-delegation-sas-javascript.md)
> [!NOTE]
-> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key. To learn more, see [Create a user delegation SAS with JavaScript](storage-blob-create-user-delegation-sas-javascript.md).
+> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key.
-
-The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control.
-
-## Create a BlobClient object
-
-You can create any of the BlobClient objects, listed below, either from a ContainerClient, or directly.
-
-List of Blob clients:
-
-* [BlobClient](/javascript/api/@azure/storage-blob/blobclient)
-* [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient)
-* [AppendBlobClient](/javascript/api/@azure/storage-blob/appendblobclient)
-* [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient)
-* [PageBlobClient](/javascript/api/@azure/storage-blob/pageblobclient)
-
-### Create BlobClient object from ContainerClient
+## [Account key](#tab/account-key)
+To use a storage account shared key, create a [StorageSharedKeyCredential](/javascript/api/@azure/storage-blob/storagesharedkeycredential) from the account name and account key, then use the credential to initialize a `BlobServiceClient` object.
-### Create BlobClient directly
+```javascript
+const credential = new StorageSharedKeyCredential(accountName, accountKey);
+const blobServiceClient = new BlobServiceClient(
+ `https://${accountName}.blob.core.windows.net`,
+ credential
+);
+```
-#### [Microsoft Entra ID (recommended)](#tab/azure-ad)
+This code example can be used for JavaScript or TypeScript projects.
+You can also create a `BlobServiceClient` object using a connection string.
-#### [Account key](#tab/account-key)
+```javascript
+const blobServiceClient = BlobServiceClient.fromConnectionString(connectionString);
+```
+For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
> [!IMPORTANT] > The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
-#### [SAS token](#tab/sas-token)
--
-> [!NOTE]
-> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key. To learn more, see [Create a user delegation SAS with JavaScript](storage-blob-create-user-delegation-sas-javascript.md).
--
-The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control.
-
-## See also
+ -- [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)-- [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples)-- [API reference](/javascript/api/@azure/storage-blob/)-- [Library source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob)-- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+## Build your app
+
+As you build apps to work with data resources in Azure Blob Storage, your code primarily interacts with three resource types: storage accounts, containers, and blobs. To learn more about these resource types, how they relate to one another, and how apps interact with resources, see [Understand how apps interact with Blob Storage data resources](storage-blob-object-model.md).
+
+The following guides show you how to access data and perform specific actions using the Azure Storage client library for JavaScript:
+
+| Guide | Description |
+| | |
+| [Configure a retry policy](storage-retry-policy-javascript.md) | Implement retry policies for client operations. |
+| [Copy blobs](storage-blob-copy-javascript.md) | Copy a blob from one location to another. |
+| [Create a container](storage-blob-container-create-javascript.md) | Create blob containers. |
+| [Create a user delegation SAS](storage-blob-create-user-delegation-sas-javascript.md) | Create a user delegation SAS for a container or blob. |
+| [Create and manage blob leases](storage-blob-lease-javascript.md) | Establish and manage a lock on a blob. |
+| [Create and manage container leases](storage-blob-container-lease-javascript.md) | Establish and manage a lock on a container. |
+| [Delete and restore](storage-blob-delete-javascript.md) | Delete blobs and restore soft-deleted blobs. |
+| [Delete and restore containers](storage-blob-container-delete-javascript.md) | Delete containers and restore soft-deleted containers. |
+| [Download blobs](storage-blob-download-javascript.md) | Download blobs by using strings, streams, and file paths. |
+| [Find blobs using tags](storage-blob-tags-javascript.md) | Set and retrieve tags, and use tags to find blobs. |
+| [List blobs](storage-blobs-list-javascript.md) | List blobs in different ways. |
+| [List containers](storage-blob-containers-list-javascript.md) | List containers in an account and the various options available to customize a listing. |
+| [Manage properties and metadata (blobs)](storage-blob-properties-metadata-javascript.md) | Get and set properties and metadata for blobs. |
+| [Manage properties and metadata (containers)](storage-blob-container-properties-metadata-javascript.md) | Get and set properties and metadata for containers. |
+| [Performance tuning for data transfers](storage-blobs-tune-upload-download-javascript.md) | Optimize performance for data transfer operations. |
+| [Set or change a blob's access tier](storage-blob-use-access-tier-javascript.md) | Set or change the access tier for a block blob. |
+| [Upload blobs](storage-blob-upload-javascript.md) | Learn how to upload blobs by using strings, streams, file paths, and other methods. |
storage Storage Blob Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md
Title: Create and manage blob leases with JavaScript
+ Title: Create and manage blob leases with JavaScript or TypeScript
description: Learn how to manage a lock on a blob in your Azure Storage account using the JavaScript client library.
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Create and manage blob leases with JavaScript
+# Create and manage blob leases with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-lease-blob](../../../includes/storage-dev-guides/storage-dev-guide-selector-lease-blob.md)]
To acquire a lease, create an instance of the [BlobLeaseClient](/javascript/api/
The following example acquires a 30-second lease for a blob:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-blob.js" id="Snippet_AcquireBlobLease":::
+## [TypeScript](#tab/typescript)
++++ ## Renew a lease
-You can renew a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. The lease can be renewed even if it has expired, as long as the blob hasn't been modified or leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+You can renew a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. The lease can be renewed even after it expires, as long as the blob hasn't been modified or leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
To renew a lease, use one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
To renew a lease, use one of the following methods on a [BlobLeaseClient](/javas
The following example renews a lease for a blob:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-blob.js" id="Snippet_RenewBlobLease":::
+## [TypeScript](#tab/typescript)
++++ ## Release a lease You can release a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. Releasing a lease allows another client to acquire a lease for the blob immediately after the release is complete.
You can release a lease using one of the following methods on a JavaScript [Blob
The following example releases a lease on a blob:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-blob.js" id="Snippet_ReleaseBlobLease":::
+## [TypeScript](#tab/typescript)
++++ ## Break a lease You can break a blob lease if the blob has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired until the original lease expires or is released.
You can break a lease using one of the following methods on a [BlobLeaseClient](
The following example breaks a lease on a blob:
+## [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-blob.js" id="Snippet_BreakBlobLease":::
+## [TypeScript](#tab/typescript)
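A compact JavaScript sketch of the lease operations described in this article, assuming an authorized `blobClient`; the 30-second duration and the break call are illustrative:

```javascript
// Acquire, renew, release, and optionally break a blob lease.
async function manageBlobLease(blobClient) {
  const leaseClient = blobClient.getBlobLeaseClient();

  // Acquire a 30-second lease on the blob.
  await leaseClient.acquireLease(30);

  // Renew the lease; the duration resets to the original value.
  await leaseClient.renewLease();

  // Release the lease so another client can acquire one immediately.
  await leaseClient.releaseLease();

  // To break an active lease instead, no matching lease ID is required:
  // await leaseClient.breakLease(0);
}
```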
++++ [!INCLUDE [storage-dev-guide-blob-lease](../../../includes/storage-dev-guides/storage-dev-guide-blob-lease.md)] ## Resources To learn more about managing blob leases using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-blob.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/lease-blob.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing blob leases use the following REST API operation: - [Lease Blob](/rest/api/storageservices/lease-blob)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/lease-blob.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)] ### See also - [Managing Concurrency in Blob storage](concurrency-manage.md)+
storage Storage Blob Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md
- Title: Create and manage blob leases with TypeScript-
-description: Learn how to manage a lock on a blob in your Azure Storage account with TypeScript using the JavaScript client library.
------ Previously updated : 08/05/2024---
-# Create and manage blob leases with TypeScript
--
-This article shows how to create and manage blob leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break blob leases.
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with a blob lease. To learn more, see the authorization guidance for the following REST API operation:
- - [Lease Blob](/rest/api/storageservices/lease-blob#authorization)
-
-## About blob leases
--
-Lease operations are handled by the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about container leases using the client library, see [Create and manage container leases with TypeScript](storage-blob-container-lease-typescript.md).
-
-## Acquire a lease
-
-When you acquire a blob lease, you obtain a lease ID that your code can use to operate on the blob. If the blob already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
-
-To acquire a lease, create an instance of the [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) class, and then use one of the following methods:
--- [acquireLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-acquirelease)-
-The following example acquires a 30-second lease for a blob:
--
-## Renew a lease
-
-You can renew a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. The lease can be renewed even if it has expired, as long as the blob hasn't been modified or leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
-
-To renew a lease, use one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
--- [renewLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-renewlease)-
-The following example renews a lease for a blob:
--
-## Release a lease
-
-You can release a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. Releasing a lease allows another client to acquire a lease for the blob immediately after the release is complete.
-
-You can release a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
--- [releaseLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-releaselease)-
-The following example releases a lease on a blob:
--
-## Break a lease
-
-You can break a blob lease if the blob has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired until the original lease expires or is released.
-
-You can break a lease using one of the following methods on a [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient) instance:
--- [breakLease](/javascript/api/@azure/storage-blob/blobleaseclient#@azure-storage-blob-blobleaseclient-breaklease)-
-The following example breaks a lease on a blob:
---
-## Resources
-
-To learn more about managing blob leases using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing blob leases use the following REST API operation:
--- [Lease Blob](/rest/api/storageservices/lease-blob)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/lease-blob.ts)--
-### See also
--- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
description: Learn how to set and retrieve system properties and store custom me
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+ # Manage blob properties and metadata with JavaScript
In addition to the data they contain, blobs support system properties and user-d
> > To learn more about this feature, see [Manage and find data on Azure Blob storage with blob index (preview)](storage-manage-find-blobs.md).
-## Set blob http headers
-
-The following code example sets blob HTTP system properties on a blob.
-
-To set the HTTP properties for a blob, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object) then call [BlobClient.setHTTPHeaders](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-sethttpheaders). Review the [BlobHTTPHeaders properties](/javascript/api/@azure/storage-blob/blobhttpheaders) to know which HTTP properties you want to set. Any HTTP properties not explicitly set are cleared.
-
-```javascript
-/*
-properties= {
- blobContentType: 'text/plain',
- blobContentLanguage: 'en-us',
- blobContentEncoding: 'utf-8',
- // all other http properties are cleared
- }
-*/
-async function setHTTPHeaders(blobClient, headers) {
-
- await blobClient.setHTTPHeaders(headers);
-
- console.log(`headers set successfully`);
-}
-```
-
-## Set metadata
-
-You can specify metadata as one or more name-value pairs on a blob or container resource. To set metadata, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object) then send a JSON object of name-value pairs with
--- [BlobClient.setMetadata](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-setmetadata) returns a [BlobGetPropertiesResponse object](/javascript/api/@azure/storage-blob/blobgetpropertiesresponse).-
-The following code example sets metadata on a blob.
-
-```javascript
-/*
-metadata= {
- reviewedBy: 'Bob',
- releasedBy: 'Jill',
-}
-*/
-async function setBlobMetadata(blobClient, metadata) {
-
- await blobClient.setMetadata(metadata);
-
- console.log(`metadata set successfully`);
-
-}
-```
-
-To read the metadata, get the blob's properties (shown below), specifically referencing the `metadata` property.
-
-## Get blob properties
-
-The following code example gets a blob's system properties, including HTTP headers and metadata, and displays those values.
-
-```javascript
-async function getProperties(blobClient) {
-
- const properties = await blobClient.getProperties();
- console.log(blobClient.name + ' properties: ');
-
- for (const property in properties) {
-
- switch (property) {
- // nested properties are stringified and returned as strings
- case 'metadata':
- case 'objectReplicationRules':
- console.log(` ${property}: ${JSON.stringify(properties[property])}`);
- break;
- default:
- console.log(` ${property}: ${properties[property]}`);
- break;
- }
- }
-}
-```
-
-The output for these console.log lines looks like:
-
-```console
-my-blob.txt properties:
- lastModified: Thu Apr 21 2022 13:02:53 GMT-0700 (Pacific Daylight Time)
- createdOn: Thu Apr 21 2022 13:02:53 GMT-0700 (Pacific Daylight Time)
- metadata: {"releasedby":"Jill","reviewedby":"Bob"}
- objectReplicationPolicyId: undefined
- objectReplicationRules: {}
- blobType: BlockBlob
- copyCompletedOn: undefined
- copyStatusDescription: undefined
- copyId: undefined
- copyProgress: undefined
- copySource: undefined
- copyStatus: undefined
- isIncrementalCopy: undefined
- destinationSnapshot: undefined
- leaseDuration: undefined
- leaseState: available
- leaseStatus: unlocked
- contentLength: 19
- contentType: text/plain
- etag: "0x8DA23D1EBA8E607"
- contentMD5: undefined
- contentEncoding: utf-8
- contentDisposition: undefined
- contentLanguage: en-us
- cacheControl: undefined
- blobSequenceNumber: undefined
- clientRequestId: 58da0441-7224-4837-9b4a-547f9a0c7143
- requestId: 26acb38a-001e-0046-27ba-55ef22000000
- version: 2021-04-10
- date: Thu Apr 21 2022 13:02:52 GMT-0700 (Pacific Daylight Time)
- acceptRanges: bytes
- blobCommittedBlockCount: undefined
- isServerEncrypted: true
- encryptionKeySha256: undefined
- encryptionScope: undefined
- accessTier: Hot
- accessTierInferred: true
- archiveStatus: undefined
- accessTierChangedOn: undefined
- versionId: undefined
- isCurrentVersion: undefined
- tagCount: undefined
- expiresOn: undefined
- isSealed: undefined
- rehydratePriority: undefined
- lastAccessed: undefined
- immutabilityPolicyExpiresOn: undefined
- immutabilityPolicyMode: undefined
- legalHold: undefined
- errorCode: undefined
- body: true
- _response: [object Object]
- objectReplicationDestinationPolicyId: undefined
- objectReplicationSourceProperties:
-```
+## Set and retrieve properties
+
+To set properties on a blob, use the following method:
+
+- [BlobClient.setHTTPHeaders](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-sethttpheaders)
+
+The following code example sets the `blobContentType` and `blobContentLanguage` system properties on a blob.
+
+Any properties not explicitly set are cleared. To avoid clearing values, the example first gets the existing properties on the blob and uses them to populate the headers that aren't being updated.
+
+## [JavaScript](#tab/javascript)
++
+## [TypeScript](#tab/typescript)
++++
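A minimal JavaScript sketch of this approach, assuming an authorized `blobClient`; the content type and language values are placeholders:

```javascript
// Set selected HTTP headers while preserving values that aren't being changed.
async function setBlobHttpHeaders(blobClient) {
  const properties = await blobClient.getProperties();

  await blobClient.setHTTPHeaders({
    blobContentType: "text/plain",
    blobContentLanguage: "en-us",
    // Carry over existing values so they aren't cleared by the update.
    blobContentEncoding: properties.contentEncoding,
    blobCacheControl: properties.cacheControl,
    blobContentDisposition: properties.contentDisposition,
    blobContentMD5: properties.contentMD5,
  });
}
```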
+To retrieve properties on a blob, use the following method:
+
+- [getProperties](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-getproperties)
+
+The following code example gets a blob's system properties and displays some of the values:
+
+## [JavaScript](#tab/javascript)
++
+## [TypeScript](#tab/typescript)
++++
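A brief JavaScript sketch of reading properties, assuming an authorized `blobClient`; only a few representative values are printed:

```javascript
// Get blob properties and display a few of the system property values.
async function getBlobProperties(blobClient) {
  const properties = await blobClient.getProperties();

  console.log(`blobType: ${properties.blobType}`);
  console.log(`contentType: ${properties.contentType}`);
  console.log(`contentLength: ${properties.contentLength}`);
  console.log(`lastModified: ${properties.lastModified}`);
}
```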
+## Set and retrieve metadata
+
+You can specify metadata as one or more name-value pairs on a blob or container resource. To set metadata, send a [Metadata](/javascript/api/@azure/storage-blob/metadata) object containing name-value pairs using the following method:
+
+- [BlobClient.setMetadata](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-setmetadata)
+
+The following code example sets metadata on a blob:
+
+## [JavaScript](#tab/javascript)
++
+## [TypeScript](#tab/typescript)
++++
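As a rough sketch, assuming `blobClient` is an authorized `BlobClient` and using placeholder metadata names and values, setting metadata might look like this:

```javascript
// Minimal sketch: set user-defined metadata on an existing blob.
// The metadata names and values here are placeholders.
async function setBlobMetadata(blobClient) {
  const metadata = {
    docType: 'textDocuments',
    category: 'guidance'
  };

  // setMetadata replaces all existing metadata on the blob
  await blobClient.setMetadata(metadata);
}
```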
+To retrieve metadata, call the [getProperties](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-getproperties) method on your blob to populate the metadata collection, then read the values from the [metadata](/javascript/api/@azure/storage-blob/blobgetpropertiesresponse#@azure-storage-blob-blobgetpropertiesresponse-metadata) property. The `getProperties` method retrieves blob properties and metadata by calling both the `Get Blob Properties` operation and the `Get Blob Metadata` operation.
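A minimal sketch of that two-step pattern, again assuming `blobClient` is an authorized `BlobClient`, might look like this:

```javascript
// Minimal sketch: read user-defined metadata from an existing blob.
async function readBlobMetadata(blobClient) {
  // getProperties also populates the metadata collection
  const properties = await blobClient.getProperties();

  for (const [name, value] of Object.entries(properties.metadata ?? {})) {
    console.log(`${name}: ${value}`);
  }
}
```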
## Resources

To learn more about how to manage system properties and user-defined metadata using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/blob-set-properties-and-metadata.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-set-properties-and-metadata.ts) code samples from this article (GitHub)
+ ### REST API operations

The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing system properties and user-defined metadata use the following REST API operations:
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
- [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) (REST API)
- [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/blob-set-properties-and-metadata.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)]+
storage Storage Blob Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md
- Title: Manage properties and metadata for a blob with TypeScript-
-description: Learn how to set and retrieve system properties and store custom metadata on blobs with TypeScript in your Azure Storage account using the JavaScript client library.
--- Previously updated : 08/05/2024-----
-# Manage blob properties and metadata with TypeScript
--
-In addition to the data they contain, blobs support system properties and user-defined metadata. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with blob properties or metadata. To learn more, see the authorization guidance for the following REST API operations:
- - [Set Blob Properties](/rest/api/storageservices/set-blob-properties#authorization)
- - [Get Blob Properties](/rest/api/storageservices/get-blob-properties#authorization)
- - [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata#authorization)
- - [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata#authorization)
-
-## About properties and metadata
--- **System properties**: System properties exist on each Blob storage resource. Some of them can be read or set, while others are read-only. Under the covers, some system properties correspond to certain standard HTTP headers. The Azure Storage client library for JavaScript maintains these properties for you.--- **User-defined metadata**: User-defined metadata consists of one or more name-value pairs that you specify for a Blob storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and don't affect how the resource behaves.-
- Metadata name/value pairs are valid HTTP headers and should adhere to all restrictions governing HTTP headers. For more information about metadata naming requirements, see [Metadata names](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#metadata-names).
-
-> [!NOTE]
-> Blob index tags also provide the ability to store arbitrary user-defined key/value attributes alongside an Azure Blob storage resource. While similar to metadata, only blob index tags are automatically indexed and made searchable by the native blob service. Metadata cannot be indexed and queried unless you utilize a separate service such as Azure Search.
->
-> To learn more about this feature, see [Manage and find data on Azure Blob storage with blob index (preview)](storage-manage-find-blobs.md).
-
-## Set blob HTTP headers
-
-The following code example sets blob HTTP system properties on a blob.
-
-To set the HTTP properties for a blob, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then call [BlobClient.setHTTPHeaders](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-sethttpheaders). Review the [BlobHTTPHeaders properties](/javascript/api/@azure/storage-blob/blobhttpheaders) to know which HTTP properties you want to set. Any HTTP properties not explicitly set are cleared.
---
-## Set metadata
-
-You can specify metadata as one or more name-value pairs on a blob or container resource. To set metadata, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then send a JSON object of name-value pairs with
--- [BlobClient.setMetadata](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-setmetadata) returns a [BlobGetPropertiesResponse object](/javascript/api/@azure/storage-blob/blobgetpropertiesresponse).-
-The following code example sets metadata on a blob.
--
-To read the metadata, get the blob's properties (shown below), specifically referencing the `metadata` property.
-
-## Get blob properties
-
-The following code example gets a blob's system properties, including HTTP headers and metadata, and displays those values.
--
-Blob properties can include:
-
-```json
-lastModified: Mon Mar 20 2023 11:04:17 GMT-0700 (Pacific Daylight Time)
-createdOn: Mon Mar 20 2023 11:04:17 GMT-0700 (Pacific Daylight Time)
-metadata: {"releasedby":"Jill","reviewedby":"Bob"}
-objectReplicationPolicyId: undefined
-objectReplicationRules: {}
-blobType: BlockBlob
-copyCompletedOn: undefined
-copyStatusDescription: undefined
-copyId: undefined
-copyProgress: undefined
-copySource: undefined
-copyStatus: undefined
-isIncrementalCopy: undefined
-destinationSnapshot: undefined
-leaseDuration: undefined
-leaseState: available
-leaseStatus: unlocked
-contentLength: 19
-contentType: text/plain
-etag: "0x8DB296D85EED062"
-contentMD5: undefined
-isServerEncrypted: true
-encryptionKeySha256: undefined
-encryptionScope: undefined
-accessTier: Hot
-accessTierInferred: true
-archiveStatus: undefined
-accessTierChangedOn: undefined
-versionId: undefined
-isCurrentVersion: undefined
-tagCount: undefined
-expiresOn: undefined
-isSealed: undefined
-rehydratePriority: undefined
-lastAccessed: undefined
-immutabilityPolicyExpiresOn: undefined
-immutabilityPolicyMode: undefined
-legalHold: undefined
-errorCode: undefined
-body: true
-_response: [object Object]
-objectReplicationDestinationPolicyId: undefined
-objectReplicationSourceProperties:
-```
--
-## Resources
-
-To learn more about how to manage system properties and user-defined metadata using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing system properties and user-defined metadata use the following REST API operations:
--- [Set Blob Properties](/rest/api/storageservices/set-blob-properties) (REST API)-- [Get Blob Properties](/rest/api/storageservices/get-blob-properties) (REST API)-- [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) (REST API)-- [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-set-properties-and-metadata.ts)-
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
Title: Use blob index tags to manage and find data with JavaScript
+ Title: Use blob index tags to manage and find data with JavaScript or TypeScript
description: Learn how to categorize, manage, and query for blob objects by using the JavaScript client library. Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Use blob index tags to manage and find data with JavaScript
+# Use blob index tags to manage and find data with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-index-tags](../../../includes/storage-dev-guides/storage-dev-guide-selector-index-tags.md)]
This article shows how to use blob index tags to manage and find data using the
[!INCLUDE [storage-dev-guide-auth-set-blob-tags](../../../includes/storage-dev-guides/storage-dev-guide-auth-set-blob-tags.md)]
-To set tags at blob upload time, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object) then use the following method:
+You can set tags by using the following method:
- [BlobClient.setTags](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-settags)
-The following example performs this task.
+The specified tags in this method replace existing tags. If old values must be preserved, they must be downloaded and included in the call to this method. The following example shows how to set tags:
-```javascript
-// A blob can have up to 10 tags.
-//
-// const tags = {
-// project: 'End of month billing summary',
-// reportOwner: 'John Doe',
-// reportPresented: 'April 2022'
-// }
-async function setTags(containerClient, blobName, tags) {
+### [JavaScript](#tab/javascript)
- // Create blob client from container client
- const blockBlobClient = await containerClient.getBlockBlobClient(blobName);
- // Set tags
- await blockBlobClient.setTags(tags);
+### [TypeScript](#tab/typescript)
- console.log(`uploading blob ${blobName}`);
-}
-```
-You can delete all tags by passing an empty JSON object into the setTags method.
+
-| Related articles |
-|--|
-| [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) |
-| [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API) |
+You can delete all tags by passing an empty JSON object into the `setTags` method.
## Get tags [!INCLUDE [storage-dev-guide-auth-get-blob-tags](../../../includes/storage-dev-guides/storage-dev-guide-auth-get-blob-tags.md)]
-To get tags, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object) then use the following method:
+You can get tags by using the following method:
- [BlobClient.getTags](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-gettags)
-The following example shows how to get and iterate over the blob's tags.
+The following example shows how to retrieve and iterate over the blob's tags.
-```javascript
-async function getTags(containerClient, blobName) {
+### [JavaScript](#tab/javascript)
- // Create blob client from container client
- const blockBlobClient = await containerClient.getBlockBlobClient(blobName);
- // Get tags
- const result = await blockBlobClient.getTags();
+### [TypeScript](#tab/typescript)
- for (const tag in result.tags) {
- console.log(`TAG: ${tag}: ${result.tags[tag]}`);
- }
-}
-```
+ ## Filter and find data with blob index tags
The following table shows some query strings:
|`@container = 'my-container' AND createdBy = 'Jill'`|**Filter by container** and specific property. In this query, `createdBy` is a text match and doesn't indicate an authorization match through Active Directory. |
-To find blobs, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object) then use the following method:
+You can find data by using the following method:
- [BlobServiceClient.findBlobsByTags](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-findblobsbytags)
-The following example finds all blobs matching the tagOdataQuery parameter.
-
-```javascript
-async function findBlobsByQuery(blobServiceClient, tagOdataQuery) {
-
- // page size
- const maxPageSize = 10;
-
- let i = 1;
- let marker;
-
- const listOptions = {
- includeMetadata: true,
- includeSnapshots: false,
- includeTags: true,
- includeVersions: false
- };
-
- let iterator = blobServiceClient.findBlobsByTags(tagOdataQuery, listOptions).byPage({ maxPageSize });
- let response = (await iterator.next()).value;
-
- // Prints blob names
- if (response.blobs) {
- for (const blob of response.blobs) {
- console.log(`Blob ${i++}: ${blob.name} - ${JSON.stringify(blob.tags)}`);
- }
- }
-
- // Gets next marker
- marker = response.continuationToken;
-
- // no more blobs
- if (!marker) return;
-
- // Passing next marker as continuationToken
- iterator = blobServiceClient
- .findBlobsByTags(tagOdataQuery, listOptions)
- .byPage({ continuationToken: marker, maxPageSize });
- response = (await iterator.next()).value;
-
- // Prints blob names
- if (response.blobs) {
- for (const blob of response.blobs) {
- console.log(`Blob ${i++}: ${blob.name} - ${JSON.stringify(blob.tags)}`);
- }
- }
-}
-```
+The following example finds all blobs matching the `tagOdataQuery` parameter.
+
+### [JavaScript](#tab/javascript)
++
+### [TypeScript](#tab/typescript)
+++ An example of the output for this function shows the matched blobs and their tags, based on the console.log code in the preceding function:
And example output for this function shows the matched blobs and their tags, bas
To learn more about how to use index tags to manage and find data using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/set-and-retrieve-blob-tags.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-set-and-retrieve-tags.ts) code samples from this article (GitHub)
+ ### REST API operations

The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing and using blob index tags use the following REST API operations:
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API)
- [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/set-and-retrieve-blob-tags.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)] ### See also
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
- Title: Use blob index tags with TypeScript -
-description: Learn how to categorize, manage, and query for blob objects with TypeScript by using the JavaScript client library.
--- Previously updated : 08/05/2024-----
-# Use blob index tags to manage and find data with TypeScript
--
-This article shows how to use blob index tags to manage and find data using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with blob index tags. To learn more, see the authorization guidance for the following REST API operations:
- - [Get Blob Tags](/rest/api/storageservices/get-blob-tags#authorization)
- - [Set Blob Tags](/rest/api/storageservices/set-blob-tags#authorization)
- - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags#authorization)
--
-## Set tags
--
-To set tags at blob upload time, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then use the following method:
--- [BlobClient.setTags](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-settags)-
-The following example performs this task.
--
-You can delete all tags by passing an empty JSON object into the setTags method.
-
-| Related articles |
-|--|
-| [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) |
-| [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API) |
-
-## Get tags
--
-To get tags, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then use the following method:
--- [BlobClient.getTags](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-gettags)-
-The following example shows how to get and iterate over the blob's tags.
--
-## Filter and find data with blob index tags
--
-> [!NOTE]
-> You can't use index tags to retrieve previous versions. Tags for previous versions aren't passed to the blob index engine. For more information, see [Conditions and known issues](storage-manage-find-blobs.md#conditions-and-known-issues).
-
-Data is queried with a JSON object sent as a string. Property names don't need additional string quotes, but the values do.
-
-The following table shows some query strings:
-
-|Query string for tags (tagOdataQuery)|Description|
-|--|--|
-|`id='1' AND project='billing'`|Filter blobs across all containers based on these two properties|
-|`owner='PhillyProject' AND createdOn >= '2021-12' AND createdOn <= '2022-06'`|Filter blobs across all containers based on strict property value for `owner` and range of dates for `createdOn` property.|
-|`@container = 'my-container' AND createdBy = 'Jill'`|**Filter by container** and specific property. In this query, `createdBy` is a text match and doesn't indicate an authorization match through Active Directory. |
--
-To find blobs, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then use the following method:
--- [BlobServiceClient.findBlobsByTags](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-findblobsbytags)-
-The following example finds all blobs matching the tagOdataQuery parameter.
---
-An example of the output for this function shows the matched blobs and their tags, based on the console.log code in the preceding function:
-
-|Response|
-|-|
-|Blob 1: set-tags-1650565920363-query-by-tag-blob-a-1.txt - {"createdOn":"2022-01","owner":"PhillyProject","project":"set-tags-1650565920363"}|
-
-## Resources
-
-To learn more about how to use index tags to manage and find data using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing and using blob index tags use the following REST API operations:
--- [Get Blob Tags](/rest/api/storageservices/get-blob-tags) (REST API)-- [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API)-- [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/set-and-retrieve-blob-tags.js)--
-### See also
--- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)-- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
storage Storage Blob Typescript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-typescript-get-started.md
- Title: Get started with Azure Blob Storage and TypeScript-
-description: Get started developing a TypeScript application that works with Azure Blob Storage. This article helps you set up a project and authorizes access to an Azure Blob Storage endpoint.
------ Previously updated : 08/05/2024----
-# Get started with Azure Blob Storage and TypeScript
--
-This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for JavaScript. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
-
-[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
-
-## Prerequisites
--- Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- Azure storage account - [create a storage account](../common/storage-account-create.md)-- [Node.js LTS](https://nodejs.org/)-- For client (browser) applications, you need [bundling tools](https://github.com/Azure/azure-sdk-for-js/blob/main/documentation/Bundling.md).-
-## Set up your project
-
-1. Open a command prompt and change into your project folder. Change `YOUR-DIRECTORY` to your folder name:
-
- ```bash
- cd YOUR-DIRECTORY
- ```
-
-1. If you don't have a `package.json` file already in your directory, initialize the project to create the file:
-
- ```bash
- npm init -y
- ```
-
-1. Install TypeScript and the Azure Blob Storage client library for JavaScript with TypeScript types included:
-
- ```bash
- npm install typescript @azure/storage-blob
- ```
-
-1. If you want to use passwordless connections using Microsoft Entra ID, install the Azure Identity client library for JavaScript:
-
- ```bash
- npm install @azure/identity
- ```
-
-## Authorize access and connect to Blob Storage
-
-Microsoft Entra ID provides the most secure connection by managing the connection identity ([**managed identity**](../../active-directory/managed-identities-azure-resources/overview.md)). This **passwordless** functionality allows you to develop an application that doesn't require any secrets (keys or connection strings) stored in the code.
-
-### Set up identity access to the Azure cloud
-
-To connect to Azure without passwords, you need to set up an Azure identity or use an existing identity. Once the identity is set up, make sure to assign the appropriate roles to the identity.
-
-To authorize passwordless access with Microsoft Entra ID, you'll need to use an Azure credential. Which type of credential you need depends on where your application runs. Use this table as a guide.
-
-|Environment|Method|
-|--|--|
-|Developer environment|[Visual Studio Code](/azure/developer/javascript/sdk/authentication/local-development-environment-developer-account?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)|
-|Developer environment|[Service principal](/azure/developer/javascript/sdk/authentication/local-development-environment-service-principal?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)|
-|Azure-hosted apps|[Azure-hosted apps setup](/azure/developer/javascript/sdk/authentication/azure-hosted-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)|
-|On-premises|[On-premises app setup](/azure/developer/javascript/sdk/authentication/on-premises-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)|
-
-### Set up storage account roles
-
-Your storage resource needs to have one or more of the following [Azure RBAC](../../role-based-access-control/built-in-roles.md) roles assigned to the identity resource you plan to connect with. [Set up the Azure Storage roles](assign-azure-role-data-access.md?tabs=portal) for each identity you created in the previous step: Azure cloud, local development, on-premises.
-
-After you complete the setup, each identity needs at least one of the appropriate roles:
--- A [data access](../common/authorize-data-access.md) role - such as:
- - **Storage Blob Data Reader**
- - **Storage Blob Data Contributor**
--- A [resource](../common/authorization-resource-provider.md) role - such as:
- - **Reader**
- - **Contributor**
-
-## Build your application
-
-As you build your application, your code will primarily interact with three types of resources:
--- The storage account, which is the unique top-level namespace for your Azure Storage data.-- Containers, which organize the blob data in your storage account.-- Blobs, which store unstructured data like text and binary data.-
-The following diagram shows the relationship between these resources.
-
-![Diagram of Blob storage architecture](./media/storage-blobs-introduction/blob1.png)
-
-Each type of resource is represented by one or more associated JavaScript clients:
-
-| Class | Description |
-|||
-| [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) | Represents the Blob Storage endpoint for your storage account. |
-| [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) | Allows you to manipulate Azure Storage containers and their blobs. |
-| [BlobClient](/javascript/api/@azure/storage-blob/blobclient) | Allows you to manipulate Azure Storage blobs.|
-
-## Create a BlobServiceClient object
-
-The [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object is the top object in the SDK. This client allows you to manipulate the service, containers and blobs.
-
-## [Microsoft Entra ID (recommended)](#tab/azure-ad)
-
-Once your Azure storage account identity roles and your local environment are set up, create a TypeScript file which includes the [``@azure/identity``](https://www.npmjs.com/package/@azure/identity) package. Create a credential, such as the [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential), to implement passwordless connections to Blob Storage. Use that credential to authenticate with a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object.
--
-The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control. If you use a local service principal as part of your DefaultAzureCredential set up, any security information for that credential will also go into the `.env` file.
-
-If you plan to deploy the application to servers and clients that run outside of Azure, create one of the [credentials](https://www.npmjs.com/package/@azure/identity#credential-classes) that meets your needs.
-
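A minimal sketch, assuming the storage account name is supplied through an environment variable (for example, loaded by `dotenv`) and the signed-in identity already has a data-access role on the account:

```javascript
// Minimal sketch: create a BlobServiceClient with DefaultAzureCredential.
// The environment variable name is an assumption, not prescribed by the article.
const { BlobServiceClient } = require('@azure/storage-blob');
const { DefaultAzureCredential } = require('@azure/identity');

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);
```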
-## [Account key](#tab/account-key)
-
-Create a [StorageSharedKeyCredential](/javascript/api/@azure/storage-blob/storagesharedkeycredential) from the storage account name and account key. Then pass the StorageSharedKeyCredential to the [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) class constructor to create a client.
--
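A minimal sketch, assuming the account name and key are supplied through environment variables (for example, loaded by `dotenv`):

```javascript
// Minimal sketch: create a BlobServiceClient with a shared key credential.
// The environment variable names are assumptions, not prescribed by the article.
const { BlobServiceClient, StorageSharedKeyCredential } = require('@azure/storage-blob');

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const accountKey = process.env.AZURE_STORAGE_ACCOUNT_KEY;

const sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  sharedKeyCredential
);
```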
-The `dotenv` package is used to read your storage account name and key from a `.env` file. This file should not be checked into source control.
-
-For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
-
-> [!IMPORTANT]
-> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
-
-## [SAS token](#tab/sas-token)
-
-Create a URI to your resource by using the blob service endpoint and SAS token. Then, create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) with the URI. The SAS token is a series of name/value pairs in the query string, in a format such as:
-
-```
-https://YOUR-RESOURCE-NAME.blob.core.windows.net?YOUR-SAS-TOKEN
-```
-
-Depending on which tool you use to generate your SAS token, the querystring `?` may already be added to the SAS token.
--
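A minimal sketch, assuming the account name and SAS token (including the leading `?`) are supplied through environment variables:

```javascript
// Minimal sketch: create a BlobServiceClient from a SAS URL.
// The environment variable names are assumptions, not prescribed by the article.
const { BlobServiceClient } = require('@azure/storage-blob');

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const sasToken = process.env.AZURE_STORAGE_SAS_TOKEN; // expected to start with '?'

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net${sasToken}`
);
```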
-The `dotenv` package is used to read your storage account name and SAS token from a `.env` file. This file should not be checked into source control.
-
-To generate and manage SAS tokens, see any of these articles:
--- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)-- [Create a service SAS for a container or blob](sas-service-create.md)-
-> [!NOTE]
-> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key. To learn more, see [Create a user delegation SAS with JavaScript](storage-blob-create-user-delegation-sas-javascript.md).
---
-## Create a ContainerClient object
-
-You can create the [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) object either from the BlobServiceClient, or directly.
-
-### Create ContainerClient object from BlobServiceClient
-
-Create the [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) object from the BlobServiceClient.
--
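A minimal sketch, assuming `blobServiceClient` was created as shown earlier and using a placeholder container name:

```javascript
// Minimal sketch: get a ContainerClient from an existing BlobServiceClient.
const containerClient = blobServiceClient.getContainerClient('sample-container');
```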
-### Create ContainerClient directly
-
-#### [Microsoft Entra ID (recommended)](#tab/azure-ad)
---
-#### [Account key](#tab/account-key)
--
-> [!IMPORTANT]
-> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
--
-#### [SAS token](#tab/sas-token)
--
-> [!NOTE]
-> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key. To learn more, see [Create a user delegation SAS with JavaScript](storage-blob-create-user-delegation-sas-javascript.md).
--
-The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control.
-
-## Create a BlobClient object
-
-You can create any of the BlobClient objects, listed below, either from a ContainerClient, or directly.
-
-List of Blob clients:
-
-* [BlobClient](/javascript/api/@azure/storage-blob/blobclient)
-* [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient)
-* [AppendBlobClient](/javascript/api/@azure/storage-blob/appendblobclient)
-* [BlobLeaseClient](/javascript/api/@azure/storage-blob/blobleaseclient)
-* [PageBlobClient](/javascript/api/@azure/storage-blob/pageblobclient)
-
-### Create BlobClient object from ContainerClient
--
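A minimal sketch, assuming `containerClient` was created as shown earlier and using a placeholder blob name:

```javascript
// Minimal sketch: get blob clients from an existing ContainerClient.
const blobClient = containerClient.getBlobClient('sample-blob.txt');
const blockBlobClient = containerClient.getBlockBlobClient('sample-blob.txt');
```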
-### Create BlobClient directly
-
-#### [Microsoft Entra ID (recommended)](#tab/azure-ad)
--
-#### [Account key](#tab/account-key)
--
-> [!IMPORTANT]
-> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
-
-#### [SAS token](#tab/sas-token)
--
-> [!NOTE]
-> For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key. To learn more, see [Create a user delegation SAS with JavaScript](storage-blob-create-user-delegation-sas-javascript.md).
--
-The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control.
-
-## See also
--- [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)-- [API reference](/javascript/api/@azure/storage-blob/)-- [Library source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob)-- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
Title: Upload a blob with JavaScript
+ Title: Upload a blob with JavaScript or TypeScript
description: Learn how to upload a blob to your Azure Storage account using the JavaScript client library. Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Upload a blob with JavaScript
+# Upload a blob with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-upload](../../../includes/storage-dev-guides/storage-dev-guide-selector-upload.md)]
Each of these methods can be called using a [BlockBlobClient](/javascript/api/@a
The following example uploads a block blob from a local file path:
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-local-file-path.js" id="Snippet_UploadBlob":::
+### [TypeScript](#tab/typescript)
++++ ## Upload a block blob from a stream The following example uploads a block blob by creating a readable stream and uploading the stream:
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-stream.js" id="Snippet_UploadBlob":::
+### [TypeScript](#tab/typescript)
++++ ## Upload a block blob from a buffer The following example uploads a block blob from a Node.js buffer:
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-buffer.js" id="Snippet_UploadBlob":::
+### [TypeScript](#tab/typescript)
++++ ## Upload a block blob from a string The following example uploads a block blob from a string:
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string.js" id="Snippet_UploadBlob":::
+### [TypeScript](#tab/typescript)
++++ ## Upload a block blob with configuration options You can define client library configuration options when uploading a blob. These options can be tuned to improve performance, enhance reliability, and optimize costs. The code examples in this section show how to set configuration options using the [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) interface, and how to pass those options as a parameter to an upload method call.
You can configure properties in [BlockBlobParallelUploadOptions](/javascript/api
The following code example shows how to set values for [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) and include the options as part of an upload method call. The values provided in the samples aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-transfer-options.js" id="Snippet_UploadBlobTransferOptions":::
+### [TypeScript](#tab/typescript)
++++ To learn more about tuning data transfer options, see [Performance tuning for uploads and downloads with JavaScript](storage-blobs-tune-upload-download-javascript.md). ### Upload a block blob with index tags
Blob index tags categorize data in your storage account using key-value tag attr
The following example uploads a block blob with index tags set using [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions):
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-index-tags.js" id="Snippet_UploadBlobIndexTags":::
+### [TypeScript](#tab/typescript)
++++ ### Set a blob's access tier on upload You can set a blob's access tier on upload by using the [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) interface. The following code example shows how to set the access tier when uploading a blob:
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-access-tier.js" id="Snippet_UploadAccessTier":::
+### [TypeScript](#tab/typescript)
++++ Setting the access tier is only allowed for block blobs. You can set the access tier for a block blob to `Hot`, `Cool`, `Cold`, or `Archive`. To set the access tier to `Cold`, you must use a minimum [client library](/javascript/api/preview-docs/@azure/storage-blob/) version of 12.13.0. To learn more about access tiers, see [Access tiers overview](access-tiers-overview.md).
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
View code samples from this article (GitHub): -- [Upload from local file path](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-local-file-path.js)-- [Upload from buffer](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-buffer.js)-- [Upload from stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-stream.js)-- [Upload from string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string.js)-- [Upload with transfer options](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-transfer-options.js)-- [Upload with index tags](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-index-tags.js)-- [Upload with access tier](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-access-tier.js)
+- Upload from local file path for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-local-file-path.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-local-file-path.ts)
+- Upload from buffer for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-buffer.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-buffer.ts)
+- Upload from stream for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-stream.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-stream.ts)
+- Upload from string for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-string.ts)
+- Upload with transfer options for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-transfer-options.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-with-transfer-options.ts)
+- Upload with index tags for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-index-tags.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-with-index-tags.ts)
+- Upload with access tier for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-access-tier.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-with-access-tier.ts)
[!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)]
View code samples from this article (GitHub):
- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) - [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)+
storage Storage Blob Upload Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-typescript.md
- Title: Upload a blob with TypeScript-
-description: Learn how to upload a blob with TypeScript to your Azure Storage account using the client library for JavaScript and TypeScript.
--- Previously updated : 08/05/2024-----
-# Upload a blob with TypeScript
--
-This article shows how to upload a blob using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can upload data to a block blob from a file path, a stream, a buffer, or a text string. You can also upload blobs with index tags.
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to perform an upload operation. To learn more, see the authorization guidance for the following REST API operations:
- - [Put Blob](/rest/api/storageservices/put-blob#authorization)
- - [Put Block](/rest/api/storageservices/put-block#authorization)
-
-## Upload data to a block blob
-
-You can use any of the following methods to upload data to a block blob:
--- [upload](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-upload) (non-parallel uploading method)-- [uploadData](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploaddata)-- [uploadFile](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploadfile) (only available in Node.js runtime)-- [uploadStream](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploadstream) (only available in Node.js runtime)-
-Each of these methods can be called using a [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object.
-
-## Upload a block blob from a file path
-
-The following example uploads a block blob from a local file path:
--
-## Upload a block blob from a stream
-
-The following example uploads a block blob by creating a readable stream and uploading the stream:
--
-## Upload a block blob from a buffer
-
-The following example uploads a block blob from a Node.js buffer:
--
-## Upload a block blob from a string
-
-The following example uploads a block blob from a string:
--
-## Upload a block blob with configuration options
-
-You can define client library configuration options when uploading a blob. These options can be tuned to improve performance, enhance reliability, and optimize costs. The code examples in this section show how to set configuration options using the [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) interface, and how to pass those options as a parameter to an upload method call.
-
-### Specify data transfer options on upload
-
-You can configure properties in [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) to improve performance for data transfer operations. The following table lists the properties you can configure, along with a description:
-
-| Property | Description |
-| | |
-| [`blockSize`](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions#@azure-storage-blob-blockblobparalleluploadoptions-blocksize) | The maximum block size to transfer for each request as part of an upload operation. |
-| [`concurrency`](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions#@azure-storage-blob-blockblobparalleluploadoptions-concurrency) | The maximum number of parallel requests that are issued at any given time as a part of a single parallel transfer.
-| [`maxSingleShotSize`](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions#@azure-storage-blob-blockblobparalleluploadoptions-maxsingleshotsize) | If the size of the data is less than or equal to this value, it's uploaded in a single put rather than broken up into chunks. If the data is uploaded in a single shot, the block size is ignored. Default value is 256 MiB. |
-
-The following code example shows how to set values for [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) and include the options as part of an upload method call. The values provided in the samples aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
--
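As a rough sketch, assuming `blockBlobClient` is an authorized `BlockBlobClient` and `localFilePath` points to an existing file, with illustrative values only:

```javascript
// Minimal sketch: upload a file with tuned transfer options.
// The option values are illustrative, not recommendations.
async function uploadWithTransferOptions(blockBlobClient, localFilePath) {
  const uploadOptions = {
    blockSize: 4 * 1024 * 1024,         // maximum size of each transferred block
    concurrency: 2,                     // maximum number of parallel requests
    maxSingleShotSize: 8 * 1024 * 1024  // threshold for a single-request upload
  };

  await blockBlobClient.uploadFile(localFilePath, uploadOptions);
}
```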
-### Upload a block blob with index tags
-
-Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data.
-
-The following example uploads a block blob with index tags set using [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions):
--
-### Set a blob's access tier on upload
-
-You can set a blob's access tier on upload by using the [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) interface. The following code example shows how to set the access tier when uploading a blob:
--
-Setting the access tier is only allowed for block blobs. You can set the access tier for a block blob to `Hot`, `Cool`, `Cold`, or `Archive`. To set the access tier to `Cold`, you must use a minimum [client library](/javascript/api/preview-docs/@azure/storage-blob/) version of 12.13.0.
-
-To learn more about access tiers, see [Access tiers overview](access-tiers-overview.md).
-
-## Resources
-
-To learn more about uploading blobs using the Azure Blob Storage client library for JavaScript and TypeScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript and TypeScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar language paradigms. The client library methods for uploading blobs use the following REST API operations:
--- [Put Blob](/rest/api/storageservices/put-blob) (REST API)-- [Put Block](/rest/api/storageservices/put-block) (REST API)-
-### Code samples
-
-View code samples from this article (GitHub):
--- [Upload from local file path](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-local-file-path.ts)-- [Upload from buffer](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-buffer.ts)-- [Upload from stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-stream.ts)-- [Upload from string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-string.ts)-- [Upload with transfer options](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-with-transfer-options.ts)-- [Upload with index tags](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-with-index-tags.ts)-- [Upload with access tier](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-with-access-tier.ts)--
-### See also
--- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)-- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
Title: Set or change a blob's access tier with JavaScript
+ Title: Set or change a blob's access tier with JavaScript or TypeScript
description: Learn how to set or change a blob's access tier in your Azure Storage account using the JavaScript client library.
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+
-# Set or change a block blob's access tier with JavaScript
+# Set or change a block blob's access tier with JavaScript or TypeScript
[!INCLUDE [storage-dev-guide-selector-access-tier](../../../includes/storage-dev-guides/storage-dev-guide-selector-access-tier.md)]
This article shows how to set or change a blob's [access tier](access-tiers-over
To [upload](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-upload) a blob into a specific access tier, use the [BlockBlobUploadOptions](/javascript/api/@azure/storage-blob/blockblobuploadoptions). The `tier` property choices are: `Hot`, `Cool`, `Cold`, or `Archive`.
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string-with-access-tier.js" id="Snippet_UploadAccessTier" highlight="13-15, 26":::
+### [TypeScript](#tab/typescript)
++++ ## Change a blob's access tier after upload To change the access tier of a blob after it's uploaded to storage, use [setAccessTier](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-setaccesstier). Along with the tier, you can set the [BlobSetTierOptions](/javascript/api/@azure/storage-blob/blobsettieroptions) property [rehydration priority](archive-rehydrate-overview.md) to bring the block blob out of an archived state. Possible values are `High` or `Standard`.
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/change-blob-access-tier.js" id="Snippet_BatchChangeAccessTier" highlight="8,11,13-16":::
+### [TypeScript](#tab/typescript)
++++ ## Copy a blob into a different access tier Use the BlobClient.[beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl) method to copy a blob. To change the access tier during the copy operation, use the [BlobBeginCopyFromURLOptions](/javascript/api/@azure/storage-blob/blobbegincopyfromurloptions) `tier` property and specify a different access [tier](storage-blob-storage-tiers.md) than the source blob.
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-to-different-access-tier.js" id="Snippet_CopyWithAccessTier" highlight="8":::
+### [TypeScript](#tab/typescript)
++++ ## Use a batch to change access tier for many blobs The batch represents an aggregated set of operations on blobs, such as [delete](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-deleteblobs-1) or [set access tier](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-setblobsaccesstier-1). You need to pass in the correct credential to successfully perform each operation. In this example, the same credential is used for a set of blobs in the same container. Create a [BlobBatchClient](/javascript/api/@azure/storage-blob/blobbatchclient). Use the client to create a batch with the [createBatch()](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-createbatch) method. When the batch is ready, [submit](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-submitbatch) the batch for processing. Use the returned structure to validate each blob's operation was successful.
+### [JavaScript](#tab/javascript)
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/batch-set-access-tier.js" id="Snippet_BatchChangeAccessTier" highlight="16,20":::+
+### [TypeScript](#tab/typescript)
+++ ## Code samples
-* [Set blob's access tier during upload](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string-with-access-tier.js)
-* [Change blob's access tier after upload](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/change-blob-access-tier.js)
-* [Copy blob into different access tier](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-to-different-access-tier.js)
-* [Use a batch to change access tier for many blobs](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/batch-set-access-tier.js)
+- Set blob access tier during upload for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string-with-access-tier.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/src/blob-upload-from-string-with-access-tier.ts)
+- Change blob access tier after upload for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/change-blob-access-tier.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-change-access-tier.ts)
+- Copy blob into different access tier for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-to-different-access-tier.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-copy-to-different-access-tier.ts)
+- Use a batch to change access tier for many blobs for [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/batch-set-access-tier.js) or [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-batch-set-access-tier-for-container.ts)
## Next steps
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
- Title: Set or change a blob's access tier with TypeScript-
-description: Learn how to set or change a blob's access tier with TypeScript in your Azure Storage account using the JavaScript client library.
------ Previously updated : 08/05/2024---
-# Set or change a block blob's access tier with TypeScript
--
-This article shows how to set or change a blob's [access tier](access-tiers-overview.md) with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to set the blob's access tier. To learn more, see the authorization guidance for the following REST API operation:
- - [Set Blob Tier](/rest/api/storageservices/set-blob-tier#authorization)
--
-> [!NOTE]
-> To set the access tier to `Cold` using TypeScript, you must use a minimum [client library](/javascript/api/preview-docs/@azure/storage-blob/) version of 12.13.0.
-
-## Set a blob's access tier during upload
-
-To [upload](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-upload) a blob into a specific access tier, use the [BlockBlobUploadOptions](/javascript/api/@azure/storage-blob/blockblobuploadoptions). The `tier` property choices are: `Hot`, `Cool`, `Cold`, or `Archive`.
---
-## Change a blob's access tier after upload
-
-To change the access tier of a blob after it's uploaded to storage, use [setAccessTier](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-setaccesstier). Along with the tier, you can set the [BlobSetTierOptions](/javascript/api/@azure/storage-blob/blobsettieroptions) property [rehydration priority](archive-rehydrate-overview.md) to bring the block blob out of an archived state. Possible values are `High` or `Standard`.
--
-## Copy a blob into a different access tier
-
-Use the BlobClient.[beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl) method to copy a blob. To change the access tier during the copy operation, use the [BlobBeginCopyFromURLOptions](/javascript/api/@azure/storage-blob/blobbegincopyfromurloptions) `tier` property and specify a different access [tier](storage-blob-storage-tiers.md) than the source blob.
--
-## Use a batch to change access tier for many blobs
-
-The batch represents an aggregated set of operations on blobs, such as [delete](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-deleteblobs-1) or [set access tier](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-setblobsaccesstier-1). You need to pass in the correct credential to successfully perform each operation. In this example, the same credential is used for a set of blobs in the same container.
-
-Create a [BlobBatchClient](/javascript/api/@azure/storage-blob/blobbatchclient). Use the client to create a batch with the [createBatch()](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-createbatch) method. When the batch is ready, [submit](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-submitbatch) the batch for processing. Use the returned structure to validate each blob's operation was successful.
-
-
-## Code samples
-
-* [Set blob's access tier during upload](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string-with-access-tier.js)
-* [Change blob's access tier after upload](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-change-access-tier.ts)
-* [Copy blob into different access tier](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-copy-to-different-access-tier.ts)
-* [Use a batch to change access tier for many blobs](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-batch-set-access-tier-for-container.ts)
-
-## Next steps
--- [Access tiers best practices](access-tiers-best-practices.md)-- [Blob rehydration from the archive tier](archive-rehydrate-overview.md)
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+ # List blobs with JavaScript
This article shows how to list blobs using the [Azure Storage client library for
## About blob listing options
-When you list blobs from your code, you can specify a number of options to manage how results are returned from Azure Storage. You can specify the number of results to return in each set of results, and then retrieve the subsequent sets. You can specify a prefix to return blobs whose names begin with that character or string. And you can list blobs in a flat listing structure, or hierarchically. A hierarchical listing returns blobs as though they were organized into folders.
+When you list blobs from your code, you can specify several options to manage how results are returned from Azure Storage. You can specify the number of results to return in each set of results, and then retrieve the subsequent sets. You can specify a prefix to return blobs whose names begin with that character or string. And you can list blobs in a flat listing structure, or hierarchically. A hierarchical listing returns blobs as though they were organized into folders.
-To list the blobs in a storage account, create a [ContainerClient](storage-blob-javascript-get-started.md#create-a-containerclient-object) then call one of these methods:
+To list the blobs in a container using a flat listing, call the following method:
-- ContainerClient.[listBlobsByHierarcy](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsbyhierarchy)-- ContainerClient.[listBlobsFlat](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsflat)
+- [ContainerClient.listBlobsFlat](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsflat)
-Related functionality can be found in the following methods:
+To list the blobs in a container using a hierarchical listing, call the following method:
-- BlobServiceClient.[findBlobsByTag](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-findblobsbytags)-- ContainerClient.[findBlobsByTag](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-findblobsbytags)
+- ContainerClient.[listBlobsByHierarchy](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsbyhierarchy)
### Manage how many results are returned
By default, a listing operation returns up to 5000 results at a time, but you ca
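The following sketch, which assumes an authorized `ContainerClient` object named `containerClient`, shows one way to page through results and resume from a continuation token:

```javascript
// A minimal sketch: return results in pages and resume with a continuation token
// Assumes `containerClient` is an authorized ContainerClient
const maxPageSize = 10;

// Get the first page of results
let iterator = containerClient.listBlobsFlat().byPage({ maxPageSize });
let page = (await iterator.next()).value;
for (const blob of page.segment.blobItems) {
  console.log(blob.name);
}

// Use the continuation token to pick up where the previous page ended
if (page.continuationToken) {
  iterator = containerClient.listBlobsFlat().byPage({
    continuationToken: page.continuationToken,
    maxPageSize
  });
  page = (await iterator.next()).value;
  for (const blob of page.segment.blobItems) {
    console.log(blob.name);
  }
}
```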
### Filter results with a prefix
-To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
-
-```javascript
-const listOptions = {
- includeCopy: false, // include metadata from previous copies
- includeDeleted: false, // include deleted blobs
- includeDeletedWithVersions: false, // include deleted blobs with versions
- includeLegalHold: false, // include legal hold
- includeMetadata: true, // include custom metadata
- includeSnapshots: true, // include snapshots
- includeTags: true, // include indexable tags
- includeUncommitedBlobs: false, // include uncommitted blobs
- includeVersions: false, // include all blob version
- prefix: '' // filter by blob name prefix
-};
-```
+To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix. For example, passing the prefix string `sample-` returns only blobs whose names start with `sample-`.
-### Return metadata
+### Include blob metadata or other information
-You can return blob metadata with the results by specifying the `includeMetadata` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions).
+To include blob metadata with the results, set the `includeMetadata` property to `true` as part of [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). You can also include snapshots, tags, or versions in the results by setting the appropriate property to `true`.
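The following sketch, which assumes an authorized `ContainerClient` object named `containerClient`, shows how a prefix filter and the include options might be combined; the property names come from `ContainerListBlobsOptions`:

```javascript
// A minimal sketch: filter by prefix and include extra data in the results
// Assumes `containerClient` is an authorized ContainerClient
const listOptions = {
  prefix: 'sample-',       // only blobs whose names start with 'sample-'
  includeMetadata: true,   // include custom metadata
  includeSnapshots: true,  // include blob snapshots
  includeTags: true,       // include index tags
  includeVersions: false   // omit earlier blob versions
};

for await (const blob of containerClient.listBlobsFlat(listOptions)) {
  console.log(blob.name);
}
```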
### Flat listing versus hierarchical listing
If you name your blobs using a delimiter, then you can choose to list blobs hier
## Use a flat listing
-By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory.
-
-The following example lists the blobs in the specified container using a flat listing.
-
-```javascript
-async function listBlobsFlatWithPageMarker(containerClient) {
-
- // page size - artificially low as example
- const maxPageSize = 2;
-
- let i = 1;
- let marker;
+By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory.
- // some options for filtering list
- const listOptions = {
- includeMetadata: false,
- includeSnapshots: false,
- includeTags: false,
- includeVersions: false,
- prefix: ''
- };
+The following example lists the blobs in the specified container using a flat listing. This example includes blob snapshots and blob metadata, if they exist:
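The referenced sample follows roughly this shape; the sketch below assumes an authorized `ContainerClient` object named `containerClient` and prints output page by page:

```javascript
// A minimal sketch: flat listing by page, including snapshots and metadata
// Assumes `containerClient` is an authorized ContainerClient
const listOptions = { includeMetadata: true, includeSnapshots: true };

console.log('Blobs flat list (by page):');
for await (const page of containerClient.listBlobsFlat(listOptions).byPage({ maxPageSize: 2 })) {
  console.log('- Page:');
  for (const blob of page.segment.blobItems) {
    console.log(`  - ${blob.name}`);
  }
}
```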
- let iterator = containerClient.listBlobsFlat(listOptions).byPage({ maxPageSize });
- let response = (await iterator.next()).value;
+### [JavaScript](#tab/javascript)
- // Prints blob names
- for (const blob of response.segment.blobItems) {
- console.log(`Flat listing: ${i++}: ${blob.name}`);
- }
- // Gets next marker
- marker = response.continuationToken;
+### [TypeScript](#tab/typescript)
- // Passing next marker as continuationToken
- iterator = containerClient.listBlobsFlat().byPage({
- continuationToken: marker,
- maxPageSize: maxPageSize * 2
- });
- response = (await iterator.next()).value;
- // Prints next blob names
- for (const blob of response.segment.blobItems) {
- console.log(`Flat listing: ${i++}: ${blob.name}`);
- }
-}
-```
+ The sample output is similar to: ```console
-Flat listing: 1: a1
-Flat listing: 2: a2
-Flat listing: 3: folder1/b1
-Flat listing: 4: folder1/b2
-Flat listing: 5: folder2/sub1/c
-Flat listing: 6: folder2/sub1/d
+Blobs flat list (by page):
+- Page:
+ - a1
+ - a2
+- Page:
+ - folder1/b1
+ - folder1/b2
+- Page:
+ - folder2/sub1/c
+ - folder2/sub1/d
``` > [!NOTE]
Flat listing: 6: folder2/sub1/d
When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
-To list blobs hierarchically, call the [BlobContainerClient.listBlobsByHierarchy](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsbyhierarchy) method.
-
-The following example lists the blobs in the specified container using a hierarchical listing, with an optional segment size specified, and writes the blob name to the console window.
-
-```javascript
-// Recursively list virtual folders and blobs
-// Pass an empty string for prefixStr to list everything in the container
-async function listBlobHierarchical(containerClient, prefixStr) {
-
- // page size - artificially low as example
- const maxPageSize = 2;
+To list blobs hierarchically, use the following method:
- // some options for filtering list
- const listOptions = {
- includeMetadata: false,
- includeSnapshots: false,
- includeTags: false,
- includeVersions: false,
- prefix: prefixStr
- };
+- [BlobContainerClient.listBlobsByHierarchy](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsbyhierarchy)
- let delimiter = '/';
- let i = 1;
- console.log(`Folder ${delimiter}${prefixStr}`);
+The following example lists the blobs in the specified container using a hierarchical listing. In this example, the prefix parameter is initially set to an empty string to list all blobs in the container. The example then calls the listing operation recursively to traverse the virtual directory hierarchy and list blobs.
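The referenced sample follows roughly this recursive pattern; the sketch below assumes an authorized `ContainerClient` object named `containerClient`:

```javascript
// A minimal sketch: recursively list virtual directories and blobs
// Assumes `containerClient` is an authorized ContainerClient; pass '' to start at the container root
async function listBlobsHierarchical(containerClient, prefix) {
  const delimiter = '/';
  console.log(`Folder ${delimiter}${prefix}`);

  for await (const item of containerClient.listBlobsByHierarchy(delimiter, { prefix })) {
    if (item.kind === 'prefix') {
      // A virtual directory: recurse into it using its name as the new prefix
      await listBlobsHierarchical(containerClient, item.name);
    } else {
      console.log(`  Blob: ${item.name}`);
    }
  }
}
```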
- for await (const response of containerClient
- .listBlobsByHierarchy(delimiter, listOptions)
- .byPage({ maxPageSize })) {
+### [JavaScript](#tab/javascript)
- console.log(` Page ${i++}`);
- const segment = response.segment;
- if (segment.blobPrefixes) {
+### [TypeScript](#tab/typescript)
- // Do something with each virtual folder
- for await (const prefix of segment.blobPrefixes) {
- // build new prefix from current virtual folder
- await listBlobHierarchical(containerClient, prefix.name);
- }
- }
- for (const blob of response.segment.blobItems) {
-
- // Do something with each blob
- console.log(`\tBlobItem: name - ${blob.name}`);
- }
- }
-}
-```
+ The sample output is similar to:
Folder /folder2/sub1/
To learn more about how to list blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
+### Code samples
+
+- View [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/list-blobs.js) and [TypeScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blobs-list.ts) code samples from this article (GitHub)
+ ### REST API operations The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for listing blobs use the following REST API operation: - [List Blobs](/rest/api/storageservices/list-blobs) (REST API)
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/list-blobs.js)- [!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)] ### See also
storage Storage Blobs List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md
- Title: List blobs with TypeScript-
-description: Learn how to list blobs with TypeScript in your storage account using the Azure Storage client library for JavaScript.
------ Previously updated : 08/05/2024----
-# List blobs with TypeScript
--
-This article shows how to list blobs using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
-
-## Prerequisites
--- The examples in this article assume you already have a project set up to work with the Azure Blob Storage client library for JavaScript. To learn about setting up your project, including package installation, importing modules, and creating an authorized client object to work with data resources, see [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to list blobs. To learn more, see the authorization guidance for the following REST API operation:
- - [List Blobs](/rest/api/storageservices/list-blobs#authorization)
-
-## About blob listing options
-
-When you list blobs from your code, you can specify a number of options to manage how results are returned from Azure Storage. You can specify the number of results to return in each set of results, and then retrieve the subsequent sets. You can specify a prefix to return blobs whose names begin with that character or string. And you can list blobs in a flat listing structure, or hierarchically. A hierarchical listing returns blobs as though they were organized into folders.
-
-To list the blobs in a storage account, create a [ContainerClient](storage-blob-typescript-get-started.md#create-a-containerclient-object) then call one of these methods:
---- ContainerClient.[listBlobsByHierarcy](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsbyhierarchy)-- ContainerClient.[listBlobsFlat](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsflat)-
-Related functionality can be found in the following methods:
--- BlobServiceClient.[findBlobsByTag](/javascript/api/@azure/storage-blob/blobserviceclient#@azure-storage-blob-blobserviceclient-findblobsbytags)-- ContainerClient.[findBlobsByTag](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-findblobsbytags)-
-### Manage how many results are returned
-
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results)
-
-### Filter results with a prefix
-
-To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
-
-```typescript
-const listOptions: ContainerListBlobsOptions = {
- includeCopy: false, // include metadata from previous copies
- includeDeleted: false, // include deleted blobs
- includeDeletedWithVersions: false, // include deleted blobs with versions
- includeLegalHost: false, // include legal host id
- includeMetadata: true, // include custom metadata
- includeSnapshots: true, // include snapshots
- includeTags: true, // include indexable tags
- includeUncommittedBlobs: false, // include uncommitted blobs
- includeVersions: false, // include all blob version
- prefix: '' // filter by blob name prefix
-};
-```
-
-### Return metadata
-
-You can return blob metadata with the results by specifying the `includeMetadata` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions).
-
-### Flat listing versus hierarchical listing
-
-Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character.
-
-To organize blobs into virtual directories, use a delimiter character in the blob name. The default delimiter character is a forward slash (/), but you can specify any character as the delimiter.
-
-If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-
-## Use a flat listing
-
-By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory.
-
-The following example lists the blobs in the specified container using a flat listing.
---
-The sample output is similar to:
-
-```console
-Flat listing: 1: a1
-Flat listing: 2: a2
-Flat listing: 3: folder1/b1
-Flat listing: 4: folder1/b2
-Flat listing: 5: folder2/sub1/c
-Flat listing: 6: folder2/sub1/d
-```
-
-> [!NOTE]
-> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents).
-
-## Use a hierarchical listing
-
-When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
-
-To list blobs hierarchically, call the [BlobContainerClient.listBlobsByHierarchy](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-listblobsbyhierarchy) method.
-
-The following example lists the blobs in the specified container using a hierarchical listing, with an optional segment size specified, and writes the blob name to the console window.
---
-The sample output is similar to:
-
-```console
-Folder /
- Page 1
- BlobItem: name - a1
- BlobItem: name - a2
- Page 2
-Folder /folder1/
- Page 1
- BlobItem: name - folder1/b1
- BlobItem: name - folder1/b2
-Folder /folder2/
- Page 1
-Folder /folder2/sub1/
- Page 1
- BlobItem: name - folder2/sub1/c
- BlobItem: name - folder2/sub1/d
- Page 2
- BlobItem: name - folder2/sub1/e
-```
-
-> [!NOTE]
-> Blob snapshots cannot be listed in a hierarchical listing operation.
-
-## Resources
-
-To learn more about how to list blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for listing blobs use the following REST API operation:
--- [List Blobs](/rest/api/storageservices/list-blobs) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blobs-list.ts)--
-### See also
--- [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)-- [Blob versioning](versioning-overview.md)
storage Storage Blobs Tune Upload Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-javascript.md
Previously updated : 08/05/2024 Last updated : 10/28/2024 ms.devlang: javascript-+ # Performance tuning for uploads and downloads with JavaScript
To keep data moving efficiently, the client libraries might not always reach the
The following code example shows how to set values for [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) and include the options as part of an upload method call. The values provided in the samples aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
+#### [JavaScript](#tab/javascript)
+ ```javascript // Specify data transfer options const uploadOptions = {
const blockBlobClient = containerClient.getBlockBlobClient(blobName);
await blockBlobClient.uploadFile(localFilePath, uploadOptions); ```
+#### [TypeScript](#tab/typescript)
+
+```typescript
+// Specify data transfer options
+const uploadOptions: BlockBlobParallelUploadOptions = {
+ blockSize: 4 * 1024 * 1024, // 4 MiB max block size
+ concurrency: 2, // maximum number of parallel transfer workers
+ maxSingleShotSize: 8 * 1024 * 1024, // 8 MiB initial transfer size
+};
+
+// Create blob client from container client
+const blockBlobClient: BlockBlobClient = containerClient.getBlockBlobClient(blobName);
+
+await blockBlobClient.uploadFile(localFilePath, uploadOptions);
+```
+++ In this example, we set the maximum number of parallel transfer workers to 2 using the `concurrency` property. We also set `maxSingleShotSize` to 8 MiB. If the blob size is smaller than 8 MiB, only a single request is necessary to complete the upload operation. If the blob size is larger than 8 MiB, the blob is uploaded in chunks with a maximum chunk size of 4 MiB, which we define in the `blockSize` property. ### Performance considerations for uploads
During a download using `downloadToBuffer`, the Storage client libraries split a
The following code example shows how to set values for [BlobDownloadToBufferOptions](/javascript/api/@azure/storage-blob/blobdownloadtobufferoptions) and include the options as part of a [downloadToBuffer](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtobuffer) method call. The values provided in the samples aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
+#### [JavaScript](#tab/javascript)
+ ```javascript // Specify data transfer options
- const downloadToBufferOptions = {
- blockSize: 4 * 1024 * 1024, // 4 MiB max block size
- concurrency: 2, // maximum number of parallel transfer workers
- }
+const downloadToBufferOptions = {
+ blockSize: 4 * 1024 * 1024, // 4 MiB max block size
+ concurrency: 2, // maximum number of parallel transfer workers
+}
- // Download data to buffer
- const result = await client.downloadToBuffer(offset, count, downloadToBufferOptions);
+// Download data to buffer
+const result = await client.downloadToBuffer(offset, count, downloadToBufferOptions);
```
+#### [TypeScript](#tab/typescript)
+
+```typescript
+// Specify data transfer options
+const downloadToBufferOptions: BlobDownloadToBufferOptions = {
+ blockSize: 4 * 1024 * 1024, // 4 MiB max block size
+ concurrency: 2, // maximum number of parallel transfer workers
+}
+
+// Download data to buffer
+const result = await client.downloadToBuffer(offset, count, downloadToBufferOptions);
+```
+++ ## Related content - To understand more about factors that can influence performance for Azure Storage operations, see [Latency in Blob storage](storage-blobs-latency.md).
storage Storage Retry Policy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy-javascript.md
Previously updated : 08/05/2024- Last updated : 10/28/2024+
-# Implement a retry policy with JavaScript
+# Implement a retry policy with JavaScript or TypeScript
Any application that runs in the cloud or communicates with remote services and resources must be able to handle transient faults. It's common for these applications to experience faults due to a momentary loss of network connectivity, a request timeout when a service or resource is busy, or other factors. Developers should build applications to handle transient faults transparently to improve stability and resiliency.
The following table lists the parameters available when creating a [StorageRetry
In the following code example, we configure the retry options in an instance of [StorageRetryOptions](/javascript/api/@azure/storage-blob/storageretryoptions), pass it to a new [StoragePipelineOptions](/javascript/api/@azure/storage-blob/storagepipelineoptions) instance, and pass `pipeline` when instantiating `BlobServiceClient`:
+### [JavaScript](#tab/javascript)
+ ```javascript const options = { retryOptions: {
const blobServiceClient = new BlobServiceClient(
); ```
+### [TypeScript](#tab/typescript)
+
+```typescript
+const options: StoragePipelineOptions = {
+ retryOptions: {
+ maxTries: 4,
+ retryDelayInMs: 3 * 1000,
+ maxRetryDelayInMs: 120 * 1000,
+ retryPolicyType: StorageRetryPolicyType.EXPONENTIAL
+ },
+};
+
+const pipeline: Pipeline = newPipeline(credential, options);
+
+const blobServiceClient = new BlobServiceClient(
+ `https://${accountName}.blob.core.windows.net`,
+ credential,
+ pipeline
+);
+```
+++ In this example, each service request issued from the `BlobServiceClient` object uses the retry options as defined in `retryOptions`. This policy applies to client requests. You can configure various retry strategies for service clients based on the needs of your app. ## Related content
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
description: Learn how to deploy an Azure Elastic SAN with the Azure portal, Azu
Previously updated : 02/13/2024 Last updated : 10/24/2024 # Deploy an Elastic SAN
-This article explains how to deploy and configure an elastic storage area network (SAN). If you're interested in Azure Elastic SAN, or have any feedback you'd like to provide, fill out [this](https://aka.ms/ElasticSANPreviewSignup) optional survey.
+This article explains how to deploy and configure an elastic storage area network (SAN).
## Prerequisites
This article explains how to deploy and configure an elastic storage area networ
# [PowerShell](#tab/azure-powershell)
-Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in all of the examples in this article:
+Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. One set creates an elastic SAN with [autoscaling](elastic-san-planning.md#autoscaling-preview) (preview) enabled, and the other creates an elastic SAN with [autoscaling](elastic-san-planning.md#autoscaling-preview) disabled. Replace all placeholder text with your own values and use the same variables in all of the examples in this article:
| Placeholder | Description | |-|-|
Use one of these sets of sample code to create an Elastic SAN that uses locally
| `<VolumeName>` | The name of the Elastic SAN Volume to be created. | | `<Location>` | The region where the new resources will be created. | | `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN will use locally redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
+| `<AutoScalePolicyEnforcement>` | The setting that determines whether autoscaling is enabled for the Elastic SAN. <br>*This value is optional, but if passed in, it must be 'Enabled' or 'Disabled'.* |
+| `<UnusedSizeTiB>` | The capacity (in TiB) on your Elastic SAN that you want to keep free and unused. If you use more space than this amount, the scale-up operation is automatically triggered, increasing the size of your SAN. This parameter is optional but is required to enable autoscaling. |
+|`<IncreaseCapacityUnitByTiB>` | This parameter sets the TiB of additional capacity units that your SAN scales up by when autoscale gets triggered. This parameter is optional but is required to enable autoscaling. |
+|`<CapacityUnitScaleUpLimit>` | This parameter sets the maximum capacity (size) that your SAN can grow to using autoscaling. Your SAN won't automatically scale past this size. This parameter is optional but is required to enable autoscaling. |
-The following command creates an Elastic SAN that uses **locally redundant** storage.
+The following command creates an Elastic SAN that uses locally redundant storage without autoscaling enabled.
```azurepowershell # Define some variables.
Connect-AzAccount
New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -AvailabilityZone $Zone -Location $Location -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS ```
-The following command creates an Elastic SAN that uses **zone-redundant** storage.
+The following command creates an Elastic SAN that uses locally redundant storage with autoscaling enabled.
+
+```azurepowershell
+# Define some variables.
+$RgName = "<ResourceGroupName>"
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<ElasticSanVolumeGroupName>"
+$VolumeName = "<VolumeName>"
+$Location = "<Location>"
+$Zone = <Zone>
+$AutoScalePolicyEnforcement = "Enabled"
+$UnusedSizeTiB = <UnusedSizeTiB>
+$IncreaseCapacityUnitByTiB = <IncreaseCapacityUnitByTiB>
+$CapacityUnitScaleUpLimit = <CapacityUnitScaleUpLimit>
+
+# Connect to Azure
+Connect-AzAccount
+
+# Create the SAN.
+New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -AvailabilityZone $Zone -Location $Location -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS -AutoScalePolicyEnforcement $AutoScalePolicyEnforcement -UnusedSizeTiB $UnusedSizeTiB -IncreaseCapacityUnitByTiB $IncreaseCapacityUnitByTiB -CapacityUnitScaleUpLimit $CapacityUnitScaleUpLimit
+```
+
+The following command creates an Elastic SAN that uses zone-redundant storage, without enabling autoscale.
```azurepowershell # Define some variables.
New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -Location $Location
# [Azure CLI](#tab/azure-cli)
-Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in all of the examples in this article:
+Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. One set creates an elastic SAN with [autoscaling](elastic-san-planning.md#autoscaling-preview) (preview) enabled, and the other creates an elastic SAN with [autoscaling](elastic-san-planning.md#autoscaling-preview) disabled. Replace all placeholder text with your own values and use the same variables in all of the examples in this article:
| Placeholder | Description | |-|-|
Use one of these sets of sample code to create an Elastic SAN that uses locally
| `<VolumeName>` | The name of the Elastic SAN Volume to be created. | | `<Location>` | The region where the new resources will be created. | | `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN uses locally redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
+| `<AutoScalePolicyEnforcement>` | The setting that determines whether autoscaling is enabled for the Elastic SAN. <br>*This value is optional, but if passed in, it must be 'Enabled' or 'Disabled'.* |
+| `<UnusedSizeTiB>` | The capacity (in TiB) on your Elastic SAN that you want to keep free and unused. If you use more space than this amount, the scale-up operation is automatically triggered, increasing the size of your SAN. This parameter is optional but is required to enable autoscaling. |
+|`<IncreaseCapacityUnitByTiB>` | This parameter sets the TiB of additional capacity units that your SAN scales up by when autoscale gets triggered. This parameter is optional but is required to enable autoscaling. |
+|`<CapacityUnitScaleUpLimit>` | This parameter sets the maximum capacity (size) that your SAN can grow to using autoscaling. Your SAN won't automatically scale past this size. This parameter is optional but is required to enable autoscaling. |
+
-The following command creates an Elastic SAN that uses **locally redundant** storage.
+The following command creates an Elastic SAN that uses locally redundant storage without autoscaling enabled.
```azurecli # Define some variables.
az login
az elastic-san create -n $EsanName -g $RgName -l $Location --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}" --availability-zones $Zone ```
-The following command creates an Elastic SAN that uses **zone-redundant** storage.
+The following command creates an Elastic SAN that uses locally redundant storage with autoscaling enabled.
+
+```azurecli
+# Define some variables.
+RgName="<ResourceGroupName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<ElasticSanVolumeGroupName>"
+VolumeName="<VolumeName>"
+Location="<Location>"
+Zone=<Zone>
+AutoScalePolicyEnforcement="Enabled"
+UnusedSizeTiB="<UnusedSizeTiB>"
+IncreaseCapacityUnitByTiB="<IncreaseCapacityUnitByTiB>"
+CapacityUnitScaleUpLimit="<CapacityUnitScaleUpLimit>"
+
+# Connect to Azure
+az login
+
+# Create an Elastic SAN
+az elastic-san create -n $EsanName -g $RgName -l $Location --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}" --availability-zones $Zone --auto-scale-policy-enforcement $AutoScalePolicyEnforcement --unused-size-tib $UnusedSizeTiB --increase-capacity-unit-by-tib $IncreaseCapacityUnitByTiB --capacity-unit-scale-up-limit-tib $CapacityUnitScaleUpLimit
+```
+
+The following command creates an Elastic SAN that uses zone-redundant storage, with autoscaling disabled.
```azurecli # Define some variables.
storage Elastic San Expand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-expand.md
description: Learn how to increase or decrease the size of an Azure Elastic SAN
Previously updated : 05/31/2024 Last updated : 10/24/2024
This article covers increasing or decreasing the size of an Elastic storage area
To increase the size of your volumes, increase the size of your Elastic SAN first. To decrease the size of your SAN, make sure your volumes aren't using the extra size and then change the size of the SAN.
-# [PowerShell](#tab/azure-powershell)
+# [PowerShell](#tab/azure-powershell-basesize)
```azurepowershell
Update-AzElasticSan -ResourceGroupName $resourceGroupName -Name $sanName -BaseSi
```
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/azure-cli-basesize)
```azurecli # You can either update the base size or the additional size.
Update-AzElasticSan -ResourceGroupName $resourceGroupName -Name $sanName -BaseSi
az elastic-san update -e $sanName -g $resourceGroupName --base-size-tib $newBaseSizeTib ```++
+## Autoscale (preview)
+
+As a preview feature, you can automatically scale up your SAN by specific increments until a specified maximum size. The capacity increments have a minimum of 1 TiB, and you can only set up an autoscale policy for additional capacity units. So when autoscaling, your performance won't automatically scale up as your storage does. Here's an example of setting an autoscale policy using Azure CLI:
+
+`az elastic-san update -n mySanName -g myResourceGroupName --auto-scale-policy-enforcement "Enabled" --unused-size-tib 20 --increase-capacity-unit-by-tib 5 --capacity-unit-scale-up-limit-tib 150`
+
+Running that example command sets the following policy on the SAN: if the SAN's unused capacity (free space) is less than 20 TiB, increase the SAN's additional capacity in 5 TiB increments until the unused capacity is at least 20 TiB, and don't allow the SAN's total capacity to exceed 150 TiB.
+
+You can't use an autoscale policy to scale down. To reduce the size of your SAN, follow the manual process in the previous section. If you have configured an autoscaling policy, disable it before reducing the size of your SAN.
+++
+The following script can be run to enable an autoscale policy for an existing Elastic SAN.
+
+# [PowerShell](#tab/azure-powershell-autoscale)
+```azurepowershell
+# Define some variables.
+$autoscalePolicyEnforcement = "Enabled" # Whether autoscale is enabled or disabled at the SAN level
+$unusedSizeTiB = "<UnusedSizeTiB>" # Unused capacity on the SAN
+$increaseCapacityUnit = "<IncreaseCapacityUnit>" # Amount by which the SAN will scale up if the policy is triggered
+$capacityUnitScaleUpLimit = "<CapacityUnitScaleUpLimit>" # Maximum capacity until which scale up operations will occur
+
+Update-AzElasticSan -ResourceGroupName myresourcegroup -Name myelasticsan -AutoScalePolicyEnforcement $autoscalePolicyEnforcement -UnusedSizeTiB $unusedSizeTiB -IncreaseCapacityUnitByTiB $increaseCapacityUnit -CapacityUnitScaleUpLimitTiB $capacityUnitScaleUpLimit
+```
+
+# [Azure CLI](#tab/azure-cli-autoscale)
+```azurecli
+# Define some variables.
+autoscalePolicyEnforcement="Enabled" # Whether autoscale is enabled or disabled at the SAN level
+unusedSizeTiB="<UnusedSizeTiB>" # Unused capacity on the SAN
+increaseCapacityUnit="<IncreaseCapacityUnit>" # Amount by which the SAN will scale up if the policy is triggered
+capacityUnitScaleUpLimit="<CapacityUnitScaleUpLimit>" # Maximum capacity until which scale up operations will occur
+
+az elastic-san update -n $sanName -g $resourceGroupName --auto-scale-policy-enforcement $autoscalePolicyEnforcement --unused-size-tib $unusedSizeTiB --increase-capacity-unit-by-tib $increaseCapacityUnit --capacity-unit-scale-up-limit-tib $capacityUnitScaleUpLimit
+```
## Resize a volume
-Once you've expanded the size of your SAN, you can either create more volumes, or expand the size of an existing volume. You cannot decrease the size of your volumes.
+Once you expand the size of your SAN, you can either create more volumes, or expand the size of an existing volume. You can't decrease the size of your volumes.
# [PowerShell](#tab/azure-powershell)
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
description: Plan for an Azure Elastic SAN deployment. Learn about storage capac
Previously updated : 02/13/2024 Last updated : 10/24/2024 - ignite-2023-elastic-SAN
You create volumes from the storage that you allocated to your Elastic SAN. When
Using the same example of a 100 TiB SAN that has 500,000 IOPS and 20,000 MB/s. Say this SAN had 100 1 TiB volumes. You could potentially have six of these volumes operating at their maximum performance (80,000 IOPS, 1,280 MB/s) since this would be below the SAN's limits. But if seven volumes all needed to operate at maximum at the same time, they wouldn't be able to. Instead the performance of the SAN would be split evenly among them.
+### Autoscaling (preview)
+
+As a preview feature, you can automatically scale up your SAN by specific increments until a specified maximum size using an autoscale policy. An autoscale policy is helpful for environments where storage consumption continually increases, like environments using volume snapshots. Volume snapshots consume some of the total capacity of an elastic SAN, and having an autoscale policy helps ensure your SAN doesn't run out of space to store volume snapshots.
+
+When setting an autoscale policy, there's a minimum capacity increment of 1 TiB, and you can only automatically scale additional capacity, rather than base capacity. So when autoscaling, the IOPS and throughput of your SAN won't automatically scale up.
+
+Here's an example of how an autoscale policy works. Say you have an elastic SAN that has 100 TiB total storage capacity. This SAN has volume snapshots configured, so you want the capacity to automatically scale to accommodate your snapshots. You can set a policy so that whenever the unused capacity is less than or equal to 20 TiB, additional capacity on your SAN increases by 5 TiB, up to a maximum of 150 TiB total storage. So, if you use 80 TiB of space, it automatically provisions an additional 5 TiB, so your SAN now has a total storage capacity of 105 TiB.
+ ## Networking In the Elastic SAN, you can enable or disable public network access at the Elastic SAN level. You can also configure access to volume groups in the SAN over both public [Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group. If you disable public access at the SAN level, access to the volume groups within that SAN is only available over private endpoints, regardless of individual configurations for the volume group.
storage Files Monitoring Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-monitoring-alerts.md
To create an alert that will notify you if a file share is being throttled, foll
1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
-2. In the **Condition** tab, select the **Transactions** metric.
+2. In the **Scope** tab, select **Select Scope**.
-3. In the **Dimension name** drop-down list, select **Response type**.
+3. In the **Select a resource** blade, expand the **storage account**, select the **file** resource, and then select **Apply**.
-4. In the **Dimension values** drop-down list, select the appropriate response types for your file share.
+4. In the **Condition** tab, select the **Transactions** metric.
+
+5. In the **Dimension name** drop-down list, select **Response type**.
+
+6. In the **Dimension values** drop-down list, select the appropriate response types for your file share.
For standard file shares, select the following response types:
To create an alert that will notify you if a file share is being throttled, foll
1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
-2. In the **Condition** tab of the **Create an alert rule** dialog box, select the **File Capacity** metric.
+2. In the **Scope** tab, select **Select Scope**.
+
+3. In the **Select a resource** blade, expand the **storage account**, select the **file** resource, and then select **Apply**.
-3. For **premium file shares**, select the **Dimension name** drop-down list, and then select **File Share**. For **standard file shares**, skip to step 5.
+4. In the **Condition** tab of the **Create an alert rule** dialog box, select the **File Capacity** metric.
+
+5. For **premium file shares**, select the **Dimension name** drop-down list, and then select **File Share**. For **standard file shares**, skip to step 7.
> [!NOTE] > If the file share is a standard file share, the **File Share** dimension won't list the file share(s) because per-share metrics aren't available for standard file shares. Alerts for standard file shares are based on all file shares in the storage account. Because per-share metrics aren't available for standard file shares, the recommendation is to have one file share per storage account.
-4. Select the **Dimension values** drop-down and select the file share(s) that you want to alert on.
+6. Select the **Dimension values** drop-down and select the file share(s) that you want to alert on.
-5. Enter the **Threshold value** in bytes. For example, if the file share size is 100 TiB and you want to receive an alert when the file share size is 80% of capacity, the threshold value in bytes is 87960930222080.
+7. Enter the **Threshold value** in bytes. For example, if the file share size is 100 TiB and you want to receive an alert when the file share size is 80% of capacity, the threshold value in bytes is 87960930222080.
-6. Define the alert parameters (threshold value, operator, lookback period, and frequency of evaluation).
+8. Define the alert parameters (threshold value, operator, lookback period, and frequency of evaluation).
-7. Select the **Actions** tab to add an action group (email, SMS, etc.) to the alert. You can select an existing action group or create a new action group.
+9. Select the **Actions** tab to add an action group (email, SMS, etc.) to the alert. You can select an existing action group or create a new action group.
-8. Select the **Details** tab to fill in the details of the alert such as the alert name, description, and severity.
+10. Select the **Details** tab to fill in the details of the alert such as the alert name, description, and severity.
-9. Select **Review + create** to create the alert.
+11. Select **Review + create** to create the alert.
## How to create an alert if the Azure file share egress has exceeded 500 GiB in a day 1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
-2. In the **Condition** tab of the **Create an alert rule** dialog box, select the **Egress** metric.
+2. In the **Scope** tab, select **Select Scope**.
-3. For **premium file shares**, select the **Dimension name** drop-down list and select **File Share**. For **standard file shares**, skip to step 5.
+3. In the **Select a resource** blade, expand the **storage account**, select the **file** resource, and then select **Apply**.
+
+4. In the **Condition** tab of the **Create an alert rule** dialog box, select the **Egress** metric.
+
+5. For **premium file shares**, select the **Dimension name** drop-down list and select **File Share**. For **standard file shares**, skip to step 7.
> [!NOTE] > If the file share is a standard file share, the **File Share** dimension won't list the file share(s) because per-share metrics aren't available for standard file shares. Alerts for standard file shares are based on all file shares in the storage account. Because per-share metrics aren't available for standard file shares, the recommendation is to have one file share per storage account.
To create an alert for high server latency (average), follow these steps.
1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
-2. In the **Condition** tab of the **Create an alert rule** dialog box, select the **Success Server Latency** metric.
+2. In the **Scope** tab, select **Select Scope**.
+
+3. In the **Select a resource** blade, expand the **storage account**, select the **file** resource, and then select **Apply**.
-3. Select the **Dimension values** drop-down and select the file share(s) that you want to alert on.
+4. In the **Condition** tab of the **Create an alert rule** dialog box, select the **Success Server Latency** metric.
+
+5. Select the **Dimension values** drop-down and select the file share(s) that you want to alert on.
> [!NOTE] > To alert on the overall latency experience, leave **Dimension values** unchecked. To alert on the latency of specific transactions, select the API Name in the drop-down list. For example, selecting the Read and Write API names with the equal operator will only display latency for data transactions. Selecting the Read and Write API name with the not equal operator will only display latency for metadata transactions.
-4. Define the **Alert Logic** by selecting either Static or Dynamic. For Static, select **Average** Aggregation, **Greater than** Operator, and Threshold value. For Dynamic, select **Average** Aggregation, **Greater than** Operator, and Threshold Sensitivity.
+6. Define the **Alert Logic** by selecting either Static or Dynamic. For Static, select **Average** Aggregation, **Greater than** Operator, and Threshold value. For Dynamic, select **Average** Aggregation, **Greater than** Operator, and Threshold Sensitivity.
> [!TIP] > If you're using a static threshold, the metric chart can help determine a reasonable threshold value if the file share is currently experiencing high latency. If you're using a dynamic threshold, the metric chart will display the calculated thresholds based on recent data. We recommend using the Dynamic logic with Medium threshold sensitivity and further adjust as needed. To learn more, see [Understanding dynamic thresholds](/azure/azure-monitor/alerts/alerts-dynamic-thresholds#understand-dynamic-thresholds-charts).
-5. Define the lookback period and frequency of evaluation.
+7. Define the lookback period and frequency of evaluation.
-6. Select the **Actions** tab to add an action group (email, SMS, etc.) to the alert. You can select an existing action group or create a new action group.
+8. Select the **Actions** tab to add an action group (email, SMS, etc.) to the alert. You can select an existing action group or create a new action group.
-7. Select the **Details** tab to fill in the details of the alert such as the alert name, description, and severity.
+9. Select the **Details** tab to fill in the details of the alert such as the alert name, description, and severity.
-8. Select **Review + create** to create the alert.
+10. Select **Review + create** to create the alert.
## How to create an alert if the Azure file share availability is less than 99.9% 1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
-2. In the **Condition** tab, select the **Availability** metric.
+2. In the **Scope** tab, select **Select Scope**.
-3. In the **Alert logic** section, provide the following:
+3. In the **Select a resource** blade, expand the **storage account**, select the **file** resource, and then select **Apply**.
+
+4. In the **Condition** tab, select the **Availability** metric.
+
+5. In the **Alert logic** section, provide the following:
- **Threshold** = **Static** - **Aggregation type** = **Average** - **Operator** = **Less than** - **Threshold value** enter **99.9**
-4. In the **Split by dimensions** section:
+6. In the **Split by dimensions** section:
- Select the **Dimension name** drop-down and select **File Share**. - Select the **Dimension values** drop-down and select the file share(s) that you want to alert on. > [!NOTE] > If the file share is a standard file share, the **File Share** dimension won't list the file share(s) because per-share metrics aren't available for standard file shares. Availability alerts for standard file shares will be at the storage account level.
-6. In the **When to evaluate** section, select the following:
+7. In the **When to evaluate** section, select the following:
- **Check every** = **5 minutes** - **Lookback period** = **1 hour**
-7. Click **Next** to go to the **Actions** tab and add an action group (email, SMS, etc.) to the alert. You can select an existing action group or create a new action group.
+8. Click **Next** to go to the **Actions** tab and add an action group (email, SMS, etc.) to the alert. You can select an existing action group or create a new action group.
-8. Click **Next** to go to the **Details** tab and fill in the details of the alert such as the alert name, description, and severity.
+9. Click **Next** to go to the **Details** tab and fill in the details of the alert such as the alert name, description, and severity.
-9. Select **Review + create** to create the alert.
+10. Select **Review + create** to create the alert.
## Related content
storage Redundancy Premium File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/redundancy-premium-file-shares.md
Previously updated : 10/14/2024 Last updated : 10/30/2024
ZRS for premium file shares is available for a subset of Azure regions:
- (Asia Pacific) Korea Central - (Asia Pacific) East Asia - (Asia Pacific) Japan East
+- (Asia Pacific) Japan West
- (Asia Pacific) Central India - (Canada) Canada Central - (Europe) France Central
storage Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/smb-performance.md
description: Learn about ways to improve performance and throughput for premium
Previously updated : 10/03/2024 Last updated : 10/30/2024
This preview feature improves the following metadata APIs and can be used from b
Currently this preview feature is only available for premium SMB file shares (file shares in the FileStorage storage account kind). There are no additional costs associated with using this feature. ### Register for the feature
-To get started, register for the feature using Azure portal or PowerShell.
+
+To get started, register for the feature using the Azure portal or Azure PowerShell.
# [Azure portal](#tab/portal)
Register-AzProviderFeature -FeatureName AzurePremiumFilesMetadataCacheFeature -P
```
+> [!IMPORTANT]
+> Allow 1-2 days for accounts to be onboarded once registration is complete.
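If you prefer the Azure CLI over PowerShell, the following sketch registers the preview feature and checks its registration state. It assumes the feature flag is exposed under the `Microsoft.Storage` resource provider namespace.

```azurecli
# Sketch only: assumes AzurePremiumFilesMetadataCacheFeature is registered under Microsoft.Storage.
az feature register --namespace Microsoft.Storage --name AzurePremiumFilesMetadataCacheFeature

# Check the registration state; it should eventually report "Registered".
az feature show --namespace Microsoft.Storage --name AzurePremiumFilesMetadataCacheFeature --query properties.state
```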
+ ### Regional availability Currently the metadata caching preview is only available in the following Azure regions. To request additional region support, [sign up for the public preview](https://aka.ms/PremiumFilesMetadataCachingPreview). - Asia East - Australia Central
+- Australia East
+- Australia Southeast
- Brazil South - Canada Central
+- Canada East
+- Europe North
- France Central - Germany West Central - Japan East
Currently the metadata caching preview is only available in the following Azure
- Jio India West - India Central - India South
+- India West
+- Israel Central
+- Italy North
- Korea Central
+- Korea South
- Mexico Central - Norway East - Poland Central
Currently the metadata caching preview is only available in the following Azure
- UAE North - UK West - UK South
+- US North Central
- US South Central - US West Central
+- US West 2
- US West 3 > [!TIP]
virtual-desktop Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/agent-overview.md
Title: Get started with the Azure Virtual Desktop Agent description: An overview of the Azure Virtual Desktop Agent and update processes.-+ Last updated 12/16/2020--++ + # Get started with the Azure Virtual Desktop Agent In the Azure Virtual Desktop Service framework, there are three main components: the Remote Desktop client, the service, and the virtual machines. These virtual machines live in the customer subscription where the Azure Virtual Desktop agent and agent bootloader are installed. The agent acts as the intermediate communicator between the service and the virtual machines, enabling connectivity. Therefore, if you're experiencing any issues with the agent installation, update, or configuration, your virtual machines won't be able to connect to the service. The agent bootloader is the executable that loads the agent.
virtual-desktop Agent Updates Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/agent-updates-diagnostics.md
Title: Set up diagnostics for monitoring agent updates description: How to set up diagnostic reports to monitor agent updates.-+ Last updated 03/20/2023--++ + # Set up diagnostics to monitor agent updates Diagnostic logs can tell you which agent version is installed for an update, when it was installed, and if the update was successful. If an update is unsuccessful, it might be because the session host was turned off during the update. If that happened, you should turn the session host back on.
virtual-desktop Host Pool Management Approaches https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/host-pool-management-approaches.md
Title: Host pool management approaches (preview) - Azure Virtual Desktop
-description: Learn about the different host pool management approaches in Azure Virtual Desktop.
+ Title: Host pool management approaches - Azure Virtual Desktop
+description: Learn about the different host pool management approaches of session host configuration management and standard management in Azure Virtual Desktop.
Last updated 10/01/2024
-# Host pool management approaches for Azure Virtual Desktop (preview)
+# Host pool management approaches for Azure Virtual Desktop
> [!IMPORTANT]
-> Host pools with a session host configuration for Azure Virtual Desktop are currently in PREVIEW. This preview is provided as-is, with all faults and as available, and are excluded from the service-level agreements (SLAs) or any limited warranties Microsoft provides for Azure services in general availability.
+> Host pools with a session host configuration for Azure Virtual Desktop are currently in PREVIEW. This preview is provided as-is, with all faults and as available, and is excluded from the service-level agreements (SLAs) or any limited warranties Microsoft provides for Azure services in general availability. To register for the limited preview, complete this form: [https://forms.office.com/r/ZziQRGR1Lz](https://forms.office.com/r/ZziQRGR1Lz).
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Host pools are logical groupings of session host virtual machines that have the same configuration and serve the same workload. You can choose one of two host pool management approaches, *standard* and using a *session host configuration*. In this article, you learn about each management approach and the differences between them to help you decide which one to use.
+Host pools are logical groupings of session host virtual machines that have the same configuration and serve the same workload. You can choose one of two host pool management approaches, *standard* and using a *session host configuration* (preview). In this article, you learn about each management approach and the differences between them to help you decide which one to use.
> [!CAUTION] > Currently the host pool management approach is set when you create a host pool and can't be changed later. The management approach is stored in the host pool's properties. Later in the preview for using a session host configuration, we plan to enable any host pool to use a session host configuration.
virtual-desktop Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/management.md
Title: Manage session hosts with Microsoft Intune - Azure Virtual Desktop
-description: Recommended ways for you to manage your Azure Virtual Desktop session hosts.
+ Title: Manage the operating system of session hosts - Azure Virtual Desktop
+description: Recommended ways for you to manage your Azure Virtual Desktop session hosts, such as Microsoft Intune and Microsoft Configuration Manager.
Last updated 04/11/2023
-# Manage session hosts with Microsoft Intune
-We recommend using [Microsoft Intune](https://www.microsoft.com/endpointmanager) to manage your Azure Virtual Desktop environment. Microsoft Intune is a unified management platform that includes Microsoft Configuration Manager and Microsoft Intune.
-
-## Microsoft Configuration Manager
+# Manage the operating system of session hosts
-Microsoft Configuration Manager versions 1906 and later can manage your domain-joined and Microsoft Entra hybrid joined session hosts. For more information, see [Supported OS versions for clients and devices for Configuration Manager](/mem/configmgr/core/plan-design/configs/supported-operating-systems-for-clients-and-devices#azure-virtual-desktop).
+We recommend using [Microsoft Intune](https://www.microsoft.com/endpointmanager) to manage your Azure Virtual Desktop environment. Microsoft Intune is a unified management platform that includes Microsoft Configuration Manager and Microsoft Intune.
## Microsoft Intune
For Windows 11 and Windows 10 multi-session hosts, Intune supports both device-b
> [!NOTE] > Managing Azure Virtual Desktop session hosts using Intune is currently supported in the Azure Public and [Azure Government clouds](/enterprise-mobility-security/solutions/ems-intune-govt-service-description).
+## Microsoft Configuration Manager
+
+Microsoft Configuration Manager versions 1906 and later can manage your domain-joined and Microsoft Entra hybrid joined session hosts. For more information, see [Supported OS versions for clients and devices for Configuration Manager](/mem/configmgr/core/plan-design/configs/supported-operating-systems-for-clients-and-devices#azure-virtual-desktop).
+ ## Licensing [Microsoft Intune licenses](https://microsoft.com/microsoft-365/enterprise-mobility-security/compare-plans-and-pricing) are included with most Microsoft 365 subscriptions.
virtual-desktop Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/network-connectivity.md
Title: Understanding Azure Virtual Desktop network connectivity description: Learn about Azure Virtual Desktop network connectivity.-+ Last updated 11/16/2020-++ # Understanding Azure Virtual Desktop network connectivity
virtual-desktop Rdp Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-bandwidth.md
Title: Remote Desktop Protocol bandwidth requirements Azure Virtual Desktop - Azure description: Understanding RDP bandwidth requirements for Azure Virtual Desktop.-+ Last updated 11/16/2020-++ + # Remote Desktop Protocol (RDP) bandwidth requirements Remote Desktop Protocol (RDP) is a sophisticated technology that uses various techniques to perfect the server's remote graphics' delivery to the client device. Depending on the use case, availability of computing resources, and network bandwidth, RDP dynamically adjusts various parameters to deliver the best user experience.
virtual-desktop Rdp Quality Of Service Qos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-quality-of-service-qos.md
Title: Implement Quality of Service (QoS) for Azure Virtual Desktop description: How to set up QoS for Azure Virtual Desktop.-+ Last updated 10/18/2021-++ + # Implement Quality of Service (QoS) for Azure Virtual Desktop [RDP Shortpath for managed networks](./shortpath.md) provides a direct UDP-based transport between Remote Desktop Client and Session host. RDP Shortpath for managed networks enables configuration of Quality of Service (QoS) policies for the RDP data.
virtual-desktop Scheduled Agent Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/scheduled-agent-updates.md
Title: Azure Virtual Desktop Scheduled Agent Updates description: How to use the Scheduled Agent Updates feature to choose a date and time to update your Azure Virtual Desktop agent components.-+ Last updated 07/20/2022--++ + # Scheduled Agent Updates for Azure Virtual Desktop host pools The Scheduled Agent Updates feature lets you create up to two maintenance windows for the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent to get updated so that updates don't happen during peak business hours. To monitor agent updates, you can use Log Analytics to see when agent component updates are available and when updates are unsuccessful.
virtual-desktop Session Host Update Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/session-host-update-diagnostics.md
Last updated 10/01/2024
# Example diagnostic queries for session host update in Azure Virtual Desktop > [!IMPORTANT]
-> Session host update for Azure Virtual Desktop is currently in PREVIEW. This preview is provided as-is, with all faults and as available, and are excluded from the service-level agreements (SLAs) or any limited warranties Microsoft provides for Azure services in general availability.
+> Session host update for Azure Virtual Desktop is currently in PREVIEW. This preview is provided as-is, with all faults and as available, and is excluded from the service-level agreements (SLAs) or any limited warranties Microsoft provides for Azure services in general availability. To register for the limited preview, complete this form: [https://forms.office.com/r/ZziQRGR1Lz](https://forms.office.com/r/ZziQRGR1Lz).
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
virtual-desktop Session Host Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/session-host-update.md
Last updated 10/01/2024
# Session host update for Azure Virtual Desktop (preview) > [!IMPORTANT]
-> Session host update for Azure Virtual Desktop is currently in PREVIEW. This preview is provided as-is, with all faults and as available, and are excluded from the service-level agreements (SLAs) or any limited warranties Microsoft provides for Azure services in general availability.
+> Session host update for Azure Virtual Desktop is currently in PREVIEW. This preview is provided as-is, with all faults and as available, and is excluded from the service-level agreements (SLAs) or any limited warranties Microsoft provides for Azure services in general availability. To register for the limited preview, complete this form: [https://forms.office.com/r/ZziQRGR1Lz](https://forms.office.com/r/ZziQRGR1Lz).
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
virtual-desktop Set Up Golden Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-golden-image.md
Title: Create an Azure Virtual Desktop golden image description: A walkthrough for how to set up a golden image for your Azure Virtual Desktop deployment in the Azure portal.-+ Last updated 12/01/2021--++ + # Create a golden image in Azure This article will walk you through how to use the Azure portal to create a custom image to use for your Azure Virtual Desktop session hosts. This custom image, which we'll call a "golden image," contains all apps and configuration settings you want to apply to your deployment.
-There are other approaches to customizing your session hosts, such as using device management tools like [Microsoft Intune](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) or automating your image build using tools like [Azure Image Builder](/azure/virtual-machines/windows/image-builder-virtual-desktop) with [Azure DevOps](/azure/devops/pipelines/get-started/key-pipelines-concepts?view=azure-devops&preserve-view=true). Which strategy works best depends on the complexity and size of your planned Azure Virtual Desktop environment and your current application deployment processes.
+There are other approaches to customizing your session hosts, such as using device management tools like [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) or automating your image build using tools like [Azure Image Builder](/azure/virtual-machines/windows/image-builder-virtual-desktop) with [Azure DevOps](/azure/devops/pipelines/get-started/key-pipelines-concepts?view=azure-devops&preserve-view=true). Which strategy works best depends on the complexity and size of your planned Azure Virtual Desktop environment and your current application deployment processes.
## Create an image from an Azure VM When creating a new VM for your golden image, make sure to choose an OS that's in the list of [supported virtual machine OS images](prerequisites.md#operating-systems-and-licenses). We recommend using a Windows 10 or 11 multi-session (with or without Microsoft 365) or Windows Server image for pooled host pools. We recommend using Windows 10 or 11 Enterprise images for personal host pools. You can use either Generation 1 or Generation 2 VMs; Gen 2 VMs support features that aren't supported for Gen 1 machines. Learn more about Generation 1 and Generation 2 VMs at [Support for generation 2 VMs on Azure](/azure/virtual-machines/generation-2). > [!IMPORTANT]
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
Title: Troubleshoot Azure Virtual Desktop Agent Issues - Azure description: How to resolve common Azure Virtual Desktop Agent and connectivity issues.-+ Last updated 04/21/2023--++ + # Troubleshoot common Azure Virtual Desktop Agent issues The Azure Virtual Desktop Agent can cause connection issues because of multiple factors:
virtual-desktop Troubleshoot Session Host Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-session-host-update.md
Last updated 10/01/2024
# Troubleshoot session host update in Azure Virtual Desktop > [!IMPORTANT]
-> Session host update for Azure Virtual Desktop is currently in PREVIEW. This preview is provided as-is, with all faults and as available, and are excluded from the service-level agreements (SLAs) or any limited warranties Microsoft provides for Azure services in general availability.
+> Session host update for Azure Virtual Desktop is currently in PREVIEW. This preview is provided as-is, with all faults and as available, and is excluded from the service-level agreements (SLAs) or any limited warranties Microsoft provides for Azure services in general availability. To register for the limited preview, complete this form: [https://forms.office.com/r/ZziQRGR1Lz](https://forms.office.com/r/ZziQRGR1Lz).
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
virtual-desktop Tutorial Try Deploy Windows 11 Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tutorial-try-deploy-windows-11-desktop.md
To create a personal host pool, workspace, application group, and session host V
| Virtual machine location | Select the Azure region where you want to deploy your session host VMs. It must be the same region that your virtual network is in. | | Availability options | Select **No infrastructure redundancy required**. This means that your session host VMs aren't deployed in an availability set or in availability zones. | | Security type | Select **Trusted launch virtual machines**. Leave the subsequent defaults of **Enable secure boot** and **Enable vTPM** checked, and **Integrity monitoring** unchecked. For more information, see [Trusted launch](security-guide.md#trusted-launch). |
- | Image | Select **Windows 11 Enterprise, version 22H2**. |
+ | Image | Select **Windows 11 Enterprise, version 23H2**. |
| Virtual machine size | Accept the default SKU. If you want to use a different SKU, select **Change size**, then select from the list. | | Number of VMs | Enter **1** as a minimum. You can deploy up to 500 session host VMs at this point if you wish, or you can add more separately.<br /><br />With a personal host pool, each session host can only be assigned to one user, so you need one session host for each user connecting to this host pool. Once you've completed this tutorial, you can create a pooled host pool, where multiple users can connect to the same session host. | | OS disk type | Select **Premium SSD** for best performance. |
virtual-desktop Classic Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/classic-retirement.md
Title: Azure Virtual Desktop (classic) retirement - Azure description: Information about the retirement of Azure Virtual Desktop (classic). --++ Last updated 09/27/2023+ # Azure Virtual Desktop (classic) retirement
virtual-desktop Configure Vm Gpu 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/configure-vm-gpu-2019.md
Title: Configure GPU for Azure Virtual Desktop (classic) - Azure description: How to enable GPU-accelerated rendering and encoding in Azure Virtual Desktop (classic).-+ Last updated 03/30/2020-++ # Configure graphics processing unit (GPU) acceleration for Azure Virtual Desktop (classic)
virtual-desktop Manual Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manual-delete.md
Title: Delete Azure Virtual Desktop (classic) - Azure description: How to clean up Azure Virtual Desktop (classic) when it is no longer used.-+ Last updated 11/22/2021--++ # Delete Azure Virtual Desktop (classic)
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
zone_pivot_groups: azure-virtual-desktop-windows-clients Previously updated : 10/16/2024 Last updated : 10/30/2024 # What's new in the Remote Desktop client for Windows
virtual-desktop Whats New Msixmgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-msixmgr.md
Title: What's new in the MSIXMGR tool - Azure Virtual Desktop description: Learn about what's new in the release notes for the MSIXMGR tool. --++ Last updated 04/18/2023+ # What's new in the MSIXMGR tool
virtual-network-manager Concept Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-limitations.md
Previously updated : 07/18/2023 Last updated : 10/30/2024 #CustomerIntent: As a network admin, I want understand the limitations in Azure Virtual Network Manager so that I can properly deploy it my environment.
This article provides an overview of the current limitations when you're using [
## Limitations for connected groups * A connected group can have up to 250 virtual networks. Virtual networks in a [mesh topology](concept-connectivity-configuration.md#mesh-network-topology) are in a [connected group](concept-connectivity-configuration.md#connected-group), so a mesh configuration has a limit of 250 virtual networks.
-* Currently connected groups do not support BareMetal Infrastructure.
+* The following BareMetal Infrastructures are supported:
+ * [Azure NetApp Files](../azure-netapp-files/index.yml)
+ * [Azure VMware Solution](../azure-vmware/index.yml)
+ * [Nutanix Cloud Clusters on Azure](../baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md)
+ * [Oracle Database@Azure](../oracle/oracle-db/oracle-database-what-is-new.md)
+ * [Azure Payment HSM](/azure/payment-hsm/solution-design)
* Maximum number of private endpoints per connected group is 1000. * You can have network groups with or without [direct connectivity](concept-connectivity-configuration.md#direct-connectivity) enabled in the same [hub-and-spoke configuration](concept-connectivity-configuration.md#hub-and-spoke-topology), as long as the total number of virtual networks peered to the hub doesn't exceed 500 virtual networks. * If the network group peered to the hub *has direct connectivity enabled*, these virtual networks are in a connected group, so the network group has a limit of 250 virtual networks.
virtual-network Virtual Network Service Endpoint Policies Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-overview.md
Virtual networks and Azure Storage accounts can be in the same or different subs
- You can only deploy service endpoint policies on virtual networks deployed through the Azure Resource Manager deployment model. -- Virtual networks must be in the same region as the service endpoint policy.
+- Virtual networks must be in the same region and subscription as the service endpoint policy.
- You can only apply service endpoint policy on a subnet if service endpoints are configured for the Azure services listed in the policy.
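To illustrate these constraints, here's a minimal Azure CLI sketch that creates a service endpoint policy, adds a definition scoped to one storage account, and applies the policy to a subnet that already uses the Microsoft.Storage service endpoint. All names are placeholders, and the policy is assumed to be in the same region and subscription as the virtual network.

```azurecli
# Sketch only: all resource names and IDs are placeholders.
az network service-endpoint policy create \
  --name myEndpointPolicy --resource-group myResourceGroup --location eastus

az network service-endpoint policy-definition create \
  --resource-group myResourceGroup --policy-name myEndpointPolicy \
  --name allowMyStorageAccount --service Microsoft.Storage \
  --service-resources "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"

# The subnet needs the Microsoft.Storage service endpoint for the policy to take effect.
az network vnet subnet update \
  --resource-group myResourceGroup --vnet-name myVnet --name mySubnet \
  --service-endpoints Microsoft.Storage \
  --service-endpoint-policy myEndpointPolicy
```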
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
Previously updated : 05/27/2023 Last updated : 10/30/2024
Each route contains an address prefix and next hop type. When traffic leaving a
The next hop types listed in the previous table represent how Azure routes traffic destined for the address prefix listed. Explanations for the next hop types follow:
-* **Virtual network**: Routes traffic between address ranges within the [address space](manage-virtual-network.yml#add-or-remove-an-address-range) of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. By default, Azure routes traffic between subnets. You don't need to define route tables or gateways for Azure to route traffic between subnets. Though a virtual network contains subnets, and each subnet has a defined address range, Azure doesn't create default routes for subnet address ranges. Each subnet address range is within an address range of the address space of a virtual network.
+* **Virtual network**: Routes traffic between address ranges within the [address space](manage-virtual-network.yml#add-or-remove-an-address-range) of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. By default, Azure routes traffic between subnets. You don't need to define route tables or gateways for Azure to route traffic between subnets. Azure doesn't create default routes for subnet address ranges. Each subnet address range is within an address range of the address space of a virtual network.
* **Internet**: Routes traffic specified by the address prefix to the Internet. The system default route specifies the 0.0.0.0/0 address prefix. If you don't override Azure's default routes, Azure routes traffic for any address not specified by an address range within a virtual network to the Internet. There's one exception to this routing. If the destination address is for one of Azure's services, Azure routes the traffic directly to the service over Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services doesn't traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an instance of the Azure service is deployed in. You can override Azure's default system route for the 0.0.0.0/0 address prefix with a [custom route](#custom-routes).
The next hop types listed in the previous table represent how Azure routes traff
### Optional default routes
-Azure adds more default system routes for different Azure capabilities, but only if you enable the capabilities. Depending on the capability, Azure adds optional default routes to either specific subnets within the virtual network, or to all subnets within a virtual network. The other system routes and next hop types that Azure may add when you enable different capabilities are:
+Azure adds more default system routes for different Azure capabilities, but only if you enable the capabilities. Depending on the capability, Azure adds optional default routes to either specific subnets within the virtual network, or to all subnets within a virtual network. The other system routes and next hop types that Azure might add when you enable different capabilities are:
|Source |Address prefixes |Next hop type|Subnet within virtual network that route is added to| |-- |- | |--|
You can specify the following next hop types when creating a user-defined route:
* **Virtual appliance**: A virtual appliance is a virtual machine that typically runs a network application, such as a firewall. To learn about various preconfigured network virtual appliances you can deploy in a virtual network, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances). When you create a route with the **virtual appliance** hop type, you also specify a next hop IP address. The IP address can be:
- * The [private IP address](./ip-services/private-ip-addresses.md) of a network interface attached to a virtual machine. Any network interface attached to a virtual machine that forwards network traffic to an address other than its own must have the Azure *Enable IP forwarding* option enabled for it. The setting disables Azure's check of the source and destination for a network interface. Learn more about how to [enable IP forwarding for a network interface](virtual-network-network-interface.md#enable-or-disable-ip-forwarding). Though *Enable IP forwarding* is an Azure setting, you may also need to enable IP forwarding within the virtual machine's operating system for the appliance to forward traffic between private IP addresses assigned to Azure network interfaces. If the appliance needs to route traffic to a public IP address, it must either proxy the traffic or perform network address translation (NAT) from the source's private IP address to its own private IP address. Azure then performs NAT to a public IP address before sending the traffic to the Internet. To determine required settings within the virtual machine, see the documentation for your operating system or network application. To understand outbound connections in Azure, see [Understanding outbound connections](../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+ * The [private IP address](./ip-services/private-ip-addresses.md) of a network interface attached to a virtual machine. Any network interface attached to a virtual machine that forwards network traffic to an address other than its own must have the Azure *Enable IP forwarding* option enabled for it. The setting disables Azure's check of the source and destination for a network interface. Learn more about how to [enable IP forwarding for a network interface](virtual-network-network-interface.md#enable-or-disable-ip-forwarding). *Enable IP forwarding* is an Azure setting. You might need to enable IP forwarding within the virtual machine's operating system for the appliance to forward traffic between private IP addresses assigned to Azure network interfaces. If the appliance needs to route traffic to a public IP address, it must either proxy the traffic or perform network address translation (NAT) from the source's private IP address to its own private IP address. Azure then performs NAT to a public IP address before sending the traffic to the Internet. To determine required settings within the virtual machine, see the documentation for your operating system or network application. To understand outbound connections in Azure, see [Understanding outbound connections](../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
> [!NOTE] > Deploy a virtual appliance into a different subnet than the resources that route through the virtual appliance. Deploying the virtual appliance to the same subnet then applying a route table to the subnet that routes traffic through the virtual appliance can result in routing loops where traffic never leaves the subnet.
You can specify the following next hop types when creating a user-defined route:
You can define a route with 0.0.0.0/0 as the address prefix and a next hop type of virtual appliance. This configuration allows the appliance to inspect the traffic and determine whether to forward or drop the traffic. If you intend to create a user-defined route that contains the 0.0.0.0/0 address prefix, read [0.0.0.0/0 address prefix](#default-route) first.
-* **Virtual network gateway**: Specify when you want traffic destined for specific address prefixes routed to a virtual network gateway. The virtual network gateway must be created with type **VPN**. You can't specify a virtual network gateway created as type **ExpressRoute** in a user-defined route because with ExpressRoute, you must use BGP for custom routes. You can't specify Virtual Network Gateways if you have VPN and ExpressRoute coexisting connections either. You can define a route that directs traffic destined for the 0.0.0.0/0 address prefix to a route-based virtual network gateway. On your premises, you might have a device that inspects the traffic and determines whether to forward or drop the traffic. If you intend to create a user-defined route for the 0.0.0.0/0 address prefix, read [0.0.0.0/0 address prefix](#default-route) first. Instead of configuring a user-defined route for the 0.0.0.0/0 address prefix, you can advertise a route with the 0.0.0.0/0 prefix via BGP, if you've [enabled BGP for a VPN virtual network gateway](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+* **Virtual network gateway**: Specify when you want traffic destined for specific address prefixes routed to a virtual network gateway. The virtual network gateway must be created with type **VPN**. You can't specify a virtual network gateway created as type **ExpressRoute** in a user-defined route because with ExpressRoute, you must use BGP for custom routes. You can't specify Virtual Network Gateways if you have VPN and ExpressRoute coexisting connections either. You can define a route that directs traffic destined for the 0.0.0.0/0 address prefix to a route-based virtual network gateway. On your premises, you might have a device that inspects the traffic and determines whether to forward or drop the traffic. If you intend to create a user-defined route for the 0.0.0.0/0 address prefix, read [0.0.0.0/0 address prefix](#default-route) first. Instead of configuring a user-defined route for the 0.0.0.0/0 address prefix, you can advertise a route with the 0.0.0.0/0 prefix via BGP if the [BGP for a VPN virtual network gateway](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json) is enabled.
-* **None**: Specify when you want to drop traffic to an address prefix, rather than forwarding the traffic to a destination. If you haven't fully configured a capability, Azure may list *None* for some of the optional system routes. For example, if you see *None* listed as the **Next hop IP address** with a **Next hop type** of *Virtual network gateway* or *Virtual appliance*, it may be because the device isn't running, or isn't fully configured. Azure creates system [default routes](#default) for reserved address prefixes with **None** as the next hop type.
+* **None**: Specify when you want to drop traffic to an address prefix, rather than forwarding the traffic to a destination. Azure might list *None* for some of the optional system routes if a capability isn't configured. For example, if you see *None* listed as the **Next hop IP address** with a **Next hop type** of *Virtual network gateway* or *Virtual appliance*, it might be because the device isn't running, or isn't fully configured. Azure creates system [default routes](#default) for reserved address prefixes with **None** as the next hop type.
* **Virtual network**: Specify the **Virtual network** option when you want to override the default routing within a virtual network. See [Routing example](#routing-example), for an example of why you might create a route with the **Virtual network** hop type.
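To make the virtual appliance scenario concrete, here's a minimal Azure CLI sketch that sends a subnet's internet-bound traffic through a network virtual appliance. The names are placeholders, and 10.0.100.4 stands in for the appliance's private IP address; remember that the appliance's network interface also needs IP forwarding enabled, as described above.

```azurecli
# Sketch only: names and addresses are placeholders.
az network route-table create --name myRouteTable --resource-group myResourceGroup

# Route all internet-bound traffic (0.0.0.0/0) to the NVA's private IP address.
az network route-table route create \
  --resource-group myResourceGroup --route-table-name myRouteTable \
  --name route-to-nva --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.100.4

# Associate the route table with the subnet whose traffic should flow through the appliance.
az network vnet subnet update \
  --resource-group myResourceGroup --vnet-name myVirtualNetwork --name mySubnet \
  --route-table myRouteTable
```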
The name displayed and referenced for next hop types is different between the Az
|Next hop type |Azure CLI and PowerShell (Resource Manager) |Azure classic CLI and PowerShell (classic)| |- | |--| |Virtual network gateway |VirtualNetworkGateway | VPNGateway |
-|Virtual network |VNetLocal | VNETLocal (not available in the classic CLI in Service Management mode)|
-|Internet |Internet |Internet (not available in the classic CLI in Service Management mode)|
+|Virtual network |VNetLocal | VNETLocal (not available in the classic CLI in classic deployment model mode)|
+|Internet |Internet |Internet (not available in the classic CLI in classic deployment model mode)|
|Virtual appliance |VirtualAppliance |VirtualAppliance|
-|None |None |Null (not available in the classic CLI in Service Management mode)|
+|None |None |Null (not available in the classic CLI in classic deployment model mode)|
|Virtual network peering |VNet peering |Not applicable| |Virtual network service endpoint|VirtualNetworkServiceEndpoint |Not applicable|
When you override the 0.0.0.0/0 address prefix, not only does outbound traffic f
* **Virtual network gateway**: If the gateway is an ExpressRoute virtual network gateway, an Internet-connected device on-premises can network address translate and forward, or proxy the traffic to the destination resource in the subnet, via ExpressRoute's [private peering](../expressroute/expressroute-circuit-peerings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#privatepeering).
-If your virtual network is connected to an Azure VPN gateway, don't associate a route table to the [gateway subnet](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub) that includes a route with a destination of 0.0.0.0/0. Doing so can prevent the gateway from functioning properly. For details, see the *Why are certain ports opened on my VPN gateway?* question in the [VPN Gateway FAQ](../vpn-gateway/vpn-gateway-vpn-faq.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gatewayports).
+If your virtual network is connected to an Azure VPN gateway, don't associate a route table to the [gateway subnet](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub) that includes a route with a destination of 0.0.0.0/0. Doing so can prevent the gateway from functioning properly. For details, see [Why are certain ports opened on my VPN gateway?](../vpn-gateway/vpn-gateway-vpn-faq.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gatewayports)
See [DMZ between Azure and your on-premises datacenter](/azure/architecture/reference-architectures/dmz/secure-vnet-hybrid?toc=%2fazure%2fvirtual-network%2ftoc.json) for implementation details when using virtual network gateways between the Internet and Azure.
To illustrate the concepts in this article, the sections that follow describe:
1. For one subnet in one virtual network:
- * Force all outbound traffic from the subnet, except to Azure Storage and within the subnet, to flow through a network virtual appliance, for inspection and logging.
+ * Route all outbound traffic from the subnet through a network virtual appliance for inspection and logging. Exclude traffic to Azure Storage and within the subnet from this routing.
* Don't inspect traffic between private IP addresses within the subnet; allow traffic to flow directly between all resources.
An explanation of each route ID follows:
* **ID2**: Azure added this route when a user-defined route for the 10.0.0.0/16 address prefix was associated to the *Subnet1* subnet in the *Virtual-network-1* virtual network. The user-defined route specifies 10.0.100.4 as the IP address of the virtual appliance, because the address is the private IP address assigned to the virtual appliance virtual machine. The route table this route exists in isn't associated to *Subnet2*, so doesn't appear in the route table for *Subnet2*. This route overrides the default route for the 10.0.0.0/16 prefix (ID1), which automatically routed traffic addressed to 10.0.0.1 and 10.0.255.254 within the virtual network through the virtual network next hop type. This route exists to meet [requirement](#requirements) 3, to force all outbound traffic through a virtual appliance.
-* **ID3**: Azure added this route when a user-defined route for the 10.0.0.0/24 address prefix was associated to the *Subnet1* subnet. Traffic destined for addresses between 10.0.0.1 and 10.0.0.254 remains within the subnet, rather than being routed to the virtual appliance specified in the previous rule (ID2), because it has a longer prefix than the ID2 route. This route wasn't associated to *Subnet2*, so the route doesn't appear in the route table for *Subnet2*. This route effectively overrides the ID2 route for traffic within *Subnet1*. This route exists to meet [requirement](#requirements) 3.
+* **ID3**: Azure added this route when a user-defined route for the 10.0.0.0/24 address prefix was associated to the *Subnet1* subnet. Traffic destined for addresses between 10.0.0.1 and 10.0.0.254 remains within the subnet. The traffic isn't routed to the virtual appliance specified in the previous rule (ID2), because it has a longer prefix than the ID2 route. This route wasn't associated to *Subnet2*, so the route doesn't appear in the route table for *Subnet2*. This route effectively overrides the ID2 route for traffic within *Subnet1*. This route exists to meet [requirement](#requirements) 3.
* **ID4**: Azure automatically added the routes in IDs 4 and 5 for all subnets within *Virtual-network-1*, when the virtual network was peered with *Virtual-network-2.* *Virtual-network-2* has two address ranges in its address space: 10.1.0.0/16 and 10.2.0.0/16, so Azure added a route for each range. If you don't create the user-defined routes in route IDs 6 and 7, traffic sent to any address between 10.1.0.1-10.1.255.254 and 10.2.0.1-10.2.255.254 would be routed to the peered virtual network. This process is because the prefix is longer than 0.0.0.0/0 and doesn't fall within the address prefixes of any other routes. When you added the routes in IDs 6 and 7, Azure automatically changed the state from *Active* to *Invalid*. This process is because they have the same prefixes as the routes in IDs 4 and 5, and user-defined routes override default routes. The state of the routes in IDs 4 and 5 are still *Active* for *Subnet2*, because the route table that the user-defined routes in IDs 6 and 7 are in, isn't associated to *Subnet2*. A virtual network peering was created to meet [requirement](#requirements) 1.
vpn-gateway Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/roles-permissions.md
+
+ Title: About VPN Gateway roles and permissions
+
+description: Learn about roles and permissions for VPN Gateway.
+++ Last updated : 10/29/2024+++
+# About roles and permissions for VPN Gateway
+
+The VPN Gateway utilizes multiple resources, such as virtual networks and IP addresses, during both creation and management operations.
+Because of this, it's essential to verify permissions on all involved resources during these operations.
+
+## Azure built-in roles
+
+You can assign [Azure built-in roles](../role-based-access-control/built-in-roles.md), such as [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor), to a user, group, service principal, or managed identity. The Network Contributor role includes all the permissions required to create the gateway.
+For more information, see [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md).
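For example, the following Azure CLI sketch assigns the built-in Network Contributor role at resource group scope; the principal object ID, subscription ID, and resource group name are placeholders.

```azurecli
# Sketch only: substitute the object ID of your user, group, service principal, or managed identity.
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Network Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
```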
+
+## Custom roles
+
+If the [Azure built-in roles](../role-based-access-control/built-in-roles.md) don't meet the specific needs of your organization, you can create your own custom roles.
+Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group, subscription, and resource group scopes.
+For more information, see [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role).
+
+To ensure proper functionality, check your custom role permissions to confirm that the users, service principals, and managed identities operating the VPN gateway have the necessary permissions.
+To add any missing permissions listed here, see [Update a custom role](../role-based-access-control/custom-roles-portal.md#update-a-custom-role).
+
+## Permissions
+
+Depending on whether you're creating new resources or using existing ones, add the appropriate permissions from the following list:
+
+|Resource | Resource status | Required Azure permissions |
+||||
+| Subnet | Create new| Microsoft.Network/virtualNetworks/subnets/write |
+| Subnet | Use existing| Microsoft.Network/virtualNetworks/subnets/join/action<br>Microsoft.Network/virtualNetworks/subnets/read |
+| IP addresses| Create new| Microsoft.Network/publicIPAddresses/write |
+| IP addresses | Use existing| Microsoft.Network/publicIPAddresses/join/action<br>Microsoft.Network/publicIPAddresses/read |
+
+For more information, see [Azure permissions for Networking](../role-based-access-control/permissions/networking.md) and [Virtual network permissions](../virtual-network/virtual-network-manage-subnet.md#permissions).
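As a rough illustration, the following Azure CLI sketch defines a custom role containing only the permissions listed in the preceding table. The role name, description, and assignable scope are placeholders, and a real deployment typically needs additional permissions (for example, those granted by the Network Contributor role) to create the gateway itself.

```azurecli
# Sketch only: permissions are taken from the table above; adjust the name, scope, and actions as needed.
az role definition create --role-definition '{
  "Name": "VPN Gateway Subnet and IP Operator (example)",
  "Description": "Example custom role with the subnet and public IP permissions listed in this article.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/read",
    "Microsoft.Network/virtualNetworks/subnets/write",
    "Microsoft.Network/virtualNetworks/subnets/join/action",
    "Microsoft.Network/publicIPAddresses/read",
    "Microsoft.Network/publicIPAddresses/write",
    "Microsoft.Network/publicIPAddresses/join/action"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```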
+
+## Roles scope
+
+In the process of custom role definition, you can specify a role assignment scope at four levels: management group, subscription, resource group, and resources. To grant access, you assign roles to users, groups, service principals, or managed identities at a particular scope.
+
+These scopes are structured in a parent-child relationship, with each level of hierarchy making the scope more specific. You can assign roles at any of these levels of scope, and the level you select determines how widely the role is applied.
+
+For example, a role assigned at the subscription level can cascade down to all resources within that subscription, while a role assigned at the resource group level only applies to resources within that specific group.
+For more information, see [Scope levels](../role-based-access-control/scope-overview.md#scope-levels).
+
+> [!NOTE]
+> Allow sufficient time for [Azure Resource Manager cache](../role-based-access-control/troubleshooting.md) to refresh after role assignment changes.
+
+## Next steps
+
+- [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md)
+- [List Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-list-portal.yml)
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
description: Learn about VPN Gateway resources and configuration settings.
Previously updated : 07/29/2024 Last updated : 10/30/2024 ms.devlang: azurecli
Azure VPN gateways can be configured as active-standby or active-active. In an a
* [About active-active gateways](about-active-active-gateways.md) * [Design highly available gateway connectivity for cross-premises and VNet-to-VNet connections](vpn-gateway-highlyavailable.md)
+## Gateway Private IPs
+
+This setting is used for certain ExpressRoute private peering configurations. For more information, see [Configure a Site-to-Site VPN connection over ExpressRoute private peering](site-to-site-vpn-private-peering.md).
## <a name="connectiontype"></a>Connection types Each connection requires a specific virtual network gateway connection type. The available PowerShell values for [New-AzVirtualNetworkGatewayConnection](/powershell/module/az.network/new-azvirtualnetworkgatewayconnection) `-ConnectionType` are: IPsec, Vnet2Vnet, ExpressRoute, VPNClient.
web-application-firewall Ag Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/ag-overview.md
Three bot categories are supported:
- **Bad**
- Bad bots include bots from malicious IP addresses and bots that falsify their identities. Bad bots with malicious IPs are sourced from the Microsoft Threat Intelligence feedΓÇÖs high confidence IP Indicators of Compromise.
+ Bad bots are bots with malicious IP addresses and bots that have falsified their identities. Bad bots include malicious IP addresses that are sourced from the Microsoft Threat Intelligence feed's high confidence IP Indicators of Compromise and IP reputation feeds. Bad bots also include bots that identify themselves as good bots but whose IP addresses don't belong to legitimate bot publishers.
- **Good**
- Good bots include validated search engines such as Googlebot, bingbot, and other trusted user agents.
+ Good bots are trusted user agents. Good bot rules are grouped into multiple categories to provide granular control over WAF policy configuration. These categories include:
+ - verified search engine bots (such as Googlebot and Bingbot)
+ - validated link checker bots
+ - verified social media bots (such as Facebookbot and LinkedInBot)
+ - verified advertising bots
+ - verified content checker bots
+ - validated miscellaneous bots
- **Unknown**
- Unknown bots are classified via published user agents without more validation. For example, market analyzer, feed fetchers, and data collection agents. Unknown bots also include malicious IP addresses that are sourced from Microsoft Threat Intelligence feedΓÇÖs medium confidence IP Indicators of Compromise.
+ Unknown bots are user agents without additional validation. Unknown bots also include malicious IP addresses that are sourced from Microsoft Threat Intelligence feed's medium confidence IP Indicators of Compromise.
-The WAF platform actively manages and dynamically updates bot signatures.
-
+The WAF platform actively manages and dynamically updates the bot signatures.
You can assign Microsoft_BotManagerRuleSet_1.0 by using the **Assign** option under **Managed Rulesets**:
When Bot protection is enabled, it blocks, allows, or logs incoming requests tha
You can access WAF logs from a storage account, event hub, log analytics, or send logs to a partner solution.
+For more information about Application Gateway bot protection, see [Azure Web Application Firewall on Azure Application Gateway bot protection overview](bot-protection-overview.md).
### WAF modes
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
CRS 2.2.9 includes 10 rule groups, as shown in the following table. Each group c
|**[crs_42_tight_security](#crs42)**|Protect against path-traversal attacks| |**[crs_45_trojans](#crs45)**|Protect against backdoor trojans|
-### Bot rules
+### Bot Manager 1.0
-You can enable a managed bot protection rule set to take custom actions on requests from all bot categories.
+The Bot Manager 1.0 rule set provides protection against malicious bots and detection of good bots. The rules provide granular control over bots detected by WAF by categorizing bot traffic as Good, Bad, or Unknown bots.
-|Rule group name|Description|
+|Rule group|Description|
|||
-|**[BadBots](#bot100)**|Protect against bad bots|
-|**[GoodBots](#bot200)**|Identify good bots|
-|**[UnknownBots](#bot300)**|Identify unknown bots|
+|[BadBots](#bot100)|Protect against bad bots|
+|[GoodBots](#bot200)|Identify good bots|
+|[UnknownBots](#bot300)|Identify unknown bots|
+
+### Bot Manager 1.1
+
+The Bot Manager 1.1 rule set is an enhancement to the Bot Manager 1.0 rule set. It provides enhanced protection against malicious bots and improves good bot detection.
+
+|Rule group|Description|
+|||
+|[BadBots](#bot11-100)|Protect against bad bots|
+|[GoodBots](#bot11-200)|Identify good bots|
+|[UnknownBots](#bot11-300)|Identify unknown bots|
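As a rough illustration, the following Azure CLI sketch adds the Bot Manager 1.1 rule set to an existing Application Gateway WAF policy; the policy and resource group names are placeholders.

```azurecli
# Sketch only: assumes an existing WAF policy named myWafPolicy.
az network application-gateway waf-policy managed-rule rule-set add \
  --resource-group myResourceGroup \
  --policy-name myWafPolicy \
  --type Microsoft_BotManagerRuleSet \
  --version 1.1
```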
The following rule groups and rules are available when using Web Application Firewall on Application Gateway.
The following rule groups and rules are available when using Web Application Fir
|950921|Backdoor access| |950922|Backdoor access|
-# [Bot rules](#tab/bot)
+# [Bot Manager 1.0](#tab/bot)
-## <a name="bot"></a> Bot Manager rule sets
+## <a name="bot"></a> 1.0 rule sets
### <a name="bot100"></a> Bad bots |RuleId|Description| ||| |Bot100100|Malicious bots detected by threat intelligence| |Bot100200|Malicious bots that have falsified their identity|-
- Bot100100 scans both client IP addresses and the IPs in the X-Forwarded-For header.
+ Bot100100 scans both client IP addresses and IPs in the `X-Forwarded-For` header.
+ ### <a name="bot200"></a> Good bots |RuleId|Description| |||
The following rule groups and rules are available when using Web Application Fir
||| |Bot300100|Unspecified identity| |Bot300200|Tools and frameworks for web crawling and attacks|
-|Bot300300|General purpose HTTP clients and SDKs|
+|Bot300300|General-purpose HTTP clients and SDKs|
|Bot300400|Service agents| |Bot300500|Site health monitoring services| |Bot300600|Unknown bots detected by threat intelligence| |Bot300700|Other bots|
- Bot300600 scans both client IP addresses and the IPs in the X-Forwarded-For header.
+Bot300600 scans both client IP addresses and IPs in the `X-Forwarded-For` header.
+
+# [Bot Manager 1.1](#tab/bot11)
+
+## <a name="bot11"></a> 1.1 rule sets
+
+### <a name="bot11-100"></a> Bad bots
+|RuleId|Description|
+|||
+|Bot100100|Malicious bots detected by threat intelligence|
+|Bot100200|Malicious bots that have falsified their identity|
+|Bot100300|High risk bots detected by threat intelligence|
+
+ Bot100100 scans both client IP addresses and IPs in the `X-Forwarded-For` header.
+
+### <a name="bot11-200"></a> Good bots
+|RuleId|Description|
+|||
+|Bot200100|Search engine crawlers|
+|Bot200200|Verified miscellaneous bots|
+|Bot200300|Verified link checker bots|
+|Bot200400|Verified social media bots|
+|Bot200500|Verified content fetchers|
+|Bot200600|Verified feed fetchers|
+|Bot200700|Verified advertising bots|
+
+### <a name="bot11-300"></a> Unknown bots
+|RuleId|Description|
+|||
+|Bot300100|Unspecified identity|
+|Bot300200|Tools and frameworks for web crawling and attacks|
+|Bot300300|General-purpose HTTP clients and SDKs|
+|Bot300400|Service agents|
+|Bot300500|Site health monitoring services|
+|Bot300600|Unknown bots detected by threat intelligence. This rule also includes IP addresses matched to the Tor network.|
+|Bot300700|Other bots|
+
+Bot300600 scans both client IP addresses and IPs in the `X-Forwarded-For` header.
web-application-firewall Bot Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/bot-protection-overview.md
You can enable a managed bot protection rule set for your WAF to block or log re
## Use with OWASP rulesets
-You can use the Bot Protection ruleset alongside any of the OWASP rulesets with the Application Gateway WAF v2 SKU. Only one OWASP ruleset can be used at any given time. The bot protection ruleset contains another rule that appears in its own ruleset. It's titled **Microsoft_BotManagerRuleSet_1.0**, and you can enable or disable it like the other OWASP rules.
+You can use the Bot Protection ruleset alongside any of the OWASP rulesets with the Application Gateway WAF v2 SKU. Only one OWASP ruleset can be used at any given time. The bot protection ruleset contains another rule that appears in its own ruleset. It's titled **Microsoft_BotManagerRuleSet_1.1**, and you can enable or disable it like the other OWASP rules.
:::image type="content" source="../media/bot-protection-overview/bot-ruleset.png" alt-text="Screenshot show bot protection ruleset." lightbox="../media/bot-protection-overview/bot-ruleset.png":::
web-application-firewall Waf Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/waf-copilot.md
Title: Azure Web Application Firewall integration in Microsoft Copilot for Security (preview) description: Learn about using Microsoft Copilot for Security to investigate traffic flagged by Azure Web Application Firewall.
-keywords: security copilot, copilot for security, threat intelligence, intrusion detection and prevention system, plugin, integration, azure web application firewall, copilot, open ai, openai co-pilot
+keywords: copilot for security, threat intelligence, intrusion detection and prevention system, plugin, integration, azure web application firewall, copilot, open ai, openai co-pilot
Last updated 05/20/2024
ms.localizationpriority: high
-# Azure Web Application Firewall integration in Copilot for Security (preview)
+# Azure Web Application Firewall integration in Microsoft Copilot for Security (preview)
> [!IMPORTANT] > Azure Web Application Firewall integration in Microsoft Copilot for Security is currently in PREVIEW.
Microsoft Copilot for Security is a cloud-based AI platform that provides natural language copilot experience. It can help support security professionals in different scenarios, like incident response, threat hunting, and intelligence gathering. For more information, see [What is Microsoft Copilot for Security?](/security-copilot/microsoft-security-copilot)
-Azure Web Application Firewall (WAF) integration in Copilot for Security enables deep investigation of Azure WAF events. It can help you investigate WAF logs triggered by Azure WAF in a matter of minutes and provide related attack vectors using natural language responses at machine speed. It provides visibility into your environment's threat landscape. It allows you to retrieve a list of most frequently triggered WAF rules and identify the top offending IP addresses in your environment.
+Azure Web Application Firewall (WAF) integration in Microsoft Copilot for Security enables deep investigation of Azure WAF events. It can help you investigate WAF logs triggered by Azure WAF in a matter of minutes and provide related attack vectors using natural language responses at machine speed. It provides visibility into your environment's threat landscape. It allows you to retrieve a list of the most frequently triggered WAF rules and identify the top offending IP addresses in your environment.
-Copilot for Security integration is supported on both Azure WAF integrated with Azure Application Gateway and Azure WAF integrated with Azure Front Door.
+Microsoft Copilot for Security integration is supported on both Azure WAF integrated with Azure Application Gateway and Azure WAF integrated with Azure Front Door.
## Know before you begin
-If you're new to Copilot for Security, you should familiarize yourself with it by reading these articles:
+If you're new to Microsoft Copilot for Security, you should familiarize yourself with it by reading these articles:
- [What is Microsoft Copilot for Security?](/security-copilot/microsoft-security-copilot) - [Microsoft Copilot for Security experiences](/security-copilot/experiences-security-copilot) - [Get started with Microsoft Copilot for Security](/security-copilot/get-started-security-copilot) - [Understand authentication in Microsoft Copilot for Security](/security-copilot/authentication) - [Prompting in Microsoft Copilot for Security](/security-copilot/prompting-security-copilot)
-## Azure WAF integration in Copilot for Security
+## Microsoft Copilot for Security integration in Azure WAF
This integration supports the standalone experience and is accessed through [https://securitycopilot.microsoft.com](https://securitycopilot.microsoft.com). This is a chat-like experience that you can use to ask questions and get answers about your data. For more information, see [Microsoft Copilot for Security experiences](/security-copilot/experiences-security-copilot#standalone-and-embedded-experiences).
-### Features in the standalone experience
+## Key features
The preview standalone experience in Azure WAF can help you with:
This Azure WAF skill helps you understand why Azure WAF blocked cross-site scripting (XSS) attacks on web applications. It does this by analyzing Azure WAF logs and connecting related logs over a specific time period. The result is an easy-to-understand natural language explanation of why an XSS request was blocked.
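For comparison, the kind of manual log review this skill automates looks roughly like the following Log Analytics query run through the Azure CLI. The workspace GUID is a placeholder, and the `AzureDiagnostics` field names are assumptions that can differ depending on how your WAF logs are collected:

```azurecli
# Sketch: list Application Gateway WAF entries related to XSS over the last day
az monitor log-analytics query \
    --workspace 00000000-0000-0000-0000-000000000000 \
    --analytics-query "AzureDiagnostics
        | where Category == 'ApplicationGatewayFirewallLog'
        | where Message has 'XSS'
        | project TimeGenerated, clientIp_s, requestUri_s, ruleId_s, action_s
        | order by TimeGenerated desc" \
    --timespan P1D
```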
-## Enable the Azure WAF integration in Microsoft Copilot for Security
+## Enable the Azure WAF integration in Copilot for Security
To enable the integration, follow these steps: 1. Ensure that you have at least Copilot contributor permissions. 2. Open [https://securitycopilot.microsoft.com/](https://securitycopilot.microsoft.com).
-3. Open the Microsoft Copilot for Security menu.
+3. Open the Copilot for Security menu.
4. Open **Sources** in the prompt bar. 5. On the Plugins page, set the Azure Web Application Firewall toggle to **On**. 6. Select the Settings on the Azure Web Application Firewall plugin to configure the Log Analytics workspace, Log Analytics subscription ID, and the Log Analytics resource group name for Azure Front Door WAF and/or the Azure Application Gateway WAF. You can also configure the Application Gateway WAF policy URI and/or Azure Front Door WAF policy URI. 7. To start using the skills, use the prompt bar.
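Step 6 asks for Log Analytics details and a subscription ID. If you don't have them handy, a quick lookup such as the following (workspace and resource group names are placeholders) returns the values to paste into the plugin settings:

```azurecli
# Returns the workspace (customer) ID that the plugin settings ask for
az monitor log-analytics workspace show \
    --resource-group myResourceGroup --workspace-name myWorkspace \
    --query customerId --output tsv

# Returns the subscription ID for the currently selected subscription
az account show --query id --output tsv
```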
-## Sample prompts
+## Sample Azure WAF prompts
-You can create your own prompts in Copilot for Security to perform analysis on the attacks based on WAF logs. This section shows some ideas and examples.
+You can create your own prompts in Microsoft Copilot for Security to perform analysis on the attacks based on WAF logs. This section shows some ideas and examples.
### Before you begin
The following example prompts might be helpful.
## Provide feedback
-Your feedback on the Azure WAF integration with Copilot for Security helps with development. To provide feedback in Copilot, select **How's this response?** at the bottom of each completed prompt and choose any of the following options:
+Your feedback on the Azure WAF integration with Microsoft Copilot for Security helps with development. To provide feedback in Copilot, select **How's this response?** at the bottom of each completed prompt and choose any of the following options:
- Looks right - Select if the results are accurate, based on your assessment. - Needs improvement - Select if any detail in the results is incorrect or incomplete, based on your assessment.
For each feedback item, you can provide more information in the next dialog box
## Limitation
-If you've migrated to Azure Log Analytics dedicated tables in the Application Gateway WAF V2 version, the Copilot for Security WAF Skills aren't functional. As a temporary workaround, enable Azure Diagnostics as the destination table in addition to the resource-specific table.
+If you've migrated to Azure Log Analytics dedicated tables in the Application Gateway WAF V2 version, the Microsoft Copilot for Security WAF Skills aren't functional. As a temporary workaround, enable Azure Diagnostics as the destination table in addition to the resource-specific table.
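For the workaround, an extra diagnostic setting that keeps sending firewall logs to the legacy `AzureDiagnostics` table can sit alongside the resource-specific one. A sketch with placeholder resource names follows; by default (with no resource-specific export configured on the setting), these logs land in the `AzureDiagnostics` table:

```azurecli
# Look up the Application Gateway and Log Analytics workspace resource IDs
appgwId=$(az network application-gateway show \
    --name myAppGateway --resource-group myResourceGroup --query id --output tsv)
workspaceId=$(az monitor log-analytics workspace show \
    --resource-group myResourceGroup --workspace-name myWorkspace --query id --output tsv)

# Additional diagnostic setting that sends WAF firewall logs to the workspace
az monitor diagnostic-settings create \
    --name waf-logs-azurediagnostics \
    --resource $appgwId \
    --workspace $workspaceId \
    --logs '[{"category":"ApplicationGatewayFirewallLog","enabled":true}]'
```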
## Privacy and data security in Microsoft Copilot for Security
To understand how Microsoft Copilot for Security handles your prompts and the da