Updates from: 04/02/2024 01:09:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
This article shows you how to enable sign-in for users using the multitenant end
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)] > [!NOTE]
-> In this article, it assumed that **SocialAndLocalAccounts** starter pack is used in the previous steps mentioned in pre-requisite.
+> This article assumes that the **SocialAndLocalAccounts** starter pack is used in the previous steps mentioned in the prerequisites.
## Register a Microsoft Entra app
ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md
Document Intelligence supports more sophisticated and modular analysis capabilit
> [!NOTE] >
-> Not all add-on capabilities are supported by all models. For more information, *see* [model data extraction](concept-model-overview.md#analysis-features).
+> Not all add-on capabilities are supported by all models. For more information, *see* [model data extraction](concept-model-overview.md#model-analysis-features).
The following add-on capabilities are available for `2024-02-29-preview` and later releases:
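As a rough sketch of how these add-on capabilities are requested at analysis time (assuming the `2024-02-29-preview` `documentintelligence` REST route; the endpoint, key, and document URL are placeholders):

```bash
# Request add-on capabilities through the `features` query parameter on an
# analyze call. Endpoint, key, and document URL below are placeholders.
DI_ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
DI_KEY="<your-key>"

curl -i -X POST \
  "${DI_ENDPOINT}/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=ocrHighResolution,styleFont" \
  -H "Ocp-Apim-Subscription-Key: ${DI_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://<your-storage>/sample.pdf"}'
# A 202 response includes an Operation-Location header to poll for the result.
```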
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
The following table shows the available models for each current preview and stab
|-|-|-|-|-|-|
|Document analysis models|[Read](concept-read.md) | ✔️| ✔️| ✔️| n/a|
|Document analysis models|[Layout](concept-layout.md) | ✔️| ✔️| ✔️| ✔️|
-|Document analysis models|[General document](concept-general-document.md) |moved to layout| ✔️| ✔️| n/a|
-|Prebuilt models|[Business card](concept-business-card.md) | deprecated|✔️|✔️|✔️ |
+|Document analysis models|[General document](concept-general-document.md) |moved to layout**| ✔️| ✔️| n/a|
|Prebuilt models|[Contract](concept-contract.md) | ✔️| ✔️| n/a| n/a|
|Prebuilt models|[Health insurance card](concept-health-insurance-card.md)| ✔️| ✔️| ✔️| n/a|
|Prebuilt models|[ID document](concept-id-document.md) | ✔️| ✔️| ✔️| ✔️|
|Prebuilt models|[Invoice](concept-invoice.md) | ✔️| ✔️| ✔️| ✔️|
|Prebuilt models|[Receipt](concept-receipt.md) | ✔️| ✔️| ✔️| ✔️|
-|Prebuilt models|[US 1098 Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
-|Prebuilt models|[US 1098-E Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
-|Prebuilt models|[US 1098-T Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
-|Prebuilt models|[US 1099 Tax](concept-tax-document.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[US 1040 Tax*](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
+|Prebuilt models|[US 1098 Tax*](concept-tax-document.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[US 1099 Tax*](concept-tax-document.md) | ✔️| n/a| n/a| n/a|
|Prebuilt models|[US W2 Tax](concept-tax-document.md) | ✔️| ✔️| ✔️| n/a|
|Prebuilt models|[US Mortgage 1003 URLA](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a|
-|Prebuilt models|[US Mortgage 1008 ](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[US Mortgage 1008 Summary](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a|
|Prebuilt models|[US Mortgage closing disclosure](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a|
-|Custom models|[Custom classifier](concept-custom-classifier.md) | ✔️| ✔️| n/a| n/a|
-|Custom models|[Custom neural](concept-custom-neural.md) | ✔️| ✔️| ✔️| n/a|
-|Custom models|[Custom template](concept-custom-template.md) | ✔️| ✔️| ✔️| ✔️|
-|Custom models|[Custom composed](concept-composed-models.md) | ✔️| ✔️| ✔️| ✔️|
+|Prebuilt models|[Marriage certificate](concept-marriage-certificate.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[Credit card](concept-credit-card.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[Business card](concept-business-card.md) | deprecated|✔️|✔️|✔️ |
+|Custom classification model|[Custom classifier](concept-custom-classifier.md) | ✔️| ✔️| n/a| n/a|
+|Custom extraction model|[Custom neural](concept-custom-neural.md) | ✔️| ✔️| ✔️| n/a|
+|Custom extraction model|[Custom template](concept-custom-template.md) | ✔️| ✔️| ✔️| ✔️|
+|Custom extraction model|[Custom composed](concept-composed-models.md) | ✔️| ✔️| ✔️| ✔️|
|All models|[Add-on capabilities](concept-add-on-capabilities.md) | ✔️| ✔️| n/a| n/a|
+\* - Contains sub-models. See the model-specific information for supported variations and sub-types.
+ |**Add-on Capability**| **Add-On/Free**|&bullet; [2024-02-29-preview](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2024-02-29-preview&preserve-view=true&branch=docintelligence&tabs=HTTP) <br>&bullet; [2023-10-31-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
|-|-|-|-|-|-|
|Font property extraction|Add-On| ✔️| ✔️| n/a| n/a|
Add-On* - Query fields are priced differently than the other add-on features. Se
|**Prebuilt models**||
| [Health insurance card](#health-insurance-card) | Automate healthcare processes by extracting insurer, member, prescription, group number, and other key information from US health insurance cards.|
| [US Tax document models](#us-tax-documents) | Process US tax forms to extract employee, employer, wage, and other information. |
+| [US Mortgage document models](#us-mortgage-documents) | Process US mortgage forms to extract borrower loan and property information. |
| [Contract](#contract) | Extract agreement and party details.|
| [Invoice](#invoice) | Automate invoices. |
| [Receipt](#receipt) | Extract receipt data from receipts.|
For all models, except Business card model, Document Intelligence now supports a
* [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs) (2024-02-29-preview, 2023-10-31-preview)
* [`queryFields`](concept-add-on-capabilities.md#query-fields) (2024-02-29-preview, 2023-10-31-preview) `Not available with the US.Tax models`
-## Analysis features
+## Model details
+This section describes the output you can expect from each model. You can extend the output of most models with add-on features.
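All of the models below are called through the same asynchronous analyze pattern. The following is a minimal sketch (assuming the `2024-02-29-preview` REST route; the endpoint, key, and document URL are placeholders), and the model ID can be swapped for any ID listed in the tables above:

```bash
# Submit a document to a model, then poll the Operation-Location URL for results.
DI_ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
DI_KEY="<your-key>"
MODEL_ID="prebuilt-invoice"   # substitute any model ID from this article

OPERATION_URL=$(curl -s -D - -o /dev/null -X POST \
  "${DI_ENDPOINT}/documentintelligence/documentModels/${MODEL_ID}:analyze?api-version=2024-02-29-preview" \
  -H "Ocp-Apim-Subscription-Key: ${DI_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://<your-storage>/sample.pdf"}' \
  | awk 'tolower($1) == "operation-location:" {print $2}' | tr -d '\r')

# Give the service a moment, then fetch the JSON analyze result.
sleep 10
curl -s "${OPERATION_URL}" -H "Ocp-Apim-Subscription-Key: ${DI_KEY}" > analyze-result.json
```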
### Read OCR
The US tax document models analyze and extract key fields and line items from a
|Model|Description|ModelID|
|---|---|---|
|US Tax W-2|Extract taxable compensation details.|**prebuilt-tax.us.W-2**|
- |US Tax 1098|Extract mortgage interest details.|**prebuilt-tax.us.1098**|
- |US Tax 1098-E|Extract student loan interest details.|**prebuilt-tax.us.1098E**|
- |US Tax 1098-T|Extract qualified tuition details.|**prebuilt-tax.us.1098T**|
- |US Tax 1099|Extract wage information details.|**prebuilt-tax.us.1099(variations)**|
+ |US Tax 1040|Extract individual income tax return details.|**prebuilt-tax.us.1040(variations)**|
+ |US Tax 1098|Extract mortgage interest details.|**prebuilt-tax.us.1098(variations)**|
+ |US Tax 1099|Extract income received from sources other than employer.|**prebuilt-tax.us.1099(variations)**|
***Sample W-2 document processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
The US tax document models analyze and extract key fields and line items from a
> [!div class="nextstepaction"] > [Learn more: Tax document models](concept-tax-document.md)
+>
+
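As an illustration only, a result saved by the polling sketch earlier can be inspected with `jq`. The field names below follow the W-2 schema but should be verified against the tax model reference, and the same pattern applies to the mortgage models in the next section (for example, `prebuilt-mortgage.us.closingDisclosure`):

```bash
# Pull a couple of scalar fields from a prebuilt-tax.us.w2 analyze result saved
# to analyze-result.json. Field names are illustrative; verify them against the
# tax document model reference for your API version.
jq '.analyzeResult.documents[0].fields
    | {taxYear: .TaxYear.valueString,
       wages: .WagesTipsAndOtherCompensation.valueNumber}' analyze-result.json
```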
+### US mortgage documents
++
+The US mortgage document models analyze and extract key fields including borrower, loan and property information from a select group of mortgage documents. The API supports the analysis of English-language US mortgage documents of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The following models are currently supported:
+
+ |Model|Description|ModelID|
+ ||||
+ |1003 Uniform Residential Loan Application (URLA)|Extract loan, borrower, property details.|**prebuilt-mortgage.us.1003**|
+ |1008 Summary document|Extract borrower, seller, property, mortgage and underwriting details.|**prebuilt-mortgage.us.1008**|
+ |Closing disclosure|Extract closing, transaction costs and loan details.|**prebuilt-mortgage.us.closingDisclosure**|
+ |Marriage certificate|Extract marriage information details for joint loan applicants.|**prebuilt-marriageCertificate**|
+ |US Tax W-2|Extract taxable compensation details for income verification.|**prebuilt-tax.us.W-2**|
+
+***Sample Closing disclosure document processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=mortgage.us.closingDisclosure)***:
+
+> [!div class="nextstepaction"]
+> [Learn more: Mortgage document models](concept-mortgage-documents.md)
+>
### Contract :::image type="icon" source="media/overview/icon-contract.png":::
Use the Identity document (ID) model to process U.S. Driver's Licenses (all 50 s
> [!div class="nextstepaction"] > [Learn more: identity document model](concept-id-document.md)
+### Marriage certificate
++
+Use the marriage certificate model to extract key fields from U.S. marriage certificates, including the individuals, date, and location.
+
+***Sample U.S. marriage certificate processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=marriageCertificate.us)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: marriage certificate model](concept-marriage-certificate.md)
+
+### Credit card
++
+Use the credit card model to extract key fields from credit and debit cards.
+
+***Sample credit card processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=creditCard)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: credit card model](concept-credit-card.md)
+ ### Custom models :::image type="icon" source="media/studio/custom.png":::
-Custom document models analyze and extract data from forms and documents specific to your business. They're trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started.
+Custom models can be broadly classified into two types: custom classification models, which classify a document type, and custom extraction models, which extract a defined schema from a specific document type.
++
+Custom document models analyze and extract data from forms and documents specific to your business. They're trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need one example of the form type to get started.
-Version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
+Version v3.0 custom models support signature detection in custom template (form) models and cross-page tables in both template and neural models.
***Sample custom template processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
A composed model is created by taking a collection of custom models and assignin
> [!div class="nextstepaction"] > [Learn more: custom model](concept-custom.md)
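A minimal sketch of how a custom extraction model can be built over labeled training documents in a blob container, assuming the `2024-02-29-preview` REST route; the model ID, container SAS URL, endpoint, and key are placeholders:

```bash
# Build a custom extraction model from labeled training data in Blob Storage.
# buildMode is "template" or "neural", matching the custom model types above.
DI_ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
DI_KEY="<your-key>"

curl -i -X POST \
  "${DI_ENDPOINT}/documentintelligence/documentModels:build?api-version=2024-02-29-preview" \
  -H "Ocp-Apim-Subscription-Key: ${DI_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "modelId": "my-custom-form-model",
        "buildMode": "template",
        "azureBlobSource": { "containerUrl": "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>" }
      }'
# A 202 response returns an Operation-Location header you can poll for build status.
```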
-## Model data extraction
-
-| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** | **Key-Value pairs** | **Fields** |
-|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| [prebuilt-read](concept-read.md#data-extraction) | ✔️ | ✔️ | | | ✔️ | | | |
-| [prebuilt-healthInsuranceCard.us](concept-health-insurance-card.md#field-extraction) | ✔️ | | ✔️ | | ✔️ || | ✔️ |
-| [prebuilt-tax.us.w2](concept-tax-document.md#field-extraction-w-2) | ✔️ | | ✔️ | | ✔️ || | ✔️ |
-| [prebuilt-tax.us.1098](concept-tax-document.md#field-extraction-1098) | ✔️ | | ✔️ | | ✔️ || | ✔️ |
-| [prebuilt-tax.us.1098E](concept-tax-document.md) | ✔️ | | ✔️ | | ✔️ || | ✔️ |
-| [prebuilt-tax.us.1098T](concept-tax-document.md) | ✔️ | | ✔️ | | ✔️ || | ✔️ |
-| [prebuilt-tax.us.1099(variations)](concept-tax-document.md) | ✔️ | | ✔️ | | ✔️ || | ✔️ |
-| [prebuilt-document](concept-general-document.md#data-extraction)| ✔️ | | ✔️ | ✔️ | ✔️ || ✔️ | |
-| [prebuilt-layout](concept-layout.md#data-extraction) | ✔️ | | ✔️ | ✔️ | ✔️ | ✔️ | | |
-| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✔️ | | ✔️ | ✔️ | ✔️ | | ✔️ | ✔️ |
-| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✔️ | | | | ✔️ | | | ✔️ |
-| [prebuilt-idDocument](concept-id-document.md#field-extractions) | ✔️ | | | | ✔️ | | | ✔️ |
-| [prebuilt-businessCard](concept-business-card.md#field-extractions) | ✔️ | | | | ✔️ | | | ✔️ |
-| [Custom](concept-custom.md#compare-model-features) | ✔️ || ✔️ | ✔️ | ✔️ | | | ✔️ |
## Input requirements
ai-services Concept Query Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-query-fields.md
For query field extraction, specify the fields you want to extract and Document
:::image type="content" source="media/studio/query-field-select.png" alt-text="Screenshot of query fields selection window in Document Intelligence Studio.":::
-* In addition to the query fields, the response includes the model output. For a list of features or schema extracted by each model, see [model analysis features](concept-model-overview.md#analysis-features).
+* In addition to the query fields, the response includes the model output. For a list of features or schema extracted by each model, see [model analysis features](concept-model-overview.md#model-analysis-features).
+ ## Query fields REST API request
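As a hedged sketch of what such a request can look like (assuming the `2024-02-29-preview` analyze route; the endpoint, key, document URL, and field names are placeholders):

```bash
# Enable the queryFields add-on and name the fields to extract. The field names
# here (PurchaseOrderNumber, ShipDate) are purely illustrative.
DI_ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
DI_KEY="<your-key>"

curl -i -X POST \
  "${DI_ENDPOINT}/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=queryFields&queryFields=PurchaseOrderNumber,ShipDate" \
  -H "Ocp-Apim-Subscription-Key: ${DI_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://<your-storage>/purchase-order.pdf"}'
```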
ai-services Try Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
monikerRange: '>=doc-intel-3.0.0'
* A [**Document Intelligence**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource. > [!TIP]
-> Create an Azure AI services resource if you plan to access multiple Azure AI services under a single endpoint/key. For Document Intelligence access only, create a Document Intelligence resource. Currently [Microsoft Entra authentication](../../../active-directory/authentication/overview-authentication.md), is not supported on Document Intelligence Studio to access Document Intelligence service APIs. To use Document Intelligence Studio, enable access key authentication.
+> Create an Azure AI services resource if you plan to access multiple Azure AI services under a single endpoint/key. For Document Intelligence access only, create a Document Intelligence resource. Currently, [Microsoft Entra authentication](../../../active-directory/authentication/overview-authentication.md) is not supported on Document Intelligence Studio to access Document Intelligence service APIs. To use Document Intelligence Studio, you must enable access key-based (local) authentication.
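For example, a sketch of how key-based (local) authentication can be confirmed and a key retrieved with the Azure CLI; the resource and resource group names are placeholders:

```bash
# Make sure local (key-based) authentication is enabled on the resource,
# then list the keys used to sign in to Document Intelligence Studio.
az resource update \
  --resource-group <resource-group> \
  --name <document-intelligence-resource> \
  --resource-type "Microsoft.CognitiveServices/accounts" \
  --set properties.disableLocalAuth=false

az cognitiveservices account keys list \
  --resource-group <resource-group> \
  --name <document-intelligence-resource>
```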
#### Azure role assignments
ai-studio Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/connections.md
Connections in Azure AI Studio are a way to authenticate and consume both Micros
## Connections to Azure AI services
-You can create connections to Azure AI services such as Azure OpenAI and Azure AI Content Safety. You can then use the connection in a prompt flow tool such as the LLM tool.
+You can [create connections](../how-to/connections-add.md) to Azure AI services such as Azure OpenAI and Azure AI Content Safety. You can then use the connection in a prompt flow tool such as the LLM tool.
:::image type="content" source="../media/prompt-flow/llm-tool-connection.png" alt-text="Screenshot of a connection used by the LLM tool in prompt flow." lightbox="../media/prompt-flow/llm-tool-connection.png":::
-As another example, you can create a connection to an Azure AI Search resource. The connection can then be used by prompt flow tools such as the Vector DB Lookup tool.
+As another example, you can [create a connection](../how-to/connections-add.md) to an Azure AI Search resource. The connection can then be used by prompt flow tools such as the Vector DB Lookup tool.
:::image type="content" source="../media/prompt-flow/vector-db-lookup-tool-connection.png" alt-text="Screenshot of a connection used by the Vector DB Lookup tool in prompt flow." lightbox="../media/prompt-flow/vector-db-lookup-tool-connection.png":::
ai-studio Deploy Copilot Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md
Your copilot application can use the deployed prompt flow to answer questions in
:::image type="content" source="../media/tutorials/copilot-deploy-flow/deployments-score-url-samples.png" alt-text="Screenshot of the prompt flow deployment endpoint and code samples." lightbox = "../media/tutorials/copilot-deploy-flow/deployments-score-url-samples.png"::: - ## Clean up resources To avoid incurring unnecessary Azure costs, you should delete the resources you created in this tutorial if they're no longer needed. To manage resources, you can use the [Azure portal](https://portal.azure.com?azure-portal=true). You can also [stop or delete your compute instance](../how-to/create-manage-compute.md#start-or-stop-a-compute-instance) in [Azure AI Studio](https://ai.azure.com). +
+## Azure AI Studio enterprise chat solution demo
+
+Learn how to create a retail copilot using your data with Azure AI Studio in this [end-to-end walkthrough video](https://youtu.be/Qes7p5w8Tz8).
+> [!VIDEO https://www.youtube.com/embed/Qes7p5w8Tz8]
+ ## Next steps * Learn more about [prompt flow](../how-to/prompt-flow.md).
ai-studio What Is Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md
Build is an experience where AI Devs and ML Pros can build or customize AI solut
# [Manage](#tab/manage)
-As a developer, you can manage settings such as connections and compute. Your admin will mainly use this section to look at access control, usage, and billing.
+As a developer, you can manage settings such as connections and compute. Your admin mainly uses this section to look at access control, usage, and billing.
- Centralized backend infrastructure to reduce complexity for developers. - A single Azure AI hub resource for enterprise configuration, unified data story, and built-in governance.
As a developer, you can manage settings such as connections and compute. Your ad
## Azure AI Studio enterprise chat solution demo
-Learn how to create a retail copilot using your data with Azure AI Studio in this [end-to-end walkthrough video](https://youtu.be/Qes7p5w8Tz8).
-> [!VIDEO https://www.youtube.com/embed/Qes7p5w8Tz8]
+Learn how to build your own copilot with Azure AI Studio in this [overview video from Microsoft Mechanics on YouTube](https://youtu.be/3hZorLy6JiA).
+> [!VIDEO https://www.youtube.com/embed/3hZorLy6JiA]
## Pricing and Billing
Azure AI Studio is available in most regions where Azure AI services are availab
## How to get access
-You can explore Azure AI Studio without signing in, but for full functionality an Azure account is needed and apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access). You receive a follow-up email when your subscription has been added.
+You can explore Azure AI Studio without signing in, but for full functionality an Azure account is needed. You also need to apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access). You receive a follow-up email when your subscription is added.
## Next steps
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
For agent nodes, which are expected to handle very large numbers of concurrent s
| `net.ipv4.tcp_fin_timeout` | 5 - 120 | 60 | The length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it's aborted at the local end. |
| `net.ipv4.tcp_keepalive_time` | 30 - 432000 | 7200 | How often TCP sends out `keepalive` messages when `keepalive` is enabled. |
| `net.ipv4.tcp_keepalive_probes` | 1 - 15 | 9 | How many `keepalive` probes TCP sends out, until it decides that the connection is broken. |
-| `net.ipv4.tcp_keepalive_intvl` | 10 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
+| `net.ipv4.tcp_keepalive_intvl` | 10 - 90 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
| `net.ipv4.tcp_tw_reuse` | 0 or 1 | 0 | Allow to reuse `TIME-WAIT` sockets for new connections when it's safe from protocol viewpoint. |
-| `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65000] | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. Comprised of two numbers: The first number is the first local port allowed for TCP and UDP traffic on the agent node, the second is the last local port number. |
+| `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65535 | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. Comprised of two numbers: The first number is the first local port allowed for TCP and UDP traffic on the agent node, the second is the last local port number. |
| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
| `net.ipv4.neigh.default.gc_thresh2`| 512 - 90000 | 8192 | Soft maximum number of entries that may be in the ARP cache. This setting is arguably the most important, as ARP garbage collection will be triggered about 5 seconds after reaching this soft maximum. |
| `net.ipv4.neigh.default.gc_thresh3`| 1024 - 100000 | 16384 | Hard maximum number of entries in the ARP cache. |
-| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
-| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. |
+| `net.netfilter.nf_conntrack_max` | 131072 - 2097152 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
+| `net.netfilter.nf_conntrack_buckets` | 65536 - 524288 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. |
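For reference, a sketch of how a few of these settings can be applied to a new node pool through a custom Linux OS configuration file; the cluster, resource group, and node pool names are placeholders, and the camelCase keys assume the AKS custom node configuration schema:

```bash
# Write a custom Linux OS config with example sysctl values from the ranges above,
# then create a node pool that uses it.
cat > linuxosconfig.json <<'EOF'
{
  "sysctls": {
    "netIpv4TcpFinTimeout": 30,
    "netNetfilterNfConntrackMax": 262144
  }
}
EOF

az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name mynodepool \
  --linux-os-config ./linuxosconfig.json
```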
### Worker limits
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
To view supported GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-sku
* If you're using an Azure Linux GPU-enabled node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md). * [NVadsA10](../virtual-machines/nva10v5-series.md) v5-series are *not* a recommended SKU for GPU VHD.
-* AKS doesn't support Windows GPU-enabled node pools.
* Updating an existing node pool to add GPU isn't supported. ## Before you begin
Using NVIDIA GPUs involves the installation of various NVIDIA software component
### Skip GPU driver installation (preview)
-AKS has automatic GPU driver installation enabled by default. In some cases, such as installing your own drivers or using the NVIDIA GPU Operator, you may want to skip GPU driver installation.
+AKS has automatic GPU driver installation enabled by default. In some cases, such as installing your own drivers or using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/getting-started.html), you may want to skip GPU driver installation.
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
AKS has automatic GPU driver installation enabled by default. In some cases, suc
--node-count 1 \ --skip-gpu-driver-install \ --node-vm-size Standard_NC6s_v3 \
- --node-taints sku=gpu:NoSchedule \
--enable-cluster-autoscaler \ --min-count 1 \ --max-count 3
AKS has automatic GPU driver installation enabled by default. In some cases, suc
### NVIDIA device plugin installation
-NVIDIA device plugin installation is required when using GPUs on AKS. In some cases, the installation is handled automatically, such as when using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html) or the [AKS GPU image (preview)](#use-the-aks-gpu-image-preview). Alternatively, you can manually install the NVIDIA device plugin.
+NVIDIA device plugin installation is required when using GPUs on AKS. In some cases, the installation is handled automatically, such as when using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/getting-started.html) or the [AKS GPU image (preview)](#use-the-aks-gpu-image-preview). Alternatively, you can manually install the NVIDIA device plugin.
#### Manually install the NVIDIA device plugin
The NVIDIA GPU Operator automates the management of all NVIDIA software componen
1. Skip automatic GPU driver installation by creating a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command with `--skip-gpu-driver-install`. Adding the `--skip-gpu-driver-install` flag during node pool creation skips the automatic GPU driver installation. Any existing nodes aren't changed. You can scale the node pool to zero and then back up to make the change take effect.
-2. Follow the NVIDIA documentation to [Install the GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/install-gpu-ocp.html#install-nvidiagpu:~:text=NVIDIA%20GPU%20Operator-,Installing%20the%20NVIDIA%20GPU%20Operator,-%EF%83%81).
+2. Follow the NVIDIA documentation to [Install the GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/getting-started.html).
3. Now that you successfully installed the GPU Operator, you can check that your [GPUs are schedulable](#confirm-that-gpus-are-schedulable) and [run a GPU workload](#run-a-gpu-enabled-workload).
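For example, a quick sketch of the kind of check step 3 refers to, using the standard `nvidia.com/gpu` resource name advertised by the device plugin:

```bash
# Confirm each GPU node advertises an allocatable nvidia.com/gpu resource.
kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"

# Check that the GPU Operator / device plugin pods are running.
kubectl get pods --all-namespaces | grep -iE 'nvidia|gpu-operator'
```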
aks Upgrade Windows 2019 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-2019-2022.md
When upgrading the OS version of a running Windows workload on Azure Kubernetes
When a new version of the Windows Server operating system is released, AKS is committed to supporting it and recommending you upgrade to the latest version to take advantage of the fixes, improvements, and new functionality. AKS provides a five-year support lifecycle for every Windows Server version, starting with Windows Server 2022. During this period, AKS will release a new version that supports a newer version of Windows Server OS for you to upgrade to. > [!NOTE]
->- Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL). For more information, see [AKS release notes][aks-release-notes].
->- Windows Server 2022 is being retired after Kubernetes version 1.34 reaches its end of life (EOL). For more information, see [AKS release notes][aks-release-notes].
+> - Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL). For more information, see [AKS release notes][aks-release-notes].
+> - Windows Server 2022 is being retired after Kubernetes version 1.34 reaches its end of life (EOL). For more information, see [AKS release notes][aks-release-notes].
## Limitations
Node Selector is the most common and recommended option for placement of Windows
kubectl get pods -o wide ```
- The following example output shows the pods in the `defualt` namespace:
+ The following example output shows the pods in the `default` namespace:
```output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api-management Api Management Howto Aad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md
For steps to update the Azure AD B2C app, see [Switch redirect URIs to the singl
1. Select **Azure Active Directory B2C** from the list. 1. In the **Client library** dropdown, select **MSAL**. 1. Select **Update**.
-1. [Republish your developer portal](api-management-howto-developer-portal-customize.md#publish-from-the-azure-portal).
+1. [Republish your developer portal](developer-portal-overview.md#publish-the-portal).
## Developer portal - add Azure Active Directory B2C account authentication > [!IMPORTANT]
-> You need to [republish the developer portal](api-management-howto-developer-portal-customize.md#publish) when you create or update Azure Active Directory B2C configuration settings for the changes to take effect.
+> You need to [republish the developer portal](developer-portal-overview.md#publish-the-portal) when you create or update Azure Active Directory B2C configuration settings for the changes to take effect.
In the developer portal, sign-in with Azure Active Directory B2C is possible with the **Sign-in button: OAuth** widget. The widget is already included on the sign-in page of the default developer portal content.
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
For steps, see [Switch redirect URIs to the single-page application type](../act
1. Select **Microsoft Entra ID** from the list. 1. In the **Client library** dropdown, select **MSAL**. 1. Select **Update**.
-1. [Republish your developer portal](api-management-howto-developer-portal-customize.md#publish-from-the-azure-portal).
+1. [Republish your developer portal](developer-portal-overview.md#publish-the-portal).
<a name='add-an-external-azure-ad-group'></a>
In the developer portal, you can sign in with Microsoft Entra ID using the **Sig
Although a new account will automatically be created when a new user signs in with Microsoft Entra ID, consider adding the same widget to the sign-up page. The **Sign-up form: OAuth** widget represents a form used for signing up with OAuth. > [!IMPORTANT]
-> You need to [republish the portal](api-management-howto-developer-portal-customize.md#publish) for the Microsoft Entra ID changes to take effect.
+> You need to [republish the portal](developer-portal-overview.md#publish-the-portal) for the Microsoft Entra ID changes to take effect.
## Related content
api-management Api Management Howto Create Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-groups.md
Once the association is added between the developer and the group, you can view
## <a name="next-steps"> </a>Next steps
-* Once a developer is added to a group, they can view and subscribe to the products associated with that group. For more information, see [How create and publish a product in Azure API Management][How create and publish a product in Azure API Management].
-* You can control how the developer portal content appears to different users and groups you've configured. Learn more about [customizing the developer portal](api-management-howto-developer-portal-customize.md#customize-the-portals-content).
+* Once a developer is added to a group, they can view and subscribe to the products associated with that group. For more information, see [How to create and publish a product in Azure API Management][How create and publish a product in Azure API Management].
+* You can control how the developer portal content appears to different users and groups you've configured. Learn more about [visibility and access controls in the developer portal](developer-portal-overview.md#content-visibility-and-access).
* Learn how to manage the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that are used in notifications to developers from your API Management instance.
api-management Api Management Howto Developer Portal Customize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal-customize.md
Title: Tutorial - Access and customize the developer portal - Azure API Management | Microsoft Docs
-description: In this tutorial, customize the API Management developer portal, an automatically generated, fully customizable website with the documentation of your APIs.
+description: Follow this tutorial to learn how to customize the API Management developer portal, an automatically generated, fully customizable website with the documentation of your APIs.
Previously updated : 09/06/2023 Last updated : 03/29/2024 # Tutorial: Access and customize the developer portal
-The *developer portal* is an automatically generated, fully customizable website with the documentation of your APIs. It is where API consumers can discover your APIs, learn how to use them, and request access.
+In this tutorial, you'll get started with customizing the API Management *developer portal*. The developer portal is an automatically generated, fully customizable website with the documentation of your APIs. It's where API consumers can discover your APIs, learn how to use them, and request access.
+ In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Publish the changes > * View the published portal
-You can find more details on the developer portal in the [Azure API Management developer portal overview](api-management-howto-developer-portal.md).
+For more information about developer portal features and options, see [Azure API Management developer portal overview](developer-portal-overview.md).
:::image type="content" source="media/api-management-howto-developer-portal-customize/cover.png" alt-text="Screenshot of the API Management developer portal - administrator mode." border="false"::: ## Prerequisites -- Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)-- Import and publish an API. For more information, see [Import and publish](import-and-publish.md)
+- Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+- [Import and publish](import-and-publish.md) an API.
[!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)] ## Access the portal as an administrator
-Follow the steps below to access the managed version of the portal.
+Follow these steps to access the managed version of the developer portal.
1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. 1. If you created your instance in a v2 service tier that supports the developer portal, first enable the developer portal.
Follow the steps below to access the managed version of the portal.
It might take a few minutes to enable the developer portal. 1. In the left menu, under **Developer portal**, select **Portal overview**. Then select the **Developer portal** button in the top navigation bar. A new browser tab with an administrative version of the portal will open.
-## Developer portal architectural concepts
-
-The portal components can be logically divided into two categories: *code* and *content*.
-
-### Code
-
-Code is maintained in the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal) and includes:
--- **Widgets** - represent visual elements and combine HTML, JavaScript, styling ability, settings, and content mapping. Examples are an image, a text paragraph, a form, a list of APIs etc.-- **Styling definitions** - specify how widgets can be styled-- **Engine** - which generates static webpages from portal content and is written in JavaScript-- **Visual editor** - allows for in-browser customization and authoring experience-
-### Content
-
-Content is divided into two subcategories: *portal content* and *API Management content*.
-
-*Portal content* is specific to the portal and includes:
--- **Pages** - for example, landing page, API tutorials, blog posts-- **Media** - images, animations, and other file-based content-- **Layouts** - templates, which are matched against a URL and define how pages are displayed-- **Styles** - values for styling definitions, such as fonts, colors, borders-- **Settings** - configurations such as favicon, website metadata-
- Portal content, except for media, is expressed as JSON documents.
-
-*API Management content* includes entities such as APIs, Operations, Products, Subscriptions.
## Understand the portal's administrative interface
-### Default content
-
-If you're accessing the portal for the first time, the default content is automatically provisioned in the background. Default content has been designed to showcase the portal's capabilities and minimize the customizations needed to personalize your portal. You can learn more about what is included in the portal content in the [Azure API Management developer portal overview](api-management-howto-developer-portal.md).
-### Visual editor
+## Add an image to the media library
-You can customize the content of the portal with the visual editor.
-* The menu sections on the left let you create or modify pages, media, layouts, menus, styles, or website settings.
-* The menu items on the bottom let you switch between viewports (for example, mobile or desktop), view the elements of the portal visible to users in different groups, or save or undo actions.
-* Add sections to a page by clicking on a blue icon with a plus sign.
-* Widgets (for example, text, images, or APIs list) can be added by pressing a grey icon with a plus sign.
-* Rearrange items in a page with the drag-and-drop interaction.
+You'll want to use your own images and other media content in the developer portal to reflect your organization's branding. If an image that you want to use isn't already in the portal's media library, add it in the developer portal:
-### Layouts and pages
+1. In the left menu of the visual editor, select **Media**.
+1. Do one of the following:
+ * Select **Upload file** and select a local image file on your computer.
+ * Select **Link file**. Enter a **Reference URL** to the image file and other details. Then select **Download**.
+1. Select **Close** to exit the media library.
+> [!TIP]
+> You can also add an image to the media library by dragging and dropping it directly in the visual editor window.
-Layouts define how pages are displayed. For example, in the default content, there are two layouts: one applies to the home page, and the other to all remaining pages.
+## Replace the default logo on the home page
-A layout gets applied to a page by matching its URL template to the page's URL. For example, a layout with a URL template of `/wiki/*` will be applied to every page with the `/wiki/` segment in the URL: `/wiki/getting-started`, `/wiki/styles`, etc.
+A placeholder logo is provided in the top left corner of the navigation bar. You can replace it with your own logo to match your organization's branding.
-In the preceding image, content belonging to the layout is marked in blue, while the page is marked in red. The menu sections are marked respectively.
+1. In the developer portal, select the default **Contoso** logo in the top left of the navigation bar.
+1. Select **Edit**.
+1. In the **Picture** pop-up, under **Main**, select **Source**.
+1. In the **Media** pop-up, select one of the following:
+ * An image already uploaded in your media library
+ * **Upload file** to upload a new image file to your media library
+ * **None** if you don't want to use a logo
+1. The logo updates in real time.
+1. Select outside the pop-up windows to exit the media library.
+1. In the top bar, select **Save**.
-### Styling guide
+## Edit content on the home page
+The default **Home** page and other pages are provided with placeholder text and images. You can either remove entire sections containing this content or keep the structure and adjust the elements one by one. Replace the generated text and images with your own and make sure any links point to desired locations.
-Styling guide is a panel created with designers in mind. It allows for overseeing and styling all the visual elements in your portal. The styling is hierarchical - many elements inherit properties from other elements. For example, button elements use colors for text and background. To change a button's color, you need to change the original color variant.
+Edit the structure and content of the generated pages in several ways. For example:
-To edit a variant, select it and select the pencil icon that appears on top of it. After you make the changes in the pop-up window, close it.
-### Save button
+## Edit the site's primary color
+To change colors, gradients, typography, buttons, and other user interface elements in the developer portal, edit the site styles. For example, change the primary color used in the navigation bar, buttons, and other elements to match your organization's branding.
-Whenever you make a change in the portal, you need to save it manually by selecting the **Save** button in the menu at the bottom, or press [Ctrl]+[S]. When you save your changes, the modified content is automatically uploaded to your API Management service.
+1. In the developer portal, in the left menu of the visual editor, select **Styles**.
+1. Under the **Colors** section, select the color style item you want to edit. For example, select **Primary**.
+1. Select **Edit color**.
+1. Select the color from the color-picker, or enter the color hex code.
+1. In the top bar, select **Save**.
-## Customize the portal's content
+The updated color is applied to the site in real time.
-Before you make your portal available to the visitors, you should personalize the automatically generated content. Recommended changes include the layouts, styles, and the content of the home page. You can also make certain content elements accessible only to selected users or groups.
+> [!TIP]
+> If you want, add and name another color item by selecting **+ Add color** on the **Styles** page.
-> [!NOTE]
-> Due to integration considerations, the following pages can't be removed or moved under a different URL: `/404`, `/500`, `/captcha`, `/change-password`, `/config.json`, `/confirm/invitation`, `/confirm-v2/identities/basic/signup`, `/confirm-v2/password`, `/internal-status-0123456789abcdef`, `/publish`, `/signin`, `/signin-sso`, `/signup`.
+## Change the background image on the home page
-### Home page
+You can change the background on your portal's home page to an image or color that matches your organization's branding. If you haven't already uploaded a different image to the media library, you can upload it before changing the background image, or when you're changing it.
-The default **Home** page is filled with placeholder content. You can either remove entire sections containing this content or keep the structure and adjust the elements one by one. Replace the generated text and images with your own and make sure the links point to desired locations. You can edit the structure and content of the home page by:
-* Dragging and dropping page elements to the desired placement on the site.
-* Selecting text and heading elements to edit and format content.
-* Verifying your buttons point to the right locations.
+1. On the home page of the developer portal, click in the top right corner so that the top section is highlighted at the corners and a pop-up menu appears.
+1. To the right of **Edit article** in the pop-up menu, select the up-down arrow (**Switch to parent**).
+1. Select **Edit section**.
+1. In the **Section** pop-up, under **Background**, select one of the icons:
-### Layouts
+ :::image type="content" source="media/api-management-howto-developer-portal-customize/background.png" alt-text="Screenshot of background settings in the developer portal.":::
+ * **Clear background**, to remove a background image
+ * **Background image**, to select an image from the media library, or to upload a new image
+ * **Background color**, to select a color from the color picker, or to clear a color
+ * **Background gradient**, to select a gradient from your site styles page, or to clear a gradient
+1. Under **Background sizing**, make a selection appropriate for your background.
+1. In the top bar, select **Save**.
-Replace the automatically generated logo in the navigation bar with your own image.
+## Change the default layout
-1. In the developer portal, select the default **Contoso** logo in the top left of the navigation bar.
-1. Select the **Edit** icon.
-1. Under the **Main** section, select **Source**.
-1. In the **Media** pop-up, either select:
- * An image already uploaded in your library, or
- * **Upload file** to upload a new image file to use, or
- * Select **None** to forego using a logo.
-1. The logo updates in real-time.
-1. Select outside the pop-up windows to exit the media library.
-1. Click **Save**.
+The developer portal uses *layouts* to define common content elements such as navigation bars and footers on groups of related pages. Each page is automatically matched with a layout based on a URL template.
-### Styling
+By default, the developer portal comes with two layouts:
-Although you don't need to adjust any styles, you may consider adjusting particular elements. For example, change the primary color to match your brand's color. You can do this in two ways:
+* **Home** - used for the home page (URL template `/`)
-#### Overall site style
+* **Default** - used for all other pages (URL template `/*`).
-1. In the developer portal, select the **Styles** icon from the left tool bar.
-1. Under the **Colors** section, select the color style item you want to edit.
-1. Click the **Edit** icon for that style item.
-1. Select the color from the color-picker, or enter the hex color code.
-1. Add and name another color item by clicking **Add color**.
-1. Click **Save**.
-#### Container style
+You can change the layout for any page in the developer portal and define new layouts to apply to pages that match other URL templates.
-1. On the main page of the developer portal, select the container background.
-1. Click the **Edit** icon.
-1. In the pop-up, set:
- * The background to clear, an image, a specific color, or a gradient.
- * The container size, margin, and padding.
- * Container position and height.
-1. Select outside the pop-up windows to exit the container settings.
-1. Click **Save**.
+For example, to change the logo that's used in the navigation bar of the Default layout to match your organization's branding:
-### Visibility and access controls
+1. In the left menu of the visual editor, select **Pages**.
+1. Select the **Layouts** tab, and select **Default**.
+1. Select the picture of the logo in the upper left corner and select **Edit**.
+1. Under **Main**, select **Source**.
+1. In the **Media** pop-up window, select one of the following:
+ * An image already uploaded in your media library
+ * **Upload file** to upload a new image file to your media library
+ * **None** if you don't want to use a logo
+1. The logo updates in real time.
+1. Select outside the pop-up windows to exit the media library.
+1. In the top bar, select **Save**.
-You can control which portal content appears to different users, based on their identity. For example, you might want to display certain pages only to users who have access to a specific product or API. Or, make a section of a page appear only for certain [groups of users](api-management-howto-create-groups.md). The developer portal has built-in controls for these needs.
+## Edit navigation menus
-> [!NOTE]
-> Visibility and access controls are supported only in the managed developer portal. They are not supported in the [self-hosted portal](developer-portal-self-host.md).
+You can edit the navigation menus at the top of the developer portal pages to change the order of menu items, add items, or remove items. You can also change the name of menu items and the URL or other content they point to.
-* When you add or edit a page, select the **Access** tab to control the users or groups that can access the page
-
- :::image type="content" source="media/api-management-howto-developer-portal-customize/page-access-control.png" alt-text="Screenshot of the page access control settings in the developer portal.":::
+For example, the **Default** and **Home** layouts for the developer portal display two menus to guest users of the developer portal:
-* When you customize page content such as a page section, menu, or button, select the **Change visibility** icon to control the users or groups that can see the element on the page
+* a main menu with links to **Home**, **APIs**, and **Products**
+* an anonymous user menu with links to **Sign in** and **Sign up** pages.
- :::image type="content" source="media/api-management-howto-developer-portal-customize/change-visibility-button.png" alt-text="Screenshot of the change visibility button in the developer portal.":::
+However, you might want to customize them. For example, if you want to independently invite users to your site, you could disable the **Sign up** link in the anonymous user menu.
- * You can change the visibility of the following page content: sections, menus, buttons, and sign-in for OAuth authorization.
- * Media files such as images on a page inherit the visibility of the elements that contain them.
+1. In the left menu of the visual editor, select **Site menu**.
+1. On the left, expand **Anonymous user menu**.
+1. Select the settings (gear icon) next to **Sign up**, and select **Delete**.
+1. Select **Save**.
-When a user visits the developer portal with visibility and access controls applied:
+## Edit site settings
-* The developer portal automatically hides buttons or navigation items that point to pages that a user doesn't have access to.
+Edit the site settings for the developer portal to change the site name, description, and other details.
-* An attempt by a user to access a page they aren't authorized to will result in a 404 Not Found error.
+1. In the left menu of the visual editor, select **Settings**.
+1. In the **Settings** pop-up, enter the site metadata you want to change. Optionally, set up a favicon for the site from an image in your media library.
+1. In the top bar, select **Save**.
> [!TIP]
-> Using the administrative interface, you can preview pages as a user associated with any built-in or custom group by selecting the **Impersonate** icon in the menu at the bottom.
->
-
-### Customization example
+> If you want to change the site's domain name, you must first set up a custom domain in your API Management instance. [Learn more about custom domain names](configure-custom-domain.md) in API Management.
-In the following video, we demonstrate how to edit the content of the portal, customize the website's look, and publish the changes.
-> [!VIDEO https://www.youtube.com/embed/5mMtUSmfUlw]
+## Publish the portal
-## <a name="publish"></a> Publish the portal
+To make your portal and its latest changes available to visitors, you need to *publish* it.
-To make your portal and its latest changes available to visitors, you need to *publish* it. You can publish the portal within the portal's administrative interface or from the Azure portal.
+To publish from the administrative interface of the developer portal:
-### Publish from the administrative interface
-1. Make sure you saved your changes by selecting the **Save** icon.
-1. In the **Operations** section of the menu, select **Publish website**. This operation may take a few minutes.
- :::image type="content" source="media/api-management-howto-developer-portal-customize/publish-portal.png" alt-text="Screenshot of the Publish website button in the developer portal." border="false":::
-
-### Publish from the Azure portal
+> [!TIP]
+> Another option is to publish the site from the Azure portal. On the **Portal overview** page of your API Management instance in the Azure portal, select **Publish**.
-1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
-1. In the left menu, under **Developer portal**, select **Portal overview**.
-1. In the **Portal overview** window, select **Publish**.
+## Visit the published portal
- :::image type="content" source="media/api-management-howto-developer-portal-customize/pubish-portal-azure-portal.png" alt-text="Publish portal from Azure portal":::
+To view your changes after you publish the portal, access it at the same URL as the administrative panel, for example `https://contoso-api.developer.azure-api.net`. View it in a separate browser session (using incognito or private browsing mode) as an external visitor.
-> [!NOTE]
-> The portal needs to be republished after API Management service configuration changes. For example, republish the portal after assigning a custom domain, updating the identity providers, setting delegation, or specifying sign-in and product terms.
+## Apply the CORS policy on APIs
+To let the visitors of your portal test the APIs through the built-in interactive console, enable CORS (cross-origin resource sharing) on your APIs, if you haven't already done so. On the **Portal overview** page of your API Management instance in the Azure portal, select **Enable CORS**. [Learn more](enable-cors-developer-portal.md).
-## Visit the published portal
+## Next steps
-After you publish the portal, you can access it at the same URL as the administrative panel, for example `https://contoso-api.developer.azure-api.net`. View it in a separate browser session (using incognito or private browsing mode) as an external visitor.
+In this tutorial, you learned how to:
-## Apply the CORS policy on APIs
+> [!div class="checklist"]
+> * Access the managed version of the developer portal
+> * Navigate its administrative interface
+> * Customize the content
+> * Publish the changes
+> * View the published portal
-To let the visitors of your portal test the APIs through the built-in interactive console, enable CORS (cross-origin resource sharing) on your APIs. For details, see the [Azure API Management developer portal FAQ](developer-portal-faq.md#cors).
+Advance to the next tutorial:
-## Next steps
+> [!div class="nextstepaction"]
+> [Import and manage APIs using Visual Studio Code](visual-studio-code-tutorial.md)
-Learn more about the developer portal:
+See related content about the developer portal:
-- [Azure API Management developer portal overview](api-management-howto-developer-portal.md)
+- [Azure API Management developer portal overview](developer-portal-overview.md)
- Configure authentication to the developer portal with [usernames and passwords](developer-portal-basic-authentication.md), [Microsoft Entra ID](api-management-howto-aad.md), or [Azure AD B2C](api-management-howto-aad-b2c.md). - Learn more about [customizing and extending](developer-portal-extend-custom-functionality.md) the functionality of the developer portal.
api-management Api Management Howto Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal.md
- Title: Overview of the developer portal in Azure API Management-
-description: Learn about the developer portal in API Management - a customizable website, where API consumers can explore your APIs.
----- Previously updated : 10/28/2022---
-# Overview of the developer portal
-
-Developer portal is an automatically generated, fully customizable website with the documentation of your APIs. It is where API consumers can discover your APIs, learn how to use them, request access, and try them out.
-
-As introduced in this article, you can customize and extend the developer portal for your specific scenarios.
-
-![API Management developer portal](media/api-management-howto-developer-portal/cover.png)
--
-## Customize and style the managed portal
-
-Your API Management service includes a built-in, always up-to-date, **managed** developer portal. You can access it from the Azure portal interface.
-
-[Customize and style](api-management-howto-developer-portal-customize.md) the managed portal through the built-in, drag-and-drop visual editor:
-
-* Use the visual editor to modify pages, media, layouts, menus, styles, or website settings.
-
-* Take advantage of built-in widgets to add text, images, buttons, and other objects that the portal supports out-of-the-box.
-
-* Control how the portal content appears to different [users and groups](api-management-howto-create-groups.md) configured in your API Management instance. For example, display certain pages only to groups that are associated with particular products, or to users that can access specific APIs.
-
-> [!NOTE]
-> The managed developer portal receives and applies updates automatically. Changes that you've saved but not published to the developer portal remain in that state during an update.
-
-## <a name="managed-vs-self-hosted"></a> Options to extend portal functionality
-In some cases you might need functionality beyond the customization and styling options provided in the managed developer portal. If you need to implement custom logic, which isn't supported out-of-the-box, you have [several options](developer-portal-extend-custom-functionality.md):
-* [Add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget) directly through a developer portal widget designed for small customizations - for example, add HTML for a form or to embed a video player. The custom code is rendered in an inline frame (IFrame).
-* [Create and upload a custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) to develop and add more complex custom portal features.
-* [Self-host the portal](developer-portal-self-host.md), only if you need to make modifications to the core of the developer portal [codebase](https://github.com/Azure/api-management-developer-portal). This option requires advanced configuration. Azure Support's assistance is limited only to the basic setup of self-hosted portals.
-
-> [!NOTE]
-> Because the API Management developer portal codebase is maintained on [GitHub](https://github.com/Azure/api-management-developer-portal), you can open issues and make pull requests for the API Management team to merge new functionality at any time.
->
-
-## Next steps
-
-Learn more about the developer portal:
--- [Access and customize the managed developer portal](api-management-howto-developer-portal-customize.md)-- [Extend functionality of the managed developer portal](developer-portal-extend-custom-functionality.md)-- [Set up self-hosted version of the portal](developer-portal-self-host.md)-
-Browse other resources:
--- [GitHub repository with the source code](https://github.com/Azure/api-management-developer-portal)-- [Frequently asked questions about the developer portal](developer-portal-faq.md)
api-management Api Management Howto Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md
Optionally:
1. Select **Create** to save the API Management OAuth 2.0 authorization server configuration.
-1. [Republish](api-management-howto-developer-portal-customize.md#publish) the developer portal.
+1. [Republish](developer-portal-overview.md#publish-the-portal) the developer portal.
> [!IMPORTANT] > When making OAuth 2.0-related changes, be sure to republish the developer portal after every modification; otherwise, relevant changes (for example, a scope change) can't propagate to the portal and be used when trying out the APIs.
api-management Api Management Howto Setup Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-setup-delegation.md
var signature = digest.toString('base64');
``` > [!IMPORTANT]
-> You need to [republish the developer portal](api-management-howto-developer-portal-customize.md#publish) for the delegation changes to take effect.
+> You need to [republish the developer portal](developer-portal-overview.md#publish-the-portal) for the delegation changes to take effect.
## Next steps - [Learn more about the developer portal.](api-management-howto-developer-portal.md)
api-management Identity Provider Adal Retirement Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/identity-provider-adal-retirement-sep-2025.md
Developer portal sign-in and sign-up with Microsoft Entra ID or Azure AD B2C wil
3. Select **Microsoft Entra ID** or **Azure Active Directory B2C** from the list. 4. Select **MSAL** in the **Client library** dropdown. 5. Select **Update**.
-6. [Republish your developer portal](../api-management-howto-developer-portal-customize.md#publish-from-the-azure-portal).
+6. [Republish your developer portal](../developer-portal-overview.md#publish-the-portal).
## Help and support
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
The managed developer portal includes a **Custom HTML code** widget where you ca
:::image type="content" source="media/developer-portal-extend-custom-functionality/configure-html-custom-code.png" alt-text="Screenshot that shows how to configure HTML custom code in the developer portal."::: 1. Replace the sample **HTML code** with your custom content. 1. When configuration is complete, close the window.
-1. Save your changes, and [republish the portal](api-management-howto-developer-portal-customize.md#publish).
+1. Save your changes, and [republish the portal](developer-portal-overview.md#publish-the-portal).
> [!NOTE] > Microsoft does not support the HTML code you add in the Custom HTML Code widget.
The custom widget is now deployed to your developer portal. Using the portal's a
### Publish the developer portal
-After you configure the widget in the administrative interface, [republish the portal](api-management-howto-developer-portal-customize.md#publish) to make the widget available in production.
+After you configure the widget in the administrative interface, [republish the portal](developer-portal-overview.md#publish-the-portal) to make the widget available in production.
> [!NOTE] > * If you deploy updated widget code at a later date, the widget used in production doesn't update until you republish the developer portal.
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-faq.md
# API Management developer portal - frequently asked questions
+This article provides answers to frequently asked questions about the [developer portal](developer-portal-overview.md) in Azure API Management.
+ ## What if I need functionality that isn't supported in the portal? You have the following options:
-* For small customizations, use a built-in widget to [add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget) .
+* For small customizations, use a built-in widget to [add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget).
* For larger customizations, [create and upload](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) a custom widget to the managed developer portal.
If your API Management service is in an internal VNet and you're accessing it th
## I assigned a custom API Management domain and the published portal doesn't work
-After you update the domain, you need to [republish the portal](api-management-howto-developer-portal-customize.md#publish) for the changes to take effect.
+After you update the domain, you need to [republish the portal](developer-portal-overview.md#publish-the-portal) for the changes to take effect.
## I added an identity provider and I can't see it in the portal
-After you configure an identity provider (for example, Microsoft Entra ID, Azure AD B2C), you need to [republish the portal](api-management-howto-developer-portal-customize.md#publish) for the changes to take effect. Make sure your developer portal pages include the OAuth buttons widget.
+After you configure an identity provider (for example, Microsoft Entra ID, Azure AD B2C), you need to [republish the portal](developer-portal-overview.md#publish-the-portal) for the changes to take effect. Make sure your developer portal pages include the OAuth buttons widget.
## I set up delegation and the portal doesn't use it
-After you set up delegation, you need to [republish the portal](api-management-howto-developer-portal-customize.md#publish) for the changes to take effect.
+After you set up delegation, you need to [republish the portal](developer-portal-overview.md#publish-the-portal) for the changes to take effect.
## My other API Management configuration changes haven't been propagated in the developer portal
-Most configuration changes (for example, VNet, sign-in, product terms) require [republishing the portal](api-management-howto-developer-portal-customize.md#publish).
+Most configuration changes (for example, VNet, sign-in, product terms) require [republishing the portal](developer-portal-overview.md#publish-the-portal).
## <a name="cors"></a> I'm getting a CORS error when using the interactive console
-The interactive console makes a client-side API request from the browser. Resolve the CORS problem by adding [a CORS policy](cors-policy.md) on your API(s).
-
-You can check the status of the CORS policy in the **Portal overview** section of your API Management service in the Azure portal. A warning box indicates an absent or misconfigured policy.
-
-> [!NOTE]
->
-> Only one CORS policy is executed. If you specified multiple CORS policies (for example, on the API level and on the all-APIs level), your interactive console may not work as expected.
-
-![Screenshot that shows where you can check the status of your CORS policy.](media/developer-portal-faq/cors-azure-portal.png)
-
-Automatically apply the CORS policy by clicking the **Enable CORS** button.
+The interactive console makes a client-side API request from the browser. Resolve the CORS problem by adding a CORS policy on your API(s), or configure the portal to use a CORS proxy. For more information, see [Enable CORS for interactive console in the API Management developer portal](enable-cors-developer-portal.md).
-You can also enable CORS manually.
-
-1. Select the **Manually apply it on the global level** link to see the generated policy code.
-2. Navigate to **All APIs** in the **APIs** section of your API Management service in the Azure portal.
-3. Select the **</>** icon in the **Inbound processing** section.
-4. Insert the policy in the **\<inbound\>** section of the XML file. Make sure the **\<origin\>** value matches your developer portal's domain.
-
-> [!NOTE]
->
-> If you apply the CORS policy in the Product scope, instead of the API(s) scope, and your API uses subscription key authentication through a header, your console won't work.
->
-> The browser automatically issues an `OPTIONS` HTTP request, which doesn't contain a header with the subscription key. Because of the missing subscription key, API Management can't associate the `OPTIONS` call with a Product, so it can't apply the CORS policy.
->
-> As a workaround you can pass the subscription key in a query parameter.
-
-## What is the CORS proxy feature and when should I use it?
-
-Select the **Use CORS proxy** option in the configuration of the API operation details widget to route the interactive console's API calls through the portal's backend in your API Management service. In this configuration, you no longer need to apply a CORS policy for your APIs, and connectivity to the gateway endpoint from the local machine isn't required. If the APIs are exposed through a self-hosted gateway or your service is in a virtual network, the connectivity from the API Management's backend service to the gateway is required. If you use the self-hosted portal, specify the portal's backend endpoint using the `backendUrl` option in the configuration files. Otherwise, the self-hosted portal won't be aware of the location of the backend service.
## What permissions do I need to edit the developer portal?
If you don't need the sign-up functionality enabled by default in the developer
:::image type="content" source="media/developer-portal-faq/delete-identity-providers.png" alt-text="Delete identity providers"::: 1. Navigate to the developer portal administrative interface.
-1. Remove **Sign up** links and navigation items in the portal content. For information about customizing portal content, see [Tutorial: Access and customize the developer portal](api-management-howto-developer-portal-customize.md).
-
- :::image type="content" source="media/developer-portal-faq/delete-navigation-item.png" alt-text="Delete navigation item":::
+1. Remove **Sign up** links and navigation items in the portal content. For information about customizing portal content, see [Tutorial: Access and customize the developer portal](api-management-howto-developer-portal-customize.md#edit-navigation-menus).
1. Modify the **Sign up** page content to remove fields used to enter identity data, in case users navigate directly to it. Optionally, delete the **Sign up** page. Currently, you use the [contentItem](/rest/api/apimanagement/current-ga/content-item) REST APIs to list and delete this page.
-1. Save your changes, and [republish the portal](api-management-howto-developer-portal-customize.md#publish).
+1. Save your changes, and [republish the portal](developer-portal-overview.md#publish-the-portal).
## How can I remove the developer portal content provisioned to my API Management service?
You can generate *user-specific tokens* (including admin tokens) using the [Get
> The token must be URL-encoded.
-## Next steps
+## Related content
Learn more about the developer portal:
api-management Developer Portal Integrate Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-application-insights.md
Follow these steps to plug Application Insights into your managed or self-hosted
} ```
-1. After you update the configuration, [republish the portal](api-management-howto-developer-portal-customize.md#publish) for the changes to take effect.
+1. After you update the configuration, [republish the portal](developer-portal-overview.md#publish-the-portal) for the changes to take effect.
## Next steps
api-management Developer Portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-overview.md
+
+ Title: Overview of the developer portal in Azure API Management
+
+description: Learn about the developer portal in API Management - a customizable website where API consumers can explore your APIs.
+++++ Last updated : 03/29/2024+++
+# Overview of the developer portal
+
+The API Management *developer portal* is an automatically generated, fully customizable website with the documentation of your APIs. It's where API consumers can discover your APIs, learn how to use them, request access, and try them out.
+
+This article introduces features of the developer portal, the types of content the portal presents, and options to manage and extend the developer portal for your specific users and scenarios.
++++
+## Developer portal architectural concepts
+
+The portal components can be logically divided into two categories: *code* and *content*.
+
+### Code
+
+Code is maintained in the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal) and includes:
+
+- **Widgets** - represent visual elements and combine HTML, JavaScript, styling ability, settings, and content mapping. Examples are an image, a text paragraph, a form, and a list of APIs.
+- **Styling definitions** - specify how widgets can be styled
+- **Engine** - generates static webpages from portal content; written in JavaScript
+- **Visual editor** - provides the in-browser customization and authoring experience
+
+### Content
+
+Content is divided into two subcategories: *portal content* and *API Management data*.
+
+* *Portal content* is specific to the portal website and includes:
+
+ - **Pages** - for example, landing page, API tutorials, blog posts
+ - **Media** - images, animations, and other file-based content
+ - **Layouts** - templates that are matched against a URL and define how pages are displayed
+ - **Styles** - values for styling definitions, such as fonts, colors, borders
+ - **Settings** - configurations such as favicon, website metadata
+
+ Portal content, except for media, is expressed as JSON documents.
+
+* *API Management data* includes entities such as APIs, Operations, Products, and Subscriptions that are managed in your API Management instance.
+
+## Customize and style the portal
+
+Out of the box, the developer portal is already populated with your published APIs and products and ready to be customized for your needs. As an API publisher, you use the developer portal's administrative interface to customize the appearance and functionality of the developer portal.
+
+If you're accessing the portal for the first time, the portal includes placeholder pages, content, and navigation menus. The placeholder content you see has been designed to showcase the portal's capabilities and minimize the customizations needed to personalize your portal.
+
+For a step-by-step walkthrough of customizing and publishing the developer portal, see [Tutorial: Access and customize the developer portal](api-management-howto-developer-portal-customize.md).
+
+> [!IMPORTANT]
+> * Access to the developer portal by API publishers and consumers requires network connectivity to both the developer portal's endpoint (default: `https://<apim-instance-name>.portal.azure-api.net`) and the API Management instance's management endpoint (default: `https://<apim-instance-name>.management.azure-api.net`).
+> * Publishing the developer portal requires additional connectivity to blob storage managed by API Management in the West US region.
+> * If the API Management instance is deployed in a VNet, ensure that the hostnames of the developer portal and management endpoint resolve properly and that you enable connectivity to required dependencies for the developer portal. [Learn more](virtual-network-reference.md).
+
+### Visual editor
+
+The developer portal's administrative interface provides a visual editor for publishers to customize the portal's content and styling. Using the visual editor, you can add, remove, and rearrange pages, sections, and widgets. You can also change the styling of the portal's elements, such as fonts, colors, and spacing.
++++
+### Layouts and pages
+
+Layouts define how pages are displayed. For example, in the default content, there are two layouts: one applies to the home page, and the other to all remaining pages. You can modify these layouts and add more layouts to suit your needs.
+
+A layout gets applied to a page by matching its URL template to the page's URL. For example, a layout with a URL template of `/wiki/*` is applied to every page with the `/wiki/` segment in the URL: `/wiki/getting-started`, `/wiki/styles`, etc.
+
+In the following image, content belonging to the layout is outlined in blue, while the page-specific content is outlined in red.
++
+The pre-provisioned content in the developer portal showcases pages with commonly used features. You can modify the content of these pages or add new ones to suit your needs.
+
+> [!NOTE]
+> Due to integration considerations, the following pages can't be removed or moved under a different URL: `/404`, `/500`, `/captcha`, `/change-password`, `/config.json`, `/confirm/invitation`, `/confirm-v2/identities/basic/signup`, `/confirm-v2/password`, `/internal-status-0123456789abcdef`, `/publish`, `/signin`, `/signin-sso`, `/signup`.
+
+### Styles
++
+The **Styles** panel is created with designers in mind. Use styles to manage and customize all the visual elements in your portal, such as fonts used in headings and menus and button colors. The styling is hierarchical - many elements inherit properties from other elements. For example, button elements use colors for text and background. To change a button's color, you need to change the original color variant.
+
+To edit a variant, select it and select **Edit style** in the options that appear on top of it. After you make the changes in the pop-up window, close it.
+
+## Extend portal functionality
+
+In some cases, you might need functionality beyond the customization and styling options provided in the managed developer portal. If you need to implement custom logic that isn't supported out of the box, you have [several options](developer-portal-extend-custom-functionality.md):
+* [Add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget) directly through a developer portal widget designed for small customizations - for example, add HTML for a form or to embed a video player. The custom code is rendered in an inline frame (IFrame).
+* [Create and upload a custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) to develop and add more complex custom portal features.
+* [Self-host the portal](developer-portal-self-host.md), only if you need to make modifications to the core of the developer portal [codebase](https://github.com/Azure/api-management-developer-portal). This option requires advanced configuration. Azure Support's assistance is limited to the basic setup of self-hosted portals.
+
+> [!NOTE]
+> Because the API Management developer portal codebase is maintained on [GitHub](https://github.com/Azure/api-management-developer-portal), you can open issues and make pull requests for the API Management team to merge new functionality at any time.
+>
+
+## Control access to portal content
+
+The developer portal synchronizes with your API Management instance to display content such as the APIs, operations, products, subscriptions, and user profiles. APIs and products must be in a *published* state to be visible in the developer portal.
+
+### Content visibility and access
+
+In API Management, [groups of users](api-management-howto-create-groups.md) are used to manage the visibility of products and their associated APIs to developers. In addition to using built-in groups, you can create custom groups to suit your needs. Products are first made visible to groups, and then developers in those groups can view and subscribe to the products that are associated with the groups.
+
+You can also control how other portal content (such as pages and sections) appears to different users, based on their identity. For example, you might want to display certain pages only to users who have access to a specific product or API. Or, make a section of a page appear only for certain [groups of users](api-management-howto-create-groups.md). The developer portal has built-in controls for these needs.
+
+> [!NOTE]
+> Visibility and access controls are supported only in the managed developer portal. They aren't supported in the [self-hosted portal](developer-portal-self-host.md).
++
+* When you add a page or edit the settings of an existing page, make a selection under **Access** to control the users or groups that can see the page
+
+ :::image type="content" source="media/developer-portal-overview/page-access-control.png" alt-text="Screenshot of the page access control settings in the developer portal.":::
+
+ > [!TIP]
+ > To edit the settings of an existing page, select the gear icon next to the page name on the **Pages** tab.
+
+* When you select page content such as a page section, menu, or button for editing, select the **Change access** icon to control the users or groups that can see the element on the page
+
+ :::image type="content" source="media/developer-portal-overview/change-visibility-button.png" alt-text="Screenshot of the change access button in the developer portal.":::
+
+ * You can change the visibility of the following page content: sections, menus, buttons, and sign-in for OAuth authorization.
+
+ * Media files such as images on a page inherit the visibility of the elements that contain them.
+
+When a user visits the developer portal with visibility and access controls applied:
+
+* The developer portal automatically hides buttons or navigation items that point to pages that a user doesn't have access to.
+
+* An attempt by a user to access a page they aren't authorized to access results in a 404 Not Found error.
+
+> [!TIP]
+> Using the administrative interface, you can preview pages as a user associated with any built-in or custom group by selecting **View as** in the menu at the top.
+>
+
+### Content security policy
+
+You can enable a content security policy to add a layer of security to your developer portal and help mitigate certain types of attacks, including cross-site scripting and data injection. With a content security policy, the browser loads developer portal resources only from trusted locations that you specify, such as your corporate website or other trusted domains.
+
+To enable a content security policy:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **Developer portal**, select **Portal settings**.
+1. On the **Content security policy** tab, select **Enabled**.
+1. Under **Allowed sources**, add one or more hostnames that specify trusted locations that the developer portal can load resources from. You can also specify a wildcard character to allow all subdomains of a domain. For example, `*.contoso.com` allows all subdomains of `contoso.com`.
+1. Select **Save**.
+
+### Interactive test console
+
+The developer portal provides a "Try it" capability on the API reference pages so that portal visitors can test your APIs directly through an interactive console.
++
+The test console supports APIs with different authorization models - for example, APIs that require no authorization, or that require a subscription key or OAuth 2.0 authorization. In the latter case, you can configure the test console to generate a valid OAuth token on behalf of the test console user. For more information, see [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md).
+
+> [!IMPORTANT]
+> To let the visitors of your portal test the APIs through the built-in interactive console, enable a CORS (cross-origin resource sharing) policy on your APIs. For details, see [Enable CORS for interactive console in the API Management developer portal](enable-cors-developer-portal.md).
+
+## Manage user sign-up and sign-in
+
+By default, the developer portal enables anonymous access. This means that anyone can view the portal and its content without signing in, although access to certain content and functionality such as using the test console may be restricted. You can enable a developer portal website setting to require users to sign in to access the portal.
+
+The portal supports several options for user sign-up and sign-in:
+
+* Basic authentication for developers to sign in with credentials for API Management [user accounts](api-management-howto-create-or-invite-developers.md). Developers can sign up for an account directly through the portal, or you can create accounts for them.
+
+* Depending on your scenarios, restrict access to the portal by requiring users to sign up or sign in with a [Microsoft Entra ID](api-management-howto-aad.md) or [Azure AD B2C](api-management-howto-aad-b2c.md) account.
+
+* If you already manage developer sign-up and sign-in through an existing website, [delegate authentication](api-management-howto-setup-delegation.md) instead of using the developer portal's built-in authentication.
+
+[Learn more](secure-developer-portal-access.md) about options to secure user sign-up and sign-in to the developer portal.
+
+### Reports for users
+
+The developer portal generates reports for authenticated users to view their individual API usage, data transfer, and response times, including aggregated use by specific products and subscriptions. Users can view the reports by selecting **Reports** in the default navigation menu for authenticated users. Users can filter reports by time interval, up to the most recent 90 days.
+
+> [!NOTE]
+> Reports in the developer portal only show data for the authenticated user. API publishers and administrators can access usage data for all users of the API Management instance - for example, by setting up monitoring features such as [Azure Application Insights](api-management-howto-app-insights.md) in the portal.
+
+## Save and publish website content
+
+After you update the developer portal content or configuration, you need to save and publish your changes to make them available to portal visitors. The developer portal maintains a record of the content you've published, and you can revert to a previous portal *revision* when you need to.
+
+### Save changes
++
+Whenever you make a change in the portal, you need to save it manually by selecting the **Save** button in the menu at the top or by pressing [Ctrl]+[S]. If you need to, you can **Undo** your last saved changes. Saved changes are visible only to you and aren't shown to portal visitors until you publish them.
+
+> [!NOTE]
+> The managed developer portal receives and applies software updates automatically. Changes that you've saved but not published to the developer portal remain in that state during an update.
+
+### Publish the portal
+
+To make your portal and its latest changes available to visitors, you need to *publish* it. You publish the portal within the portal's administrative interface or from the Azure portal.
+
+> [!IMPORTANT]
+> You need to publish the portal any time you want to expose changes to the portal's content or styling. The portal also needs to be republished after API Management service configuration changes that affect the developer portal. For example, republish the portal after assigning a custom domain, updating the identity providers, setting delegation, or specifying sign-in and product terms.
++
+#### Publish from the administrative interface
++
+#### Publish from the Azure portal
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **Developer portal**, select **Portal overview**.
+1. In the **Portal overview** window, select **Publish**.
+
+ :::image type="content" source="media/developer-portal-overview/publish-portal-azure-portal.png" alt-text="Screenshot of publishing the developer portal from the Azure portal":::
++
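+If you automate your deployments, you can also publish the portal programmatically by creating a new *portal revision* through the API Management REST API. The following Azure CLI sketch is an illustration only, not a script from this article; it assumes `$APIM_ID` holds the full resource ID of your API Management instance and that the `2022-08-01` API version and timestamp-style revision ID suit your environment.
+
+```azurecli
+# Hedged sketch: create a new developer portal revision and mark it as current,
+# which publishes the latest saved portal content.
+# $APIM_ID example: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<service-name>
+az rest --method put \
+  --uri "${APIM_ID}/portalRevisions/$(date +%Y%m%d%H%M%S)?api-version=2022-08-01" \
+  --body '{ "properties": { "description": "Scripted publish", "isCurrent": true } }'
+```
+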
+### Restore a previous portal revision
+
+Each time you publish the developer portal, a corresponding portal revision is saved. You can republish a previous portal revision at any time. For example, you might want to roll back a change you introduced when you last published the portal.
+
+> [!NOTE]
+> Developer portal software updates are applied automatically when you restore a revision. Changes saved but not published in the administrative interface remain in that state when you publish a revision.
+
+To restore a previous portal revision:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **Developer portal**, select **Portal overview**.
+1. On the **Revisions** tab, select the context menu (**...**) for a revision that you want to restore, and then select **Make current and publish**.
+
+### Reset the portal
+
+If you want to discard all changes you've made to the developer portal, you can reset the website to its starting state. Resetting the portal deletes any changes you've made to the developer portal pages, layouts, customizations, and uploaded media.
+
+> [!NOTE]
+> Resetting the developer portal doesn't delete the published version of the developer portal.
+
+To reset the developer portal:
+
+1. In the administrative interface, in the menu at the left of the visual editor, select **Settings**.
+1. On the **Advanced** tab, select **Yes, reset the website to default state**.
+1. Select **Save**.
+
+## Related content
+
+Learn more about the developer portal:
+
+- [Access and customize the managed developer portal](api-management-howto-developer-portal-customize.md)
+- [Extend functionality of the managed developer portal](developer-portal-extend-custom-functionality.md)
+- [Set up self-hosted version of the portal](developer-portal-self-host.md)
+
+Browse other resources:
+
+- [GitHub repository with the source code](https://github.com/Azure/api-management-developer-portal)
+- [Frequently asked questions about the developer portal](developer-portal-faq.md)
api-management Enable Cors Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/enable-cors-developer-portal.md
+
+ Title: Enable CORS for Azure API Management developer portal
+description: How to configure CORS to enable the Azure API Management developer portal's interactive test console.
+++++ Last updated : 12/22/2023+++
+# Enable CORS for interactive console in the API Management developer portal
+Cross-origin resource sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources.
+
+To let visitors to the API Management [developer portal](developer-portal-overview.md) use the interactive test console in the API reference pages, enable a [CORS policy](cors-policy.md) for APIs in your API Management instance. If the developer portal's domain name isn't an allowed origin for cross-domain API requests, test console users will see a CORS error.
+
+For certain scenarios, you can configure the developer portal as a CORS proxy instead of enabling a CORS policy for APIs.
++
+## Prerequisites
+++ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)+++
+## Enable CORS policy for APIs
+
+You can enable a setting to configure a CORS policy automatically for all APIs in your API Management instance. You can also manually configure a CORS policy.
+
+> [!NOTE]
+> Only one CORS policy is executed. If you specify multiple CORS policies (for example, on the API level and on the all-APIs level), your interactive console may not work as expected.
+
+### Enable CORS policy automatically
+
+1. In the left menu of your API Management instance, under **Developer portal**, select **Portal overview**.
+1. Under **Enable CORS**, the status of CORS policy configuration is displayed. A warning box indicates an absent or misconfigured policy.
+1. To enable CORS from the developer portal for all APIs, select **Enable CORS**.
+
+![Screenshot that shows where to check status of your CORS policy in the developer portal.](media/enable-cors-developer-portal/cors-azure-portal.png)
++
+### Enable CORS policy manually
+
+1. Select the **Manually apply it on the global level** link to see the generated policy code.
+2. Navigate to **All APIs** in the **APIs** section of your API Management instance.
+3. Select the **</>** icon in the **Inbound processing** section.
+4. In the policy editor, insert the policy in the **\<inbound\>** section of the XML file. Make sure the **\<origin\>** value matches your developer portal's domain.
+
+> [!NOTE]
+>
+> If you apply the CORS policy in the Product scope, instead of the API(s) scope, and your API uses subscription key authentication through a header, your console won't work.
+>
+> The browser automatically issues an `OPTIONS` HTTP request, which doesn't contain a header with the subscription key. Because of the missing subscription key, API Management can't associate the `OPTIONS` call with a Product, so it can't apply the CORS policy.
+>
+> As a workaround, you can pass the subscription key in a query parameter.
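+
+If you prefer to script this configuration rather than paste XML into the portal's policy editor, you can set the policy document through the REST API. The following Azure CLI sketch is an assumption-laden example, not the policy generated by the **Enable CORS** button: it applies a minimal CORS policy at the scope of a single, hypothetical API named `petstore`, and it assumes `$APIM_ID` holds the resource ID of your API Management instance. Replace the `<origin>` value with your own developer portal domain.
+
+```azurecli
+# Hedged sketch: write a minimal CORS policy at the scope of one API by using the
+# Policy REST API. The "petstore" API ID and $APIM_ID are placeholders.
+cat > cors-policy.json <<'EOF'
+{
+  "properties": {
+    "format": "xml",
+    "value": "<policies><inbound><cors allow-credentials=\"true\"><allowed-origins><origin>https://contoso.developer.azure-api.net</origin></allowed-origins><allowed-methods><method>*</method></allowed-methods><allowed-headers><header>*</header></allowed-headers><expose-headers><header>*</header></expose-headers></cors><base /></inbound><backend><base /></backend><outbound><base /></outbound><on-error><base /></on-error></policies>"
+  }
+}
+EOF
+
+az rest --method put \
+  --uri "${APIM_ID}/apis/petstore/policies/policy?api-version=2022-08-01" \
+  --body @cors-policy.json
+```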
+
+## CORS proxy option
+
+For some scenarios (for example, if the API Management gateway is network isolated), you can choose to configure the developer portal as a CORS proxy itself, instead of enabling a CORS policy for your APIs. The CORS proxy routes the interactive console's API calls through the portal's backend in your API Management instance.
+
+> [!NOTE]
+> If the APIs are exposed through a self-hosted gateway or your service is in a virtual network, the connectivity from the API Management developer portal's backend service to the gateway is required.
+
+To configure the CORS proxy, access the developer portal as an administrator:
+
+1. On the **Overview** page of your API Management instance, select **Developer portal**. The developer portal opens in a new browser tab.
+1. In the left menu of the administrative interface, select **Pages** > **APIs** > **Details**.
+1. On the **APIs: Details** page, select the **Operation: Details** widget, and select **Edit widget**.
+1. Select **Use CORS proxy**.
+1. Save changes to the portal, and [republish the portal](developer-portal-overview.md#publish-the-portal).
++
+## CORS configuration for self-hosted developer portal
+
+If you [self-host](developer-portal-self-host.md) the developer portal, the following configuration is needed to enable CORS:
+
+* Specify the portal's backend endpoint using the `backendUrl` option in the configuration files. Otherwise, the self-hosted portal isn't aware of the location of the backend service.
+
+* Add **Origin** domain values to the self-hosted portal configuration specifying the environments where the self-hosted portal is hosted. [Learn more](developer-portal-self-host.md#configure-cors-settings-for-developer-portal-backend)
+
+## Related content
+
+* For more information about configuring a policy, see [Set or edit policies](set-edit-policies.md).
+* For details about the CORS policy, see the [cors](cors-policy.md) policy reference.
api-management Secure Developer Portal Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/secure-developer-portal-access.md
API Management has a fully customizable, standalone, managed [developer portal](api-management-howto-developer-portal.md), which can be used externally (or internally) to allow developer users to discover and interact with the APIs published through API Management. The developer portal has several options to facilitate secure user sign-up and sign-in.
+> [!NOTE]
+> By default, the developer portal enables anonymous access. This means that anyone can view the portal and content such as APIs without signing in, although functionality such as using the test console is restricted. You can enable a setting that requires users to sign in to view the developer portal. In the Azure portal, in the left menu of your API Management instance, under **Developer portal**, select **Identities** > **Settings**. Under **Anonymous users**, select (enable) **Redirect anonymous users to sign-in page**.
+ ## Authentication options * **External users** - The preferred option when the developer portal is consumed externally is to enable business-to-consumer access control through Azure Active Directory B2C (Azure AD B2C).
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
Search-AzGraph -Query "AdvisorResources | where type == 'microsoft.advisor/recom
# [Portal](#tab/azure-portal) - - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/AdvisorResources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.advisor%2Frecommendations%27%0A%7C%20where%20properties.impactedField%20%3D%3D%20%27Microsoft.ApiManagement%2Fservice%27%20and%20properties.category%20%3D%3D%20%27OperationalExcellence%27%0A%7C%20extend%0A%20%20%20%20recommendationTitle%20%3D%20properties.shortDescription.solution%0A%7C%20where%20recommendationTitle%20%3D%3D%20%27Use%20self-hosted%20gateway%20v2%27%20or%20recommendationTitle%20%3D%3D%20%27Use%20Configuration%20API%20v2%20for%20self-hosted%20gateways%27%0A%7C%20extend%0A%20%20%20%20instanceName%20%3D%20properties.impactedValue%2C%0A%20%20%20%20recommendationImpact%20%3D%20properties.impact%2C%0A%20%20%20%20recommendationMetadata%20%3D%20properties.extendedProperties%2C%0A%20%20%20%20lastUpdated%20%3D%20properties.lastUpdated%0A%7C%20project%20tenantId%2C%20subscriptionId%2C%20resourceGroup%2C%20instanceName%2C%20recommendationTitle%2C%20recommendationImpact%2C%20recommendationMetadata%2C%20lastUpdated" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/AdvisorResources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.advisor%2Frecommendations%27%0A%7C%20where%20properties.impactedField%20%3D%3D%20%27Microsoft.ApiManagement%2Fservice%27%20and%20properties.category%20%3D%3D%20%27OperationalExcellence%27%0A%7C%20extend%0A%20%20%20%20recommendationTitle%20%3D%20properties.shortDescription.solution%0A%7C%20where%20recommendationTitle%20%3D%3D%20%27Use%20self-hosted%20gateway%20v2%27%20or%20recommendationTitle%20%3D%3D%20%27Use%20Configuration%20API%20v2%20for%20self-hosted%20gateways%27%0A%7C%20extend%0A%20%20%20%20instanceName%20%3D%20properties.impactedValue%2C%0A%20%20%20%20recommendationImpact%20%3D%20properties.impact%2C%0A%20%20%20%20recommendationMetadata%20%3D%20properties.extendedProperties%2C%0A%20%20%20%20lastUpdated%20%3D%20properties.lastUpdated%0A%7C%20project%20tenantId%2C%20subscriptionId%2C%20resourceGroup%2C%20instanceName%2C%20recommendationTitle%2C%20recommendationImpact%2C%20recommendationMetadata%2C%20lastUpdated" target="_blank">portal.azure.us</a> - Microsoft Azure operated by 21Vianetated by 21Vianet portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/AdvisorResources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.advisor%2Frecommendations%27%0A%7C%20where%20properties.impactedField%20%3D%3D%20%27Microsoft.ApiManagement%2Fservice%27%20and%20properties.category%20%3D%3D%20%27OperationalExcellence%27%0A%7C%20extend%0A%20%20%20%20recommendationTitle%20%3D%20properties.shortDescription.solution%0A%7C%20where%20recommendationTitle%20%3D%3D%20%27Use%20self-hosted%20gateway%20v2%27%20or%20recommendationTitle%20%3D%3D%20%27Use%20Configuration%20API%20v2%20for%20self-hosted%20gateways%27%0A%7C%20extend%0A%20%20%20%20instanceName%20%3D%20properties.impactedValue%2C%0A%20%20%20%20recommendationImpact%20%3D%20properties.impact%2C%0A%20%20%20%20recommendationMetadata%20%3D%20properties.extendedProperties%2C%0A%20%20%20%20lastUpdated%20%3D%20properties.lastUpdated%0A%7C%20project%20tenantId%2C%20subscriptionId%2C%20resourceGroup%2C%20instanceName%2C%20recommendationTitle%2C%20recommendationImpact%2C%20recommendationMetadata%2C%20lastUpdated" target="_blank">portal.azure.cn</a>
app-service How To Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 3/28/2024 Last updated : 4/1/2024 # Use the side-by-side migration feature to migrate App Service Environment v2 to App Service Environment v3 (Preview)
You can get the new inbound IP address for your new App Service Environment v3 b
For ILB App Service Environments, get the private inbound IP address by running the following command: ```azurecli
-az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2022-03-01" --query properties.internalInboundIpAddresses
+az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.networkingConfiguration.internalInboundIpAddresses
``` For ELB App Service Environments, get the public inbound IP address by running the following command: ```azurecli
-az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2022-03-01" --query properties.externalInboundIpAddresses
+az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.networkingConfiguration.externalInboundIpAddresses
``` ## 11. Redirect customer traffic and complete migration
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on custom containers, see [Run a custom container in Azure]
|-|-|-| | `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | Set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `true` for custom containers. || | `WEBSITES_CONTAINER_START_TIME_LIMIT` | Amount of time in seconds to wait for the container to complete start-up before restarting the container. Default is `230`. You can increase it up to the maximum of `1800`. ||
-| `WEBSITES_CONTAINER_STOP_TIME_LIMIT` | Amount of time in seconds to wait for the container to terminate gracefully. Deafult is `5`. You can increase to a maximum of `120` ||
+| `WEBSITES_CONTAINER_STOP_TIME_LIMIT` | Amount of time in seconds to wait for the container to terminate gracefully. Default is `5`. You can increase to a maximum of `120` ||
| `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable isn't passed on to the container. | `https://<server-name>.azurecr.io` | | `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable isn't passed on to the container. || | `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable isn't passed on to the container. ||
The following environment variables are related to the [push notifications](/pre
| `WEBSITE_PUSH_TAGS_REQUIRING_AUTH` | Read-only. Contains a list of tags in the notification registration that requires user authentication. | | `WEBSITE_PUSH_TAGS_DYNAMIC` | Read-only. Contains a list of tags in the notification registration that were added automatically. |
->[!NOTE]
+> [!NOTE]
> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. <!--
automation Automation Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-overview.md
keywords: powershell dsc, desired state configuration, powershell dsc azure
Previously updated : 08/17/2021 Last updated : 04/01/2024
Consider the requirements in this section when using Azure Automation State Conf
For nodes running Windows, the following versions are supported:
+- Windows Server 2022
- Windows Server 2019 - Windows Server 2016 - Windows Server 2012R2
automation Runtime Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runtime-environment-overview.md
Title: Runtime environment in Azure Automation
description: This article provides an overview on Runtime environment in Azure Automation. Previously updated : 03/23/2024 Last updated : 04/01/2024
You can't edit these Runtime environments. However, any changes that are made in
- Runtime environment is currently supported in all Public regions except Central India, Germany North, Italy North, Israel Central, Poland Central, UAE Central, and Government clouds. - Existing runbooks that are automatically moved from old experience to Runtime environment experience would be able to execute as both cloud and hybrid job. -- When the runbook is [updated](manage-runtime-environment.md) and linked to a different Runtime environment, it can be executed as cloud job only. - PowerShell Workflow, Graphical PowerShell, and Graphical PowerShell Workflow runbooks only work with System-generated PowerShell-5.1 Runtime environment. - Runbooks created in Runtime environment experience with Runtime version PowerShell 7.2 would show as PowerShell 5.1 runbooks in old experience. - RBAC permissions cannot be assigned to Runtime environment.
azure-monitor Azure Web Apps Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-python.md
+
+ Title: Monitor Azure app services performance Python (Preview)
+description: Application performance monitoring for Azure app services using Python. Chart load and response time, dependency information, and set alerts on performance.
+ Last updated : 04/01/2024
+ms.devlang: python
++++
+# Application monitoring for Azure App Service and Python (Preview)
+
+> [!IMPORTANT]
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Monitor your Python web applications on Azure App Services without modifying the code. This guide shows you how to enable Azure Monitor Application Insights and offers tips for automating large-scale deployments.
+
+The integration instruments popular Python libraries in your code, letting you automatically gather and correlate dependencies, logs, and metrics. After instrumenting, you collect calls and metrics from these Python libraries:
+
+| Instrumentation | Supported library Name | Supported versions |
+| | - | |
+| [OpenTelemetry Django Instrumentation][ot_instrumentation_django] | [`django`][pypi_django] | [link][ot_instrumentation_django_version]
+| [OpenTelemetry FastApi Instrumentation][ot_instrumentation_fastapi] | [`fastapi`][pypi_fastapi] | [link][ot_instrumentation_fastapi_version]
+| [OpenTelemetry Flask Instrumentation][ot_instrumentation_flask] | [`flask`][pypi_flask] | [link][ot_instrumentation_flask_version]
+| [OpenTelemetry Psycopg2 Instrumentation][ot_instrumentation_psycopg2] | [`psycopg2`][pypi_psycopg2] | [link][ot_instrumentation_psycopg2_version]
+| [OpenTelemetry Requests Instrumentation][ot_instrumentation_requests] | [`requests`][pypi_requests] | [link][ot_instrumentation_requests_version]
+| [OpenTelemetry UrlLib Instrumentation][ot_instrumentation_urllib] | [`urllib`][pypi_urllib] | All
+| [OpenTelemetry UrlLib3 Instrumentation][ot_instrumentation_urllib3] | [`urllib3`][pypi_urllib3] | [link][ot_instrumentation_urllib3_version]
+
+> [!NOTE]
+> If using Django, see the additional [Django Instrumentation](#django-instrumentation) section in this article.
+
+Logging telemetry is collected at the level of the root logger. To learn more about Python's native logging hierarchy, visit the [Python logging documentation][python_logging_docs].
+
+## Prerequisites
+
+* Python version 3.11 or earlier.
+* App Service must be deployed as code. Custom containers aren't supported.
+
+## Enable Application Insights
+
+The easiest way to monitor Python applications on Azure App Services is through the Azure portal.
+
+Activating monitoring in the Azure portal automatically instruments your application with Application Insights and requires no code changes.
+
+> [!NOTE]
+> You should only use autoinstrumentation on App Service if you aren't using manual instrumentation of OpenTelemetry in your code, such as the [Azure Monitor OpenTelemetry Distro](./opentelemetry-enable.md?tabs=python) or the [Azure Monitor OpenTelemetry Exporter][azure_monitor_opentelemetry_exporter]. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) in this article.
+
+### Autoinstrumentation through Azure portal
+
+For a complete list of supported autoinstrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+
+Toggle on monitoring for your Python apps in Azure App Service with no code changes required.
+
+Application Insights for Python integrates with code-based Linux Azure App Service.
+
+The integration is in public preview. It adds the Python SDK, which is in GA.
+
+1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**.
+
+ :::image type="content"source="./media/azure-web-apps/enable.png" alt-text="Screenshot of Application Insights tab with enable selected." lightbox="./media/azure-web-apps/enable.png":::
+
+2. Choose to create a new resource, or select an existing Application Insights resource for this application.
+
+ > [!NOTE]
+    > When you select **OK** to create the new resource, you're prompted to **Apply monitoring settings**. Selecting **Continue** links your new Application Insights resource to your app service and also **triggers a restart of your app service**.
+
+ :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown." lightbox="./media/azure-web-apps/change-resource.png":::
+
+3. You specify the resource, and it's ready to use.
+
+ :::image type="content"source="./media/azure-web-apps-python/app-service-python.png" alt-text="Screenshot of instrument your application." lightbox="./media/azure-web-apps-python/app-service-python.png":::
+
+## Configuration
+
+You can configure with [OpenTelemetry environment variables][ot_env_vars] such as:
+
+| **Environment Variable** | **Description** |
+|--|-|
+| `OTEL_SERVICE_NAME`, `OTEL_RESOURCE_ATTRIBUTES` | Specifies the OpenTelemetry [Resource Attributes][opentelemetry_resource] associated with your application. You can set any Resource Attributes with [OTEL_RESOURCE_ATTRIBUTES][opentelemetry_spec_resource_attributes_env_var] or use [OTEL_SERVICE_NAME][opentelemetry_spec_service_name_env_var] to only set the `service.name`. |
+| `OTEL_LOGS_EXPORTER` | If set to `None`, disables collection and export of logging telemetry. |
+| `OTEL_METRICS_EXPORTER` | If set to `None`, disables collection and export of metric telemetry. |
+| `OTEL_TRACES_EXPORTER` | If set to `None`, disables collection and export of distributed tracing telemetry. |
+| `OTEL_BLRP_SCHEDULE_DELAY` | Specifies the logging export interval in milliseconds. Defaults to 5000. |
+| `OTEL_BSP_SCHEDULE_DELAY` | Specifies the distributed tracing export interval in milliseconds. Defaults to 5000. |
+| `OTEL_TRACES_SAMPLER_ARG` | Specifies the ratio of distributed tracing telemetry to be [sampled][application_insights_sampling]. Accepted values range from 0 to 1. The default is 1.0, meaning no telemetry is sampled out. |
+| `OTEL_PYTHON_DISABLED_INSTRUMENTATIONS` | Specifies which OpenTelemetry instrumentations to disable. When disabled, instrumentations aren't executed as part of autoinstrumentation. Accepts a comma-separated list of lowercase [library names](#application-monitoring-for-azure-app-service-and-python-preview). For example, set it to `"psycopg2,fastapi"` to disable the Psycopg2 and FastAPI instrumentations. It defaults to an empty list, enabling all supported instrumentations. |
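+
+On App Service, you can set these OpenTelemetry environment variables as app settings. The following Azure CLI sketch is an example only; the resource group, app name, and chosen values are placeholders, not values from this article.
+
+```azurecli
+# Hedged example: disable the Psycopg2 and FastAPI instrumentations and sample out
+# half of the distributed traces for a hypothetical Python web app.
+az webapp config appsettings set \
+  --resource-group my-resource-group \
+  --name my-python-app \
+  --settings OTEL_PYTHON_DISABLED_INSTRUMENTATIONS="psycopg2,fastapi" OTEL_TRACES_SAMPLER_ARG="0.5"
+```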
+
+### Add a community instrumentation library
+
+You can collect more data automatically when you include instrumentation libraries from the OpenTelemetry community.
++
+To add the community OpenTelemetry Instrumentation Library, install it via your app's `requirements.txt` file. OpenTelemetry autoinstrumentation automatically picks up and instruments all installed libraries. Find the list of community libraries [here](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation).
+
+## Automate monitoring
+
+To enable telemetry collection with Application Insights, only the following application settings need to be set:
++
+### Application settings definitions
+
+| App setting name | Definition | Value |
+|||:|
+| APPLICATIONINSIGHTS_CONNECTION_STRING | Connection string for your Application Insights resource | Example: abcd1234-ab12-cd34-abcd1234abcd |
+| ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~3` |
+
+> [!NOTE]
+> Profiler and snapshot debugger are not available for Python applications.
++
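+If you automate these settings, for example with the Azure CLI, the following sketch shows the shape of the call; the resource group, app name, and connection string are placeholders:
+
+```azurecli
+# Enable App Service autoinstrumentation by setting the two required app settings
+az webapp config appsettings set \
+    --resource-group MyResourceGroup \
+    --name MyPythonApp \
+    --settings APPLICATIONINSIGHTS_CONNECTION_STRING="<your-connection-string>" ApplicationInsightsAgent_EXTENSION_VERSION="~3"
+```
+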
+## Django Instrumentation
+
+To use the OpenTelemetry Django instrumentation, set the `DJANGO_SETTINGS_MODULE` environment variable in the App Service settings so that it points to the settings module within your app folder. For more information, see the [Django documentation][django_settings_module_docs].
+
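+For example, if your settings module lives at `mysite/settings.py` (a hypothetical path), one way to set the variable is with the Azure CLI; the resource group and app names are also placeholders:
+
+```azurecli
+# Point the Django instrumentation at the app's settings module
+az webapp config appsettings set \
+    --resource-group MyResourceGroup \
+    --name MyDjangoApp \
+    --settings DJANGO_SETTINGS_MODULE="mysite.settings"
+```
+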
+## Frequently asked questions
++
+## Troubleshooting
+
+Use the following guidance to troubleshoot monitoring of Python applications on Azure App Service with autoinstrumentation.
+
+### Duplicate telemetry
+
+Use autoinstrumentation on App Service only if you aren't using manual OpenTelemetry instrumentation in your code, such as the [Azure Monitor OpenTelemetry Distro](./opentelemetry-enable.md?tabs=python) or the [Azure Monitor OpenTelemetry Exporter][azure_monitor_opentelemetry_exporter]. Using autoinstrumentation on top of manual instrumentation could cause duplicate telemetry and increase your cost. To use App Service OpenTelemetry autoinstrumentation, first remove manual OpenTelemetry instrumentation from your code.
+
+### Missing telemetry
+
+If you're missing telemetry, follow these steps to confirm that autoinstrumentation is enabled correctly.
+
+#### Step 1: Check the Application Insights blade on your App Service resource
+
+Confirm that autoinstrumentation is enabled in the Application Insights blade on your App Service Resource:
++
+#### Step 2: Confirm that your App Settings are correct
+
+Confirm that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~3` and that your `APPLICATIONINSIGHTS_CONNECTION_STRING` points to the appropriate Application Insights resource.
++
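+One way to review these values is with the Azure CLI; the resource group and app names below are placeholders:
+
+```azurecli
+# List app settings and keep only the Application Insights entries
+az webapp config appsettings list \
+    --resource-group MyResourceGroup \
+    --name MyPythonApp \
+    --query "[?contains(name, 'APPLICATIONINSIGHTS') || contains(name, 'ApplicationInsightsAgent')]"
+```
+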
+#### Step 3: Check autoinstrumentation diagnostics and status logs
+Navigate to */var/log/applicationinsights/* and open status_*.json.
+
+Confirm that `AgentInitializedSuccessfully` is set to true and that `IKey` contains a valid instrumentation key (iKey).
+
+Here's an example JSON file:
+
+```json
+{
+    "AgentInitializedSuccessfully": true,
+    "AppType": "python",
+    "MachineName": "c89d3a6d0357",
+    "PID": "47",
+    "IKey": "00000000-0000-0000-0000-000000000000",
+    "SdkVersion": "1.0.0"
+}
+```
+
+The `applicationinsights-extension.log` file in the same folder may show other helpful diagnostics.
+
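+If you need shell access to inspect these files, one option for a Linux app is an SSH session into the running container, sketched here with the Azure CLI (resource group and app names are placeholders):
+
+```azurecli
+# Open an SSH session to the Linux App Service container, then inspect /var/log/applicationinsights/
+az webapp ssh --resource-group MyResourceGroup --name MyPythonApp
+```
+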
+### Django apps
+
+If your app uses Django and is either failing to start or using incorrect settings, make sure to set the `DJANGO_SETTINGS_MODULE` environment variable. See the [Django Instrumentation](#django-instrumentation) section for details.
++++
+For the latest updates and bug fixes, [consult the release notes](web-app-extension-release-notes.md). -->
+
+## Next steps
+
+* [Enable Azure diagnostics](../agents/diagnostics-extension-to-application-insights.md) to be sent to Application Insights
+* [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive
+* [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold
+* [Availability overview](availability-overview.md)
+
+[application_insights_sampling]: ./sampling.md
+[azure_core_tracing_opentelemetry_plugin]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/core/azure-core-tracing-opentelemetry
+[azure_monitor_opentelemetry_exporter]: /python/api/overview/azure/monitor-opentelemetry-exporter-readme
+[django_settings_module_docs]: https://docs.djangoproject.com/en/4.2/topics/settings/#envvar-DJANGO_SETTINGS_MODULE
+[ot_env_vars]: https://opentelemetry.io/docs/reference/specification/sdk-environment-variables/
+[ot_instrumentation_django]: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django
+[ot_instrumentation_django_version]: https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/instrumentation/opentelemetry-instrumentation-django/src/opentelemetry/instrumentation/django/package.py#L16
+[ot_instrumentation_fastapi]: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-fastapi
+[ot_instrumentation_fastapi_version]: https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py#L16
+[ot_instrumentation_flask]: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask
+[ot_instrumentation_flask_version]: https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/instrumentation/opentelemetry-instrumentation-flask/src/opentelemetry/instrumentation/flask/package.py#L16
+[ot_instrumentation_psycopg2]: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2
+[ot_instrumentation_psycopg2_version]: https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/instrumentation/opentelemetry-instrumentation-psycopg2/src/opentelemetry/instrumentation/psycopg2/package.py#L16
+[ot_instrumentation_requests]: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests
+[ot_instrumentation_requests_version]: https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/instrumentation/opentelemetry-instrumentation-requests/src/opentelemetry/instrumentation/requests/package.py#L16
+[ot_instrumentation_urllib]: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib
+[ot_instrumentation_urllib3]: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3
+[ot_instrumentation_urllib3_version]: https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/instrumentation/opentelemetry-instrumentation-urllib3/src/opentelemetry/instrumentation/urllib3/package.py#L16
+[opentelemetry_resource]: https://opentelemetry.io/docs/specs/otel/resource/sdk/
+[opentelemetry_spec_resource_attributes_env_var]: https://opentelemetry-python.readthedocs.io/en/latest/sdk/environment_variables.html?highlight=OTEL_RESOURCE_ATTRIBUTES%20#opentelemetry-sdk-environment-variables
+[opentelemetry_spec_service_name_env_var]: https://opentelemetry-python.readthedocs.io/en/latest/sdk/environment_variables.html?highlight=OTEL_RESOURCE_ATTRIBUTES%20#opentelemetry.sdk.environment_variables.OTEL_SERVICE_NAME
+[python_logging_docs]: https://docs.python.org/3/library/logging.html
+[pypi_django]: https://pypi.org/project/Django/
+[pypi_fastapi]: https://pypi.org/project/fastapi/
+[pypi_flask]: https://pypi.org/project/Flask/
+[pypi_psycopg2]: https://pypi.org/project/psycopg2/
+[pypi_requests]: https://pypi.org/project/requests/
+[pypi_urllib]: https://docs.python.org/3/library/urllib.html
+[pypi_urllib3]: https://pypi.org/project/urllib3/
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
There are two ways to enable monitoring for applications hosted on App Service:
- [.NET](./azure-web-apps-net.md) - [Java](./azure-web-apps-java.md) - [Node.js](./azure-web-apps-nodejs.md)
+ - [Python](./azure-web-apps-python.md)
* **Manually instrumenting the application through code** by installing the Application Insights SDK.
- This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](/previous-versions/azure/azure-monitor/app/opencensus-python), and a standalone agent for [Java](./opentelemetry-enable.md?tabs=java). This method also means you must manage the updates to the latest version of the packages yourself.
+ This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opentelemetry-enable.md?tabs=python), and a standalone agent for [Java](./opentelemetry-enable.md?tabs=java). This method also means you must manage the updates to the latest version of the packages yourself.
If you need to make custom API calls to track events/dependencies not captured by default with autoinstrumentation monitoring, you need to use this method. To learn more, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
The details depend on the type of project. For a web application:
## Next steps
-Learn how to enable autoinstrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md), or [Nodejs](./azure-web-apps-nodejs.md) application running on App Service.
+Learn how to enable autoinstrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md), [Nodejs](./azure-web-apps-nodejs.md), or [Python](./azure-web-apps-python.md) application running on App Service.
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Links are provided to more information for each supported scenario.
|-|-|-|--|-|--| |Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-java.md) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) ┬╣ | :x: | |Azure App Service on Windows - Publish as Docker | [ :white_check_mark: :link: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) ┬▓ | [ :white_check_mark: :link: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) ┬▓ | [ :white_check_mark: :link: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) ┬▓ | [ :white_check_mark: :link: ](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/public-preview-application-insights-auto-instrumentation-for/ba-p/3947971) ┬▓ | :x: |
-|Azure App Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-java.md) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
+|Azure App Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-java.md) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | [ :white_check_mark: :link: ](azure-web-apps-python.md?tabs=linux) ┬▓ |
|Azure App Service on Linux - Publish as Docker | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
-|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ |
-|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |
+|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ | [ :white_check_mark: :link: ](monitor-functions.md) ┬╣ | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) ┬╣ |
+|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) | :x: | :x: |
|Azure Spring Apps | :x: | :x: | [ :white_check_mark: :link: ](../../spring-apps/enterprise/how-to-application-insights.md) | :x: | :x: | |Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | |Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) ┬▓ ┬│ | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) ┬▓ ┬│ | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: |
Links are provided to more information for each supported scenario.
## JavaScript (Web) SDK Loader Script injection by configuration
-When using supported SDKs, you can enable SDK injection in configuration to automatically inject JavaScript (Web) SDK Loader Script onto each page.
+When using supported Software Development Kits (SDKs), you can enable SDK injection in configuration to automatically inject JavaScript (Web) SDK Loader Script onto each page.
| Language
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
To view more data from your Node Azure Functions applications than is [collected
## Distributed tracing for Python function apps
-To collect custom telemetry from services such as Redis, Memcached, and MongoDB, use the [OpenCensus Python extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure) and [log your telemetry](../../azure-functions/functions-reference-python.md?tabs=azurecli-linux%2capplication-level#log-custom-telemetry). You can find the list of supported services in this [GitHub folder](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
+To collect telemetry from services such as Requests, urllib3, httpx, Psycopg2, and more, use the [Azure Monitor OpenTelemetry Distro](./opentelemetry-enable.md?tabs=python). Incoming requests to your Python application hosted in Azure Functions aren't automatically correlated with the telemetry tracked within it. You can achieve trace correlation manually by extracting the TraceContext directly, as shown below:
+
+<!-- TODO: Remove after Azure Functions implements this automatically -->
+
+```python
+import azure.functions as func
+
+from azure.monitor.opentelemetry import configure_azure_monitor
+from opentelemetry import trace
+from opentelemetry.propagate import extract
+
+# Configure the Azure Monitor telemetry collection pipeline
+configure_azure_monitor()
+
+def main(req: func.HttpRequest, context) -> func.HttpResponse:
+ ...
+ # Store current TraceContext in dictionary format
+ carrier = {
+ "traceparent": context.trace_context.Traceparent,
+ "tracestate": context.trace_context.Tracestate,
+ }
+ tracer = trace.get_tracer(__name__)
+ # Start a span using the current context
+ with tracer.start_as_current_span(
+ "http_trigger_span",
+ context=extract(carrier),
+ ):
+ ...
+```
## Next steps
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
- Title: Metric alert rules for Kubernetes clusters (preview)
-description: Describes how to create recommended metric alerts rules for a Kubernetes cluster in Container insights.
- Previously updated : 03/13/2023---
-# Metric alert rules for Kubernetes clusters (preview)
-
-Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides preconfigured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
-
-> [!IMPORTANT]
-> Azure Monitor now supports alerts based on Prometheus metrics, and metric rules in Container insights will be retired on May 31, 2024 (this was previously announced as March 14, 2026). If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts. As of August 15, 2023, you are no longer be able to configure new custom metric recommended alerts using the portal.
-
-## Types of metric alert rules
-
-There are two types of metric rules used by Container insights based on either Prometheus metrics or custom metrics. See a list of the specific alert rules for each at [Alert rule details](#alert-rule-details).
-
-| Alert rule type | Description |
-|:|:|
-| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are handpicked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>- *Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality.
-| [Metric rules](#metric-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. Metric rules will be retired on May 31, 2024. |
-
-## Prometheus alert rules
-
-[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts) use metric data from your Kubernetes cluster sent to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
-
-### Prerequisites
-
-Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). For more information, see [Collect Prometheus metrics with Container insights](container-insights-prometheus-metrics-addon.md).
-
-### Enable Prometheus alert rules
-
-The methods currently available for creating Prometheus alert rules are Azure Resource Manager template (ARM template) and Bicep template.
-
-> [!NOTE]
-> Although you can create the Prometheus alert in a resource group different from the target resource, you should use the same resource group.
-
-### [ARM template](#tab/arm-template)
-
-1. Download the template that includes the set of alert rules you want to enable. For a list of the rules for each, see [Alert rule details](#alert-rule-details).
-
- [Recommended metric alerts](https://aka.ms/azureprometheus-recommendedmetricalerts)
-
-2. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates).
-
-### [Bicep template](#tab/bicep)
-
-1. To deploy recommended metric alerts, follow this [template](https://aka.ms/azureprometheus-recommendedmetricalertsbicep) and follow the README.md file in the same folder for how to deploy.
------
-### Edit Prometheus alert rules
-
- To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it by using any deployment method.
-
-### Configure alertable metrics in ConfigMaps
-
-Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps only apply to the following alertable metrics:
--- cpuExceededPercentage-- cpuThresholdViolated-- memoryRssExceededPercentage-- memoryRssThresholdViolated-- memoryWorkingSetExceededPercentage-- memoryWorkingSetThresholdViolated-- pvUsageExceededPercentage-- pvUsageThresholdViolated-
-> [!TIP]
-> Download the new ConfigMap from [this GitHub content](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
-
-1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
-
- - **Example:** Use the following ConfigMap configuration to modify the `cpuExceededPercentage` threshold to 90%:
-
- ```
- [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
- # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
- container_cpu_threshold_percentage = 90.0
- # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
- container_memory_rss_threshold_percentage = 95.0
- # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
- container_memory_working_set_threshold_percentage = 95.0
- ```
-
- - **Example:** Use the following ConfigMap configuration to modify the `pvUsageExceededPercentage` threshold to 80%:
-
- ```
- [alertable_metrics_configuration_settings.pv_utilization_thresholds]
- # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
- pv_usage_threshold_percentage = 80.0
- ```
-
-1. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-
-The configuration change can take a few minutes to finish before it takes effect. Then all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, so they don't all restart at the same time. When the restarts are finished, a message similar to the following example includes the result: `configmap "container-azm-ms-agentconfig" created`.
-
-## Metric alert rules
-
-> [!IMPORTANT]
-> Metric alerts (preview) are retiring and no longer recommended. As of August 15, 2023, you will no longer be able to configure new custom metric recommended alerts using the portal. Please refer to the migration guidance at [Migrate from Container insights recommended alerts to Prometheus recommended alert rules (preview)](#migrate-from-metric-rules-to-prometheus-rules-preview).
-
-### Prerequisites
-
-You might need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
-
-### Enable and configure metric alert rules
-
-#### [Azure portal](#tab/azure-portal)
-
-#### Enable metric alert rules
-
-1. On the **Insights** menu for your cluster, select **Recommended alerts**.
-
- :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot that shows recommended alerts option in Container insights.":::
-
-1. Toggle the **Status** for each alert rule to enable. The alert rule is created and the rule name updates to include a link to the new alert resource.
-
- :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot that shows a list of recommended alerts and options for enabling each.":::
-
-1. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page. Specify an existing action group or create an action group by selecting **Create action group**.
-
- :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot that shows selecting an action group.":::
-
-#### Edit metric alert rules
-
-To edit the threshold for a rule or configure an [action group](../alerts/action-groups.md) for your Azure Kubernetes Service (AKS) cluster.
-
-1. From Container insights for your cluster, select **Recommended alerts**.
-2. Select the **Rule Name** to open the alert rule.
-3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for information on the alert rule settings.
-
-#### Disable metric alert rules
-
-1. From Container insights for your cluster, select **Recommended alerts**.
-1. Change the status for the alert rule to **Disabled**.
-
-### [Resource Manager](#tab/resource-manager)
-
-For custom metrics, a separate ARM template is provided for each alert rule.
-
-#### Enable metric alert rules
-
-1. Download one or all of the available templates that describe how to create the alert from [GitHub](https://github.com/microsoft/Docker-Provider/tree/ci_dev/alerts/recommended_alerts_ARM).
-1. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as a JSON to set the values required to create the alert rule.
-1. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md).
-
-#### Disable metric alert rules
-
-To disable custom alert rules, use the same ARM template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
-------------
-## Migrate from metric rules to Prometheus rules (preview)
-If you're using metric alert rules to monitor your Kubernetes cluster, you should transition to Prometheus recommended alert rules (preview) before May 31, 2024 when metric alerts are retired.
-
-1. Follow the steps at [Enable Prometheus alert rules](#enable-prometheus-alert-rules) to configure Prometheus recommended alert rules (preview).
-2. Follow the steps at [Disable metric alert rules](#disable-metric-alert-rules) to remove metric alert rules from your clusters.
-
-## Alert rule details
-
-The following sections present information on the alert rules provided by Container insights.
-
-### Community alert rules
-
-These handpicked alerts come from the Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-recommendedmetricalerts):
-
-| Alert name | Description | Default threshold |
-|:|:|:|
-| NodeFilesystemSpaceFillingUp | An extrapolation algorithm predicts that disk space usage for a node on a device in a cluster will run out of space within the upcoming 24 hours. | NA |
-| NodeFilesystemSpaceUsageFull85Pct | Disk space usage for a node on a device in a cluster is greater than 85%. | 85% |
-| KubePodCrashLooping | Pod is in CrashLoop which means the app dies or is unresponsive and kubernetes tries to restart it automatically. | NA |
-| KubePodNotReady | Pod has been in a non-ready state for more than 15 minutes. | NA |
-| KubeDeploymentReplicasMismatch | Deployment has not matched the expected number of replicas. | NA |
-| KubeStatefulSetReplicasMismatch | StatefulSet has not matched the expected number of replicas. | NA |
-| KubeJobNotCompleted | Job is taking more than 1h to complete. | NA |
-| KubeJobFailed | Job failed complete. | NA |
-| KubeHpaReplicasMismatch | Horizontal Pod Autoscaler has not matched the desired number of replicas for longer than 15 minutes. | NA |
-| KubeHpaMaxedOut | Horizontal Pod Autoscaler has been running at max replicas for longer than 15 minutes. | NA |
-| KubeCPUQuotaOvercommit | Cluster has overcommitted CPU resource requests for Namespaces and cannot tolerate node failure. | 1.5 |
-| KubeMemoryQuotaOvercommit | Cluster has overcommitted memory resource requests for Namespaces. | 1.5 |
-| KubeQuotaAlmostFull | Cluster reaches to the allowed limits for given namespace. | Between 0.9 and 1 |
-| KubeVersionMismatch | Different semantic versions of Kubernetes components running. | NA |
-| KubeNodeNotReady | KubeNodeNotReady alert is fired when a Kubernetes node is not in Ready state for a certain period. | NA |
-| KubeNodeUnreachable | Kubernetes node is unreachable and some workloads may be rescheduled. | NA |
-| KubeletTooManyPods | The alert fires when a specific node is running >95% of its capacity of pods | 0.95 |
-| KubeNodeReadinessFlapping | The readiness status of node has changed few times in the last 15 minutes. | 2 |
-
-### Recommended alert rules
-
-The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics.
-Source code for the recommended alerts can be found in [GitHub](https://aka.ms/azureprometheus-recommendedmetricalerts):
-
-| Prometheus alert name | Custom metric alert name | Description | Default threshold |
-|:|:|:|:|
-| Average container CPU % | Average container CPU % | Calculates average CPU used per container. | 95% |
-| Average container working set memory % | Average container working set memory % | Calculates average working set memory used per container. | 95% |
-| Average CPU % | Average CPU % | Calculates average CPU used per node. | 80% |
-| Average Disk Usage % | Average Disk Usage % | Calculates average disk usage for a node. | 80% |
-| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average persistent volume usage per pod. | 80% |
-| Average Working set memory % | Average Working set memory % | Calculates average Working set memory for a node. | 80% |
-| Restarting container count | Restarting container count | Calculates number of restarting containers. | 0 |
-| Failed Pod Counts | Failed Pod Counts | Calculates number of pods in failed state. | 0 |
-| Node NotReady status | Node NotReady status | Calculates if any node is in NotReady state. | 0 |
-| OOM Killed Containers | OOM Killed Containers | Calculates number of OOM killed containers. | 0 |
-| Pods ready % | Pods ready % | Calculates the average ready state of pods. | 80% |
-| Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 |
-
-> [!NOTE]
-> The recommended alert rules in the Azure portal also include a log search alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule isn't included with the Prometheus alert rules.
->
-> You can create this rule on your own by creating a [log search alert rule](../alerts/alerts-types.md#log-alerts) that uses the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
-
-Common properties across all these alert rules include:
--- All alert rules are evaluated once per minute, and they look back at the last five minutes of data.-- All alert rules are disabled by default.-- Alerts rules don't have an action group assigned to them by default. To add an [action group](../alerts/action-groups.md) to the alert, either select an existing action group or create a new action group while you edit the alert rule.-- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before you modify its threshold.-
-The following metrics have unique behavior characteristics:
-
-**Prometheus and custom metrics**
--- The `completedJobsCount` metric is only sent when there are jobs that are completed greater than six hours ago.-- The `containerRestartCount` metric is only sent when there are containers restarting.-- The `oomKilledContainerCount` metric is only sent when there are OOM killed containers.-- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.-- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.-
-**Prometheus only**
--- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. For details related to configuring your ConfigMap configuration file, see [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps). Collection of persistent volume metrics with claims in the `kube-system` namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. For more information, see [Metric collection settings](./container-insights-data-collection-configmap.md#data-collection-settings).-- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and Memory Working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. If you want to collect these metrics and analyze them from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. For details related to configuring your ConfigMap configuration file, see the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps).-
-## View alerts
-
-View fired alerts for your cluster from **Alerts** in the **Monitor** menu in the Azure portal with other fired alerts in your subscription. You can also select **View in alerts** on the **Recommended alerts** pane to view alerts from custom metrics.
-
-> [!NOTE]
-> Currently, Prometheus alerts won't be displayed when you select **Alerts** from your AKS cluster because the alert rule doesn't use the cluster as its target.
-
-## Next steps
--- Read about the [different alert rule types in Azure Monitor](../alerts/alerts-types.md).-- Read about [alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).-
azure-monitor Kubernetes Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-metric-alerts.md
+
+ Title: Recommended alert rules for Kubernetes clusters
+description: Describes how to enable recommended metric alerts rules for a Kubernetes cluster in Azure Monitor.
+ Last updated : 03/05/2024+++
+# Recommended alert rules for Kubernetes clusters
+[Alerts](../alerts/alerts-overview.md) in Azure Monitor proactively identify issues related to the health and performance of your Azure resources. This article describes how to enable and edit a set of recommended metric alert rules that are predefined for your Kubernetes clusters.
+
+## Types of alert rules
+There are two types of metric alert rules used with Kubernetes clusters.
+
+| Alert rule type | Description |
+|:|:|
+| [Prometheus metric alert rules (preview)](../alerts/alerts-types.md#prometheus-alerts) | Use metric data collected from your Kubernetes cluster in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). These rules require [Prometheus to be enabled on your cluster](./kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) and are stored in a [Prometheus rule group](../essentials/prometheus-rule-groups.md). |
+| [Platform metric alert rules](../alerts/alerts-types.md#metric-alerts) | Use metrics that are automatically collected from your AKS cluster and are stored as [Azure Monitor alert rules](../alerts/alerts-overview.md). |
+
+## Enable recommended alert rules
+Use one of the following methods to enable the recommended alert rules for your cluster. You can enable both Prometheus and platform metric alert rules for the same cluster.
+
+### [Azure portal](#tab/portal)
+When you use the Azure portal, the Prometheus rule group is created in the same region as the cluster.
+
+1. From the **Alerts** menu for your cluster, select **Set up recommendations**.
+
+ :::image type="content" source="media/kubernetes-metric-alerts/setup-recommendations.png" lightbox="media/kubernetes-metric-alerts/setup-recommendations.png" alt-text="Screenshot of AKS cluster showing Set up recommendations button.":::
+
+2. The available Prometheus and platform alert rules are displayed with the Prometheus rules organized by pod, cluster, and node level. Toggle a group of Prometheus rules to enable that set of rules. Expand the group to see the individual rules. You can leave the defaults or disable individual rules and edit their name and severity.
+
+ :::image type="content" source="media/kubernetes-metric-alerts/recommended-alert-rules-enable-prometheus.png" lightbox="media/kubernetes-metric-alerts/recommended-alert-rules-enable-prometheus.png" alt-text="Screenshot of enabling Prometheus alert rule.":::
+
+3. Toggle a platform metric rule to enable that rule. You can expand the rule to modify its details such as the name, severity, and threshold.
+
+ :::image type="content" source="media/kubernetes-metric-alerts/recommended-alert-rules-enable-platform.png" lightbox="media/kubernetes-metric-alerts/recommended-alert-rules-enable-platform.png" alt-text="Screenshot of enabling platform metric alert rule.":::
+
+4. Either select one or more notification methods to create a new action group, or select an existing action group with the notification details for this set of alert rules.
+5. Click **Save** to save the rule group.
++
+### [Azure Resource Manager](#tab/arm)
+Using an ARM template, you can specify the region for the Prometheus rule group, but you should create it in the same region as the cluster.
+
+Download the required files for the template you're working with and deploy using the parameters in the tables below. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
+
+### ARM
+
+- Template file: [https://aka.ms/azureprometheus-recommendedmetricalerts](https://aka.ms/azureprometheus-recommendedmetricalerts)
+
+- Parameters:
+
+ | Parameter | Description |
+ |:|:|
+ | clusterResourceId | Resource ID of the cluster. |
+ | actionGroupResourceId | Resource ID of action group that defines responses to alerts. |
+ | azureMonitorWorkspaceResourceId | Resource ID of the Azure Monitor workspace receiving the cluster's Prometheus metrics. |
+ | location | Region to store the alert rule group. |
+
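+One way to deploy the downloaded template is with the Azure CLI; the file name and parameter values below are placeholders:
+
+```azurecli
+# Deploy the recommended Prometheus alert rules for an AKS cluster
+az deployment group create \
+    --resource-group MyResourceGroup \
+    --template-file recommendedMetricAlerts.json \
+    --parameters clusterResourceId=<cluster-resource-id> \
+                 actionGroupResourceId=<action-group-resource-id> \
+                 azureMonitorWorkspaceResourceId=<azure-monitor-workspace-resource-id> \
+                 location=<region>
+```
+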
+### Bicep
+See the [README](https://github.com/Azure/prometheus-collector/blob/main/AddonBicepTemplate/README.md) for further details.
+
+- Template file: [https://aka.ms/azureprometheus-recommendedmetricalertsbicep](https://aka.ms/azureprometheus-recommendedmetricalertsbicep)
+- Parameters:
+
+ | Parameter | Description |
+ |:|:|
+ | aksResourceId | Resource ID of the cluster. |
+ | actionGroupResourceId | Resource ID of action group that defines responses to alerts. |
+ | monitorWorkspaceName | Name of the Azure Monitor workspace receiving the cluster's Prometheus metrics. |
+ | location | Region to store the alert rule group. |
+
+++++
+## Edit recommended alert rules
+
+Once the rule group has been created, you can't use the same page in the portal to edit the rules. For Prometheus metrics, you must edit the rule group to modify any rules in it, including enabling any rules that weren't already enabled. For platform metrics, you can edit each alert rule.
+
+### [Azure portal](#tab/portal)
+
+1. From the **Alerts** menu for your cluster, select **Set up recommendations**. Any rules or rule groups that have already been created will be labeled as **Already created**.
+2. Expand the rule or rule group. Click on **View rule group** for Prometheus and **View alert rule** for platform metrics.
+
+ :::image type="content" source="media/kubernetes-metric-alerts/recommended-alert-rules-already-enabled.png" lightbox="media/kubernetes-metric-alerts/recommended-alert-rules-already-enabled.png" alt-text="Screenshot of view rule group option.":::
+
+3. For Prometheus rule groups:
+    1. Select **Rules** to view the alert rules in the group.
+    2. Click the **Edit** icon next to a rule that you want to modify. Use the guidance in [Create an alert rule](../essentials/prometheus-rule-groups.md#configure-the-rules-in-the-group) to modify the rule.
+
+ :::image type="content" source="media/kubernetes-metric-alerts/edit-prometheus-rule.png" lightbox="media/kubernetes-metric-alerts/edit-prometheus-rule.png" alt-text="Screenshot of option to edit Prometheus alert rules.":::
+
+ 3. When you're done editing rules in the group, click **Save** to save the rule group.
+
+4. For platform metrics:
+
+    1. Click **Edit** to open the details for the alert rule. Use the guidance in [Create an alert rule](../alerts/alerts-create-metric-alert-rule.md#configure-the-alert-rule-conditions) to modify the rule.
+
+ :::image type="content" source="media/kubernetes-metric-alerts/edit-platform-metric-rule.png" lightbox="media/kubernetes-metric-alerts/edit-platform-metric-rule.png" alt-text="Screenshot of option to edit platform metric rule.":::
+
+### [Azure Resource Manager](#tab/arm)
+
+Edit the query and threshold or configure an action group for your alert rules in the ARM template described in [Enable recommended alert rules](#enable-recommended-alert-rules) and redeploy it by using any deployment method.
++++
+## Disable alert rule group
+Disable the rule group to stop receiving alerts from the rules in it.
+
+### [Azure portal](#tab/portal)
+
+1. View the Prometheus alert rule group or platform metric alert rule as described in [Edit recommended alert rules](#edit-recommended-alert-rules).
+
+2. From the **Overview** menu, select **Disable**.
+
+ :::image type="content" source="media/kubernetes-metric-alerts/disable-prometheus-rule-group.png" lightbox="media/kubernetes-metric-alerts/disable-prometheus-rule-group.png" alt-text="Screenshot of option to disable a rule group.":::
+
+### [ARM template](#tab/arm)
+
+Set the **enabled** flag to false for the rule group in the ARM template described in [Enable recommended alert rules](#enable-recommended-alert-rules) and redeploy it by using any deployment method.
+++
+## Recommended alert rule details
+
+The following tables list the details of each recommended alert rule. Source code for each is available in [GitHub](https://aka.ms/azureprometheus-recommendedmetricalerts) along with [troubleshooting guides](https://aka.ms/aks-alerts/community-runbooks) from the Prometheus community.
+
+### Prometheus community alert rules
+
+**Cluster level alerts**
+
+| Alert name | Description | Default threshold | Timeframe (minutes) |
+|:|:|::|::|
+| KubeCPUQuotaOvercommit | The CPU resource quota allocated to namespaces exceeds the available CPU resources on the cluster's nodes by more than 50% for the last 5 minutes. | >1.5 | 5 |
+| KubeMemoryQuotaOvercommit | The memory resource quota allocated to namespaces exceeds the available memory resources on the cluster's nodes by more than 50% for the last 5 minutes. | >1.5 | 5 |
+| Number of OOM killed containers is greater than 0 | One or more containers within pods have been killed due to out-of-memory (OOM) events for the last 5 minutes. | >0 | 5 |
+| KubeClientErrors | The rate of client errors (HTTP status codes starting with 5xx) in Kubernetes API requests exceeds 1% of the total API request rate for the last 15 minutes. | >0.01 | 15 |
+| KubePersistentVolumeFillingUp | The persistent volume is filling up and is expected to run out of available space evaluated on the available space ratio, used space, and predicted linear trend of available space over the last 6 hours. These conditions are evaluated over the last 60 minutes. | N/A | 60 |
+| KubePersistentVolumeInodesFillingUp | Less than 3% of the inodes within a persistent volume are available for the last 15 minutes. | <0.03 | 15 |
+| KubePersistentVolumeErrors | One or more persistent volumes are in a failed or pending phase for the last 5 minutes. | >0 | 5 |
+| KubeContainerWaiting | One or more containers within Kubernetes pods are in a waiting state for the last 60 minutes. | >0 | 60 |
+| KubeDaemonSetNotScheduled | One or more pods are not scheduled on any node for the last 15 minutes. | >0 | 15 |
+| KubeDaemonSetMisScheduled | One or more pods are misscheduled within the cluster for the last 15 minutes. | >0 | 15 |
+| KubeQuotaAlmostFull | The utilization of Kubernetes resource quotas is between 90% and 100% of the hard limits for the last 15 minutes. | >0.9 <1 | 15 |
++
+**Node level alerts**
+
+| Alert name | Description | Default threshold | Timeframe (minutes) |
+|:|:|::|::|
+| KubeNodeUnreachable | A node has been unreachable for the last 15 minutes. | 1 | 15 |
+| KubeNodeReadinessFlapping | The readiness status of a node has changed more than 2 times for the last 15 minutes. | 2 | 15 |
+
+**Pod level alerts**
+
+| Alert name | Description | Default threshold | Timeframe (minutes) |
+|:|:|::|::|
+| Average PV usage is greater than 80% | The average usage of Persistent Volumes (PVs) on a pod exceeds 80% for the last 15 minutes. | >0.8 | 15 |
+| KubeDeploymentReplicasMismatch | There is a mismatch between the desired number of replicas and the number of available replicas for the last 10 minutes. | N/A | 10 |
+| KubeStatefulSetReplicasMismatch | The number of ready replicas in the StatefulSet does not match the total number of replicas in the StatefulSet for the last 15 minutes. | N/A | 15 |
+| KubeHpaReplicasMismatch | The Horizontal Pod Autoscaler in the cluster has not matched the desired number of replicas for the last 15 minutes. | N/A | 15 |
+| KubeHpaMaxedOut | The Horizontal Pod Autoscaler (HPA) in the cluster has been running at the maximum replicas for the last 15 minutes. | N/A | 15 |
+| KubePodCrashLooping | One or more pods is in a CrashLoopBackOff condition, where the pod continuously crashes after startup and fails to recover successfully for the last 15 minutes. | >=1 | 15 |
+| KubeJobStale | At least one Job instance did not complete successfully for the last 6 hours. | >0 | 360 |
+| Pod container restarted in last 1 hour | One or more containers within pods in the Kubernetes cluster have been restarted at least once within the last hour. | >0 | 15 |
+| Ready state of pods is less than 80% | The percentage of pods in a ready state falls below 80% for any deployment or daemonset in the Kubernetes cluster for the last 5 minutes. | <0.8 | 5 |
+| Number of pods in failed state are greater than 0. | One or more pods is in a failed state for the last 5 minutes. | >0 | 5 |
+| KubePodNotReadyByController | One or more pods are not in a ready state (i.e., in the "Pending" or "Unknown" phase) for the last 15 minutes. | >0 | 15 |
+| KubeStatefulSetGenerationMismatch | The observed generation of a Kubernetes StatefulSet does not match its metadata generation for the last 15 minutes. | N/A | 15 |
+| KubeJobFailed | One or more Kubernetes jobs have failed within the last 15 minutes. | >0 | 15 |
+| Average CPU usage per container is greater than 95% | The average CPU usage per container exceeds 95% for the last 5 minutes. | >0.95 | 5 |
+| Average Memory usage per container is greater than 95% | The average memory usage per container exceeds 95% for the last 5 minutes. | >0.95 | 10 |
+| KubeletPodStartUpLatencyHigh | The 99th percentile of the pod startup latency exceeds 60 seconds for the last 10 minutes. | >60 | 10 |
+
+### Platform metric alert rules
+
+| Alert name | Description | Default threshold | Timeframe (minutes) |
+|:|:|::|::|
+| Node cpu percentage is greater than 95% | The node CPU percentage is greater than 95% for the last 5 minutes. | 95 | 5 |
+| Node memory working set percentage is greater than 100% | The node memory working set percentage is greater than 100% for the last 5 minutes. | 100 | 5 |
++
+## Legacy Container insights metric alerts (preview)
+
+Metric rules in Container insights will be retired on May 31, 2024 (this was previously announced as March 14, 2026). These rules haven't been available for creation using the portal since August 15, 2023. These rules were in public preview but will be retired without reaching general availability since the new recommended metric alerts described in this article are now available.
+
+If you already enabled these legacy alert rules, you should disable them and enable the new experience.
+
+### Disable metric alert rules
+
+1. From the **Insights** menu for your cluster, select **Recommended alerts (preview)**.
+2. Change the status for each alert rule to **Disabled**.
++
+## Next steps
+
+- Read about the [different alert rule types in Azure Monitor](../alerts/alerts-types.md).
+- Read about [alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
+
azure-monitor Save Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/save-query.md
Most users should leave the option to **Save to the default query pack**, which
:::image type="content" source="media/save-query/save-query-dialog.png" lightbox="media/save-query/save-query-dialog.png" alt-text="Screenshot that shows the Save as query dialog." border="false"::: ## Edit a query
-You might want to edit a query that you've already saved. You might want to change the query itself or modify any of its properties. After you open an existing query in Log Analytics, you can edit it by selecting **Edit query details** from the **Save** dropdown. Now you can save the edited query with the same properties or modify any properties before saving.
+You might want to edit a query that you've already saved. You might want to change the query itself or modify any of its properties. After you open an existing query in Log Analytics and make changes, you can save the edited query with the same properties or modify any properties before saving.
If you want to save the query with a different name, select **Save as query** as if you were creating a new query.
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.BatchAI/workspaces/experiments/jobs | listoutputfiles | | Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) | | Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/list-keys) |
-| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) |
+| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/aiservices/accountmanagement/accounts/list-keys) |
| Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) | | Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) | | Microsoft.ContainerRegistry/registries | [listUsages](/rest/api/containerregistry/registries/listusages) |
Namespace: [az](bicep-functions.md#namespaces-for-functions).
`reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])`
-Returns an object representing a resource's runtime state.
+Returns an object representing a resource's runtime state. The output and behavior of the `reference` function depend heavily on how each resource provider (RP) implements its PUT and GET responses.
Namespace: [az](bicep-functions.md#namespaces-for-functions).
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cli.md
Title: Deploy resources with Azure CLI and Bicep files | Microsoft Docs description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Bicep file. Previously updated : 11/03/2023 Last updated : 03/22/2024
az deployment group create \
For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
+You can use inline parameters and a local parameters file in the same deployment operation. For more information, see [Parameter precedence](./parameter-files.md#parameter-precedence).
+ ## Preview changes Before deploying your Bicep file, you can preview the changes the Bicep file will make to your environment. Use the [what-if operation](./deploy-what-if.md) to verify that the Bicep file makes the changes that you expect. What-if also validates the Bicep file for errors.
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-powershell.md
Title: Deploy resources with PowerShell and Bicep
description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Bicep file. Previously updated : 11/03/2023 Last updated : 03/22/2024 # Deploy resources with Bicep and Azure PowerShell
New-AzResourceGroupDeployment `
The `TemplateParameterUri` parameter doesn't support `.bicepparam` files, it only supports JSON parameters files.
+You can use inline parameters and a local parameters file in the same deployment operation. For more information, see [Parameter precedence](./parameter-files.md#parameter-precedence).
+ ## Preview changes Before deploying your Bicep file, you can preview the changes the Bicep file will make to your environment. Use the [what-if operation](./deploy-what-if.md) to verify that the Bicep file makes the changes that you expect. What-if also validates the Bicep file for errors.
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
Title: Create parameters files for Bicep deployment
description: Create parameters file for passing in values during deployment of a Bicep file. Previously updated : 03/19/2024 Last updated : 04/01/2024 # Create parameters files for Bicep deployment
From Bicep CLI, you can build a Bicep parameters file into a JSON parameters fil
## Deploy Bicep file with parameters file
+### Azure CLI
+ From Azure CLI, you can pass a parameter file with your Bicep file deployment. # [Bicep parameters file](#tab/Bicep)
-With Azure CLI version 2.53.0 or later, and [Bicep CLI version 0.22.X or higher](./install.md), you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there's no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch results in an "Only a .bicep template is allowed with a .bicepparam file" error.
+With Azure CLI version 2.53.0 or later, and [Bicep CLI version 0.22.X or higher](./install.md), you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there's no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch.
```azurecli az deployment group create \
az deployment group create \
+You can use inline parameters and a location parameters file in the same deployment operation. For example:
+
+# [Bicep parameters file](#tab/Bicep)
+
+```azurecli
+az deployment group create \
+ --name ExampleDeployment \
+ --resource-group ExampleGroup \
+ --parameters storage.bicepparam \
+ --parameters storageAccountType=Standard_LRS
+```
+
+# [JSON parameters file](#tab/JSON)
+
+```azurecli
+az deployment group create \
+ --name ExampleDeployment \
+ --resource-group ExampleGroup \
+ --template-file storage.bicep \
+ --parameters storage.parameters.json \
+ --parameters storageAccountType=Standard_LRS
+```
+++ For more information, see [Deploy resources with Bicep and Azure CLI](./deploy-cli.md#parameters).
+### Azure PowerShell
+ From Azure PowerShell, pass a local parameters file using the `TemplateParameterFile` parameter. # [Bicep parameters file](#tab/Bicep)
New-AzResourceGroupDeployment `
# [JSON parameters file](#tab/JSON) - ```azurepowershell New-AzResourceGroupDeployment ` -Name ExampleDeployment `
New-AzResourceGroupDeployment `
+You can use inline parameters and a location parameters file in the same deployment operation. For example:
+
+# [Bicep parameters file](#tab/Bicep)
+
+```azurepowershell
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleResourceGroup `
+ -TemplateFile C:\MyTemplates\storage.bicep `
+ -TemplateParameterFile C:\MyTemplates\storage.bicepparam `
+ -storageAccountType Standard_LRS
+```
+
+# [JSON parameters file](#tab/JSON)
+
+```azurepowershell
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleResourceGroup `
+ -TemplateFile C:\MyTemplates\storage.bicep `
+ -TemplateParameterFile C:\MyTemplates\storage.parameters.json `
+ -storageAccountType Standard_LRS
+```
+++ For more information, see [Deploy resources with Bicep and Azure PowerShell](./deploy-powershell.md#parameters). To deploy _.bicep_ files you need Azure PowerShell version 5.6.0 or later. ## Parameter precedence
-You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence.
+You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence.
-It's possible to use an external parameters file, by providing the URI to the file. When you use an external parameters file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.
+It's possible to use an external JSON parameters file by providing the URI to the file. External Bicep parameters files aren't currently supported. When you use an external parameters file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.
## Parameter name conflicts
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
Title: Parameters in Bicep files
description: Describes how to define parameters in a Bicep file. Previously updated : 12/06/2023 Last updated : 03/22/2024 # Parameters in Bicep
The following example shows a parameter that is an object. The default value sho
:::code language="bicep" source="~/azure-docs-bicep-samples/syntax-samples/parameters/parameterobject.bicep"::: - ## Next steps - To learn about the available properties for parameters, see [Understand the structure and syntax of Bicep files](file.md). - To learn about passing in parameter values as a file, see [Create a Bicep parameter file](parameter-files.md).
+- To learn about providing parameter values at deployment, see [Deploy with Azure CLI](./deploy-cli.md), and [Deploy with Azure PowerShell](./deploy-powershell.md).
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 08/22/2023 Last updated : 03/18/2024
The possible uses of `list*` are shown in the following table.
| Microsoft.BatchAI/workspaces/experiments/jobs | listoutputfiles | | Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) | | Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/list-keys) |
-| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) |
+| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/aiservices/accountmanagement/accounts/list-keys) |
| Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) | | Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) | | Microsoft.ContainerRegistry/registries | [listUsages](/rest/api/containerregistry/registries/listusages) |
In the templates with [symbolic names](./resource-declaration.md#use-symbolic-na
`reference(symbolicName or resourceIdentifier, [apiVersion], ['Full'])`
-Returns an object representing a resource's runtime state. To return an array of objects representing a resource collections's runtime states, see [references](#references).
+Returns an object representing a resource's runtime state. The output and behavior of the `reference` function depend heavily on how each resource provider (RP) implements its PUT and GET responses. To return an array of objects representing a resource collection's runtime states, see [references](#references).
Bicep provides the reference function, but in most cases it isn't required. We recommend that you use the symbolic name for the resource instead. See [reference](../bicep/bicep-functions-resource.md#reference).
azure-vmware Azure Vmware Solution Horizon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-horizon.md
Title: Deploy Horizon on Azure VMware Solution
description: Learn how to deploy VMware Horizon on Azure VMware Solution. Previously updated : 11/27/2023 Last updated : 4/1/2024 - # Deploy Horizon on Azure VMware Solution >[!NOTE]
->This document focuses on the VMware Horizon product, formerly known as Horizon 7. Horizon is a different solution than Horizon Cloud on Azure, although there are some shared components. Key advantages of the Azure VMware Solution include both a more straightforward sizing method and the integration of VMware Cloud Foundation management into the Azure portal.
+>This document focuses on the VMware Horizon product, formerly known as Horizon 7. Horizon is a different solution than Horizon Cloud on Azure, although there are some shared components. Key advantages of the Azure VMware Solution include both a more straightforward sizing method and the integration of Software-Defined Data Center (SDDC) private cloud management into the Azure portal.
[VMware Horizon](https://www.vmware.com/products/horizon.html)®, a virtual desktop and applications platform, runs in the data center and provides simple and centralized management. It delivers virtual desktops and applications on any device, anywhere. Horizon lets you create, and broker connections to Windows and Linux virtual desktops, Remote Desktop Server (RDS) hosted applications, desktops, and physical machines.
Here, we focus specifically on deploying Horizon on Azure VMware Solution. For g
* [Horizon Reference Architecture](https://techzone.vmware.com/resource/workspace-one-and-horizon-reference-architecture)
-With Horizon's introduction on Azure VMware Solution, there are now two Virtual Desktop Infrastructure (VDI) solutions on the Azure platform. The following diagram summarizes the key differences at a high level.
+With Horizon's introduction on Azure VMware Solution, there are now two Virtual Desktop Infrastructure (VDI) solutions on the Azure platform:
+
+* VMware Horizon on Azure VMware Solution
+* VMware Horizon Cloud (Desktop-as-a-Service Model)
Horizon 2006 and later versions on the Horizon 8 release line supports both on-premises and Azure VMware Solution deployment. There are a few Horizon features that are supported on-premises but not on Azure VMware Solution. Other products in the Horizon ecosystem are also supported. For more information, see [feature parity and interoperability](https://kb.vmware.com/s/article/80850).
Customers are required to use the Cloud Admin role, which has a limited set of v
A typical Horizon architecture design uses a pod and block strategy. A block is a single vCenter Server, while multiple blocks combined make a pod. A Horizon pod is a unit of organization determined by Horizon scalability limits. Each Horizon pod has a separate management portal, and so a standard design practice is to minimize the number of pods.
-Every cloud has its own network connectivity scheme. Combine that with VMware SDDC networking / NSX-T Data Center, the Azure VMware Solution network connectivity presents unique requirements for deploying Horizon that is different from on-premises.
+Every cloud has its own network connectivity scheme. Combined with VMware NSX, Azure VMware Solution network connectivity presents unique requirements for deploying Horizon that differ from on-premises deployments.
-Each Azure private cloud and SDDC can handle 4,000 desktop or application sessions, assuming:
+Each Azure VMware Solution private cloud and SDDC can handle 4,000 desktop or application sessions, assuming:
* The workload traffic aligns with the LoginVSI task worker profile.
Each Azure private cloud and SDDC can handle 4,000 desktop or application sessio
Given the Azure private cloud and SDDC max limit, we recommend a deployment architecture where the Horizon Connection Servers and VMware Unified Access Gateways (UAGs) run inside the Azure Virtual Network. This architecture effectively turns each Azure private cloud and SDDC into a block, which in turn maximizes the scalability of Horizon running on Azure VMware Solution.
-The connection from Azure Virtual Network to the Azure private clouds / SDDCs should be configured with ExpressRoute FastPath. The following diagram shows a basic Horizon pod deployment.
+The connection from Azure Virtual Network to the Azure private clouds / SDDCs should be configured with ExpressRoute Connections (FastPath enabled). The following diagram shows a basic Horizon pod deployment.
## Network connectivity to scale Horizon on Azure VMware Solution
This section lays out the network architecture at a high level with some common
### Single Horizon pod on Azure VMware Solution A single Horizon pod is the most straightforward deployment scenario because you deploy just one Horizon pod in the US East region. Since each private cloud and SDDC is estimated to handle 4,000 desktop sessions, you deploy the maximum Horizon pod size. You can plan the deployment of up to three private clouds/SDDCs.
-With the Horizon infrastructure virtual machines (VMs) deployed in Azure Virtual Network, you can reach the 12,000 sessions per Horizon pod. The connection between each private cloud and SDDC to the Azure Virtual Network is ExpressRoute Fast Path. No east-west traffic between private clouds is needed.
+With the Horizon infrastructure virtual machines (VMs) deployed in Azure Virtual Network, you can reach the 12,000 sessions per Horizon pod. The connection between each private cloud and SDDC to the Azure Virtual Network is an ExpressRoute Connection (FastPath enabled). No east-west traffic between private clouds is needed.
Key assumptions for this basic deployment example include that:
A variation on the basic example might be to support connectivity for on-premise
The diagram shows how to support connectivity for on-premises resources. To connect to your corporate network to the Azure Virtual Network, you need an ExpressRoute circuit. You need to connect your corporate network with each of the private cloud and SDDCs using ExpressRoute Global Reach. It allows the connectivity from the SDDC to the ExpressRoute circuit and on-premises resources. ### Multiple Horizon pods on Azure VMware Solution across multiple regions
Connect the Azure Virtual Network in each region to the private clouds/SDDCs in
The same principles apply if you deploy two Horizon pods in the same region. Make sure to deploy the second Horizon pod in a *separate Azure Virtual Network*. Just like the single pod example, you can connect your corporate network and on-premises pod to this multi-pod/region example using ExpressRoute and Global Reach. ## Size Azure VMware Solution hosts for Horizon deployments
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-access-private-cloud.md
Title: Tutorial - Access your private cloud
description: Learn how to access an Azure VMware Solution private cloud Previously updated : 12/19/2023 Last updated : 4/1/2024
In this tutorial, learn how to create a jump box in the resource group that you
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a Windows VM to access the Azure VMware Solution vCenter
+> * Create a Windows VM to access the Azure VMware Solution vCenter Server
> * Sign in to vCenter Server from this VM ## Create a new Windows virtual machine
In this tutorial, you learn how to:
1. In the Azure portal, select your private cloud, and then **Manage** > **VMware credentials**.
- The URLs and user credentials for private cloud vCenter Server and NSX-T Manager are displayed.
+ The URLs and user credentials for private cloud vCenter Server and NSX Manager are displayed.
:::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot shows the private cloud vCenter Server and NSX Manager URLs and credentials."lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
In this tutorial, you learn how to:
If you need help with connecting to the VM, see [connect to a virtual machine](../virtual-machines/windows/connect-logon.md#connect-to-the-virtual-machine) for details.
-1. In the Windows VM, open a browser and navigate to the vCenter Server and NSX-T Manager URLs in two tabs.
+1. In the Windows VM, open a browser and navigate to the vCenter Server and NSX Manager URLs in two tabs.
1. In the vSphere Client tab, enter the `cloudadmin@vsphere.local` user credentials from the previous step.
In this tutorial, you learn how to:
:::image type="content" source="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" alt-text="Screenshot showing a summary of Cluster-1 in the vSphere Client."lightbox="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" border="true":::
-1. In the second tab of the browser, sign in to NSX-T Manager with the 'cloudadmin' user credentials from earlier.
+1. In the second tab of the browser, sign in to NSX Manager with the 'cloudadmin' user credentials from earlier.
- :::image type="content" source="media/tutorial-access-private-cloud/ss9-nsx-manager-login.png" alt-text="Screenshot of the NSX-T Manager sign in page."lightbox="media/tutorial-access-private-cloud/ss9-nsx-manager-login.png" border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss9-nsx-manager-login.png" alt-text="Screenshot of the NSX Manager sign in page."lightbox="media/tutorial-access-private-cloud/ss9-nsx-manager-login.png" border="true":::
- :::image type="content" source="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" alt-text="Screenshot of the NSX-T Manager Overview."lightbox="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" alt-text="Screenshot of the NSX Manager Overview."lightbox="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" border="true":::
## Next steps
In this tutorial, you learned how to:
> [!div class="checklist"] > * Create a Windows VM to use to connect to vCenter Server > * Login to vCenter Server from your VM
-> * Login to NSX-T Manager from your VM
+> * Login to NSX Manager from your VM
Continue to the next tutorial to learn how to create a virtual network to set up local management for your private cloud clusters.
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 02/19/2024 Last updated : 04/01/2024
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure o
>- [Default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) will not support protecting newer Azure offerings, such as [Trusted Launch VM](backup-support-matrix-iaas.md#tvm-backup), [Ultra SSD](backup-support-matrix-iaas.md#vm-storage-support), [Premium SSD v2](backup-support-matrix-iaas.md#vm-storage-support), [Shared disk](backup-support-matrix-iaas.md#vm-storage-support), and Confidential Azure VMs. >- Enhanced policy now supports protecting both Ultra SSD and Premium SSD v2. >- Backups for VMs having [data access authentication enabled disks](../virtual-machines/windows/download-vhd.md?tabs=azure-portal#secure-downloads-and-uploads-with-azure-ad) will fail.
+>- If you're protecting a VM with an enhanced policy, it incurs additional snapshot costs. [Learn more](backup-instant-restore-capability.md#cost-impact).
+>- Once you enable a VM backup with Enhanced policy, Azure Backup doesn't allow you to change the policy type to *Standard*.
+++ You must enable backup of Trusted Launch VMs through Enhanced policy only. Enhanced policy provides the following features:
Follow these steps:
5. On **Create policy**, perform the following actions:
- - **Policy sub-type**: Select **Enhanced** type. By default, the policy type is set to **Standard**.
+ - **Policy sub-type**: Select **Enhanced** type.
:::image type="content" source="./media/backup-azure-vms-enhanced-policy/select-enhanced-backup-policy-sub-type.png" alt-text="Screenshot showing to select backup policies subtype as enhanced.":::
Trusted Launch VMs can only be backed up using Enhanced policies.
>[!Note] >- The support for Enhanced policy is available in all Azure Public and US Government regions.
->- We support Enhanced policy configuration through [Recovery Services vault](./backup-azure-arm-vms-prepare.md) and [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm) only. Configuration through Backup center is currently not supported.
>- For hourly backups, the last backup of the day is transferred to vault. If backup fails, the first backup of the next day is transferred to vault. >- Enhanced policy is only available to unprotected VMs that are new to Azure Backup. Note that Azure VMs that are protected with existing policy can't be moved to Enhanced policy. >- Back up an Azure VM with disks that has public network access disabled is not supported.
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 09/18/2023 Last updated : 04/01/2024
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
- If you've deleted a container during the retention period, that container won't be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. For more information about protecting containers from deletion, see [Soft delete for containers](../storage/blobs/soft-delete-container-overview.md). - If a blob has moved between the hot and cool tiers in the period between the present moment and the restore point, the blob is restored to its previous tier. Restoring block blobs in the archive tier isn't supported. For example, if a blob in the hot tier was moved to the archive tier two days ago, and a restore operation restores to a point three days ago, the blob isn't restored to the hot tier. To restore an archived blob, first move it out of the archive tier. For more information, see [Rehydrate blob data from the archive tier](../storage/blobs/archive-rehydrate-overview.md).-- A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation.
+- A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [`Put Block List`](/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation.
- A blob with an active lease can't be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation. - Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - If there are [immutable blobs](../storage/blobs/immutable-storage-overview.md#about-immutable-storage-for-blobs) among those being restored, such immutable blobs won't be restored to their state as per the selected recovery point. However, other blobs that don't have immutability enabled will be restored to the selected recovery point as expected.
+- Blob backup is also supported when the storage account has private endpoints.
# [Vaulted backup](#tab/vaulted-backup)
backup Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints.md
Title: Create and use private endpoints for Azure Backup description: Understand the process to creating private endpoints for Azure Backup where using private endpoints helps maintain the security of your resources. Previously updated : 04/26/2023 Last updated : 04/01/2024
To set up private endpoint for Recovery Services vault correctly through this wo
## Frequently asked questions
-### Can I create a private endpoint for an existing Backup vault?<br>
+### Can I create a private endpoint for an existing Recovery Services vault?<br>
-No, private endpoints can be created for new Backup vaults only. So the vault must not have ever had any items protected to it. In fact, no attempts to protect any items to the vault can be made before creating private endpoints.
+No, private endpoints can be created for new Recovery Services vaults only. The vault must never have had any items protected to it. In fact, no attempts to protect any items to the vault can be made before creating private endpoints.
### I tried to protect an item to my vault, but it failed and the vault still doesn't contain any items protected to it. Can I create private endpoints for this vault?<br>
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 02/27/2024 Last updated : 04/01/2024 + # Azure Bastion FAQ ## <a name="host"></a>Bastion service and deployment FAQs
At this time, IPv6 isn't supported. Azure Bastion supports IPv4 only. This means
Azure Bastion doesn't move or store customer data out of the region it's deployed in.
+### <a name="az"></a>Does Azure Bastion support availability zones?
+
+Some regions support the ability to deploy Azure Bastion in an availability zone (or multiple, for zone redundancy).
+To deploy zonally, select the availability zones you want under **Instance details** when you deploy Bastion by using manually specified settings. You can't change zonal availability after Bastion is deployed.
+If you aren't able to select a zone, you might have selected an Azure region that doesn't yet support availability zones.
+For more information about availability zones, see [Availability Zones](https://learn.microsoft.com/azure/reliability/availability-zones-overview?tabs=azure-cli).
+ ### <a name="vwan"></a>Does Azure Bastion support Virtual WAN? Yes, you can use Azure Bastion for Virtual WAN deployments. However, deploying Azure Bastion within a Virtual WAN hub isn't supported. You can deploy Azure Bastion in a spoke virtual network and use the [IP-based connection](connect-ip-address.md) feature to connect to virtual machines deployed across a different virtual network via the Virtual WAN hub. If the Azure Virtual WAN hub will be integrated with Azure Firewall as a [Secured Virtual Hub](../firewall-manager/secured-virtual-hub.md), the AzureBastionSubnet must reside within a Virtual Network where the default 0.0.0.0/0 route propagation is disabled at the virtual network connection level.
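When the IP-based connection feature mentioned above is used, you can reach a VM in the peered or spoke network by its private IP address. A minimal sketch using the Azure CLI native-client command (this assumes a Bastion Standard SKU deployment with native client support and IP-based connection enabled; the Bastion name, resource group, address, and username are placeholders):

```azurecli
# Connect to a VM by private IP address through Bastion (native client).
az network bastion ssh \
  --name MyBastionHost \
  --resource-group MyResourceGroup \
  --target-ip-address 10.1.2.4 \
  --auth-type password \
  --username azureuser
```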
Review any error messages and [raise a support request in the Azure portal](../a
### <a name="dr"></a>How do I incorporate Azure Bastion in my Disaster Recovery plan?
-Azure Bastion is deployed within virtual networks or peered virtual networks, and is associated to an Azure region. You're responsible for deploying Azure Bastion to a Disaster Recovery (DR) site virtual network. If there is an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
+Azure Bastion is deployed within virtual networks or peered virtual networks, and is associated to an Azure region. You're responsible for deploying Azure Bastion to a Disaster Recovery (DR) site virtual network. If there's an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
### <a name="move-virtual-network"></a>Does Bastion support moving a VNet to another resource group?
See [About VM connections and features](vm-about.md) for supported features.
### <a name="shareable-links-passwords"></a>Is Reset Password available for local users connecting via shareable link?
-No. Some organizations have company policies that require a password reset when a user logs into a local account for the first time. When using shareable links, the user can't change the password, even though a "Reset Password" button may appear.
+No. Some organizations have company policies that require a password reset when a user logs into a local account for the first time. When using shareable links, the user can't change the password, even though a "Reset Password" button might appear.
### <a name="audio"></a>Is remote audio available for VMs?
To set your target language as your keyboard layout on a Windows workstation, na
Users can use "Ctrl+Shift+Alt" to effectively switch focus between the VM and the browser.
-### <a name="keyboard-focus"></a>How do I take keyboard or mouse focus back from an instance?
+### <a name="keyboard-focus"></a>How do I take back keyboard or mouse focus from an instance?
Press the Windows key twice in a row to take back focus within the Bastion window.
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
Title: 'Tutorial: Deploy Azure Bastion using specified settings: Azure portal'
-description: Learn how to deploy Azure Bastion by using settings that you specify in the Azure portal.
+description: Learn how to deploy Azure Bastion by using settings that you specify in the Azure portal. Use these steps when you want to specify features and settings.
Previously updated : 10/13/2023 Last updated : 03/29/2024 - # Tutorial: Deploy Azure Bastion by using specified settings
The following diagram shows the architecture of Bastion.
:::image type="content" source="./media/create-host/host-architecture.png" alt-text="Diagram that shows the Azure Bastion architecture." lightbox="./media/create-host/host-architecture.png":::
-In this tutorial, you deploy Bastion by using the Standard SKU. You adjust host scaling (instance count), which the Standard SKU supports. If you use a lower SKU for the deployment, you can't adjust host scaling.
+In this tutorial, you deploy Bastion by using the Standard SKU. You adjust host scaling (instance count), which the Standard SKU supports. If you use a lower SKU for the deployment, you can't adjust host scaling. You can also select an availability zone, depending on the region to which you want to deploy.
After the deployment is complete, you connect to your VM via private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it.
You can use the following example values when creating this configuration, or yo
| **Name** | **VNet1-bastion** | | **+ Subnet Name** | **AzureBastionSubnet** | | **AzureBastionSubnet addresses** | A subnet within your virtual network address space with a subnet mask of /26 or larger; for example, **10.1.1.0/26** |
+| **Availability zone** | Select value(s) from the dropdown list, if desired.|
| **Tier/SKU** | **Standard** | | **Instance count (host scaling)**| **3** or greater | | **Public IP address** | **Create new** |
This section helps you deploy Bastion to your virtual network. After Bastion is
* **Region**: The Azure public region in which the resource will be created. Choose the region where your virtual network resides.
+ * **Availability zone**: Select the zone(s) from the dropdown, if desired. Only certain regions are supported. For more information, see the [What are availability zones?](https://learn.microsoft.com/azure/reliability/availability-zones-overview?tabs=azure-cli) article.
+ * **Tier**: The SKU. For this tutorial, select **Standard**. For information about the features available for each SKU, see [Configuration settings - SKU](configuration-settings.md#skus). * **Instance count**: The setting for host scaling, which is available for the Standard SKU. You configure host scaling in scale unit increments. Use the slider or enter a number to configure the instance count that you want. For more information, see [Instances and host scaling](configuration-settings.md#instance) and [Azure Bastion pricing](https://azure.microsoft.com/pricing/details/azure-bastion).
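If you prefer to script an equivalent deployment, the following is a minimal Azure CLI sketch rather than part of this portal tutorial. The resource names, public IP, and virtual network are placeholders; the `--sku` and `--scale-units` parameters correspond to the Tier and Instance count settings described above, and the virtual network must already contain a subnet named AzureBastionSubnet:

```azurecli
# Example only: create a public IP, then deploy Bastion with the Standard SKU
# and three scale units (host scaling).
az network public-ip create \
  --resource-group TestRG1 \
  --name VNet1-bastion-ip \
  --sku Standard \
  --allocation-method Static

az network bastion create \
  --resource-group TestRG1 \
  --name VNet1-bastion \
  --vnet-name VNet1 \
  --public-ip-address VNet1-bastion-ip \
  --sku Standard \
  --scale-units 3
```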
communication-services Push Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/push-notifications.md
Last updated 08/10/2021
-zone_pivot_groups: acs-plat-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
#Customer intent: As a developer, I want to enable push notifications with the Azure Communication Services sdks so that I can create a calling application that provides push notifications to its users.
Here, we'll learn how to enable push notifications for Azure Communication Servi
[!INCLUDE [Enable push notifications iOS](./includes/push-notifications/push-notifications-ios.md)] ::: zone-end + ## Next steps - [Learn how to subscribe to events](./events.md) - [Learn how to manage calls](./manage-calls.md)
container-instances Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/availability-zones.md
Azure Container Instances (ACI) supports *zonal* container group deployments, me
## Limitations > [!IMPORTANT]
-* Container groups with GPU resources don't support availability zones at this time.
+> Container groups with GPU resources don't support availability zones at this time.
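For container groups that do support zones, the zone can be requested at creation time. A minimal sketch, assuming the `--zone` parameter available in recent Azure CLI versions (verify with `az container create --help`); the resource names and image are placeholders:

```azurecli
# Example only: request placement of a container group in zone 1.
az container create \
  --resource-group myResourceGroup \
  --name mycontainergroup \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --zone 1
```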
### Version requirements
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Below are the list of operators currently supported on Azure Cosmos DB for Mongo
<tr><td><code>$filter</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr> <tr><td><code>$firstN</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr> <tr><td><code>$in</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
-<tr><td><code>$indexOfArray</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$indexOfArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
<tr><td><code>$isArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr> <tr><td><code>$lastN</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr> <tr><td><code>$map</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
Below are the list of operators currently supported on Azure Cosmos DB for Mongo
<tr><td rowspan="1">Custom Aggregation Expression Operators</td><td colspan="2">Not supported.</td></tr> <tr><td rowspan="2">Data Size Operators</td><td><code>$bsonSize</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
-<tr><td><code>$binarySize</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$binarySize</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
<tr><td rowspan="22">Date Expression Operators</td><td><code>$dateAdd</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr> <tr><td><code>$dateDiff</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
Below are the list of operators currently supported on Azure Cosmos DB for Mongo
<tr><td rowspan="3">Object Expression Operators</td><td><code>$mergeObjects</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr> <tr><td><code>$objectToArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
-<tr><td><code>$setField</code></td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr>
+<tr><td><code>$setField</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
<tr><td rowspan="7">Set Expression Operators</td><td><code>$allElementsTrue</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr> <tr><td><code>$anyElementTrue</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>
cost-management-billing Limited Time Central Poland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-central-poland.md
Previously updated : 11/17/2023 Last updated : 04/01/2024 # Save on select VMs in Poland Central for a limited time
+> [!NOTE]
+> This limited-time offer expired on March 1, 2024. You can still purchase Azure Reserved VM Instances at regular discounted prices. For more information about reservation discount, see [How the Azure reservation discount is applied to virtual machines](../manage/understand-vm-reservation-charges.md).
+ Save up to 66 percent compared to pay-as-you-go pricing when you purchase one-year or three-year [Azure Reserved Virtual Machine (VM) Instances](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json) for select VMs in Poland Central for a limited time. This offer is available between October 1, 2023 – March 31, 2024. ## Purchase the limited time offer
cost-management-billing Poland Limited Time Sql Services Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/poland-limited-time-sql-services-reservations.md
Previously updated : 11/17/2023 Last updated : 04/01/2024 # Save on select Azure SQL Services in Poland Central for a limited time
+> [!NOTE]
+> This limited-time offer expired on March 1, 2024. You can still purchase Azure Reserved VM Instances at regular discounted prices. For more information about reservation discount, see [How the Azure reservation discount is applied to virtual machines](../manage/understand-vm-reservation-charges.md).
+ Save up to 66 percent compared to pay-as-you-go pricing when you purchase one or three-year reserved capacity for select [Azure SQL Database](/azure/azure-sql/database/reserved-capacity-overview), [SQL Managed Instances](/azure/azure-sql/database/reserved-capacity-overview), and [Azure Database for MySQL](../../mysql/single-server/concept-reserved-pricing.md) in Poland Central for a limited time. This offer is available between November 1, 2023 – March 31, 2024. ## Purchase the limited time offer
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
To use **Basic** authentication, in addition to the generic properties that are
} } ```
+> [!NOTE]
+> Mapping data flows support only Basic authentication.
### Key pair authentication
databox Data Box Deploy Copy Data Via Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-nfs.md
Previously updated : 08/26/2022 Last updated : 03/25/2024 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure. # Tutorial: Copy data to Azure Data Box via NFS
+> [!IMPORTANT]
+> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
+>
+> For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section. Follow the steps contained within the [Copy data to Data Box](#copy-data-to-data-box) section to copy your data to the appropriate access tier.
+>
+> The information contained within this section applies to orders placed after April 1, 2024.
+ This tutorial describes how to connect to and copy data from your host computer using the local web UI. In this tutorial, you learn how to:
In this tutorial, you learn how to:
Before you begin, make sure that:
-1. You have completed the [Tutorial: Set up Azure Data Box](data-box-deploy-set-up.md).
-2. You have received your Data Box and the order status in the portal is **Delivered**.
-3. You have a host computer that has the data that you want to copy over to Data Box. Your host computer must
+1. You complete the [Tutorial: Set up Azure Data Box](data-box-deploy-set-up.md).
+2. You receive your Data Box and the order status in the portal is **Delivered**.
+3. You have a host computer that has the data that you want to copy over to Data Box. Your host computer must:
- Run a [Supported operating system](data-box-system-requirements.md).
- - Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, a 1-GbE data link can be used but the copy speeds will be impacted.
+ - Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, a 1-GbE data link can be used but the copy speeds are impacted.
## Connect to Data Box Based on the storage account selected, Data Box creates up to:-- Three shares for each associated storage account for GPv1 and GPv2.-- One share for premium storage. -- One share for blob storage account.
-Under block blob and page blob shares, first-level entities are containers, and second-level entities are blobs. Under shares for Azure Files, first-level entities are shares, second-level entities are files.
+* Three shares for each associated storage account for GPv1 and GPv2.
+* One share for premium storage.
+* One share for a blob storage account, containing one folder for each of the four access tiers.
+
+The following table identifies the names of the Data Box shares to which you can connect, and the type of data uploaded to your target storage account. It also identifies the hierarchy of shares and directories into which you copy your source data.
+
+| Storage type | Share name | First-level entity | Second-level entity | Third-level entity |
+|--|-|||--|
+| Block blob | \<storageAccountName\>_BlockBlob | \<accessTier\> | \<containerName\> | \<blockBlob\> |
+| Page blob | \<storageAccountName\>_PageBlob | \<containerName\> | \<pageBlob\> | |
+| File storage | \<storageAccountName\>_AzFile | \<fileShareName\> | \<file\> | |
+
+You can't copy files directly to the *root* folder of any Data Box share. Instead, create folders within the Data Box share depending on your use case.
+
+Block blobs support the assignment of access tiers at the file level. Before you copy files to the block blob share, the recommended best-practice is to add new subfolders within the appropriate access tier. Then, after creating new subfolders, continue adding files to each subfolder as appropriate.
+
+A new container is created for any folder residing at the root of the block blob share. Any file within the folder is copied to the storage account's default access tier as a block blob.
+
+For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed information about access tier best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
+
+The following table shows the UNC path to the shares on your Data Box and the corresponding Azure Storage path URL to which data is uploaded. The final Azure Storage path URL can be derived from the UNC share path.
-The following table shows the UNC path to the shares on your Data Box and Azure Storage path URL where the data is uploaded. The final Azure Storage path URL can be derived from the UNC share path.
-
-| Azure Storage type| Data Box shares |
-|-|--|
-| Azure Block blobs | <li>UNC path to shares: `//<DeviceIPAddress>/<storageaccountname_BlockBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Page blobs | <li>UNC path to shares: `//<DeviceIPAddress>/<storageaccountname_PageBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Files |<li>UNC path to shares: `//<DeviceIPAddress>/<storageaccountname_AzFile>/<ShareName>/files/a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
-| Azure Block blobs (Archive) | <li>UNC path to shares: `//<DeviceIPAddress>/<storageaccountname_BlockBlobArchive>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Storage types | Data Box shares |
+||--|
+| Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlob>\<accessTier>\<ContainerName>\myBlob.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/myBlob.txt`</li> |
+| Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_PageBlob>\<ContainerName>\myBlob.vhd`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/myBlob.vhd`</li> |
+| Azure Files | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_AzFile>\<ShareName>\myFile.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.file.core.windows.net/<ShareName>/myFile.txt`</li> |
-If you are using a Linux host computer, perform the following steps to configure Data Box to allow access to NFS clients.
+If you're using a Linux host computer, perform the following steps to configure Data Box to allow access to NFS clients.
-1. Supply the IP addresses of the allowed clients that can access the share. In the local web UI, go to **Connect and copy** page. Under **NFS settings**, click **NFS client access**.
+1. Supply the IP addresses of the allowed clients that can access the share. In the local web UI, go to **Connect and copy** page. Under **NFS settings**, select **NFS client access**.
![Configure NFS client access](media/data-box-deploy-copy-data/nfs-client-access-1.png)
-2. Supply the IP address of the NFS client and click **Add**. You can configure access for multiple NFS clients by repeating this step. Click **OK**.
+2. Supply the IP address of the NFS client and select **Add**. You can configure access for multiple NFS clients by repeating this step. Select **OK**.
![Configure IP address of an NFS client](media/data-box-deploy-copy-data/nfs-client-access2.png)
If you are using a Linux host computer, perform the following steps to configure
`sudo mount <Data Box device IP>:/<NFS share on Data Box device> <Path to the folder on local Linux computer>`
- The following example shows how to connect via NFS to a Data Box share. The Data Box device IP is `10.161.23.130`, the share `Mystoracct_Blob` is mounted on the ubuntuVM, mount point being `/home/databoxubuntuhost/databox`.
+ Use the following example to connect to a Data Box share using NFS. In the example, the Data Box device IP is `10.161.23.130`. The share `Mystoracct_Blob` is mounted on the ubuntuVM, and the mount point is `/home/databoxubuntuhost/databox`.
`sudo mount -t nfs 10.161.23.130:/Mystoracct_Blob /home/databoxubuntuhost/databox`
- For Mac clients, you will need to add an additional option as follows:
+ For Mac clients, you need to add an extra option as follows:
`sudo mount -t nfs -o sec=sys,resvport 10.161.23.130:/Mystoracct_Blob /home/databoxubuntuhost/databox`
- **Always create a folder for the files that you intend to copy under the share and then copy the files to that folder**. The folder created under block blob and page blob shares represents a container to which data is uploaded as blobs. You cannot copy files directly to *root* folder in the storage account.
+
+ > [!IMPORTANT]
+ > You can't copy files directly to the storage account's *root* folder. Within a block blob storage account's root folder, you'll find a folder corresponding to each of the available access tiers.
+ >
+ > To copy your data to Azure Data Box, you must first select the folder corresponding to one of the access tiers. Next, create a sub-folder within that tier's folder to store your data. Finally, copy your data to the newly created sub-folder. Your new sub-folder represents the container created within the storage account during ingestion. Your data is uploaded to this container as blobs.
+
+
+## Determine appropriate access tiers for block blobs
+
+> [!IMPORTANT]
+> The information contained within this section applies to orders placed after April 1, 2024.
+
+Azure Storage allows you to store block blob data in multiple access tiers within the same storage account. This ability allows data to be organized and stored more efficiently based on how often it's accessed. The following table contains information and recommendations about Azure Storage access tiers.
+
+| Tier | Recommendation | Best practice |
+||-||
+| Hot | Useful for online data accessed or modified frequently. This tier has the highest storage costs, but the lowest access costs. | Data in this tier should be in regular and active use. |
+| Cool | Useful for online data accessed or modified infrequently. This tier has lower storage costs and higher access costs than the hot tier. | Data in this tier should be stored for at least 30 days. |
+| Cold | Useful for online data accessed or modified rarely but still requiring fast retrieval. This tier has lower storage costs and higher access costs than the cool tier.| Data in this tier should be stored for a minimum of 90 days. |
+| Archive | Useful for offline data rarely accessed and having lower latency requirements. | Data in this tier should be stored for a minimum of 180 days. Data removed from the archive tier within 180 days is subject to an early deletion charge. |
+
+For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
+
+You can transfer your block blob data to the appropriate access tier by copying it to the corresponding folder within Data Box. This process is discussed in greater detail within the [Copy data to Azure Data Box](#copy-data-to-data-box) section.
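After the Data Box upload to Azure completes, you can confirm the tier a blob landed in. A hedged Azure CLI sketch (the storage account, container, and blob names are placeholders):

```azurecli
# Check the access tier assigned to an uploaded block blob.
az storage blob show \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --name myBlob.txt \
  --auth-mode login \
  --query properties.blobTier
```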
## Copy data to Data Box
-Once you are connected to the Data Box shares, the next step is to copy data. Before you begin the data copy, review the following considerations:
+After you connect to one or more Data Box shares, the next step is to copy data. Before you begin the data copy, consider the following limitations:
-* Ensure that you copy the data to shares that correspond to the appropriate data format. For instance, copy the block blob data to the share for block blobs. Copy VHDs to page blobs. If the data format does not match the appropriate share type, then at a later step, the data upload to Azure will fail.
+* Make sure that you copy your data to the share that corresponds to the required data format. For instance, copy block blob data to the share for block blobs. Copy VHDs to the page blob share. If the data format doesn't match the appropriate share type, the data upload to Azure fails during a later step.
+* When copying data to the *AzFile* or *PageBlob* shares, first create a folder at the share's root, then copy files to that folder.
+* When copying data to the *BlockBlob* share, create a subfolder within the desired access tier, then copy data to the newly created subfolder. The subfolder represents a container into which data is uploaded as blobs. You can't copy files directly to a share's *root* folder.
* While copying data, ensure that the data size conforms to the size limits described in the [Azure storage account size limits](data-box-limits.md#azure-storage-account-size-limits).
-* If data, which is being uploaded by Data Box, is concurrently uploaded by other applications outside of Data Box, then this could result in upload job failures and data corruption.
+* Simultaneous uploads by Data Box and another non-Data Box application could potentially result in upload job failures and data corruption.
* If you use both the SMB and NFS protocols for data copies, we recommend that you: * Use different storage accounts for SMB and NFS. * Don't copy the same data to the same end destination in Azure using both SMB and NFS. In these cases, the final outcome can't be determined. * Although copying via both SMB and NFS in parallel can work, we don't recommend doing that as it's prone to human error. Wait until your SMB data copy is complete before you start an NFS data copy.
-* **Always create a folder for the files that you intend to copy under the share and then copy the files to that folder**. The folder created under block blob and page blob shares represents a container to which data is uploaded as blobs. You cannot copy files directly to *root* folder in the storage account.
+* When copying data to the block blob share, create a subfolder within the desired access tier, then copy data to the newly created subfolder. The subfolder represents a container to which your data is uploaded as blobs. You can't copy files directly to the *root* folder in the storage account.
* If ingesting case-sensitive directory and file names from an NFS share to NFS on Data Box: * The case is preserved in the name. * The files are case-insensitive.
- For example, if copying `SampleFile.txt` and `Samplefile.Txt`, the case will be preserved in the name when copied to Data Box but the second file will overwrite the first one, as these are considered the same file.
+ For example, if copying `SampleFile.txt` and `Samplefile.Txt`, the case is preserved in the name when copied to Data Box. However, because they're considered the same file, the last file uploaded overwrites the first file.
> [!IMPORTANT]
-> Make sure that you maintain a copy of the source data until you can confirm that the Data Box has transferred your data into Azure Storage.
+> Make sure that you maintain a copy of the source data until you can confirm that your data has been copied into Azure Storage.
If you're using a Linux host computer, use a copy utility similar to Robocopy. Some of the alternatives available in Linux are [`rsync`](https://rsync.samba.org/), [FreeFileSync](https://www.freefilesync.org/), [Unison](https://www.cis.upenn.edu/~bcpierce/unison/), or [Ultracopier](https://ultracopier.first-world.info/).
If using `rsync` option for a multi-threaded copy, follow these guidelines:
`cd /local_path/; find -L . -type f | parallel -j X rsync -za {} /mnt/databox/{}`
- where j specifies the number of parallelization, X = number of parallel copies
+ where the `-j` option specifies the degree of parallelization and *X* is the number of parallel copies
We recommend that you start with 16 parallel copies and increase the number of threads depending on the resources available. > [!IMPORTANT] > The following Linux file types are not supported: symbolic links, character files, block files, sockets, and pipes. These file types will result in failures during the **Prepare to ship** step.
-During the copy process, if there are any errors, you will see a notification.
+Notifications are displayed during the copy process to identify errors.
![Download and view errors on Connect and copy](media/data-box-deploy-copy-data/view-errors-1.png)
In this tutorial, you learned about Azure Data Box topics such as:
> [!div class="checklist"] >
-> * Prerequisites
-> * Connect to Data Box
-> * Copy data to Data Box
+> * Data Box data copy prerequisites
+> * Connecting to Data Box
+> * Determining appropriate access tiers for block blobs
+> * Copying data to Data Box
Advance to the next tutorial to learn how to ship your Data Box back to Microsoft.
databox Data Box Deploy Copy Data Via Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-rest.md
Previously updated : 12/29/2022 Last updated : 03/25/2024 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
# Tutorial: Use REST APIs to copy data to Azure Data Box Blob storage
+> [!IMPORTANT]
+> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
+>
+> For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section. Follow the steps contained within the [Copy data to Data Box](#copy-data-to-data-box) section to copy your data to the appropriate access tier.
+>
+> The information contained within this section applies to orders placed after April 1, 2024.
+ > [!CAUTION] > This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this tutorial, you learn how to:
Before you begin, make sure that:
-1. You've completed the [Tutorial: Set up Azure Data Box](data-box-deploy-set-up.md).
-2. You've received your Data Box and the order status in the portal is **Delivered**.
-3. You've reviewed the [system requirements for Data Box Blob storage](data-box-system-requirements-rest.md) and are familiar with supported versions of APIs, SDKs, and tools.
-4. You've access to a host computer that has the data that you want to copy over to Data Box. Your host computer must
+1. You complete the [Tutorial: Set up Azure Data Box](data-box-deploy-set-up.md).
+2. You receive your Data Box and the order status in the portal is **Delivered**.
+3. You review the [system requirements for Data Box Blob storage](data-box-system-requirements-rest.md) and are familiar with supported versions of APIs, SDKs, and tools.
+4. You have access to a host computer that has the data that you want to copy over to Data Box. Your host computer must:
* Run a [Supported operating system](data-box-system-requirements.md).
- * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, a 1-GbE data link can be used but the copy speeds will be impacted.
-5. [Download AzCopy V10](../storage/common/storage-use-azcopy-v10.md) on your host computer. You'll use AzCopy to copy data to Azure Data Box Blob storage from your host computer.
+ * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, a 1-GbE data link can be used but the copy speeds are impacted.
+5. [Download AzCopy V10](../storage/common/storage-use-azcopy-v10.md) on your host computer. AzCopy is used to copy data to Azure Data Box Blob storage from your host computer.
## Connect via http or https
The steps to connect are different when you connect to Data Box Blob storage ove
Connection to Data Box Blob storage REST APIs over *http* requires the following steps: * Add the device IP and blob service endpoint to the remote host
-* Configure third-party software and verify the connection
+* Configure partner software and verify the connection
Each of these steps is described in the following sections.
Connection to Azure Blob storage REST APIs over https requires the following steps: * Download the certificate from Azure portal. This certificate is used for connecting to the web UI and Azure Blob storage REST APIs.
-* Import the certificate on the client or remote host
-* Add the device IP and blob service endpoint to the client or remote host
-* Configure third-party software and verify the connection
+* Import the certificate on the client or remote host.
+* Add the device IP and blob service endpoint to the client or remote host.
+* Configure partner software and verify the connection.
Each of these steps is described in the following sections.
Use the Azure portal to download the certificate.
1. Sign into the Azure portal. 2. Go to your Data Box order and navigate to **General > Device details**.
-3. Under **Device credentials**, go to **API access** to device. Click **Download**. This action downloads a **\<your order name>.cer** certificate file. **Save** this file. You will install this certificate on the client or host computer that you will use to connect to the device.
+3. Under **Device credentials**, go to **API access** to device. Select **Download**. This action downloads a **\<your order name>.cer** certificate file. **Save** this file and install it on the client or host computer you use to connect to the device.
![Download certificate in Azure portal](media/data-box-deploy-copy-data-via-rest/download-cert-1.png) ### Import certificate
-Accessing Data Box Blob storage over HTTPS requires a TLS/SSL certificate for the device. The way in which this certificate is made available to the client application varies from application to application and across operating systems and distributions. Some applications can access the certificate after it is imported into the system's certificate store, while other applications do not make use of that mechanism.
+Accessing Data Box Blob storage over HTTPS requires a TLS/SSL certificate for the device. The way in which this certificate is made available to the client application varies from application to application and across operating systems and distributions. Some applications can access the certificate after it's imported into the system's certificate store, while other applications don't make use of that mechanism.
-Specific information for some applications is mentioned in this section. For more information on other applications, consult the documentation for the application and the operating system used.
+Specific information for some applications is mentioned in this section. For more information on other applications, see the documentation for the application and the operating system used.
Follow these steps to import the `.cer` file into the root store of a Windows or Linux client. On a Windows system, you can use Windows PowerShell or the Windows Server UI to import and install the certificate on your system.
Follow these steps to import the `.cer` file into the root store of a Windows or
#### Use Windows Server UI 1. Right-click the `.cer` file and select **Install certificate**. This action starts the Certificate Import Wizard.
-2. For **Store location**, select **Local Machine**, and then click **Next**.
+2. For **Store location**, select **Local Machine**, and then select **Next**.
![Certificate Import Wizard, Windows Server](media/data-box-deploy-copy-data-via-rest/import-cert-ws-1.png)
-3. Select **Place all certificates in the following store**, and then click **Browse**. Navigate to the root store of your remote host, and then click **Next**.
+3. Select **Place all certificates in the following store**, and then select **Browse**. Navigate to the root store of your remote host, and then select **Next**.
![Certificate Import Wizard, Certificate Store](media/data-box-deploy-copy-data-via-rest/import-cert-ws-2.png)
-4. Click **Finish**. A message that tells you that the import was successful appears.
+4. Select **Finish**. A message that tells you that the import was successful appears.
![Certificate Import Wizard, finish import](media/data-box-deploy-copy-data-via-rest/import-cert-ws-3.png)
Follow the same steps to [add device IP address and blob service endpoint when c
Follow the steps to [Configure partner software that you used while connecting over *http*](#verify-connection-and-configure-partner-software). The only difference is that you should leave the *Use http option* unchecked.
+## Determine appropriate access tiers for block blobs
+
+> [!IMPORTANT]
+> The information contained within this section applies to orders placed after April 1<sup>st</sup>, 2024.
+
+Azure Storage allows you to store block blob data in multiple access tiers within the same storage account. This ability allows data to be organized and stored more efficiently based on how often it's accessed. The following table contains information and recommendations about Azure Storage access tiers.
+
+| Tier | Recommendation | Best practice |
+||-||
+| Hot | Useful for online data accessed or modified frequently. This tier has the highest storage costs, but the lowest access costs. | Data in this tier should be in regular and active use. |
+| Cool | Useful for online data accessed or modified infrequently. This tier has lower storage costs and higher access costs than the hot tier. | Data in this tier should be stored for at least 30 days. |
+| Cold | Useful for online data accessed or modified rarely but still requiring fast retrieval. This tier has lower storage costs and higher access costs than the cool tier.| Data in this tier should be stored for a minimum of 90 days. |
+| Archive | Useful for offline data rarely accessed and having lower latency requirements. | Data in this tier should be stored for a minimum of 180 days. Data removed from the archive tier within 180 days is subject to an early deletion charge. |
+
+For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
+
+You can transfer your block blob data to the appropriate access tier by copying it to the corresponding folder within Data Box. This process is discussed in greater detail within the [Copy data to Azure Data Box](#copy-data-to-data-box) section.
+ ## Copy data to Data Box
-Once you are connected to the Data Box Blob storage, the next step is to copy data. Prior to data copy, review the following considerations:
+After connecting to Data Box Blob storage, the next step is to copy data. Before you begin the data copy, consider the following limitations:
* While copying data, ensure that the data size conforms to the size limits described in the [Azure storage and Data Box limits](data-box-limits.md).
-* If data, which is being uploaded by Data Box, is concurrently uploaded by other applications outside of Data Box, this may result in upload job failures and data corruption.
+* Simultaneous uploads by Data Box and another non-Data Box application could potentially result in upload job failures and data corruption.
> [!IMPORTANT]
-> Make sure that you maintain a copy of the source data until you can confirm that the Data Box has transferred your data into Azure Storage.
+> Make sure that you maintain a copy of the source data until you can confirm that your data has been copied into Azure Storage.
-In this tutorial, AzCopy is used to copy data to Data Box Blob storage. You can also use Azure Storage Explorer (if you prefer a GUI-based tool) or a partner software to copy the data.
+In this tutorial, AzCopy is used to copy data to Data Box Blob storage. If you prefer a GUI-based tool, you can also use Azure Storage Explorer or other partner software to copy the data.
The copy procedure has the following steps:
The first step is to create a container, because blobs are always uploaded into
![Blob Containers context menu, Create Blob Container](media/data-box-deploy-copy-data-via-rest/create-blob-container-1.png) 4. A text box appears below the **Blob Containers** folder. Enter the name for your blob container. See the [Create the container and set permissions](../storage/blobs/storage-quickstart-blobs-dotnet.md) for information on rules and restrictions on naming blob containers.
-5. Press **Enter** when done to create the blob container, or **Esc** to cancel. Once the blob container is successfully created, it is displayed under the **Blob Containers** folder for the selected storage account.
+5. Press **Enter** when done to create the blob container, or **Esc** to cancel. After the blob container is successfully created, it's displayed under the **Blob Containers** folder for the selected storage account.
![Blob container created](media/data-box-deploy-copy-data-via-rest/create-blob-container-2.png)
-### Upload contents of a folder to Data Box Blob storage
+### Upload the contents of a folder to Data Box Blob storage
-Use AzCopy to upload all files in a folder to Blob storage on Windows or Linux. To upload all blobs in a folder, enter the following AzCopy command:
+Use AzCopy to upload all files within a folder to Blob storage on Windows or Linux. To upload all blobs in a folder, enter the following AzCopy command:
#### Linux ```azcopy azcopy \ --source /mnt/myfolder \
- --destination https://data-box-storage-account-name.blob.device-serial-no.microsoftdatabox.com/container-name/files/ \
+ --destination https://data-box-storage-account-name.blob.device-serial-no.microsoftdatabox.com/container-name/ \
--dest-key <key> \ --recursive ```
azcopy \
#### Windows ```azcopy
-AzCopy /Source:C:\myfolder /Dest:https://data-box-storage-account-name.blob.device-serial-no.microsoftdatabox.com/container-name/files/ /DestKey:<key> /S
+AzCopy /Source:C:\myfolder /Dest:https://data-box-storage-account-name.blob.device-serial-no.microsoftdatabox.com/container-name/ /DestKey:<key> /S
```
-Replace `<key>` with your account key. To get your account key, in the Azure portal, go to your storage account. Go to **Settings > Access keys**, select a key, and paste it into the AzCopy command.
+Replace `<key>` with your account key. You can retrieve your account key within the Azure portal by navigating to your storage account. Select **Settings > Access keys**, choose a key, then copy and paste the value into the AzCopy command.
-If the specified destination container does not exist, AzCopy creates it and uploads the file into it. Update the source path to your data directory, and replace `data-box-storage-account-name` in the destination URL with the name of the storage account associated with your Data Box.
+If the specified destination container doesn't exist, AzCopy creates it and uploads the file into it. Update the source path to your data directory, and replace `data-box-storage-account-name` in the destination URL with the name of the storage account associated with your Data Box.
-To upload the contents of the specified directory to Blob storage recursively, specify the `--recursive` (Linux) or `/S` (Windows) option. When you run AzCopy with one of these options, all subfolders and their files are uploaded as well.
+To upload the contents of the specified directory to Blob storage recursively, specify the `--recursive` option for Linux or the `/S` option for Windows. When you run AzCopy with one of these options, all subfolders and their files are uploaded as well.
### Upload modified files to Data Box Blob storage
-Use AzCopy to upload files based on their last-modified time. To try this, modify or create new files in your source directory for test purposes. To upload only updated or new files, add the `--exclude-older` (Linux) or `/XO` (Windows) parameter to the AzCopy command.
+You can also use AzCopy to upload files based on their last-modified time. To upload only updated or new files, add the `--exclude-older` parameter for Linux or the `/XO` parameter for Windows to the AzCopy command.
-If you only want to copy source resources that do not exist in the destination, specify both `--exclude-older` and `--exclude-newer` (Linux) or `/XO` and `/XN` (Windows) parameters in the AzCopy command. AzCopy uploads only the updated data, based on its time stamp.
+If you only want to copy the resources within your local source that don't exist within the destination, specify both the `--exclude-older` and `--exclude-newer` parameters for Linux, or the `/XO` and `/XN` parameters for Windows in the AzCopy command. AzCopy uploads updated data only, as determined by its time stamp.
#### Linux ```azcopy azcopy \ --source /mnt/myfolder \
- --destination https://data-box-storage-account-name.blob.device-serial-no.microsoftdatabox.com/container-name/files/ \
+--destination https://data-box-storage-account-name.blob.device-serial-no.microsoftdatabox.com/container-name/ \
--dest-key <key> \ --recursive \ --exclude-older
azcopy \
#### Windows ```azcopy
-AzCopy /Source:C:\myfolder /Dest:https://data-box-storage-account-name.blob.device-serial-no.microsoftdatabox.com/container-name/files/ /DestKey:<key> /S /XO
+AzCopy /Source:C:\myfolder /Dest:https://data-box-storage-account-name.blob.device-serial-no.microsoftdatabox.com/container-name/ /DestKey:<key> /S /XO
``` If there are any errors during the connect or copy operation, see [Troubleshoot issues with Data Box Blob storage](data-box-troubleshoot-rest.md).
-Next step is to prepare your device to ship.
+The next step is to prepare your device to ship.
## Next steps
In this tutorial, you learned about Azure Data Box topics such as:
> [!div class="checklist"] >
-> * Prerequisites
-> * Connect to Data Box Blob storage via *http* or *https*
+> * Prerequisites for copying data to Azure Data Box Blob storage using REST APIs
+> * Connecting to Data Box Blob storage via *http* or *https*
+> * Determining appropriate access tiers for block blobs
> * Copy data to Data Box Advance to the next tutorial to learn how to ship your Data Box back to Microsoft.
databox Data Box Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data.md
Previously updated : 04/12/2023 Last updated : 03/25/2024 # Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
::: zone target="docs"
+> [!IMPORTANT]
+> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
+>
+>For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section. Follow the steps contained within the [Copy data to Azure Data Box](#copy-data-to-azure-data-box) section to copy your data to the appropriate access tier.
+>
+> The information contained within this section applies to orders placed after April 1, 2024.
+ This tutorial describes how to connect to and copy data from your host computer using the local web UI. In this tutorial, you learn how to:
In this tutorial, you learn how to:
> > * Prerequisites > * Connect to Data Box
+> * Determine appropriate access tiers for block blobs
> * Copy data to Data Box ## Prerequisites
Before you begin, make sure that:
1. You've completed the [Tutorial: Set up Azure Data Box](data-box-deploy-set-up.md). 2. You've received your Data Box and the order status in the portal is **Delivered**.
-3. You have a host computer that has the data that you want to copy over to Data Box. Your host computer must
+3. You have a host computer that has the data that you want to copy over to Data Box. Your host computer must:
* Run a [Supported operating system](data-box-system-requirements.md).
- * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, use a 1-GbE data link but the copy speeds will be impacted.
+ * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, use a 1-GbE data link but the copy speeds are impacted.
## Connect to Data Box
Based on the storage account selected, Data Box creates up to:
* Three shares for each associated storage account for GPv1 and GPv2. * One share for premium storage.
-* One share for blob storage account.
+* One share for a blob storage account, containing one folder for each of the four access tiers.
+
+The following table identifies the names of the Data Box shares to which you can connect, and the type of data uploaded to your target storage account. It also identifies the hierarchy of shares and directories into which you copy your source data.
-Under block blob and page blob shares, first-level entities are containers, and second-level entities are blobs. Under shares for Azure Files, first-level entities are shares, second-level entities are files.
+| Storage type | Share name | First-level entity | Second-level entity | Third-level entity |
+|--|-|||--|
+| Block blob | \<storageAccountName\>_BlockBlob | \<accessTier\> | \<containerName\> | \<blockBlob\> |
+| Page blob | \<storageAccountName\>_PageBlob | \<containerName\> | \<pageBlob\> | |
+| File storage | \<storageAccountName\>_AzFile | \<fileShareName\> | \<file\> | |
-The following table shows the UNC path to the shares on your Data Box and Azure Storage path URL where the data is uploaded. The final Azure Storage path URL can be derived from the UNC share path.
+You can't copy files directly to the *root* folder of any Data Box share. Instead, create folders within the Data Box share depending on your use case.
+
+Block blobs support the assignment of access tiers at the file level. When copying files to the block blob share, the recommended best practice is to add new subfolders within the appropriate access tier. After creating new subfolders, continue adding files to each subfolder as appropriate.
+
+A new container is created for any folder residing at the root of the block blob share. Any file within that folder is copied to the storage account's default access tier as a block blob.
+
+For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed information about access tier best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
+
+The following table shows the UNC path to the shares on your Data Box and the corresponding Azure Storage path URL to which data is uploaded. The final Azure Storage path URL can be derived from the UNC share path.
-|Azure Storage types | Data Box shares |
-|-|--|
-| Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_PageBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Files |<li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_AzFile>\<ShareName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
-| Azure Block blobs (Archive) | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlob_Archive>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Storage types | Data Box shares |
+||--|
+| Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlob>\<accessTier>\<ContainerName>\myBlob.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/myBlob.txt`</li> |
+| Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_PageBlob>\<ContainerName>\myBlob.vhd`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/myBlob.vhd`</li> |
+| Azure Files | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_AzFile>\<ShareName>\myFile.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.file.core.windows.net/<ShareName>/myFile.txt`</li> |
If using a Windows Server host computer, follow these steps to connect to the Data Box.
If using a Windows Server host computer, follow these steps to connect to the Da
![Get user name and password for a share](media/data-box-deploy-copy-data/get-share-credentials2.png)
-3. To access the shares associated with your storage account (*utsac1* in the following example) from your host computer, open a command window. At the command prompt, type:
+3. The following example uses a sample storage account named *utsac1*. To access the shares associated with your storage account from your host computer, open a command window. At the command prompt, type:
- `net use \\<IP address of the device>\<share name> /u:<IP address of the device>\<user name for the share>`
+ `net use \\<DeviceIPAddress>\<share name> /u:<DeviceIPAddress>\<user name for the share>`
Depending upon your data format, the share paths are as follows:
- - Azure Block blob - `\\10.126.76.138\utsac1_BlockBlob`
- - Azure Page blob - `\\10.126.76.138\utsac1_PageBlob`
- - Azure Files - `\\10.126.76.138\utsac1_AzFile`
- - Azure Blob blob (Archive) - `\\10.126.76.138\utsac0_BlockBlobArchive`
+ - Azure Block blob - `\\<DeviceIPAddress>\utsac1_BlockBlob`
+ - Azure Page blob - `\\<DeviceIPAddress>\utsac1_PageBlob`
+ - Azure Files - `\\<DeviceIPAddress>\utsac1_AzFile`
4. Enter the password for the share when prompted. If the password has special characters, add double quotation marks before and after it. The following sample shows connecting to a share via the preceding command. ```
- C:\Users\Databoxuser>net use \\10.126.76.138\utSAC1_202006051000_BlockBlob /u:10.126.76.138\testuser1
- Enter the password for 'testuser1' to connect to '10.126.76.138': "ab1c2def$3g45%6h7i&j8kl9012345"
+ C:\Users\Databoxuser>net use \\<DeviceIPAddress>\utSAC1_202006051000_BlockBlob /u:<DeviceIPAddress>\testuser1
+ Enter the password for 'testuser1' to connect to '<DeviceIPAddress>': "ab1c2def$3g45%6h7i&j8kl9012345"
The command completed successfully. ```
-4. Press Windows + R. In the **Run** window, specify the `\\<device IP address>`. Select **OK** to open File Explorer.
+5. Press Windows + R. In the **Run** window, specify the `\\<DeviceIPAddress>`. Select **OK** to open File Explorer.
![Connect to share via File Explorer](media/data-box-deploy-copy-data/connect-shares-file-explorer1.png)
If using a Windows Server host computer, follow these steps to connect to the Da
![Shares shown in File Explorer](media/data-box-deploy-copy-data/connect-shares-file-explorer2.png)
- **Always create a folder for the files that you intend to copy under the share and then copy the files to that folder**. The folder created under block blob and page blob shares represents a container to which data is uploaded as blobs. You cannot copy files directly to *root* folder in the storage account.
-
-If using a Linux client, use the following command to mount the SMB share. The "vers" parameter below is the version of SMB that your Linux host supports. Plug in the appropriate version in the command below. For versions of SMB that the Data Box supports see [Supported file systems for Linux clients](./data-box-system-requirements.md#supported-file-transfer-protocols-for-clients)
+ > [!IMPORTANT]
+ > You can't copy files directly to the storage account's *root* folder. Within a block blob storage account's root folder, you'll find a folder corresponding to each of the available access tiers.
+ >
+ > To copy your data to Azure Data Box, you must first select the folder corresponding to one of the access tiers. Next, create a subfolder within that tier's folder to store your data. Finally, copy your data to the newly created subfolder. Your new subfolder represents the container created within the storage account during ingestion, and your data is uploaded to this container as blobs. A brief example follows the Linux mount command later in this section.
+
+If using a Linux client, use the following command to mount the SMB share. The `vers` parameter value identifies the version of SMB that your Linux host supports. Insert the appropriate version into the sample command provided. To see a list of SMB versions supported by Data Box, see [Supported file systems for Linux clients](./data-box-system-requirements.md#supported-file-transfer-protocols-for-clients).
```console sudo mount -t cifs -o vers=2.1 10.126.76.138:/utsac1_BlockBlob /home/databoxubuntuhost/databox ```
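As an illustrative sketch only, suppose your storage account is named *mystorageaccount* and you want the data to land in a container named *mycontainer* in the cool tier. After connecting to the block blob share from a Windows host, you might create the tier subfolder and copy into it as shown below. The account name, container name, source path, and the *Cool* tier folder name are all assumptions; use the tier folders that actually appear at your share's root.

```console
:: Create a subfolder (the future container) under the access tier folder shown at the share root.
:: "Cool" is an assumed tier folder name; substitute the tier folder present on your device.
mkdir \\<DeviceIPAddress>\mystorageaccount_BlockBlob\Cool\mycontainer

:: Copy the source data into the new subfolder; it's uploaded to the "mycontainer" container as cool-tier block blobs.
robocopy C:\sourcedata \\<DeviceIPAddress>\mystorageaccount_BlockBlob\Cool\mycontainer /e
```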
+## Determine appropriate access tiers for block blobs
+
+> [!IMPORTANT]
+> The information contained within this section applies to orders placed after April 1<sup>st</sup>, 2024.
+
+Azure Storage allows you to store block blob data in multiple access tiers within the same storage account. This ability allows data to be organized and stored more efficiently based on how often it's accessed. The following table contains information and recommendations about Azure Storage access tiers.
+
+| Tier | Recommendation | Best practice |
+||-||
+| Hot | Useful for online data accessed or modified frequently. This tier has the highest storage costs, but the lowest access costs. | Data in this tier should be in regular and active use. |
+| Cool | Useful for online data accessed or modified infrequently. This tier has lower storage costs and higher access costs than the hot tier. | Data in this tier should be stored for at least 30 days. |
+| Cold | Useful for online data accessed or modified rarely but still requiring fast retrieval. This tier has lower storage costs and higher access costs than the cool tier.| Data in this tier should be stored for a minimum of 90 days. |
+| Archive | Useful for offline data rarely accessed and having lower latency requirements. | Data in this tier should be stored for a minimum of 180 days. Data removed from the archive tier within 180 days is subject to an early deletion charge. |
+
+For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
+
+You can transfer your block blob data to the appropriate access tier by copying it to the corresponding folder within Data Box. This process is discussed in greater detail within the [Copy data to Azure Data Box](#copy-data-to-azure-data-box) section.
+ ## Copy data to Data Box
-Once you're connected to the Data Box shares, the next step is to copy data. Before you begin the data copy, review the following considerations:
+After connecting to one or more Data Box shares, the next step is to copy data. Before you begin the data copy, consider the following limitations:
-* Make sure that you copy the data to shares that correspond to the appropriate data format. For instance, copy the block blob data to the share for block blobs. Copy the VHDs to page blob. If the data format doesn't match the appropriate share type, then at a later step, the data upload to Azure will fail.
-* Always create a folder under the share for the files that you intend to copy and then copy the files to that folder. The folder created under block blob and page blob shares represents a container to which the data is uploaded as blobs. You cannot copy files directly to the *root* folder in the storage account. The same behavior applies to Azure Files. Under shares for Azure Files, first-level entities are shares, second-level entities are files.
+* Make sure that you copy your data to the share that corresponds to the required data format. For instance, copy block blob data to the share for block blobs. Copy VHDs to the page blob share. If the data format doesn't match the appropriate share type, the data upload to Azure fails during a later step.
+* When copying data to the *AzFile* or *PageBlob* shares, first create a folder at the share's root, then copy files to that folder.
+* When copying data to the *BlockBlob* share, create a subfolder within the desired access tier, then copy data to the newly created subfolder. The subfolder represents a container into which data is uploaded as blobs. You can't copy files directly to a share's *root* folder.
* While copying data, make sure that the data size conforms to the size limits described in the [Azure storage account size limits](data-box-limits.md#azure-storage-account-size-limits). * If you want to preserve metadata (ACLs, timestamps, and file attributes) when transferring data to Azure Files, follow the guidance in [Preserving file ACLs, attributes, and timestamps with Azure Data Box](data-box-file-acls-preservation.md)
-* If data that is being uploaded by Data Box is also being uploaded by another application, outside Data Box, at the same time, this could result in upload job failures and data corruption.
+* Simultaneous uploads by Data Box and another non-Data Box application could potentially result in upload job failures and data corruption.
* If you use both the SMB and NFS protocols for data copies, we recommend that you: * Use different storage accounts for SMB and NFS. * Don't copy the same data to the same end destination in Azure using both SMB and NFS. In these cases, the final outcome can't be determined. * Although copying via both SMB and NFS in parallel can work, we don't recommend doing that as it's prone to human error. Wait until your SMB data copy is complete before you start an NFS data copy. > [!IMPORTANT]
-> Make sure that you maintain a copy of the source data until you can confirm that the Data Box has transferred your data into Azure Storage.
+> Make sure that you maintain a copy of the source data until you can confirm that your data has been copied into Azure Storage.
After you connect to the SMB share, begin the data copy. You can use any SMB-compatible file copy tool, such as Robocopy, to copy your data. Multiple copy jobs can be initiated using Robocopy. Use the following command:
robocopy <Source> <Target> * /e /r:3 /w:60 /is /nfl /ndl /np /MT:32 or 64 /fft
The attributes are described in the following table.
-|Attribute |Description |
-|||
-|/e |Copies subdirectories including empty directories. |
-|/r: |Specifies the number of retries on failed copies. |
-|/w: |Specifies the wait time between retries, in seconds. |
-|/is |Includes the same files. |
-|/nfl |Specifies that file names aren't logged. |
-|/ndl |Specifies that directory names aren't logged. |
-|/np |Specifies that the progress of the copying operation (the number of files or directories copied so far) will not be displayed. Displaying the progress significantly lowers the performance. |
-|/MT | Use multithreading, recommended 32 or 64 threads. This option not used with encrypted files. You may need to separate encrypted and unencrypted files. However, single threaded copy significantly lowers the performance. |
-|/fft | Use to reduce the time stamp granularity for any file system. |
-|/B | Copies files in Backup mode. |
-|/z | Copies files in Restart mode, use this if the environment is unstable. This option reduces throughput due to additional logging. |
-| /zb | Uses Restart mode. If access is denied, this option uses Backup mode. This option reduces throughput due to checkpointing. |
-|/efsraw | Copies all encrypted files in EFS raw mode. Use only with encrypted files. |
+|Attribute |Description |
+|-||
+|/e |Copies subdirectories including empty directories. |
+|/r: |Specifies the number of retries on failed copies. |
+|/w: |Specifies the wait time between retries, in seconds. |
+|/is |Includes the same files. |
+|/nfl |Specifies that file names aren't logged. |
+|/ndl |Specifies that directory names aren't logged. |
+|/np |Specifies that the progress of the copying operation (the number of files or directories copied so far) won't be displayed. Displaying the progress significantly lowers the performance. |
+|/MT | Use multithreading, recommended 32 or 64 threads. This option isn't used with encrypted files. You might need to separate encrypted and unencrypted files. However, single threaded copy significantly lowers the performance. |
+|/fft | Use to reduce the time stamp granularity for any file system. |
+|/B | Copies files in Backup mode. |
+|/z | Copies files in Restart mode; use this switch if the environment is unstable. This option reduces throughput due to additional logging. |
+| /zb | Uses Restart mode. If access is denied, this option uses Backup mode. This option reduces throughput due to checkpointing. |
+|/efsraw | Copies all encrypted files in EFS raw mode. Use only with encrypted files. |
|/log+:\<LogFile>| Appends the output to the existing log file.| The following sample shows the output of the robocopy command to copy files to the Data Box.
For more specific scenarios such as using `robocopy` to list, copy, or delete fi
To optimize the performance, use the following robocopy parameters when copying the data.
-| Platform | Mostly small files < 512 KB | Mostly medium files 512 KB-1 MB | Mostly large files > 1 MB |
-|-|--|--|--|
-| Data Box | 2 Robocopy sessions <br> 16 threads per sessions | 3 Robocopy sessions <br> 16 threads per sessions | 2 Robocopy sessions <br> 24 threads per sessions |
+| Platform | Mostly small files < 512 KB | Mostly medium files 512 KB - 1 MB | Mostly large files > 1 MB |
+|-|--|--||
+| Data Box | 2 Robocopy sessions <br> 16 threads per session | 3 Robocopy sessions <br> 16 threads per session | 2 Robocopy sessions <br> 24 threads per session |
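As a sketch of the small-file guidance above, you could run two Robocopy sessions in separate command windows, each with 16 threads. The source folders, storage account name, tier folder, and container names below are assumptions; the switches mirror the command shown earlier in this article.

```console
:: Session 1 (first command window)
robocopy C:\data\set1 \\<DeviceIPAddress>\mystorageaccount_BlockBlob\Hot\container1 /e /r:3 /w:60 /nfl /ndl /np /MT:16 /fft

:: Session 2 (second command window)
robocopy C:\data\set2 \\<DeviceIPAddress>\mystorageaccount_BlockBlob\Hot\container2 /e /r:3 /w:60 /nfl /ndl /np /MT:16 /fft
```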
For more information on Robocopy command, go to [Robocopy and a few examples](https://social.technet.microsoft.com/wiki/contents/articles/1073.robocopy-and-a-few-examples.aspx).
-During the copy process, if there are any errors, you will see a notification.
+Notifications are displayed during the copy process to identify errors.
![A copy error notification in Connect and copy](media/data-box-deploy-copy-data/view-errors-1.png)
To copy data via SMB:
1. If using a Windows host, use the following command to connect to the SMB shares:
- `\\<IP address of your device>\ShareName`
+ `\\<Device IP address>\ShareName`
-2. To get the share access credentials, go to the **Connect & copy** page in the local web UI of the Data Box.
+2. To retrieve the share access credentials, go to the **Connect & copy** page within the local web UI of the Data Box.
3. Use an SMB compatible file copy tool such as Robocopy to copy data to shares. For step-by-step instructions, go to [Tutorial: Copy data to Azure Data Box via SMB](data-box-deploy-copy-data.md).
For step-by-step instructions, go to [Tutorial: Copy data to Azure Data Box via
To copy data via NFS:
-1. If using an NFS host, use the following command to mount the NFS shares on your Data Box:
+1. When using an NFS host, use the following command to mount the NFS shares on your Data Box:
`sudo mount <Data Box device IP>:/<NFS share on Data Box device> <Path to the folder on local Linux computer>`
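   For instance, a minimal sketch that assumes a device IP of 10.126.76.138, a Data Box NFS share named *mystorageaccount_BlockBlob*, and an existing local mount point at */home/databoxuser/databox*:

   ```console
   # Mount the Data Box NFS share at the local mount point, then copy data with your preferred tool.
   sudo mount 10.126.76.138:/mystorageaccount_BlockBlob /home/databoxuser/databox
   ```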
For step-by-step instructions, go to [Tutorial: Use the data copy service to cop
To copy data managed disks:
-1. When ordering the Data Box device, you should have selected managed disks as your storage destination.
-2. You can connect to Data Box via SMB or NFS shares.
-3. You can then copy data via SMB or NFS tools.
+1. When ordering the Data Box device, select *managed disks* as your storage destination.
+2. Connect to Data Box via SMB or NFS shares.
+3. Copy data via SMB or NFS tools.
For step-by-step instructions, go to [Tutorial: Use Data Box to import data as managed disks in Azure](data-box-deploy-copy-data-from-vhds.md).
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-ordered.md
Title: Tutorial to order Azure Data Box | Microsoft Docs description: In this tutorial, learn about Azure Data Box, a hybrid solution that allows you to import on-premises data into Azure, and how to order Azure Data Box. -+ Previously updated : 07/08/2022 Last updated : 03/25/2024 #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure.
This tutorial describes how you can order an Azure Data Box. In this tutorial, y
## Prerequisites
-# [Portal](#tab/portal)
+Complete the following configuration prerequisites for the Data Box service and device before you deploy the device:
-Complete the following configuration prerequisites for Data Box service and device before you deploy the device:
+# [Portal](#tab/portal)
[!INCLUDE [Prerequisites](../../includes/data-box-deploy-ordered-prerequisites.md)]
Before you begin, make sure that:
**Sign in to Azure**
-Open up a Windows PowerShell command window and sign in to Azure with the [az login](/cli/azure/reference-index#az-login) command:
+Open a Windows PowerShell command window and sign in to Azure with the [az login](/cli/azure/reference-index#az-login) command:
```azurecli PS C:\Windows> az login ```
-Here is the output from a successful sign-in:
+The output confirms a successful sign-in:
```output You have logged in. Now let us find all the subscriptions to which you have access.
You have logged in. Now let us find all the subscriptions to which you have acce
**Install the Azure Data Box CLI extension**
-Before you can use the Azure Data Box CLI commands, you need to install the extension. Azure CLI extensions give you access to experimental and pre-release commands that haven't yet shipped as part of the core CLI. For more information about extensions, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
+Before you can use the Azure Data Box CLI commands, you need to install the extension. Azure CLI extensions give you access to experimental and prerelease commands before they ship as part of the core CLI. For more information about extensions, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
To install the extension for Azure Data Box, run the following command: `az extension add --name databox`:
To install the extension for Azure Data Box, run the following command: `az
PS C:\Windows> az extension add --name databox ```
-If the extension is installed successfully, you'll see the following output:
+If the extension is installed successfully, the following output is displayed:
```output The installed extension 'databox' is experimental and not covered by customer support. Please use with discretion.
If the extension is installed successfully, you'll see the following output:
#### Use Azure Cloud Shell
-You can use [Azure Cloud Shell](https://shell.azure.com/), an Azure hosted interactive shell environment, through your browser to run CLI commands. Azure Cloud Shell supports Bash or Windows PowerShell with Azure services. The Azure CLI is pre-installed and configured to use with your account. Select the Cloud Shell button on the menu in the upper-right section of the Azure portal:
+You can use [Azure Cloud Shell](https://shell.azure.com/), an Azure hosted interactive shell environment, through your browser to run CLI commands. Azure Cloud Shell supports Bash or Windows PowerShell with Azure services. The Azure CLI is preinstalled and configured to use with your account. Select the Cloud Shell button on the menu in the upper-right section of the Azure portal:
![Cloud Shell menu selection](../storage/common/media/storage-quickstart-create-account/cloud-shell-menu.png)
Before you begin, make sure that you:
**Install or upgrade Windows PowerShell**
-You'll need to have Windows PowerShell version 6.2.4 or higher installed. To find out what version of PowerShell is installed, run: `$PSVersionTable`.
+You need to have Windows PowerShell version 6.2.4 or higher installed. To find out what version of PowerShell is installed, run: `$PSVersionTable`.
-You'll see the following output:
+The following sample output confirms that version 6.2.3 is installed:
```azurepowershell PS C:\users\gusp> $PSVersionTable
If your version is lower than 6.2.4, you need to upgrade your version of Windows
**Install Azure PowerShell and Data Box modules**
-You'll need to install the Azure PowerShell modules to use Azure PowerShell to order an Azure Data Box. To install the Azure PowerShell modules:
+You need to install the Azure PowerShell modules to use Azure PowerShell to order an Azure Data Box. To install the Azure PowerShell modules:
-1. Install the [Azure PowerShell Az module](/powershell/azure/new-azureps-module-az).
+1. Install the [Az PowerShell module](/powershell/azure/new-azureps-module-az).
2. Then install Az.DataBox using the command `Install-Module -Name Az.DataBox`. ```azurepowershell
Open up a Windows PowerShell command window and sign in to Azure with the [Conne
PS C:\Windows> Connect-AzAccount ```
-Here is the output from a successful sign-in:
+The following sample output confirms a successful sign-in:
```output WARNING: To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FSBFZMBKC to authenticate.
For detailed information on how to sign in to Azure using Windows PowerShell, se
## Order Data Box
+To order a device, perform the following steps:
+ # [Portal](#tab/portal) [!INCLUDE [order-data-box-via-portal](../../includes/data-box-order-portal.md)] # [Azure CLI](#tab/azure-cli)
-Do the following steps using Azure CLI to order a device:
-
-1. Write down your settings for your Data Box order. These settings include your personal/business information, subscription name, device information, and shipping information. You'll need to use these settings as parameters when running the CLI command to create the Data Box order. The following table shows the parameter settings used for `az databox job create`:
+1. Write down your settings for your Data Box order. These settings include your personal/business information, subscription name, device information, and shipping information. These settings are used as parameters when running the CLI command to create the Data Box order. The following table shows the parameter settings used for `az databox job create`:
| Setting (parameter) | Description | Sample value | |||| |resource-group| Use an existing or create a new one. A resource group is a logical container for the resources that can be managed or deployed together. | "myresourcegroup"| |name| The name of the order you're creating. | "mydataboxorder"| |contact-name| The name associated with the shipping address. | "Gus Poland"|
- |phone| The phone number of the person or business that will receive the order.| "14255551234"
- |location| The nearest Azure region to you that will be shipping your device.| "US West"|
+ |phone| The phone number of the person or business receiving the order.| "14255551234" |
+ |location| The nearest Azure region used to ship the device.| "US West"|
|sku| The specific Data Box device you're ordering. Valid values are: "DataBox", "DataBoxDisk", and "DataBoxHeavy"| "DataBox" | |email-list| The email addresses associated with the order.| "gusp@contoso.com" |
- |street-address1| The street address to where the order will be shipped. | "15700 NE 39th St" |
+ |street-address1| The street address to which the order is shipped. | "15700 NE 39th St" |
|street-address2| The secondary address information, such as apartment number or building number. | "Building 123" |
- |city| The city that the device will be shipped to. | "Redmond" |
- |state-or-province| The state where the device will be shipped.| "WA" |
- |country| The country that the device will be shipped. | "United States" |
+ |city| The city to which the device is shipped. | "Redmond" |
+ |state-or-province| The state to which the device is shipped.| "WA" |
+ |country| The country to which the device is shipped. | "United States" |
|postal-code| The zip code or postal code associated with the shipping address.| "98052"| |company-name| The name of your company you work for.| "Contoso, LTD" | |storage account| The Azure Storage account from where you want to import data.| "mystorageaccount"|
Do the following steps using Azure CLI to order a device:
az databox job create --resource-group <resource-group> --name <order-name> --location <azure-location> --sku <databox-device-type> --contact-name <contact-name> --phone <phone-number> --email-list <email-list> --street-address1 <street-address-1> --street-address2 <street-address-2> --city "contact-city" --state-or-province <state-province> --country <country> --postal-code <postal-code> --company-name <company-name> --storage-account "storage-account" ```
- Here is an example of command usage:
+ The following example illustrates the command's usage:
```azurecli az databox job create --resource-group "myresourcegroup" \
Do the following steps using Azure CLI to order a device:
--storage-account mystorageaccount ```
- Here is the output from running the command:
+ The following sample output confirms successful job creation:
```output Command group 'databox job' is experimental and not covered by customer support. Please use with discretion.
Do the following steps using Azure CLI to order a device:
"deliveryType": "NonScheduled", "details": null, "error": null,
- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.DataBox/jobs/mydataboxtest3",
+ "id": "/subscriptions/[GUID]/resourceGroups/myresourcegroup/providers/Microsoft.DataBox/jobs/mydataboxtest3",
"identity": { "type": "None" },
Do the following steps using Azure CLI to order a device:
```
-3. All Azure CLI commands will use json as the output format by default unless you change it. You can change the output format by using the global parameter `--output <output-format>`. Changing the format to "table" will improve output readability.
+3. Unless the default output is modified, all Azure CLI commands return a json response. You can change the output format by using the global parameter `--output <output-format>`. Changing the format to "table" improves output readability.
- Here is the same command we just ran with a small tweak to change the formatting:
+ The following example contains the same command, but with the modified `--output` parameter value to alter the formatted response:
```azurecli az databox job create --resource-group "myresourcegroup" --name "mydataboxtest4" --location "westus" --sku "DataBox" --contact-name "Gus Poland" --phone "14255551234" --email-list "gusp@contoso.com" --street-address1 "15700 NE 39th St" --street-address2 "Bld 25" --city "Redmond" --state-or-province "WA" --country "US" --postal-code "98052" --company-name "Contoso" --storage-account mystorageaccount --output "table" ```
- Here is the output from running the command:
+ The following sample response illustrates the modified output format:
```output
Do the following steps using Azure CLI to order a device:
Do the following steps using Azure PowerShell to order a device:
-1. Before you create the import order, you need to get your storage account and save the storage account object in a variable.
+1. Before creating the import order, fetch your storage account and save the object in a variable.
```azurepowershell $storAcct = Get-AzStorageAccount -Name "mystorageaccount" -ResourceGroup "myresourcegroup" ```
-2. Write down your settings for your Data Box order. These settings include your personal/business information, subscription name, device information, and shipping information. You'll need to use these settings as parameters when running the PowerShell command to create the Data Box order. The following table shows the parameter settings used for [New-AzDataBoxJob](/powershell/module/az.databox/New-AzDataBoxJob).
+2. Write down your settings for your Data Box order. These settings include your personal/business information, subscription name, device information, and shipping information. These settings are used as parameters when running the PowerShell cmdlet to create the Data Box order. The following table shows the parameter settings used for [New-AzDataBoxJob](/powershell/module/az.databox/New-AzDataBoxJob).
| Setting (parameter) | Description | Sample value | |||| |ResourceGroupName [Required]| Use an existing resource group. A resource group is a logical container for the resources that can be managed or deployed together. | "myresourcegroup"| |Name [Required]| The name of the order you're creating. | "mydataboxorder"| |ContactName [Required]| The name associated with the shipping address. | "Gus Poland"|
- |PhoneNumber [Required]| The phone number of the person or business that will receive the order.| "14255551234"
- |Location [Required]| The nearest Azure region to you that will be shipping your device.| "WestUS"|
+ |PhoneNumber [Required]| The phone number of the person or business receiving the order.| "14255551234"
+ |Location [Required]| The nearest Azure region used to ship your device.| "WestUS"|
|DataBoxType [Required]| The specific Data Box device you're ordering. Valid values are: "DataBox", "DataBoxDisk", and "DataBoxHeavy"| "DataBox" | |EmailId [Required]| The email addresses associated with the order.| "gusp@contoso.com" |
- |StreetAddress1 [Required]| The street address to where the order will be shipped. | "15700 NE 39th St" |
+ |StreetAddress1 [Required]| The street address to where the order is shipped. | "15700 NE 39th St" |
|StreetAddress2| The secondary address information, such as apartment number or building number. | "Building 123" | |StreetAddress3| The tertiary address information. | |
- |City [Required]| The city that the device will be shipped to. | "Redmond" |
- |StateOrProvinceCode [Required]| The state where the device will be shipped.| "WA" |
- |CountryCode [Required]| The country that the device will be shipped. | "United States" |
+ |City [Required]| The city to which the device is shipped. | "Redmond" |
+ |StateOrProvinceCode [Required]| The state to which the device is shipped.| "WA" |
+ |CountryCode [Required]| The country to which the device is shipped. | "United States" |
|PostalCode [Required]| The zip code or postal code associated with the shipping address.| "98052"| |CompanyName| The name of your company you work for.| "Contoso, LTD" | |StorageAccountResourceId [Required]| The Azure Storage account ID from where you want to import data.| &lt;AzstorageAccount&gt;.id |
-3. In your command-prompt of choice or terminal, use the [New-AzDataBoxJob](/powershell/module/az.databox/New-AzDataBoxJob) to create your Azure Data Box order.
+3. Use the [New-AzDataBoxJob](/powershell/module/az.databox/New-AzDataBoxJob) cmdlet to create your Azure Data Box order as shown in the following example.
```azurepowershell PS> $storAcct = Get-AzureStorageAccount -StorageAccountName "mystorageaccount"
Do the following steps using Azure PowerShell to order a device:
-Name "myDataBoxOrderPSTest" ```
- Here is the output from running the command:
+ The following sample output confirms job creation:
```output jobResource.Name jobResource.Sku.Name jobResource.Status jobResource.StartTime jobResource.Location ResourceGroup
Do the following steps using Azure PowerShell to order a device:
After you place the order, you can track the status of the order from Azure portal. Go to your Data Box order and then go to **Overview** to view the status. The portal shows the order in **Ordered** state.
-If the device isn't available, you receive a notification. If the device is available, Microsoft identifies the device for shipment and prepares the shipment. During device preparation, following actions occur:
+If the device isn't available, you receive a notification. If the device is available, Microsoft identifies the device and prepares it for shipment. The following actions occur during device preparation:
* SMB shares are created for each storage account associated with the device. * For each share, access credentials such as username and password are generated.
-* Device password that helps unlock the device is also generated.
-* The Data Box is locked to prevent unauthorized access to the device at any point.
+* The device password is generated. This password is used to unlock the device.
+* The device is locked to prevent unauthorized access at any point.
-When the device preparation is complete, the portal shows the order in **Processed** state.
+When the device preparation is complete, the portal shows the order in a **Processed** state.
![A Data Box order that's been processed](media/data-box-overview/data-box-order-status-processed.png)
-Microsoft then prepares and dispatches your device via a regional carrier. You receive a tracking number once the device is shipped. The portal shows the order in **Dispatched** state.
+Microsoft then prepares and dispatches your device via a regional carrier. You receive a tracking number after the device is shipped. The portal shows the order in **Dispatched** state.
![A Data Box order that's been dispatched](media/data-box-overview/data-box-order-status-dispatched.png)
To get tracking information about a single, existing Azure Data Box order, run [
|query| The JMESPath query string. For more information, see [JMESPath](http://jmespath.org/). | --query &lt;string&gt;| |verbose| Include verbose logging. | --verbose |
- Here is an example of the command with output format set to "table":
+ The following example contains the same command, but with the `output` parameter value set to "table":
```azurecli PS C:\WINDOWS\system32> az databox job show --resource-group "myresourcegroup" \
To get tracking information about a single, existing Azure Data Box order, run [
--output "table" ```
- Here is the output from running the command:
+ The following sample response shows the modified output format:
```output Command group 'databox job' is experimental and not covered by customer support. Please use with discretion.
To get tracking information about a single, existing Azure Data Box order, run [
``` > [!NOTE]
-> List order can be supported at subscription level and that makes resource group an optional parameter (rather than a required parameter).
+> List order can be supported at subscription level, making the `resource group` parameter optional rather than required.
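The `--query` parameter listed in the preceding table accepts a JMESPath expression, which is handy when you only need a few fields from the response. The following sketch is illustrative only: the order name is a placeholder, and the exact property names can vary between Azure CLI versions.

```azurecli
az databox job show --resource-group "myresourcegroup" \
    --name "mydataboxorder" \
    --query "{name:name, status:status}" \
    --output "table"
```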
### List all orders
-If you have ordered multiple devices, you can run [`az databox job list`](/cli/azure/databox/job#az-databox-job-list) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
+When ordering multiple devices, you can run [`az databox job list`](/cli/azure/databox/job#az-databox-job-list) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
The command also displays time stamps of each order. ```azurecli
The following table shows the parameter information for `az databox job list`:
|query| The JMESPath query string. For more information, see [JMESPath](http://jmespath.org/). | --query &lt;string&gt;| |verbose| Include verbose logging. | --verbose |
- Here is an example of the command with output format set to "table":
+ The following example shows the command with the output format specified as "table":
```azurecli PS C:\WINDOWS\system32> az databox job list --resource-group "GDPTest" --output "table" ```
- Here is the output from running the command:
+ The following sample response displays the output with modified formatting:
```output Command group 'databox job' is experimental and not covered by customer support. Please use with discretion.
To get tracking information about a single, existing Azure Data Box order, run [
|Name [Required]| The name of the order to get information for. | "mydataboxorder"| |ResourceId| The ID of the resource associated with the order. | |
- Here is an example of the command with output:
+ The following example can be used to retrieve details about a specific order:
```azurepowershell
- PS C:\WINDOWS\system32> Get-AzDataBoxJob -ResourceGroupName "myResourceGroup" -Name "myDataBoxOrderPSTest"
+ Get-AzDataBoxJob -ResourceGroupName "myResourceGroup" -Name "myDataBoxOrderPSTest"
```
- Here is the output from running the command:
+ The following example output indicates that the command was completed successfully:
```output jobResource.Name jobResource.Sku.Name jobResource.Status jobResource.StartTime jobResource.Location ResourceGroup
To get tracking information about a single, existing Azure Data Box order, run [
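If you need more detail than the default table output provides, you can pipe the result to standard PowerShell cmdlets. The following minimal sketch uses the same placeholder names as the preceding example; the exact property layout depends on the Az.DataBox module version.

```azurepowershell
# Show every property returned for the order, including shipping and tracking details.
Get-AzDataBoxJob -ResourceGroupName "myResourceGroup" -Name "myDataBoxOrderPSTest" | Format-List *
```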
### List all orders
-If you have ordered multiple devices, you can run [`Get-AzDataBoxJob`](/powershell/module/az.databox/Get-AzDataBoxJob) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
-The command also displays time stamps of each order.
-
-```azurepowershell
-Get-AzDataBoxJob -ResourceGroupName <String>
-```
+To view all your Azure Data Box orders, run the [`Get-AzDataBoxJob`](/powershell/module/az.databox/Get-AzDataBoxJob) cmdlet. The cmdlet lists all orders that belong to a specific resource group. The resulting output also contains additional data such as order name, shipping status, Azure region, delivery type, order status, and the time stamp associated with each order. Canceled orders are also included in the list.
-Here is an example of the command:
+The following example can be used to retrieve details about all orders associated with a specific Azure resource group:
```azurepowershell
-PS C:\WINDOWS\system32> Get-AzDataBoxJob -ResourceGroupName "myResourceGroup"
+Get-AzDataBoxJob -ResourceGroupName <String>
```
-Here is the output from running the command:
+The following example output indicates that the command was completed successfully:
```output jobResource.Name jobResource.Sku.Name jobResource.Status jobResource.StartTime jobResource.Location ResourceGroup
PS C:\WINDOWS\system32>
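To narrow the list to orders in a particular state, you can filter the results with standard PowerShell cmdlets. The following sketch assumes the returned objects expose a `Status` property; the exact property layout can differ between Az.DataBox module versions.

```azurepowershell
# List only the orders in this resource group that are still in the "Ordered" state.
Get-AzDataBoxJob -ResourceGroupName "myResourceGroup" | Where-Object { $_.Status -eq "Ordered" }
```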
## Cancel the order
-# [Portal](#tab/portal)
-
-To cancel this order, in the Azure portal, go to **Overview** and select **Cancel** from the command bar.
- After placing an order, you can cancel it at any point before the order status is marked processed.
-To delete a canceled order, go to **Overview** and select **Delete** from the command bar.
+# [Portal](#tab/portal)
+
+To cancel or delete an order using the Azure portal, go to **Overview**. To cancel the order, select **Cancel** from the command bar. To delete a canceled order, select **Delete** from the command bar.
# [Azure CLI](#tab/azure-cli) ### Cancel an order
-To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/databox/job#az-databox-job-cancel). You're required to specify your reason for canceling the order.
-
- ```azurecli
- az databox job cancel --resource-group <resource-group> --name <order-name> --reason <cancel-description>
- ```
+Use the [`az databox job cancel`](/cli/azure/databox/job#az-databox-job-cancel) command to cancel a Data Box order. You're required to specify your reason for canceling the order.
- The following table shows the parameter information for `az databox job cancel`:
+ The following table provides parameter information for the `az databox job cancel` command:
| Parameter | Description | Sample value | |||| |resource-group [Required]| The name of the resource group associated with the order to be deleted. A resource group is a logical container for the resources that can be managed or deployed together. | "myresourcegroup"| |name [Required]| The name of the order to be deleted. | "mydataboxorder"| |reason [Required]| The reason for canceling the order. | "I entered erroneous information and needed to cancel the order." |
- |yes| Do not prompt for confirmation. | --yes (-y)|
+ |yes| Don't prompt for confirmation. | --yes (-y)|
|debug| Include debugging information to verbose logging | --debug | |help| Display help information for this command. | --help -h | |only-show-errors| Only show errors, suppressing warnings. | --only-show-errors |
To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/data
|query| The JMESPath query string. For more information, see [JMESPath](http://jmespath.org/). | --query &lt;string&gt;| |verbose| Include verbose logging. | --verbose |
- Here is an example of the command with output:
+ The following sample command can be used to cancel a specific Data Box order:
```azurecli
- PS C:\Windows> az databox job cancel --resource-group "myresourcegroup" --name "mydataboxtest3" --reason "Our budget was slashed due to **redacted** and we can no longer afford this device."
+ az databox job cancel --resource-group "myresourcegroup" --name "mydataboxtest3" --reason "Our migration plan was modified and we are ordering a device using a different cost center."
```
- Here is the output from running the command:
+ The following example output indicates that the command was completed successfully:
```output Command group 'databox job' is experimental and not covered by customer support. Please use with discretion. Are you sure you want to perform this operation? (y/n): y
- PS C:\Windows>
``` ### Delete an order
-After you cancel an Azure Data Box order, you can run [`az databox job delete`](/cli/azure/databox/job#az-databox-job-delete) to delete the order.
-
- ```azurecli
- az databox job delete --name [-n] <order-name> --resource-group <resource-group> [--yes] [--verbose]
- ```
+After you cancel an Azure Data Box order, use the [`az databox job delete`](/cli/azure/databox/job#az-databox-job-delete) command to delete the order.
The following table shows the parameter information for `az databox job delete`:
After you cancel an Azure Data Box order, you can run [`az databox job delete`](
|resource-group [Required]| The name of the resource group associated with the order to be deleted. A resource group is a logical container for the resources that can be managed or deployed together. | "myresourcegroup"| |name [Required]| The name of the order to be deleted. | "mydataboxorder"| |subscription| The name or ID (GUID) of your Azure subscription. | "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" |
- |yes| Do not prompt for confirmation. | --yes (-y)|
+ |yes| Don't prompt for confirmation. | --yes (-y)|
|debug| Include debugging information to verbose logging | --debug | |help| Display help information for this command. | --help -h | |only-show-errors| Only show errors, suppressing warnings. | --only-show-errors |
After you cancel an Azure Data Box order, you can run [`az databox job delete`](
|query| The JMESPath query string. For more information, see [JMESPath](http://jmespath.org/). | --query &lt;string&gt;| |verbose| Include verbose logging. | --verbose |
-Here is an example of the command with output:
+The following example can be used to delete a specific Data Box order after it has been canceled:
```azurecli
- PS C:\Windows> az databox job delete --resource-group "myresourcegroup" --name "mydataboxtest3" --yes --verbose
+ az databox job delete --resource-group "myresourcegroup" --name "mydataboxtest3" --yes --verbose
```
- Here is the output from running the command:
+ The following example output indicates that the command was completed successfully:
```output Command group 'databox job' is experimental and not covered by customer support. Please use with discretion. command ran in 1.142 seconds.
- PS C:\Windows>
``` # [PowerShell](#tab/azure-ps) ### Cancel an order
-To cancel an Azure Data Box order, run [Stop-AzDataBoxJob](/powershell/module/az.databox/stop-azdataboxjob). You're required to specify your reason for canceling the order.
-
-```azurepowershell
-Stop-AzDataBoxJob -ResourceGroup <String> -Name <String> -Reason <String>
-```
+You can cancel an Azure Data Box order using the [Stop-AzDataBoxJob](/powershell/module/az.databox/stop-azdataboxjob) cmdlet. You're required to specify your reason for canceling the order.
The following table shows the parameter information for `Stop-AzDataBoxJob`:
The following table shows the parameter information for `Stop-AzDataBoxJob`:
|Reason [Required]| The reason for canceling the order. | "I entered erroneous information and needed to cancel the order." | |Force | Forces the cmdlet to run without user confirmation. | -Force |
-Here is an example of the command with output:
+The following example can be used to cancel a specific Data Box order:
```azurepowershell
-PS C:\PowerShell\Modules> Stop-AzDataBoxJob -ResourceGroupName myResourceGroup \
- -Name "myDataBoxOrderPSTest" \
- -Reason "I entered erroneous information and had to cancel."
+Stop-AzDataBoxJob -ResourceGroupName myResourceGroup \
+ -Name "myDataBoxOrderPSTest" \
+ -Reason "I entered erroneous information and need to cancel and re-order."
```
-Here is the output from running the command:
+ The following example output indicates that the command was completed successfully:
```output Confirm "Cancelling Databox Job "myDataBoxOrderPSTest [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
-PS C:\WINDOWS\system32>
``` ### Delete an order
-If you have canceled an Azure Data Box order, you can run [`Remove-AzDataBoxJob`](/powershell/module/az.databox/remove-azdataboxjob) to delete the order.
-
-```azurepowershell
-Remove-AzDataBoxJob -Name <String> -ResourceGroup <String>
-```
+After canceling an Azure Data Box order, you can delete it using the [`Remove-AzDataBoxJob`](/powershell/module/az.databox/remove-azdataboxjob) cmdlet.
-The following table shows the parameter information for `Remove-AzDataBoxJob`:
+The following table shows parameter information for `Remove-AzDataBoxJob`:
| Parameter | Description | Sample value | ||||
The following table shows the parameter information for `Remove-AzDataBoxJob`:
|Name [Required]| The name of the order to be deleted. | "mydataboxorder"| |Force | Forces the cmdlet to run without user confirmation. | -Force |
-Here is an example of the command with output:
+The following example can be used to delete a specific Data Box order after it has been canceled:
```azurepowershell
-PS C:\Windows> Remove-AzDataBoxJob -ResourceGroup "myresourcegroup" \
- -Name "mydataboxtest3"
+Remove-AzDataBoxJob -ResourceGroup "myresourcegroup" \
+ -Name "mydataboxtest3"
```
-Here is the output from running the command:
+The following example output indicates that the command was completed successfully:
```output Confirm "Removing Databox Job "mydataboxtest3 [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
-PS C:\Windows>
``` ## Next steps
-In this tutorial, you learned about Azure Data Box articles such as:
+In this tutorial, you learned about Azure Data Box topics such as:
> [!div class="checklist"] > > * Prerequisites to deploy Data Box
-> * Order Data Box
-> * Track the order
-> * Cancel the order
+> * Ordering Data Box
+> * Tracking the Data Box order
+> * Canceling the Data Box order
Advance to the next tutorial to learn how to set up your Data Box.
databox Data Box Disk Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-copy-data.md
Previously updated : 11/18/2022 Last updated : 03/26/2024
# Doc scores: # 11/18/22: 75 (2456/62) # 09/01/23: 100 (2159/0)
+-->
::: zone target="docs"> # Tutorial: Copy data to Azure Data Box Disk and verify
-<!--
::: zone-end ::: zone target="chromeless"
After the disks are connected and unlocked, you can copy data from your source d
::: zone-end ::: zone target="docs">+
+> [!IMPORTANT]
+> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
+>
+> For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section. Follow the steps contained within the [Copy data to disks](#copy-data-to-disks) section to copy your data to the appropriate access tier.
+>
+> The information contained within this section applies to orders placed after April 1, 2024.
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
This tutorial describes how to copy data from your host computer and generate checksums to verify data integrity. In this tutorial, you learn how to: > [!div class="checklist"]
+> * Determine appropriate access tiers for block blobs
> * Copy data to Data Box Disk > * Verify data
Before you begin, make sure that:
- You have completed the [Tutorial: Install and configure your Azure Data Box Disk](data-box-disk-deploy-set-up.md). - Your disks are unlocked and connected to a client computer. - The client computer used to copy data to the disks is running a [Supported operating system](data-box-disk-system-requirements.md#supported-operating-systems-for-clients).-- The intended storage type for your data matches [Supported storage types](data-box-disk-system-requirements.md#supported-storage-types-for-upload).
+- The intended storage type for your data matches the [Supported storage types](data-box-disk-system-requirements.md#supported-storage-types-for-upload).
- You've reviewed [Managed disk limits in Azure object size limits](data-box-disk-limits.md#azure-object-size-limits).
+## Determine appropriate access tiers for block blobs
+
+> [!IMPORTANT]
+> The information contained within this section applies to orders placed after April 1, 2024.
+
+Azure Storage allows you to store block blob data in multiple access tiers within the same storage account. This ability allows data to be organized and stored more efficiently based on how often it's accessed. The following table contains information and recommendations about Azure Storage access tiers.
+
+| Tier | Recommendation | Best practice |
+|-|-||
+| **Hot** | Useful for online data accessed or modified frequently. This tier has the highest storage costs, but the lowest access costs. | Data in this tier should be in regular and active use. |
+| **Cool** | Useful for online data accessed or modified infrequently. This tier has lower storage costs and higher access costs than the hot tier. | Data in this tier should be stored for at least 30 days. |
+| **Cold** | Useful for online data accessed or modified rarely but still requiring fast retrieval. This tier has lower storage costs and higher access costs than the cool tier.| Data in this tier should be stored for a minimum of 90 days. |
+| **Archive** | Useful for offline data rarely accessed and having lower latency requirements. | Data in this tier should be stored for a minimum of 180 days. Data removed from the archive tier within 180 days is subject to an early deletion charge. |
+
+For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
+
+You can transfer your block blob data to the appropriate access tier by copying it to the corresponding folder within Data Box Disk. This process is discussed in greater detail within the [Copy data to disks](#copy-data-to-disks) section.
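As a quick illustration of that process, the following minimal sketch copies a local folder to the *Cool* tier folder on the disk. The drive letter and local path are placeholders; the subfolder created under the tier folder becomes a container in the destination storage account.

```powershell
# Assumes the unlocked Data Box Disk is mounted as drive E: and local data lives under D:\export\logs.
New-Item -ItemType Directory -Path "E:\BlockBlob\Cool\migration-logs" -Force
Copy-Item -Path "D:\export\logs\*" -Destination "E:\BlockBlob\Cool\migration-logs" -Recurse
```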
+ ## Copy data to disks Review the following considerations before you copy the data to the disks: -- It is your responsibility to ensure that you copy your local data to the folders that correspond to the appropriate data format. For instance, copy block blob data to the *BlockBlob* folder. Block blobs being archived should be copied to the *BlockBlob_Archive* folder. If the local data format doesn't match the appropriate folder for the chosen storage type, the data upload to Azure fails in a later step.
+- It is your responsibility to copy local data to the share which corresponds to the appropriate data format. For instance, copy block blob data to the *BlockBlob* share. Copy VHDs to the *PageBlob* share. If the local data format doesn't match the appropriate folder for the chosen storage type, the data upload to Azure fails in a later step.
+- You can't copy data directly to a share's *root* folder. Instead, create a folder within the appropriate share and copy your data into it.
+ - Folders located at the *PageBlob* share's *root* correspond to containers within your storage account. A new container will be created for any folder whose name does not match an existing container within your storage account.
+ - Folders located at the *AzFile* share's *root* correspond to Azure file shares. A new file share will be created for any folder whose name does not match an existing file share within your storage account.
+  - The *BlockBlob* share's *root* level contains one folder corresponding to each access tier. When copying data to the *BlockBlob* share, create a subfolder within the top-level folder corresponding to the desired access tier. As with the *PageBlob* share, a new container will be created for any folder whose name doesn't match an existing container. Data within the subfolder is uploaded to the access tier corresponding to its top-level parent folder.
+
+  A container will also be created for any folder residing at the *BlockBlob* share's *root*, though the data it contains will be copied to the storage account's default access tier. To ensure that your data is copied to the desired access tier, don't create folders at the *root* level.
+
+ > [!IMPORTANT]
+ > Data uploaded to the archive tier remains offline and needs to be rehydrated before reading or modifying. Data copied to the archive tier must remain for at least 180 days or be subject to an early deletion charge. Archive tier is not supported for ZRS, GZRS, or RA-GZRS accounts.
+
- While copying data, ensure that the data size conforms to the size limits described in the [Azure storage and Data Box Disk limits](data-box-disk-limits.md) article. - To preserve metadata such as ACLs, timestamps, and file attributes when transferring data to Azure Files, follow the guidance within the [Preserving file ACLs, attributes, and timestamps with Azure Data Box Disk](data-box-disk-file-acls-preservation.md) article. - If you use both Data Box Disk and other applications to upload data simultaneously, you may experience upload job failures and data corruption. > [!IMPORTANT]
- > Data uploaded to the archive tier remains offline and needs to be rehydrated before reading or modifying. Data copied to the archive tier must remain for at least 180 days or be subject to an early deletion charge. Archive tier is not supported for ZRS, GZRS, or RA-GZRS accounts.
+ > If you specified managed disks as one of the storage destinations during order creation, the following section is applicable.
> [!IMPORTANT] > If you specified managed disks as one of the storage destinations during order creation, the following section is applicable.
Perform the following steps to connect and copy data from your computer to the D
|Selected storage destination |Storage account type|Staging storage account type |Folders and subfolders | ||--|--||
- |Storage account |GPv1 or GPv2 | NA | BlockBlob<br>BlockBlob_Archive<br>PageBlob<br>AzureFile |
- |Storage account |Blob storage account| NA | BlockBlob<br>BlockBlob_Archive |
+ |Storage account |GPv1 or GPv2 | NA | BlockBlob<ul><li>Archive</li><li>Cold</li><li>Cool</li><li>Hot</li></ul>PageBlob<br>AzureFile |
+ |Storage account |Blob storage account| NA | BlockBlob<ul><li>Archive</li><li>Cold</li><li>Cool</li><li>Hot</li></ul> |
|Managed disks |NA | GPv1 or GPv2 | ManagedDisk<ul><li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul> |
- |Storage account<br>Managed disks |GPv1 or GPv2 | GPv1 or GPv2 | BlockBlob<br/>BlockBlob_Archive<br/>PageBlob<br/>AzureFile<br/>ManagedDisk<ul><li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul>|
- |Storage account <br> Managed disks |Blob storage account | GPv1 or GPv2 |BlockBlob<br>BlockBlob_Archive<br>ManagedDisk<ul> <li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul> |
+ |Storage account<br>Managed disks |GPv1 or GPv2 | GPv1 or GPv2 | BlockBlob<ul><li>Archive</li><li>Cold</li><li>Cool</li><li>Hot</li></ul>PageBlob<br/>AzureFile<br/>ManagedDisk<ul><li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul>|
+ |Storage account <br> Managed disks |Blob storage account | GPv1 or GPv2 |BlockBlob<ul><li>Archive</li><li>Cold</li><li>Cool</li><li>Hot</li></ul>ManagedDisk<ul> <li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul> |
The following screenshot shows an order where a GPv2 storage account and archive tier were specified: :::image type="content" source="media/data-box-disk-deploy-copy-data/content-sml.png" alt-text="Screenshot of the contents of the disk drive." lightbox="media/data-box-disk-deploy-copy-data/content.png":::
-1. Copy data to be imported as block blobs into the *BlockBlob* folder. Copy data to be stored as block blobs with the archive tier into the *BlockBlob_Archive* folder. Similarly, copy VHD or VHDX data to the *PageBlob* folder, and file share data into *AzureFile* folder.
+1. Copy VHD or VHDX data to the *PageBlob* folder. All files copied to the *PageBlob* folder are copied into a default `$root` container within the Azure Storage account. A container is created in the Azure storage account for each subfolder within the *PageBlob* folder.
- A container is created in the Azure storage account for each subfolder within the *BlockBlob* and *PageBlob* folders. All files copied to the *BlockBlob* and *PageBlob* folders are copied into a default `$root` container within the Azure Storage account. Any files in the `$root` container are always uploaded as block blobs.
+ Copy data to be placed in Azure file shares to a subfolder within the *AzureFile* folder. All files copied to the *AzureFile* folder are copied as files to a default container of type `databox-format-[GUID]`, for example, `databox-azurefile-7ee19cfb3304122d940461783e97bf7b4290a1d7`.
- Copy data to be placed in Azure file shares to a subfolder within the *AzureFile* folder. All files copied to the *AzureFile* folder are copied as files to a default container of type `databox-format-[GUID]`, for example, `databox-azurefile-7ee19cfb3304122d940461783e97bf7b4290a1d7`.
+   You can't copy files directly to the *BlockBlob* folder's *root*. Within the root folder, you'll find a sub-folder corresponding to each of the available access tiers. To copy your blob data, first select the folder corresponding to the desired access tier. Next, create a sub-folder within that tier's folder to store your data. Finally, copy your data to the newly created sub-folder. The new sub-folder represents the container created within the storage account during ingestion, and your data is uploaded to this container as blobs. A blob storage container is also created for each sub-folder located directly at the *BlockBlob* folder's *root*; data within these folders is uploaded according to the storage account's default access tier.
- Before you begin to copy data, you need to move any files and folders that exist in the root directory to a different folder.
+ Before you begin to copy data, you need to move any files and folders that exist in the root directory to a different folder.
> [!IMPORTANT] > All the containers, blobs, and filenames should conform to [Azure naming conventions](data-box-disk-limits.md#azure-block-blob-page-blob-and-file-naming-conventions). If these rules are not followed, the data upload to Azure will fail.
Advance to the next tutorial to learn how to return the Data Box Disk and verify
> [!div class="nextstepaction"] > [Ship your Azure Data Box back to Microsoft](./data-box-disk-deploy-picked-up.md)
-<!--
::: zone-end>
-<!--
+ ::: zone target="chromeless" ### Copy data to disks
Take the following steps to verify your data.
For more information on data validation, see [Validate data](#validate-data). If you experience errors during validation, see [troubleshoot validation errors](data-box-disk-troubleshoot.md). ::: zone-end>
databox Data Box Disk Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-limits.md
Previously updated : 12/29/2022 Last updated : 03/10/2024 # Azure Data Box Disk limits
Here are the sizes of the Azure objects that can be written. Make sure that all
|-|--| | Block blob | 7 TiB | | Page blob | 4 TiB <br> Every file uploaded in page blob format must be 512 bytes aligned (an integral multiple), else the upload fails. <br> VHD and VHDX are 512 bytes aligned. |
-| Azure Files | 1 TiB |
+| Azure Files | 4 TiB |
| Managed disks | 4 TiB <br> For more information on size and limits, see: <li>[Scalability targets of Standard SSDs](../virtual-machines/disks-types.md#standard-ssds)</li><li>[Scalability targets of Premium SSDs](../virtual-machines/disks-types.md#premium-ssds)</li><li>[Scalability targets of Standard HDDs](../virtual-machines/disks-types.md#standard-hdds)</li><li>[Pricing and billing of managed disks](../virtual-machines/disks-types.md#billing)</li> ## Azure block blob, page blob, and file naming conventions
databox Data Box Disk Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-quickstart-portal.md
Previously updated : 11/04/2020 Last updated : 03/26/2024 #Customer intent: As an IT admin, I need to quickly deploy Data Box Disk so as to import data into Azure.
# Quickstart: Deploy Azure Data Box Disk using the Azure portal
-This quickstart describes how to deploy the Azure Data Box Disk using the Azure portal. The steps include how to quickly create an order, receive disks, unpack, connect, and copy data to disks so that it uploads to Azure.
+This quickstart describes the process of deploying Azure Data Box Disk using the Azure portal. Follow the steps in this article to create an order; receive, unpack, and connect disks; and copy data to the device for upload to Azure.
-For detailed step-by-step deployment and tracking instructions, go to [Tutorial: Order Azure Data Box Disk](data-box-disk-deploy-ordered.md).
+For detailed step-by-step deployment and tracking instructions, refer to the [Order Azure Data Box Disk](data-box-disk-deploy-ordered.md) tutorial.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F&preserve-view=true).
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F&preserve-view=true).
::: zone-end ::: zone target="chromeless"
-This guide walks you through the steps of using the Azure Data Box Disk in the Azure portal. This guide helps answer the following questions.
+This guide describes the process of deploying Azure Data Box Disk using the Azure portal, and helps answer the following questions.
::: zone-end
This guide walks you through the steps of using the Azure Data Box Disk in the A
Before you begin: -- Make sure that your subscription is enabled for Azure Data Box service. To enable your subscription for this service, [Sign up for the service](https://aka.ms/azuredataboxfromdiskdocs).
+- Ensure that your subscription is enabled for the Azure Data Box service. If necessary, [sign up for the service](https://aka.ms/azuredataboxfromdiskdocs) to enable it on your subscription.
## Sign in to Azure
Sign in to the Azure portal at [https://aka.ms/azuredataboxfromdiskdocs](https:/
### [Portal](#tab/azure-portal)
-This step takes roughly 5 minutes.
+This step takes approximately 5 minutes.
-1. Create a new Azure Data Box resource in the Azure portal.
+1. Create a new **Azure Data Box** resource in the Azure portal.
2. Select a subscription enabled for this service and choose transfer type as **Import**. Provide the **Source country** where the data resides and **Azure destination region** for the data transfer. 3. Select **Data Box Disk**. The maximum solution capacity is 35 TB and you can create multiple disk orders for larger data sizes. 4. Enter the order details and shipping information. If the service is available in your region, provide notification email addresses, review the summary, and then create the order.
Once the order is created, the device is prepared for shipment.
This step takes roughly 5 minutes.
-The Data Box Disk are mailed in a UPS Express Box. Open the box and check that the box has:
+Data Box Disks are mailed in a UPS Express Box. Open the box and check that the box has:
- 1 to 5 bubble-wrapped USB disks. - A connecting cable per disk.
This step takes roughly 5 minutes.
The time to complete this operation depends upon your data size.
-1. The drive contains *PageBlob*, *BlockBlob*, *AzureFile*, *ManagedDisk*, and *DataBoxDiskImport* folders. Drag and drop to copy the data that needs to be imported as block blobs in to *BlockBlob* folder. Similarly, drag and drop data such as VHD/VHDX to *PageBlob* folder, and appropriate data to *AzureFile*. Copy the VHDs that you want to upload as managed disks to a folder under *ManagedDisk*.
+1. The drive contains *PageBlob*, *BlockBlob*, *AzureFile*, *ManagedDisk*, and *DataBoxDiskImport* folders. Within the *BlockBlob* root folder, you'll find a sub-folder corresponding to each of the available access tiers.
+
+   Drag and drop data such as VHD/VHDX to the *PageBlob* folder, and appropriate data to *AzureFile*. Copy any VHDs that you want to upload as managed disks to a folder under *ManagedDisk*.
+
+ To copy your blob data, you must first select the sub-folder within the *BlockBlob* share which corresponds to one of the access tiers. Next, create a sub-folder within that tier's folder to store your data. Finally, copy your data to the newly created sub-folder. Your new sub-folder represents a container created within the storage account during ingestion. Your data is uploaded to this container as blobs.
- A container is created in the Azure storage account for each sub-folder under *BlockBlob* and *PageBlob* folder. A file share is created for a sub-folder under *AzureFile*.
+   To copy files to the *AzureFile* share, first create a folder to contain your files, then copy your data to the newly created folder. A file share is created for each sub-folder under *AzureFile*. Any files copied directly to the *AzureFile* folder fail to upload as Azure Files and are instead uploaded as block blobs with the storage account's default access tier.
- All files under *BlockBlob* and *PageBlob* folders are copied into a default container `$root` under the Azure Storage account. Copy files into a folder within *AzureFile*. Any files copied directly to the *AzureFile* folder fail and are uploaded as block blobs.
+   Files copied directly to the *PageBlob* folder are copied into a default `$root` container within the Azure Storage account. A container is also created in the Azure storage account for each sub-folder within the *PageBlob* folder.
> [!NOTE] > - All the containers, blobs, and files should conform to [Azure naming conventions](data-box-disk-limits.md#azure-block-blob-page-blob-and-file-naming-conventions). If these rules are not followed, the data upload to Azure will fail.
- > - Ensure that files do not exceed ~4.75 TiB for block blobs, ~8 TiB for page blobs, and ~1 TiB for Azure Files.
+ > - Ensure that files do not exceed ~4.75 TiB for block blobs, ~8 TiB for page blobs, and ~4 TiB for Azure Files.
2. **(Optional but recommended)** After the copy is complete, we strongly recommend that at a minimum you run the `DataBoxDiskValidation.cmd` provided in the *DataBoxDiskImport* folder and select option 1 to validate the files. We also recommend that time permitting, you use option 2 to also generate checksums for validation (may take time depending upon the data size). These steps minimize the chances of any failures when uploading the data to Azure. 3. Safely remove the drive.
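If you prefer to run the validation step from a console instead of File Explorer, the following minimal sketch shows one way to launch the tool. The drive letter is a placeholder, and the tool itself is interactive; choose option 1 or option 2 when prompted, as described in the preceding step.

```powershell
# Assumes the Data Box Disk is mounted as drive E:.
Set-Location "E:\DataBoxDiskImport"
.\DataBoxDiskValidation.cmd
```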
databox Data Box Heavy Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-deploy-copy-data.md
Previously updated : 08/29/2019 Last updated : 03/25/2024 #Customer intent: As an IT admin, I need to be able to copy data to Data Box Heavy to upload on-premises data from my server onto Azure.
::: zone target = "docs"
+> [!IMPORTANT]
+> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
+>
+> The information contained within this section applies to orders placed after April 1, 2024.
+ This tutorial describes how to connect to and copy data from your host computer using the local web UI. In this tutorial, you learn how to:
In this tutorial, you learn how to:
You can copy data from your source server to your Data Box via SMB, NFS, REST, data copy service or to managed disks.
-In each case, make sure that the share and folder names, and the data size follow guidelines described in the [Azure Storage and Data Box Heavy service limits](data-box-heavy-limits.md).
+In each case, make sure that the share names, folder names, and data size follow guidelines described in the [Azure Storage and Data Box Heavy service limits](data-box-heavy-limits.md).
::: zone-end
In each case, make sure that the share and folder names, and the data size follo
Before you begin, make sure that:
-1. You've completed the [Tutorial: Set up Azure Data Box Heavy](data-box-deploy-set-up.md).
-2. You've received your Data Box Heavy and the order status in the portal is **Delivered**.
-3. You have a host computer that has the data that you want to copy over to Data Box Heavy. Your host computer must
+1. You complete the [Tutorial: Set up Azure Data Box Heavy](data-box-deploy-set-up.md).
+2. You receive your Data Box Heavy, and the order status in the portal is **Delivered**.
+3. You have a host computer that has the data that you want to copy over to Data Box Heavy. Your host computer must:
- Run a [Supported operating system](data-box-system-requirements.md). - Be connected to a high-speed network. For fastest copy speeds, two 40-GbE connections (one per node) can be utilized in parallel. If you do not have a 40-GbE connection available, we recommend that you have at least two 10-GbE connections (one per node).
-
## Connect to Data Box Heavy shares Based on the storage account selected, Data Box Heavy creates up to:+ - Three shares for each associated storage account for GPv1 and GPv2. - One share for premium storage.-- One share for blob storage account.
+- One share for a blob storage account, containing one folder for each of the four access tiers.
+
+The following table identifies the names of the Data Box shares to which you can connect, and the type of data uploaded to your target storage account. It also identifies the hierarchy of shares and directories into which you copy your source data.
+
+| Storage type | Share name | First-level entity | Second-level entity | Third-level entity |
+|--|-|||--|
+| Block blob | \<storageAccountName\>_BlockBlob | \<accessTier\> | \<containerName\> | \<blockBlob\> |
+| Page blob | \<storageAccountName\>_PageBlob | \<containerName\> | \<pageBlob\> | |
+| File storage | \<storageAccountName\>_AzFile | \<fileShareName\> | \<file\> | |
-These shares are created on both the nodes of the device.
+You can't copy files directly to the *root* folder of any Data Box share. Instead, create folders within the Data Box share depending on your use case.
-Under block blob and page blob shares:
-- First-level entities are containers.-- Second-level entities are blobs.
+Block blobs support the assignment of access tiers at the file level. When copying files to the block blob share, the recommended best practice is to add new subfolders within the folder for the appropriate access tier. After creating new subfolders, continue adding files to each subfolder as appropriate.
-Under shares for Azure Files:
-- First-level entities are shares.-- Second-level entities are files.
+A new container is created for any folder residing at the root of the block blob share. Any file within that folder is copied to the storage account's default access tier as a block blob.
-The following table shows the UNC path to the shares on your Data Box Heavy and Azure Storage path URL where the data is uploaded. The final Azure Storage path URL can be derived from the UNC share path.
+For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed information about access tier best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
+
+The following table shows the UNC path to the shares on your Data Box and the corresponding Azure Storage path URL to which data is uploaded. The final Azure Storage path URL can be derived from the UNC share path.
-| Storage | UNC path |
-|-|--|
-| Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<StorageAccountName_BlockBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_PageBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Files |<li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_AzFile>\<ShareName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
+| Azure Storage types | Data Box shares |
+||--|
+| Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlob>\<accessTier>\<ContainerName>\myBlob.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/myBlob.txt`</li> |
+| Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_PageBlob>\<ContainerName>\myBlob.vhd`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/myBlob.vhd`</li> |
+| Azure Files | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_AzFile>\<ShareName>\myFile.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.file.core.windows.net/<ShareName>/myFile.txt`</li> |
+
The steps to connect using a Windows or a Linux client are different.
If using a Windows Server host computer, follow these steps to connect to the Da
- Azure Page blob - `\\10.100.10.100\databoxe2etest_PageBlob` - Azure Files - `\\10.100.10.100\databoxe2etest_AzFile`
-4. Enter the password for the share when prompted. The following sample shows connecting to a share via the preceding command.
+4. Enter the password for the share when prompted. The following sample can be used to connect to the *BlockBlob* share on a device with an IP address of *10.100.10.100*.
```
- C:\Users\Databoxuser>net use \\10.100.10.100\databoxe2etest_BlockBlob /u:databoxe2etest
+ net use \\10.100.10.100\databoxe2etest_BlockBlob /u:databoxe2etest
Enter the password for 'databoxe2etest' to connect to '10.100.10.100': The command completed successfully. ```
-4. Press Windows + R. In the **Run** window, specify the `\\<device IP address>`. Click **OK** to open File Explorer.
+5. Press Windows + R. In the **Run** window, specify the `\\<device IP address>`. Click **OK** to open File Explorer.
![Connect to share via File Explorer](media/data-box-heavy-deploy-copy-data/connect-shares-file-explorer-1.png)
- You should now see the shares as folders.
+ You should now see the shares as folders. Note that in this example the *BlockBlob* share is being used. Accordingly, the four folders representing the four available access tiers are present. These folders are not available in other shares.
![Connect to share via File Explorer 2](media/data-box-heavy-deploy-copy-data/connect-shares-file-explorer-2.png)
- **Always create a folder for the files that you intend to copy under the share and then copy the files to that folder**. The folder created under block blob and page blob shares represents a container to which data is uploaded as blobs. You cannot copy files directly to *root* folder in the storage account.
+   **Always create a folder for the files that you intend to copy under the share and then copy the files to that folder**. You cannot copy files directly to the *root* folder in the storage account. Any folders created under the *PageBlob* share represent containers into which data is uploaded as blobs. Similarly, any sub-folders created within the folders representing access tiers in the *BlockBlob* share also represent blob storage containers. Folders created within the *AzFile* share represent file shares.
+
+   Folders created at the *root* of the *BlockBlob* share will be created as blob containers. The access tier of these containers will be inherited from the storage account. To target a specific access tier, follow the folder-and-copy pattern shown in the sketch after these steps.
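The following minimal sketch shows that pattern over the mapped share from the earlier example. The share path, tier folder, and local source path are placeholders; the *migration-data* folder becomes a blob container in the destination storage account.

```powershell
# Create a container folder under the Cool tier folder, then copy the local data into it.
New-Item -ItemType Directory -Path "\\10.100.10.100\databoxe2etest_BlockBlob\Cool\migration-data" -Force
robocopy "D:\data" "\\10.100.10.100\databoxe2etest_BlockBlob\Cool\migration-data" /E /R:3 /W:10
```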
### Connect on a Linux system
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
When necessary, you can delete a recommended rule for the current session. For e
To delete an adaptive network hardening rule for your current session: -- In the **Rules** tab, select the three dots (...) at the end of the rule's row, and select **Delete**.
+- In the **Rules** tab, select the three dots (...) at the end of the rule's row, and select **Delete**.
![Deleting a rule.](./media/adaptive-network-hardening/delete-hard-rule.png)
defender-for-cloud Agentless Malware Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-malware-scanning.md
Agentless malware scanning provides:
Agentless malware scanning offers the following benefits to both protected and unprotected machines: -- **Improved coverage** - If a machine doesn't have an antivirus solution enabled, the agentless detector scans that machine to detect malicious activity.
+- **Improved coverage** - If a machine doesn't have an antivirus solution enabled, the agentless detector scans that machine to detect malicious activity.
-- **Detect potential threats** - The agentless scanner scans all files and folders including any files or folders that are excluded from the agent-based antivirus scans, without having an effect on the performance of the machine.
+- **Detect potential threats** - The agentless scanner scans all files and folders including any files or folders that are excluded from the agent-based antivirus scans, without having an effect on the performance of the machine.
You can learn more about [agentless machine scanning](concept-agentless-data-collection.md) and how to [enable agentless scanning for VMs](enable-agentless-scanning-vms.md).
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
After the Microsoft Defender for Endpoint agent is installed on your machine, as
:::image type="content" source="media/alert-validation/powershell-no-exit.png" alt-text="Screenshot showing PowerShell message line." lightbox="media/alert-validation/powershell-no-exit.png":::
-Alternately, you can also use the [EICAR](https://www.eicar.org/download-anti-malware-testfile/) test string to perform this test: Create a text file, paste the EICAR line, and save the file as an executable file to your machine's local drive.
+Alternately, you can also use the [EICAR](https://www.eicar.org/download-anti-malware-testfile/) test string to perform this test: Create a text file, paste the EICAR line, and save the file as an executable file to your machine's local drive.
> [!NOTE] > When reviewing test alerts for Windows, make sure that you have Defender for Endpoint running with Real-Time protection enabled. Learn how to [validate this configuration](/microsoft-365/security/defender-endpoint/configure-real-time-protection-microsoft-defender-antivirus).
defender-for-cloud Alerts Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-schemas.md
Last updated 03/25/2024
# Alerts schemas - Defender for Cloud provides alerts that help you identify, understand, and respond to security threats. Alerts are generated when Defender for Cloud detects suspicious activity or a security-related issue in your environment. You can view these alerts in the Defender for Cloud portal, or you can export them to external tools for further analysis and response. You can review security alerts from the [overview dashboard](overview-page.md), [alerts](managing-and-responding-alerts.md) page, [resource health pages](investigate-resource-health.md), or [workload protections dashboard](workload-protections-dashboard.md).
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
By applying sensitivity information types and Microsoft Purview sensitivity labe
Data sensitivity settings define what's considered sensitive data in your organization. Data sensitivity values in Defender for Cloud are based on: -- **Predefined sensitive information types**: Defender for Cloud uses the built-in sensitive information types in [Microsoft Purview](/microsoft-365/compliance/sensitive-information-type-learn-about). This ensures consistent classification across services and workloads. Some of these types are enabled by default in Defender for Cloud. You can [modify these defaults](data-sensitivity-settings.md). Of these built-in sensitive information types, there's a subset supported by sensitive data discovery. You can view a [reference list](sensitive-info-types.md) of this subset, which also lists which information types are supported by default.
+- **Predefined sensitive information types**: Defender for Cloud uses the built-in sensitive information types in [Microsoft Purview](/microsoft-365/compliance/sensitive-information-type-learn-about). This ensures consistent classification across services and workloads. Some of these types are enabled by default in Defender for Cloud. You can [modify these defaults](data-sensitivity-settings.md). Of these built-in sensitive information types, there's a subset supported by sensitive data discovery. You can view a [reference list](sensitive-info-types.md) of this subset, which also lists which information types are supported by default.
- **Custom information types/labels**: You can optionally import custom sensitive information types and [labels](/microsoft-365/compliance/sensitivity-labels) that you defined in the Microsoft Purview compliance portal. - **Sensitive data thresholds**: In Defender for Cloud, you can set the threshold for sensitive data labels. The threshold determines minimum confidence level for a label to be marked as sensitive in Defender for Cloud. Thresholds make it easier to explore sensitive data.
defender-for-cloud Concept Easm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md
An external attack surface is the entire area of an organization or system that
Defender EASM continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. This visibility enables security and IT teams to identify unknowns, prioritize risk, eliminate threats, and extend vulnerability and exposure control beyond the firewall.
-Defender EASM applies MicrosoftΓÇÖs crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization, such as:
+Defender EASM applies Microsoft's crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization, such as:
- Discover digital assets, always-on inventory -- Analyze and prioritize risks and threats -- Pinpoint attacker-exposed weaknesses, anywhere and on-demand
+- Analyze and prioritize risks and threats
+- Pinpoint attacker-exposed weaknesses, anywhere and on-demand
- Gain visibility into third-party attack surfaces EASM collects data for publicly exposed assets ("outside-in"). Defender for Cloud CSPM ("inside-out") can use that data to assist with internet-exposure validation and discovery capabilities, to provide better visibility to customers. - ## Next steps - Learn about [cloud security explorer and attack paths](concept-attack-path.md) in Defender for Cloud.
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
This article describes how to set up continuous export to a Log Analytics worksp
- You must [enable Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription. Required roles and permissions:+ - Security Admin or Owner for the resource group - Write permissions for the target resource. - If you use the [Azure Policy DeployIfNotExist policies](continuous-export-azure-policy.md), you must have permissions that let you assign policies. - To export data to Event Hubs, you must have Write permissions on the Event Hubs policy.-- To export to a Log Analytics workspace:
- - If it *has the SecurityCenterFree solution*, you must have a minimum of Read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`.
- - If it *doesn't have the SecurityCenterFree solution*, you must have write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`.
-
+- To export to a Log Analytics workspace:
+ - If it *has the SecurityCenterFree solution*, you must have a minimum of Read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`.
+ - If it *doesn't have the SecurityCenterFree solution*, you must have write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`.
+ Learn more about [Azure Monitor and Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions). ## Set up continuous export in the Azure portal
defender-for-cloud Data Aware Security Dashboard Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-aware-security-dashboard-overview.md
The **Closer look** section provides a more detailed view into the sensitive dat
- **Sensitive data discovery** - summarizes the results of the sensitive resources discovered, allowing customers to explore a specific sensitive information type and label. - **Internet-exposed data resources** - summarizes the discovery of sensitive data resources that are internet-exposed for storage and managed databases.
-
+ :::image type="content" source="media/data-aware-security-dashboard/closer-look.png" alt-text="Screenshot that shows the closer look section of the data security dashboard." lightbox="media/data-aware-security-dashboard/closer-look.png"::: You can select the **Manage data sensitivity settings** to get to the **Data sensitivity** page. The **Data sensitivity** page allows you to manage the data sensitivity settings of cloud resources at the tenant level, based on selective info types and labels originating from the Purview compliance portal, and [customize sensitivity settings](data-sensitivity-settings.md) such as creating your own customized info types and labels, and setting sensitivity label thresholds.
defender-for-cloud File Integrity Monitoring Enable Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md
Use wildcards to simplify tracking across directories. The following rules apply
### Enable built-in recursive registry checks
-The FIM registry hive defaults provide a convenient way to monitor recursive changes within common security areas. For example, an adversary might configure a script to execute in LOCAL_SYSTEM context by configuring an execution at startup or shutdown. To monitor changes of this type, enable the built-in check.
+The FIM registry hive defaults provide a convenient way to monitor recursive changes within common security areas. For example, an adversary might configure a script to execute in LOCAL_SYSTEM context by configuring an execution at startup or shutdown. To monitor changes of this type, enable the built-in check.
![Registry.](./media/file-integrity-monitoring-enable-log-analytics/baselines-registry.png) >[!NOTE]
-> Recursive checks apply only to recommended security hives and not to custom registry paths.
+> Recursive checks apply only to recommended security hives and not to custom registry paths.
### Add a custom registry check
In the example in the following figure, **Contoso Web App** resides in the D:\ d
### Retrieve change data
-File Integrity Monitoring data resides within the Azure Log Analytics/ConfigurationChange table set.
+File Integrity Monitoring data resides within the Azure Log Analytics/ConfigurationChange table set.
1. Set a time range to retrieve a summary of changes by resource.
File Integrity Monitoring data resides within the Azure Log Analytics/Configurat
| order by Computer, RegistryKey ```
-Reports can be exported to CSV for archival and/or channeled to a Power BI report.
+Reports can be exported to CSV for archival and/or channeled to a Power BI report.
![FIM data.](./media/file-integrity-monitoring-enable-log-analytics/baselines-data.png)
defender-for-cloud File Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md
Last updated 03/11/2024
# File Integrity Monitoring in Microsoft Defender for Cloud
-File Integrity Monitoring (FIM) examines operating system files, Windows registries, application software, and Linux system files for changes that might indicate an attack.
+File Integrity Monitoring (FIM) examines operating system files, Windows registries, application software, and Linux system files for changes that might indicate an attack.
FIM (file integrity monitoring) uses the Azure Change Tracking solution to track and identify changes in your environment. When FIM is enabled, you have a **Change Tracking** resource of type **Solution**. If you remove the **Change Tracking** resource, you'll also disable the File Integrity Monitoring feature in Defender for Cloud. FIM lets you take advantage of [Change Tracking](../automation/change-tracking/overview.md) directly in Defender for Cloud. For data collection frequency details, see [Change tracking data collection details](../automation/change-tracking/overview.md#change-tracking-and-inventory-data-collection).
defender-for-cloud Iac Template Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-template-mapping.md
To set Microsoft Defender for Cloud to map IaC templates to cloud resources, you
- Supported cloud platforms: Microsoft Azure, Amazon Web Services, Google Cloud Platform - Supported source code management systems: Azure DevOps - Supported template languages: Azure Resource Manager, Bicep, CloudFormation, Terraform
-
+ > [!NOTE] > Microsoft Defender for Cloud uses only the following tags from IaC templates for mapping: >
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
The respective Azure Arc servers for GCP virtual machines that no longer exist (
Ensure that you fulfill the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud). Enable these other extensions on the Azure Arc-connected machines:
-
+ - Microsoft Defender for Endpoint - A vulnerability assessment solution (Microsoft Defender Vulnerability Management or Qualys)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## April 2024
+|Date | Update |
+|--|--|
+| April 2| [Containers multicloud recommendations (GA)](#containers-multicloud-recommendations-ga) |
+
+### Containers multicloud recommendations (GA)
+
+April 2, 2024
+
+As part of Defender for Containers multicloud general availability, the following recommendations are now generally available (GA) as well:
+
+- For Azure
+
+| **Recommendation** | **Description** | **Assessment Key** |
+| | | |
+| Azure registry container images should have vulnerabilities resolved| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
+| Azure running container images should have vulnerabilities resolved| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
+
+- For GCP
+
+| **Recommendation** | **Description** | **Assessment Key** |
+| | | |
+| GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure | Scans your GCP registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce |
+| GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Google Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 5cc3a2c1-8397-456f-8792-fe9d0d4c9145 |
+
+- For AWS
+
+| **Recommendation** | **Description** | **Assessment Key** |
+| | | |
+| AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Scans your AWS registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce |
+| AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Elastic Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 682b2595-d045-4cff-b5aa-46624eb2dd8f |
+
+Note that these recommendations affect the secure score calculation.
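+
+To check which resources a given recommendation applies to, you can query Azure Resource Graph by assessment key. The following is a minimal sketch, assuming the Azure CLI `resource-graph` extension is installed; the assessment key shown is the one listed above for Azure registry container images:
+
+```bash
+# List resources evaluated by a given assessment key (hedged example)
+az graph query -q "
+securityresources
+| where type == 'microsoft.security/assessments'
+| where name == 'c0b7cfc6-3172-465a-b378-53c7ff2cc0d5'
+| project resourceId = tostring(properties.resourceDetails.Id), status = tostring(properties.status.code)
+" --output table
+```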
+
## March 2024 |Date | Update | |--|--|
+| March 31 | [Windows container images scanning is now generally available (GA)](#windows-container-images-scanning-is-now-generally-available-ga) |
| March 25 | [Continuous export now includes attack path data](#continuous-export-now-includes-attack-path-data) | | March 21 | [Agentless scanning supports CMK encrypted VMs in Azure (preview)](#agentless-scanning-supports-cmk-encrypted-vms-in-azure) | | March 18 | [New endpoint detection and response recommendations](#new-endpoint-detection-and-response-recommendations) |
If you're looking for items older than six months, you can find them in the [Arc
| March 5 | [Deprecation of two recommendations related to PCI](#deprecation-of-two-recommendations-related-to-pci) | | March 3 | [Defender for Cloud Containers Vulnerability Assessment powered by Qualys retirement](#defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys-retirement) | +
+### Windows container images scanning is now generally available (GA)
+
+March 31, 2024
+
+We're announcing the general availability (GA) of support for scanning Windows container images with Defender for Containers.
+ ### Continuous export now includes attack path data March 25, 2024
Learn more about [continuous export](benefits-of-continuous-export.md).
March 21, 2024 Until now, agentless scanning covered CMK encrypted VMs in AWS and GCP. With this release, we're completing support for Azure as well. The capability employs a unique scanning approach for CMK in Azure:+ - Defender for Cloud does not handle the key or decryption process. Key handling and decryption is seamlessly handled by Azure Compute and is transparent to Defender for Cloud's agentless scanning service. - The unencrypted VM disk data is never copied or re-encrypted with another key. - The original key is not replicated during the process. Purging it eradicates the data on both your production VM and Defender for Cloud's temporary snapshot.
During public preview this capability is not automatically enabled. If you are u
- [Learn more on agentless scanning for VMs](concept-agentless-data-collection.md) - [Learn more on agentless scanning permissions](faq-permissions.yml#which-permissions-are-used-by-agentless-scanning-) - ### New endpoint detection and response recommendations March 18, 2024
-We are announcing new endpoint detection and response recommendations that discover and assesses the configuration of supported endpoint detection and response solutions. If issues are found, these recommendations offer remediation steps.
+We're announcing new endpoint detection and response recommendations that discover and assess the configuration of supported endpoint detection and response solutions. If issues are found, these recommendations offer remediation steps.
-The following new agentless endpoint protection recommendations are now available if you have Defender for Servers Plan 2 or the Defender CSPM plan enabled on your subscription with the agentless machine scanning feature enabled. The recommendations support Azure and multicloud machines. On-premises machines are not supported.
+The following new agentless endpoint protection recommendations are now available if you have Defender for Servers Plan 2 or the Defender CSPM plan enabled on your subscription with the agentless machine scanning feature enabled. The recommendations support Azure and multicloud machines. On-premises machines are not supported.
| Recommendation name | Description | Severity | |--|
defender-for-cloud Sql Azure Vulnerability Assessment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-overview.md
Configuration modes benefits and limitations comparison:
| Scan export | Azure Resource Graph | Excel format, Azure Resource Graph | | Supported Clouds | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Microsoft Azure operated by 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure operated by 21Vianet |
-## Next steps
+## Next steps
- Enable [SQL vulnerability assessments](sql-azure-vulnerability-assessment-enable.md) - Express configuration [common questions](faq-defender-for-databases.yml) and [Troubleshooting](sql-azure-vulnerability-assessment-manage.md?tabs=express#troubleshooting).
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Defender for Cloud uses connectors to collect monitoring data from Amazon Web Se
- **GKE clusters should have Microsoft Defender's extension for Azure Arc installed** - **Azure Arc-enabled Kubernetes clusters should have the Azure Policy extension installed** - **GKE clusters should have the Azure Policy extension installed**-- If you're experiencing problems with deleting the AWS or GCP connector, check if you have a lock. An error in the Azure activity log might hint at the presence of a lock.
+- If you're experiencing problems with deleting the AWS or GCP connector, check if you have a lock. An error in the Azure activity log might hint at the presence of a lock.
- Check that workloads exist in the AWS account or GCP project. ### Tips for AWS connector problems
defender-for-cloud View And Remediate Vulnerabilities For Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerabilities-for-images.md
If you are using Defender CSPM, first review and remediate vulnerabilities expos
:::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-container.png" alt-text="Screenshot showing where to select a specific container." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-container.png":::
-1. This pane includes a list of the container vulnerabilities. Select each vulnerability to [resolve the vulnerability](#remediate-vulnerabilities).
-
+1. This pane includes a list of the container vulnerabilities. Select each vulnerability to [resolve the vulnerability](#remediate-vulnerabilities).
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-list-vulnerabilities.png" alt-text="Screenshot showing the list of container vulnerabilities." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-list-vulnerabilities.png"::: ## View container images affected by a specific vulnerability
defender-for-cloud Working With Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/working-with-log-analytics-agent.md
To configure integration with the Log Analytics agent:
> - [Where is the default Log Analytics workspace created?](./faq-data-collection-agents.yml#where-is-the-default-log-analytics-workspace-created-) > - [Can I delete the default workspaces created by Defender for Cloud?](./faq-data-collection-agents.yml#can-i-delete-the-default-workspaces-created-by-defender-for-cloud-)
- - **Connect Azure VMs to a different workspace** - From the dropdown list, select the workspace to store collected data. The dropdown list includes all workspaces across all of your subscriptions. You can use this option to collect data from virtual machines running in different subscriptions and store it all in your selected workspace.
+ - **Connect Azure VMs to a different workspace** - From the dropdown list, select the workspace to store collected data. The dropdown list includes all workspaces across all of your subscriptions. You can use this option to collect data from virtual machines running in different subscriptions and store it all in your selected workspace.
If you already have an existing Log Analytics workspace, you might want to use the same workspace (requires read and write permissions on the workspace). This option is useful if you're using a centralized workspace in your organization and want to use it for security data collection. Learn more in [Manage access to log data and workspaces in Azure Monitor](../azure-monitor/logs/manage-access.md).
defender-for-cloud Workload Protections Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workload-protections-dashboard.md
Defender for Cloud includes many advanced threat protection capabilities for vir
## Insights
-Insights provide you with news, suggested reading, and high priority alerts that are relevant in your environment.
+Insights provide you with news, suggested reading, and high-priority alerts that are relevant to your environment.
## Next steps
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
For more information, see [Update Defender for IoT OT monitoring software](updat
Each OT sensor is onboarded as a cloud-connected or locally managed OT sensor and activated using a unique activation file. For cloud-connected sensors, the activation file is used to ensure the connection between the sensor and Azure.
-You need to upload a new activation file to your sensor if you want to switch sensor management modes, such as moving from a locally managed sensor to a cloud-connected sensor, or if you're [updating from a legacy software version](update-legacy-ot-software.md#update-legacy-ot-sensor-software). Uploading a new activation file to your sensor includes deleting your sensor from the Azure portal and onboarding it again.
+You need to upload a new activation file to your sensor if you want to switch sensor management modes, such as moving from a locally managed sensor to a cloud-connected sensor, or if you're [updating from a recent software version](update-ot-software.md?tabs=portal#update-ot-sensors). Uploading a new activation file to your sensor includes deleting your sensor from the Azure portal and onboarding it again.
**To add a new activation file:**
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Use the options on the **Sites and sensor** page and a sensor details page to do
| :::image type="icon" source="medi). | |:::image type="icon" source="medi). | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. |
-|:::image type="icon" source="medi#update-legacy-ot-sensor-software). |
### Sensor deployment and access
Use the options on the **Sites and sensor** page and a sensor details page to do
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then select a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. | | **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB health monitoring on an OT sensor](how-to-set-up-snmp-mib-monitoring.md).| |:::image type="icon" source="medi#install-enterprise-iot-sensor-software). |
-|<a name="endpoint"></a> **Download endpoint details** (Public preview) | OT sensors only, with versions 22.x and higher only.<br><br>Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
+|<a name="endpoint"></a> **Download endpoint details** | OT sensors only.<br><br>Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
### Sensor maintenance and troubleshooting
In such cases, do the following steps:
1. [Onboard the sensor again](onboard-sensors.md), registering it with any new settings. 1. [Upload your new activation file](how-to-manage-individual-sensors.md#upload-a-new-activation-file).
-### Reactivate an OT sensor for upgrades to version 22.x from a legacy version
-
-If you're updating your OT sensor version from a legacy version to 22.1.x or higher, you need a different activation procedure than for earlier releases.
-
-Make sure that you've started with the relevant updates steps for this update. For more information, see [Update OT system software](update-ot-software.md).
-
-> [!NOTE]
-> After upgrading to version 22.1.x, the new upgrade log is accessible by the *admin* user on the sensor at the following path: `/opt/sensor/logs/legacy-upgrade.log`. To access the update log, sign into the sensor via SSH with the *admin* user.
->
-> For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
- ## Understand sensor health This procedure describes how to view sensor health data from the Azure portal. Sensor health includes data such as whether traffic is stable, the sensor is overloaded, notifications about sensor software versions, and more.
defender-for-iot Update Legacy Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-legacy-ot-software.md
- Title: Update from legacy Defender for IoT OT monitoring software versions
-description: Learn how to update (upgrade) from legacy Defender for IoT software on OT sensors and on-premises management servers.
Previously updated : 02/14/2023----
-# Update legacy OT sensors
-
-This section describes how to handle updates from legacy sensor versions, earlier than [version 22.x](release-notes.md#versions-221x).
-
-If you have earlier sensor versions installed on cloud-connected sensors, you may also have your cloud connection configured using the legacy IoT Hub method. If so, migrate to a new [cloud-connection method](architecture-connections.md), either [connecting directly](ot-deploy/provision-cloud-management.md) or using a [proxy](connect-sensors.md).
-
-## Update legacy OT sensor software
-
-Updating to version 22.x from an earlier version essentially onboards a new OT sensor, with all of the details from the legacy sensor.
-
-After the update, the newly onboarded, updated sensor requires a new activation file. We also recommend that you remove any resources left from your legacy sensor, such as deleting the sensor from Defender for IoT, and any private IoT Hubs that you'd used.
-
-For more information, see [Versioning and support for on-premises software versions](release-notes.md#versioning-and-support-for-on-premises-software-versions).
-
-**To update a legacy OT sensor version**
-
-1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update.
-
-1. Select the **Prepare to update to 22.X** option from the toolbar or from the options (**...**) from the sensor row.
-
-1. <a name="activation-file"></a>In the **Prepare to update sensor to version 22.X** message, select **Let's go**.
-
- A new row is added on the **Sites and sensors** page, representing the newly updated OT sensor. In that row, select to download the activation file.
-
- [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-
- The status for the new OT sensor switches to **Pending activation**.
-
-1. Sign into your OT sensor and select **System settings > Sensor management > Subscription & Mode Activation**.
-
-1. In the **Subscription & Mode Activation** pane, select **Select file**, and then browse to and select the activation file you'd downloaded [earlier](#activation-file).
-
- Monitor the activation status on the **Sites and sensors** page. When the OT sensor is fully activated:
-
- - The sensor status and health on the **Sites and sensors** page is updated with the new software version.
- - On the OT sensor, the **Overview** page shows an activation status of **Valid**.
-
-1. After you've applied your new activation file, make sure to [delete the legacy sensor](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). On the **Sites and sensors** page, select your legacy sensor, and then from the options (**...**) menu for that sensor, select **Delete sensor**.
-
-1. (Optional) After updating from a legacy OT sensor version, you may have leftover IoT Hubs that are no longer in use. In such cases:
-
- 1. Review your IoT hubs to ensure that they're not being used by other services.
- 1. Verify that your sensors are connected successfully.
- 1. Delete any private IoT Hubs that are no longer needed.
-
- For more information, see the [IoT Hub documentation](../../iot-hub/iot-hub-create-through-portal.md).
-
-## Migrate a cloud connection from the legacy method
-
-If you're an existing customer with a production deployment and sensors connected using the legacy IoT Hub method to connect your OT sensors to Azure, use the following steps to ensure a full and safe migration to the updated connection method.
-
-1. **Review your existing production deployment** and how sensors are currently connected to Azure. Confirm that the sensors in production networks can reach the Azure data center resource ranges.
-
-1. **Determine which connection method is right** for each production site. For more information, see [Choose a sensor connection method](architecture-connections.md#choose-a-sensor-connection-method).
-
-1. **Configure any other resources required**, such as a proxy, VPN, or ExpressRoute. For more information, see [Configure proxy settings on an OT sensor](connect-sensors.md).
-
- For any connectivity resources outside of Defender for IoT, such as a VPN or proxy, consult with Microsoft solution architects to ensure correct configurations, security, and high availability.
-
-1. **If you have legacy sensor versions installed**, we recommend that you [update your sensors](#update-legacy-ot-sensors) at least to a version 22.1.x or higher. In this case, make sure that you've [updated your firewall rules](ot-deploy/provision-cloud-management.md) and activated your sensor with a new activation file.
-
- Sign in to each sensor after the update to verify that the activation file was applied successfully. Also check the Defender for IoT **Sites and sensors** page in the Azure portal to make sure that the updated sensors show as **Connected**.
-
-1. **Start migrating with a test lab or reference project** where you can validate your connection and fix any issues found.
-
-1. **Create a plan of action for your migration**, including planning any maintenance windows needed.
-
-1. **After the migration in your production environment**, you can delete any previous IoT Hubs that you had used before the migration. Make sure that any IoT Hubs you delete aren't used by any other
-
- - If you've upgraded your versions, make sure that all updated sensors indicate software version 22.1.x or higher.
-
- - Check the active resources in your account and make sure there are no other services connected to your IoT Hub.
-
- - If you're running a hybrid environment with multiple sensor versions, make sure any sensors with software version 22.1.x can connect to Azure.
-
- Use firewall rules that allow outbound HTTPS traffic on port 443 to each of the required endpoints. For more information, see [Provision OT sensors for cloud management](ot-deploy/provision-cloud-management.md).
-
-While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with updated connection methods.
-
-## Next steps
-
-For more information, see:
--- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)-- [Manage individual OT sensors](how-to-manage-individual-sensors.md)-- [Manage the on-premises management console](legacy-central-management/how-to-manage-the-on-premises-management-console.md)-- [Troubleshoot the sensor](how-to-troubleshoot-sensor.md)-- [Troubleshoot the on-premises management console](legacy-central-management/how-to-troubleshoot-on-premises-management-console.md)
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
For more information, see [Manage SSL/TLS certificates](how-to-manage-individual
Activation files on locally managed OT sensors now remain activated for as long as your Defender for IoT plan is active on your Azure subscription, just like activation files on cloud-connected OT sensors.
-You only need to update your activation file if you're [updating an OT sensor from a legacy version](update-legacy-ot-software.md#update-legacy-ot-sensor-software) or switching the sensor management mode, such as moving from locally managed to cloud-connected.
+You only need to update your activation file if you're [updating an OT sensor from a recent version](update-ot-software.md?tabs=portal#update-ot-sensors) or switching the sensor management mode, such as moving from locally managed to cloud-connected.
For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md).
For more information, see [Update OT system software](update-ot-software.md).
Defender for IoT version 22.1.x supports a new set of sensor connection methods that provide simplified deployment, improved security, scalability, and flexible connectivity.
-In addition to [migration steps](update-legacy-ot-software.md#migrate-a-cloud-connection-from-the-legacy-method), this new connectivity model requires that you open a new firewall rule. For more information, see:
+In addition to migration steps, this new connectivity model requires that you open a new firewall rule. For more information, see:
- **New firewall requirements**: [Sensor access to Azure portal](networking-requirements.md#sensor-access-to-azure-portal). - **Architecture**: [Sensor connection methods](architecture-connections.md)
deployment-environments How To Configure Azure Developer Cli Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-azure-developer-cli-deployment-environments.md
+
+ Title: Configure Azure Developer CLI templates for use with ADE
+description: Understand how ADE and AZD work together to provision application infrastructure and deploy application code to the new infrastructure.
++++ Last updated : 03/26/2024+
+# Customer intent: As a platform engineer, I want to use ADE and AZD together to provision application infrastructure and deploy application code to the new infrastructure.
+++
+# Configure Azure Developer CLI with Azure Deployment Environments
+
+In this article, you create a new environment from an existing Azure Developer CLI (AZD) compatible template by using AZD. You learn how to configure Azure Deployment Environments (ADE) and AZD to work together to provision application infrastructure and deploy application code to the new infrastructure.
+
+To learn the key concepts of how AZD and ADE work together, see [Use Azure Developer CLI with Azure Deployment Environments](concept-azure-developer-cli-with-deployment-environments.md).
+
+## Prerequisites
+
+- Create and configure a dev center with a project, environment types, and catalog. Use the following article as guidance:
+ - [Quickstart: Create and configure a dev center for Azure Deployment Environments](/azure/deployment-environments/quickstart-create-and-configure-devcenter).
+
+## Attach Microsoft quick start catalog
+
+Microsoft provides a quick start catalog that contains a set of AZD compatible templates that you can use to create environments. You can attach the quick start catalog to your dev center when you create it, or add the catalog later.
+
+## Examine an AZD compatible template
+
+You can use an existing AZD compatible template to create a new environment, or you can add an azure.yaml file to your repository. In this section, you examine an existing AZD compatible template.
+
+AZD provisioning for environments relies on curated templates from the catalog. Templates in the catalog might assign tags to provisioned Azure resources for you to associate your app services with in the azure.yaml file, or specify the resources explicitly. In this example, resources are specified explicitly.
+
+For more information on tagging resources, see [Tagging resources for Azure Deployment Environments](/azure/developer/azure-developer-cli/ade-integration#tagging-resources-for-azure-deployment-environments).
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your dev center.
+
+1. In the left menu under **Environment configuration**, select **Catalogs**, and then copy the quick start catalog **Clone URL**.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/catalog-url.png" alt-text="Screenshot of Azure portal showing the catalogs attached to a dev center, with clone URL highlighted." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/catalog-url.png":::
+
+1. To view the quick start catalog in GitHub, paste the **Clone URL** into the address bar and press Enter.
+
+1. In the GitHub repository, navigate to the **Environment-Definitions/ARMTemplates/Function-App-with-Cosmos_AZD-template** folder.
+
+1. Open the **environment.yaml** file. At the end of the file, you see the allowed repositories that contain sample application source code.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/application-source-templates.png" alt-text="Screenshot of GitHub repository, showing the environment.yaml file with source templates highlighted." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/application-source-templates.png":::
+
+1. Copy the **https://github.com/azure-samples/todo-python-mongo-swa-func** repository URL, and then navigate to the repository in GitHub.
+
+1. In the root of the repository, open the **azure.yaml** file.
+
+1. In the azure.yaml file, in the **services** section, you see the **web** and **API** services that are defined in the template.
+
+> [!NOTE]
+> Not all AZD compatible catalogs use the linked templates structure shown in the example. You can use a single catalog for all your environments by including the azure.yaml file. Using multiple catalogs and code repositories allows you more flexibility in configuring secure access for platform engineers and developers.
+
+If you're working with your own catalog and environment definition, you can create an azure.yaml file in the root of your repository. Use the azure.yaml file to define the services that you want to deploy to the environment.
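+
+Whether you use the quick start catalog or your own, you can also clone the catalog repository to inspect its environment definitions locally. The following is a quick sketch; `<clone-url>` and `<repo-name>` are placeholders for the Clone URL you copied earlier and the resulting folder name:
+
+```bash
+# Clone the catalog and browse its environment definitions locally (sketch)
+git clone <clone-url>
+cd <repo-name>/Environment-Definitions
+ls
+```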
+
+## Create an environment from an existing template
+
+Use an existing AZD compatible template to create a new environment.
+
+### Prepare to work with AZD
+
+When you work with AZD for the first time, there are some one-time setup tasks you need to complete. These tasks include installing the Azure Developer CLI, signing in to your Azure account, and enabling AZD support for Azure Deployment Environments.
+
+#### Install the Azure Developer CLI extension for Visual Studio Code
+
+When you install AZD, the AZD tools are installed within an AZD scope rather than globally, and are removed if AZD is uninstalled. You can install AZD in Visual Studio Code or from the command line.
+
+# [Visual Studio Code](#tab/visual-studio-code)
+
+To enable Azure Developer CLI features in Visual Studio Code, install the Azure Developer CLI extension, version v0.8.0-alpha.1-beta.3173884. Select the **Extensions** icon in the Activity bar, search for **Azure Developer CLI**, and then select **Install**.
++
+# [Azure Developer CLI](#tab/azure-developer-cli)
++
+```bash
+powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' | Invoke-Expression"
+```
++
+#### Sign in with Azure Developer CLI
+
+Access your Azure resources by signing in. When you initiate a sign-in, a browser window opens and prompts you to sign in to Azure. After you sign in, the terminal displays a message confirming that you're signed in to Azure.
+
+Sign in to AZD using the command palette:
+
+# [Visual Studio Code](#tab/visual-studio-code)
++
+The output of commands issued from the command palette is displayed in an **azd dev** terminal like the following example:
++
+# [Azure Developer CLI](#tab/azure-developer-cli)
+
+Sign in to Azure at the CLI using the following command:
+
+```bash
+ azd auth login
+```
++++
+#### Enable AZD support for ADE
+
+When `platform.type` is set to `devcenter`, all AZD remote environment state and provisioning uses dev center components. AZD uses one of the infrastructure templates defined in your dev center catalog for resource provisioning. In this configuration, the *infra* folder in your local templates isn't used.
+
+# [Visual Studio Code](#tab/visual-studio-code)
++
+# [Azure Developer CLI](#tab/azure-developer-cli)
+
+```bash
+ azd config set platform.type devcenter
+```
++
+### Create a new environment
+
+Now you're ready to create an environment to work in. You begin with an existing template. ADE defines the infrastructure for your application, and the AZD template provides sample application code.
+
+# [Visual Studio Code](#tab/visual-studio-code)
+
+1. In Visual Studio Code, open an empty folder.
+
+1. Open the command palette, enter *Azure Developer CLI init*, and then from the list, select **Azure Developer CLI (azd): init**.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/command-palette-azure-developer-initialize.png" alt-text="Screenshot of the Visual Studio Code command palette with Azure Developer CLI (azd): init highlighted." lightbox="media/how-to-create-environment-with-azure-developer/command-palette-azure-developer-initialize.png":::
+
+1. In the list of templates, select **Function-App-with-Cosmos_AZD-template**.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/command-palette-functionapp-template.png" alt-text="Screenshot of the Visual Studio Code command palette with a list of templates, Function App highlighted." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/command-palette-functionapp-template.png":::
+
+1. In the AZD terminal, enter an environment name.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/enter-environment-name.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt for a new environment name." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/enter-environment-name.png":::
+
+1. Select a project.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-project.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt to select a project." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-project.png":::
+
+1. Select an environment definition.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-environment-definition.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt to select an environment definition." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-environment-definition.png":::
+
+ AZD creates the project resources, including an *azure.yaml* file in the root of your project.
++
+# [Azure Developer CLI](#tab/azure-developer-cli)
+
+1. At the CLI, navigate to an empty folder.
+
+1. To list the templates available, in the AZD terminal, run the following command:
+
+ ```bash
+ azd template list
+
+ ```
+ Multiple templates are available. You can select the template that best fits your needs, depending on the application you want to build and the language you want to use.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/developer-cli-template-list.png" alt-text="Screenshot of the Azure Developer terminal, showing the templates available." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/developer-cli-template-list.png":::
+
+1. Run the following command to initialize your application and supply information when prompted:
+
+ ```bash
+ azd init
+ ```
+1. In the AZD terminal, enter an environment name.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/enter-environment-name.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt for a new environment name." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/enter-environment-name.png":::
+
+1. Select a project.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-project.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt to select a project." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-project.png":::
+
+1. Select an environment definition.
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-environment-definition.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt to select an environment definition." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-environment-definition.png":::
+
+ AZD creates the project resources, including an *azure.yaml* file in the root of your project.
+++
+## Configure your devcenter
+
+You can define AZD settings for your dev centers so that you don't need to specify them each time you update an environment. In this example, you define the names of the catalog, dev center, and project that you're using for your environment.
+
+1. In Visual Studio Code, navigate to the *azure.yaml* file in the root of your project.
+
+1. In the azure.yaml file, add the following settings:
+
+ ```yaml
+ platform:
+ type: devcenter
+ config:
+ catalog: MS-cat
+ name: Contoso-DevCenter
+ project: Contoso-Dev-project
+ ```
+
+ :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/azure-yaml-dev-center-settings.png" alt-text="Screenshot of the azure.yaml file with dev center settings highlighted." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/azure-yaml-dev-center-settings.png":::
+
+To learn more about the settings you can configure, see [Configure dev center settings](/azure/developer/azure-developer-cli/ade-integration#configure-dev-center-settings).
+
+## Provision your environment
+
+You can use AZD to provision and deploy resources to your deployment environments using commands like `azd up` or `azd provision`.
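+
+For example, here's a minimal sketch of the typical flow after `azd init`, run from the root of your project (the provisioned resources come from the environment definition you selected):
+
+```bash
+# Provision the deployment environment from the selected environment definition
+azd provision
+
+# Deploy the application code to the provisioned resources
+azd deploy
+
+# Or provision and deploy in a single step
+azd up
+```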
+
+To learn more about provisioning your environment, see [Create an environment by using the Azure Developer CLI](how-to-create-environment-with-azure-developer.md#provision-infrastructure-to-azure-deployment-environment).
+
+To learn how common AZD commands work with ADE, see [Work with Azure Deployment Environments](/azure/developer/azure-developer-cli/ade-integration?branch=main#work-with-azure-deployment-evironments).
++
+## Related content
+
+- [Add and configure an environment definition](./configure-environment-definition.md)
+- [Create an environment by using the Azure Developer CLI](./how-to-create-environment-with-azure-developer.md)
deployment-environments How To Create Environment With Azure Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-environment-with-azure-developer.md
In this article, you install the Azure Developer CLI (AZD), create a new deploym
Azure Developer CLI (AZD) is an open-source tool that accelerates the time it takes for you to get your application from local development environment to Azure. AZD provides best practice, developer-friendly commands that map to key stages in your workflow, whether you're working in the terminal, your editor or integrated development environment (IDE), or CI/CD (continuous integration/continuous deployment).
-<!-- To learn how to set up AZD to work with Azure Deployment Environments, see [Use Azure Developer CLI with Azure Deployment Environments](/azure/deployment-environments/concept-azure-developer-cli-with-deployment-environments). -->
+To learn how to set up AZD to work with Azure Deployment Environments, see [Use Azure Developer CLI with Azure Deployment Environments](/azure/deployment-environments/concept-azure-developer-cli-with-deployment-environments).
## Prerequisites
AZD uses an *azure.yaml* file to define the environment. The azure.yaml file def
# [Visual Studio Code](#tab/visual-studio-code)
-1. In Visual Studio Code, and then open the folder that contains your application code.
+1. In Visual Studio Code, open the folder that contains your application code.
1. Open the command palette, and enter *Azure Developer CLI init*, then from the list, select **Azure Developer CLI (azd): init**.
azd down --environment <environmentName>
## Related content - [Create and configure a dev center](/azure/deployment-environments/quickstart-create-and-configure-devcenter) - [What is the Azure Developer CLI?](/azure/developer/azure-developer-cli/overview)-- [Install or update the Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd)
+- [Install or update the Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd)
event-grid Namespace Push Delivery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/namespace-push-delivery-overview.md
This article builds on [push delivery with HTTP for Event Grid basic](push-deliv
## Namespace topics and subscriptions
-Events published to Event Grid namespaces land on a topic, which is a namespace subresource that logically contains all events. Namespace topics allows you to create subscriptions with flexible consumption modes to push events to a particular destination or [pull events](pull-delivery-overview.md) at yourself pace.
+Events published to Event Grid namespaces land on a topic, which is a namespace subresource that logically contains all events. Namespace topics allow you to create subscriptions with flexible consumption modes to push events to a particular destination or [pull events](pull-delivery-overview.md) at your own pace.
:::image type="content" source="media/namespace-push-delivery-overview/topic-event-subscriptions-namespace.png" alt-text="Diagram showing a topic and associated event subscriptions." lightbox="media/namespace-push-delivery-overview/topic-event-subscriptions-namespace.png" border="false":::
expressroute Design Architecture For Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/design-architecture-for-resiliency.md
+
+ Title: Design and architect Azure ExpressRoute for resiliency
+description: Learn how to design and architect Azure ExpressRoute for resiliency to ensure high availability and reliability in your network connections between on-premises and Azure.
++++ Last updated : 04/01/2024++++
+# Design and architect Azure ExpressRoute for resiliency
+
+Azure ExpressRoute is an essential hybrid connectivity service widely used for its low-latency, resilient, high-throughput private connectivity between on-premises networks and Azure workloads. It offers the ability to achieve reliability, resiliency, and disaster recovery in network connections between on-premises and Azure to ensure availability of business and mission-critical workloads. This capability also extends access to Azure resources in a scalable and cost-effective way.
++
+Network connections that are highly reliable, resilient, and available are fundamental to a well-structured system. Reliability consists of two principles: *resiliency* and *availability*. The goal of resiliency is to prevent failures and, in the event they do occur, to restore your applications to a fully operational state. The objective of availability is to provide consistent access to your application or workloads. It's important to proactively plan for reliability based on your business needs and application requirements.
+
+Users of ExpressRoute rely on the availability and performance of edge sites, WAN, and availability zones to maintain their connectivity to Azure. However, these components or sites might experience failures due to various reasons, such as equipment malfunctioning, network disruptions, weather conditions, or natural disasters. Planning for reliability, resiliency, and availability is therefore a joint responsibility between users and their cloud provider.
+
+## Site resiliency for ExpressRoute
+
+There are three ExpressRoute resiliency architectures that you can use to ensure high availability and resiliency in your network connections between on-premises and Azure. These architecture designs include:
+
+* [Maximum resiliency](#maximum-resiliency)
+* [High resiliency](#high-resiliency)
+* [Standard resiliency](#standard-resiliency)
+
+### Maximum resiliency
+
+The maximum resiliency architecture in ExpressRoute is structured to eliminate any single point of failure within the Microsoft network path. This setup is achieved by configuring a pair of circuits across two distinct locations for site diversity with ExpressRoute. The objective of maximum resiliency is to enhance reliability, resiliency, and availability, ensuring the highest level of resilience for business and mission-critical workloads. For such operations, we recommend that you configure maximum resiliency. This architectural design is recommended as part of the [Well Architected Framework](/azure/well-architected/service-guides/azure-expressroute#reliability) under the reliability pillar. The ExpressRoute engineering team developed a [guided portal experience](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview) to assist you in configuring maximum resiliency.
++
+### High resiliency
+
+High resiliency, also referred to as multi-site or site resiliency, enables the use of multiple sites within the same metropolitan (Metro) area to connect your on-premises network through ExpressRoute to Azure. High resiliency offers site diversity by splitting a single circuit across two sites. The first connection is established at one site and the second connection at a different site. The objective of multi-site resiliency is to mitigate the effect of edge-site isolation and failures by introducing capabilities to enable site diversity. Site diversity is achieved by using a single circuit across paired sites within a metropolitan city, which offers resiliency to failures between the edge and the region. High resiliency provides a higher level of site resiliency than standard resiliency, but not as much as maximum resiliency. High resiliency is priced the same as standard resiliency, with latency parity across the two sites. This architecture can be used for business and mission-critical workloads within a region. For more information, see [ExpressRoute Metro](metro.md).
++
+### Standard resiliency
+
+Standard resiliency in ExpressRoute is a single circuit with two connections configured at a single site. Built-in redundancy (Active-Active) is configured to facilitate failover across the two connections of the circuit. Microsoft guarantees an availability [service level agreement (SLA)](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1) from the Microsoft Enterprise Edge (MSEE) to the gateway for this configuration. Today, ExpressRoute offers two connections at a single peering location. If a failure happens at this site, users might experience loss of connectivity to their Azure workloads. This configuration is also known as *single-homed* because it represents users with an ExpressRoute circuit configured with only one peering location. This configuration is considered the *least* resilient and is **not recommended** for business or mission-critical workloads because it doesn't provide site resiliency.
++
+## Zonal resiliency for ExpressRoute
+
+[Azure regions](/azure/cloud-adoption-framework/ready/azure-setup-guide/regions) are an integral part of your ExpressRoute design and resiliency strategy. These regions are geographical locations of data centers that host Azure services. Regions are interconnected through a dedicated low-latency network and are designed to be highly available, fault-tolerant, and scalable.
+
+Azure offers several features to ensure regional resiliency. One such feature is [availability zones](../reliability/availability-zones-overview.md). Availability zones protect applications and data from data center failures by spanning across multiple physical locations within a region. Regions and availability zones are central to your application design and resiliency strategy. By utilizing availability zones, you can achieve higher availability and resilience in your deployments. For more information, see [Regions & availability zones](../reliability/overview.md#regions-and-availability-zones).
+
+We recommend deploying your [ExpressRoute Virtual Network Gateways](expressroute-about-virtual-network-gateways.md) as zone redundant across availability zones within a region. These availability zones are separate physical locations with independent infrastructure (power, cooling, and networking). The purpose is to protect your on-premises network connectivity to Azure from zone level failures. [Zone-redundant ExpressRoute gateways](../vpn-gateway/about-zone-redundant-vnet-gateways.md?toc=%2Fazure%2Fexpressroute%2Ftoc.json) provide resiliency, scalability, and higher availability for accessing mission-critical services on Azure.
+
+Equipment failures or disasters in regional and zonal data centers can affect ExpressRoute gateway deployments in virtual networks. If gateways aren't deployed as zone-redundant, such failures within an Azure data center can affect the ability for users to access their Azure workloads.
+
+If you have existing non-zone-redundant ExpressRoute gateways, you can now [migrate to an availability zone enabled gateway](gateway-migration.md).
+
+## Recommendations
+
+The following are recommendations to ensure high availability, resiliency, and reliability in your ExpressRoute network architecture:
+
+* [ExpressRoute circuit recommendations](#expressroute-circuit-recommendations)
+* [ExpressRoute Gateway recommendations](#expressroute-gateway-recommendations)
+* [Disaster recovery and high availability recommendations](#disaster-recovery-and-high-availability-recommendations)
+* [Monitoring and alerting recommendations](#monitoring-and-alerting-recommendations)
+
+### ExpressRoute circuit recommendations
+
+#### Plan for ExpressRoute circuit or ExpressRoute Direct
+
+During the initial planning phase, it's crucial to determine whether to configure an [ExpressRoute circuit](expressroute-circuit-peerings.md) or an [ExpressRoute Direct](expressroute-erdirect-about.md) connection. An ExpressRoute circuit allows a private dedicated connection into Azure with the assistance of a connectivity provider. ExpressRoute Direct enables the extension of an on-premises network directly into the Microsoft network at a peering location. It's also necessary to identify the bandwidth and circuit SKU requirements that meet your business needs.
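+
+As a point of reference, a provider-based circuit can be created with the Azure CLI. The following is a minimal sketch only; the resource names, provider, peering location, bandwidth, and SKU values are placeholders that you would replace with the values identified during planning:
+
+```bash
+# Sketch: create a provider-based ExpressRoute circuit (all values are placeholders)
+az network express-route create \
+  --name er-circuit-01 \
+  --resource-group rg-connectivity \
+  --provider "Equinix" \
+  --peering-location "Silicon Valley" \
+  --bandwidth 1000 \
+  --sku-tier Standard \
+  --sku-family MeteredData
+```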
+
+#### Evaluate the resiliency of multi-site redundant ExpressRoute circuits
+
+After deploying multi-site redundant ExpressRoute circuits with [maximum resiliency](expressroute-howto-circuit-portal-resource-manager.md), it's essential to ensure that on-premises routes are advertised over both redundant circuits to fully utilize the benefits of multi-site redundancy. Periodically evaluate the resiliency of the redundant circuits and routes by testing failover between them.
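+
+One way to verify what is advertised and learned over each circuit is to inspect the route tables for the private peering. The following is a hedged sketch using the Azure CLI; the circuit and resource group names are placeholders, and the command assumes private peering is configured:
+
+```bash
+# Sketch: list routes learned over the primary path of a circuit's private peering
+az network express-route list-route-tables \
+  --resource-group rg-connectivity \
+  --name er-circuit-01 \
+  --peering-name AzurePrivatePeering \
+  --path primary \
+  --output table
+
+# Repeat with --path secondary and for the second circuit to confirm both advertise your on-premises prefixes
+```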
+
+#### Plan for active-active configuration
+
+To improve resiliency and availability, Microsoft recommends operating both connections of an ExpressRoute circuit in [active-active mode](designing-for-high-availability-with-expressroute.md#active-active-connections). By allowing two connections to operate in this mode, Microsoft load balances the network traffic across the connections on a per-flow basis.
+
+#### Physical layer diversity
+
+For better resiliency, plan to establish multiple paths between the on-premises edge and the peering locations (provider/Microsoft edge locations). This configuration can be achieved by using different service providers or by routing through another peering location from the on-premises network. For high availability, it's essential to maintain the redundancy of the ExpressRoute circuit throughout the end-to-end network architecture, including redundancy within your on-premises network and within your service provider. Ensuring redundancy in these parts of your architecture means there's no single point of failure.
+
+#### Ensure BFD (Bidirectional Forwarding Detection) is enabled and configured
+
+Enabling Bidirectional Forwarding Detection (BFD) over ExpressRoute can accelerate the link failure detection between the MSEE devices and the routers on which your ExpressRoute circuit is configured. Microsoft recommends configuring the Customer Premises Edge (CPE) devices with BFD. ExpressRoute can be configured over your edge routing devices or your Partner Edge routing devices. BFD is enabled by default on the MSEE devices on the Microsoft side.
+
+### ExpressRoute Gateway recommendations
+
+#### Plan for Virtual Network Gateway
+
+Create [zone-redundant Virtual Network Gateways](../vpn-gateway/about-zone-redundant-vnet-gateways.md) for greater resiliency, and plan for Virtual Network Gateways in different regions for disaster recovery and high availability. With zone-redundant gateways, you benefit from zone resiliency when accessing your mission-critical and scalable services on Azure.
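+
+A minimal sketch of deploying a zone-redundant ExpressRoute gateway with Azure PowerShell, assuming an existing hub virtual network that already contains a `GatewaySubnet`; all names and the region are placeholders.
+
+```azurepowershell-interactive
+# Create a zone-redundant ExpressRoute gateway (ErGw2AZ) in an existing virtual network.
+# Gateway creation can take 45 minutes or more to complete.
+$vnet   = Get-AzVirtualNetwork -Name "hub-vnet" -ResourceGroupName "network-rg"
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
+$pip    = New-AzPublicIpAddress -Name "ergw-pip" -ResourceGroupName "network-rg" `
+            -Location "westeurope" -Sku Standard -AllocationMethod Static -Zone 1,2,3
+$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconf" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
+New-AzVirtualNetworkGateway -Name "er-gateway" -ResourceGroupName "network-rg" `
+  -Location "westeurope" -IpConfigurations $ipconf -GatewayType ExpressRoute -GatewaySku ErGw2AZ
+```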
+
+#### Migrate to zone-redundant ExpressRoute gateways
+
+The [guided gateway migration](gateway-migration.md) experience facilitates your migration from a Non-Az-Enabled SKU to an Az-Enabled SKU gateway. This feature allows for the creation of an additional virtual network gateway within the same gateway subnet. During the migration process, Azure transfers the control plane and data path configurations from your existing gateway to the new one.
+
+### Disaster recovery and high availability recommendations
+
+#### Use VPN Gateway as a backup for ExpressRoute
+
+Microsoft recommends the use of site-to-site VPN as a failover when an ExpressRoute circuit becomes unavailable. ExpressRoute is designed for high availability and there's no single point of failure within the Microsoft network. However, an ExpressRoute circuit can still become unavailable for various reasons, such as regional service degradation or natural disasters. A site-to-site VPN can be configured as a secure failover path for ExpressRoute. If the ExpressRoute circuit becomes unavailable, traffic is automatically routed through the site-to-site VPN, ensuring that your connection to the Azure network remains available. For more information, see [using site-to-site VPN as a backup for Azure ExpressRoute](use-s2s-vpn-as-backup-for-expressroute-privatepeering.md).
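+
+A minimal sketch of adding the backup site-to-site connection with Azure PowerShell, assuming a route-based VPN gateway and a local network gateway already exist in the same virtual network as the ExpressRoute gateway; all names and the shared key are placeholders.
+
+```azurepowershell-interactive
+# Add a site-to-site VPN connection as a backup path (placeholder names).
+$vpngw = Get-AzVirtualNetworkGateway -Name "vpn-gateway" -ResourceGroupName "network-rg"
+$lng   = Get-AzLocalNetworkGateway -Name "onprem-site" -ResourceGroupName "network-rg"
+New-AzVirtualNetworkGatewayConnection -Name "s2s-backup" -ResourceGroupName "network-rg" `
+  -Location "westeurope" -VirtualNetworkGateway1 $vpngw -LocalNetworkGateway2 $lng `
+  -ConnectionType IPsec -SharedKey "ReplaceWithYourSharedKey"
+```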
+
+#### Enable high availability and disaster recovery
+
+To maximize availability, both the customer and service provider segments of your ExpressRoute circuit should be architected for availability and resiliency. For disaster recovery, plan for scenarios such as regional service outages due to natural calamities. Implement a robust disaster recovery design with multiple circuits configured through different peering locations in different regions. To learn more, see [Designing for disaster recovery](designing-for-disaster-recovery-with-expressroute-privatepeering.md).
+
+#### Plan for geo-redundancy
+
+For disaster recovery planning, we recommend setting up ExpressRoute circuits in multiple peering locations and regions. ExpressRoute circuits can be created in the same metropolitan area or in different metropolitan areas, and different service providers can be used for diverse paths through each circuit. Geo-redundant ExpressRoute circuits provide robust backend network connectivity for disaster recovery. To learn more, see [Designing for high availability](designing-for-high-availability-with-expressroute.md).
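+
+To illustrate, once a secondary circuit is provisioned in another peering location, you can connect it to the same gateway and influence path preference with a routing weight; a sketch with placeholder names follows.
+
+```azurepowershell-interactive
+# Connect a geo-redundant (secondary) circuit to the existing gateway.
+# A higher RoutingWeight makes a connection the preferred path.
+$gw       = Get-AzVirtualNetworkGateway -Name "er-gateway" -ResourceGroupName "network-rg"
+$circuit2 = Get-AzExpressRouteCircuit -Name "er-circuit-2" -ResourceGroupName "network-rg"
+New-AzVirtualNetworkGatewayConnection -Name "conn-secondary" -ResourceGroupName "network-rg" `
+  -Location "westeurope" -VirtualNetworkGateway1 $gw -PeerId $circuit2.Id `
+  -ConnectionType ExpressRoute -RoutingWeight 10
+```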
+
+#### Virtual network peering for connectivity between virtual networks
+
+Virtual Network (VNet) Peering provides a more efficient and direct method, enabling Azure services to communicate across virtual networks without the need for a virtual network gateway, extra hops, or transit over the public internet. To establish connectivity between virtual networks, VNet peering should be implemented for the best performance possible. For more information, see [About Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md) and [Manage VNet peering](../virtual-network/virtual-network-manage-peering.md).
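+
+A minimal sketch of peering two virtual networks with Azure PowerShell (placeholder names); peering must be created in both directions.
+
+```azurepowershell-interactive
+# Peer two virtual networks in both directions (placeholder names).
+$vnet1 = Get-AzVirtualNetwork -Name "hub-vnet" -ResourceGroupName "network-rg"
+$vnet2 = Get-AzVirtualNetwork -Name "spoke-vnet" -ResourceGroupName "network-rg"
+Add-AzVirtualNetworkPeering -Name "hub-to-spoke" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
+Add-AzVirtualNetworkPeering -Name "spoke-to-hub" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id
+```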
+
+### Monitoring and alerting recommendations
+
+#### Configure monitoring & alerting for ExpressRoute circuits
+
+As a baseline, we recommend configuring [Network Insights](expressroute-network-insights.md) within Azure Monitor to view all ExpressRoute circuit metrics, including ExpressRoute Direct and Global Reach. Within the circuits card you can visualize topologies and dependencies for peerings, connections, and gateways. The insights available for circuits include availability, throughput, and packet drops.
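+
+For an ad-hoc check outside the portal, circuit metrics can also be pulled with Azure PowerShell; a sketch with placeholder names, assuming the `BgpAvailability` and `BitsInPerSecond` circuit metrics (confirm the exact metric names in the metrics reference for your circuit).
+
+```azurepowershell-interactive
+# Pull recent BGP availability and ingress throughput metrics for a circuit.
+$circuit = Get-AzExpressRouteCircuit -Name "er-circuit-1" -ResourceGroupName "network-rg"
+Get-AzMetric -ResourceId $circuit.Id -MetricName "BgpAvailability" -TimeGrain 00:05:00
+Get-AzMetric -ResourceId $circuit.Id -MetricName "BitsInPerSecond" -TimeGrain 00:05:00
+```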
+
+#### Configure service health alerts for ExpressRoute circuit maintenance notifications
+
+ExpressRoute uses [Azure Service Health](../service-health/overview.md) to notify you of planned and upcoming [ExpressRoute circuit maintenance](maintenance-alerts.md). With Service Health, you can view planned and past maintenance in the Azure portal and configure alerts and notifications that best suit your needs.
+
+#### Configure connection monitor for ExpressRoute
+
+[Connection Monitor](how-to-configure-connection-monitor.md) is a cloud-based network monitoring solution that monitors connectivity between Azure cloud deployments and on-premises locations (Branch offices, etc.). Connection Monitor is an agent-based solution.
+
+#### Configure gateway health monitoring & alerting
+
+[Set up monitoring](expressroute-monitoring-metrics-alerts.md#expressroute-gateways) using Azure Monitor for ExpressRoute Gateway availability, performance, and scalability. When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. Multiple [gateway metrics](expressroute-monitoring-metrics-alerts.md#expressroute-virtual-network-gateway-metrics) are available to help you better understand the performance of your gateway.
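+
+To discover which gateway metrics are available for alerting, you can enumerate the metric definitions on the gateway resource; a minimal sketch with placeholder names.
+
+```azurepowershell-interactive
+# List the metric names exposed by an ExpressRoute gateway,
+# for example CPU utilization and packets per second.
+$gw = Get-AzVirtualNetworkGateway -Name "er-gateway" -ResourceGroupName "network-rg"
+Get-AzMetricDefinition -ResourceId $gw.Id | ForEach-Object { $_.Name.Value }
+```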
+
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
Sign in to the Azure portal with this [Preview link](https://aka.ms/expressroute
| Provider (Provider port type)| Select the internet service provider who you are requesting your service from. | | ExpressRoute Direct resource (Direct port type) | Select the ExpressRoute Direct resource that you want to use. | | Bandwidth | Select the bandwidth for the ExpressRoute circuit. |
- | SKU | Select the SKU for the ExpressRoute circuit. You can specify **Local** to get the local SKU, **Standard** to get the standard SKU or **Premium** for the premium add-on. You can change between Local, Standard and Premium. |
+ | SKU | Select the SKU for the ExpressRoute circuit. You can specify **Local** to get the local SKU, **Standard** to get the standard SKU or **Premium** for the premium add-on. You can change between Local, Standard, and Premium. |
| Billing model | Select the billing type for egress data charge. You can specify **Metered** for a metered data plan and **Unlimited** for an unlimited data plan. You can change the billing type from **Metered** to **Unlimited**. | > [!IMPORTANT]
Sign in to the Azure portal with this [Preview link](https://aka.ms/expressroute
> * You can't change the SKU from **Standard/Premium** to **Local** in Azure portal. To downgrade the SKU to **Local**, you can use [Azure PowerShell](expressroute-howto-circuit-arm.md) or [Azure CLI](howto-circuit-cli.md). > * You can't change the type from **Unlimited** to **Metered**.
- Complete the same information for the second ExpressRoute circuit. When selecting an ExpressRoute location for the second circuit, you are provided with distances information from the first ExpressRoute location. This information can help you select the second ExpressRoute location.
+ Complete the same information for the second ExpressRoute circuit. When selecting an ExpressRoute location for the second circuit, you're provided with distances information from the first ExpressRoute location. This information can help you select the second ExpressRoute location.
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/peering-location-distance.png" alt-text="Screenshot of distance information from first ExpressRoute circuit.":::
+ **High Resiliency**
+
+ For high resiliency, select one of the supported ExpressRoute Metro service providers and the corresponding **Peering location**. For example, **Megaport** as the *Provider* and **Amsterdam Metro** as the *Peering location*. For more information, see [ExpressRoute Metro](metro.md).
+ **Standard Resiliency** For standard resiliency, you only need to enter information for one ExpressRoute circuit.
From a browser, sign in to the [Azure portal](https://portal.azure.com) and sign
| Create new or import from classic | Select if you're creating a new circuit or if you're migrating a classic circuit to Azure Resource Manager. | | Provider | Select the internet service provider who you are requesting your service from. | | Peering Location | Select the physical location where you're peering with Microsoft. |
- | SKU | Select the SKU for the ExpressRoute circuit. You can specify **Local** to get the local SKU, **Standard** to get the standard SKU or **Premium** for the premium add-on. You can change between Local, Standard and Premium. |
+ | SKU | Select the SKU for the ExpressRoute circuit. You can specify **Local** to get the local SKU, **Standard** to get the standard SKU or **Premium** for the premium add-on. You can change between Local, Standard, and Premium. |
| Billing model | Select the billing type for egress data charge. You can specify **Metered** for a metered data plan and **Unlimited** for an unlimited data plan. You can change the billing type from **Metered** to **Unlimited**. | | Allow classic operations | Enable this option to allow classic virtual networks to link to the circuit. |
expressroute Expressroute Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-introduction.md
Each ExpressRoute circuit consists of two connections to two Microsoft Enterpris
### Resiliency
-Microsoft offers multiple ExpressRoute peering locations in many geopolitical regions. To ensure maximum resiliency, Microsoft recommends that you connect to two ExpressRoute circuits in two peering locations. For non-production and non-critical workloads, you can achieve standard resiliency by connecting to a single ExpressRoute circuit that offers redundant connections within a single peering location. The Azure portal provides a guided experience to help you create a resilient ExpressRoute configuration. For Azure PowerShell, CLI, ARM template, Terraform, and Bicep, maximum resiliency can be achieved by creating a second ExpressRoute circuit in a different ExpressRoute location and establishing a connection to it. For more information, see [Create maximum resiliency with ExpressRoute](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview).
+Microsoft offers multiple ExpressRoute peering locations in many geopolitical regions. For maximum resiliency, Microsoft recommends that you establish connections to two ExpressRoute circuits in two peering locations. If ExpressRoute Metro is available with your service provider and in your preferred peering location, you can achieve a higher level of resiliency compared to a standard ExpressRoute circuit. For non-production and non-critical workloads, you can achieve standard resiliency by connecting to a single ExpressRoute circuit that offers redundant connections within a single peering location. The Azure portal provides a guided experience to help you create a resilient ExpressRoute configuration. For Azure PowerShell, CLI, ARM template, Terraform, and Bicep, maximum resiliency can be achieved by creating a second ExpressRoute circuit in a different ExpressRoute location and establishing a connection to it. For more information, see [Create maximum resiliency with ExpressRoute](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview).
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/maximum-resiliency.png" alt-text="Diagram of maximum resiliency for an ExpressRoute connection.":::
expressroute Metro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/metro.md
+
+ Title: About ExpressRoute Metro (preview)
+description: This article provides an overview of ExpressRoute Metro and how it works.
++++ Last updated : 04/01/2024++++
+# About ExpressRoute Metro (preview)
+
+> [!IMPORTANT]
+> ExpressRoute Metro is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+ExpressRoute facilitates the creation of private connections between your on-premises networks and Azure workloads at designated peering locations. These locations are colocation facilities that house Microsoft Enterprise Edge (MSEE) devices and serve as the gateway to Microsoft's network.
+
+Within the peering location, two types of connections can be established:
+
+* **ExpressRoute circuit** - an ExpressRoute circuit consists of two logical connections between your on-premises network and Azure. These connections are made through a pair of physical links provided by an ExpressRoute partner, such as AT&T, Verizon, or Equinix.
+
+* **ExpressRoute Direct** - ExpressRoute Direct is a dedicated and private connection between your on-premises network and Azure, eliminating the need for partner provider involvement. It enables the direct connection of your routers to the Microsoft global network using dual 10-Gbps or 100-Gbps ports. A provisioning sketch follows this list.
+
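+For reference, the sketch below provisions an ExpressRoute Direct resource with Azure PowerShell; the peering location, encapsulation, and bandwidth values are placeholders to adjust for your own deployment.
+
+```azurepowershell-interactive
+# Provision an ExpressRoute Direct resource with a pair of 100-Gbps ports (placeholder values).
+New-AzExpressRoutePort -Name "er-direct-1" `
+  -ResourceGroupName "network-rg" `
+  -Location "West Europe" `
+  -PeeringLocation "Amsterdam" `
+  -BandwidthInGbps 100 `
+  -Encapsulation QinQ
+```
+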
+The standard ExpressRoute configuration is set up with a pair of links to enhance the reliability of your ExpressRoute connection. This setup is designed to provide redundancy and improve the availability of your ExpressRoute connections during hardware failures, maintenance events, or other unforeseen incidents within the peering locations. However, these redundant connections don't provide resilience against events that disrupt or isolate the edge location where the MSEE devices are located. Such disruptions could potentially lead to a complete loss of connectivity from your on-premises networks to your cloud services.
+
+## ExpressRoute Metro
+
+ExpressRoute Metro (preview) is a high-resiliency configuration designed to provide multi-site redundancy. This configuration allows you to benefit from a dual-homed setup that facilitates diverse connections to two distinct ExpressRoute peering locations within a city. The high resiliency configuration benefits from the redundancy across the two peering locations to offer higher availability and resilience for your connectivity from your on-premises to resources in Azure.
+
+Key features of ExpressRoute Metro include:
+
+* Dual-homed connections to two distinct ExpressRoute peering locations within the same city.
+* Increased availability and resiliency for your ExpressRoute circuits.
+* Seamless connectivity from your on-premises environment to Azure resources through an ExpressRoute circuit, either with the assistance of a connectivity provider or with ExpressRoute Direct (dual 10-Gbps or 100-Gbps ports).
+
+The following diagram compares a standard ExpressRoute circuit with an ExpressRoute Metro circuit.
++
+## ExpressRoute Metro locations
+
+| Metro location | Peering locations | Location address | Zone | Local Azure Region | ER Direct | Service Provider |
+|--|--|--|--|--|--|--|
+| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Equinix AMS8 | 1 | West Europe | &check; | Megaport<br>Equinix<sup>1</sup><br>Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Realty<sup>1</sup> |
+| Singapore Metro | Singapore<br>Singapore2 | Equinix SG1<br>Global Switch Tai Seng | 2 | Southeast Asia | &check; | Megaport<sup>1</sup><br>Equinix<sup>1</sup><br>Console Connect<sup>1</sup> |
+| Zurich Metro | Zurich<br>Zurich2 | Interxion ZUR2<br>Equinix ZH5 | 1 | Switzerland North | &check; | Colt<sup>1</sup><br>Digital Realty<sup>1</sup> |
+
+<sup>1</sup> These service providers will be available in the future.
+
+> [!NOTE]
+> The naming convention for Metro sites will utilize `City` and `City2` to denote the two unique peering locations within the same metropolitan region. As an illustration, Amsterdam and Amsterdam2 are indicative of the two separate peering locations within the metropolitan area of Amsterdam. In the Azure portal, these locations will be referred to as `Amsterdam Metro`.
+
+## Configure ExpressRoute Metro
+
+### Create an ExpressRoute Metro circuit
+
+You can create an ExpressRoute Metro circuit in the Azure portal in any of the three metropolitan areas. Within the portal, specify one of the Metro peering locations and the corresponding service provider supported in that location. For more information, see [Create an ExpressRoute circuit](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview).
++
+### Create a Metro ExpressRoute Direct
+
+1. A Metro ExpressRoute Direct port can be created in the Azure portal. Within the portal, specify one of the Metro peering locations. For more information, see [Create an ExpressRoute Direct](how-to-expressroute-direct-portal.md).
+
+ :::image type="content" source="./media/metro/create-metro-direct.png" alt-text="Screenshot of creating Metro ExpressRoute Direct ports.":::
+
+1. Once you've provisioned the Metro ExpressRoute Direct ports, you can download the Letter of Authorization (LOA), obtain the Meet-Me-Room details, and extend your physical cross-connects.
+
+ :::image type="content" source="./media/metro/generate-letter-of-authorization.png" alt-text="Screenshot of generating letter of authorization.":::
+
+## Next steps
+
+* Review [ExpressRoute partners and peering locations](expressroute-locations.md) to understand the available ExpressRoute partners and peering locations.
+* Review [ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/) to understand the costs associated with ExpressRoute.
+* Review [Design architecture for ExpressRoute resiliency](design-architecture-for-resiliency.md) to understand the design considerations for ExpressRoute.
frontdoor Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md
Azure Front Door Standard and Premium tier bring the latest cloud delivery netwo
> [!NOTE] > The **Configure WAF policy upgrades** link will only appear if you have WAF policies associated to the Front Door (classic) profile.
- For each WAF policy associated to the Front Door (classic) profile select an action. You can make copy of the WAF policy that matches the tier you're migrating the Front Door profile to or you can use an existing compatible WAF policy. You may also change the WAF policy name from the default provided name. Once completed, select **Apply** to save your Front Door WAF settings.
+ For each WAF policy associated to the Front Door (classic) profile select an action. You can make a copy of the WAF policy that matches the tier you're migrating the Front Door profile to or you can use an existing compatible WAF policy. You may also change the WAF policy name from the default provided name. Once completed, select **Apply** to save your Front Door WAF settings.
:::image type="content" source="./media/migrate-tier/waf-policy.png" alt-text="Screenshot of the upgrade WAF policy screen.":::
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
Search-AzGraph -Query "Resources | distinct type, apiVersion | where isnotnull(a
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20distinct%20type%2C%20apiVersion%0D%0A%7C%20where%20isnotnull%28apiVersion%29%0D%0A%7C%20order%20by%20type%20asc" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20distinct%20type%2C%20apiVersion%0D%0A%7C%20where%20isnotnull%28apiVersion%29%0D%0A%7C%20order%20by%20type%20asc" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type=~ 'microsoft.compute/virtualmachin
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%3D~%20%27microsoft.compute%2Fvirtualmachinescalesets%27%0D%0A%7C%20where%20name%20contains%20%27contoso%27%0D%0A%7C%20project%20subscriptionId%2C%20name%2C%20location%2C%20resourceGroup%2C%20Capacity%20%3D%20toint%28sku.capacity%29%2C%20Tier%20%3D%20sku.name%0D%0A%7C%20order%20by%20Capacity%20desc" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%3D~%20%27microsoft.compute%2Fvirtualmachinescalesets%27%0D%0A%7C%20where%20name%20contains%20%27contoso%27%0D%0A%7C%20project%20subscriptionId%2C%20name%2C%20location%2C%20resourceGroup%2C%20Capacity%20%3D%20toint%28sku.capacity%29%2C%20Tier%20%3D%20sku.name%0D%0A%7C%20order%20by%20Capacity%20desc" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | summarize resourceCount=count() by subscripti
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20summarize%20resourceCount%3Dcount%28%29%20by%20subscriptionId%0D%0A%7C%20join%20%28ResourceContainers%20%7C%20where%20type%3D%3D%27microsoft.resources%2Fsubscriptions%27%20%7C%20project%20SubName%3Dname%2C%20subscriptionId%29%20on%20subscriptionId%0D%0A%7C%20project-away%20subscriptionId%2C%20subscriptionId1" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20summarize%20resourceCount%3Dcount%28%29%20by%20subscriptionId%0D%0A%7C%20join%20%28ResourceContainers%20%7C%20where%20type%3D%3D%27microsoft.resources%2Fsubscriptions%27%20%7C%20project%20SubName%3Dname%2C%20subscriptionId%29%20on%20subscriptionId%0D%0A%7C%20project-away%20subscriptionId%2C%20subscriptionId1" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | project tags | summarize buildschema(tags)"
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20project%20tags%0D%0A%7C%20summarize%20buildschema%28tags%29" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20project%20tags%0D%0A%7C%20summarize%20buildschema%28tags%29" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.compute/virtualmachi
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%20and%20name%20matches%20regex%20%40%27%5EContoso%28.%2A%29%5B0-9%5D%2B%24%27%0D%0A%7C%20project%20name%0D%0A%7C%20order%20by%20name%20asc" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%20and%20name%20matches%20regex%20%40%27%5EContoso%28.%2A%29%5B0-9%5D%2B%24%27%0D%0A%7C%20project%20name%0D%0A%7C%20order%20by%20name%20asc" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.documentdb/databasea
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.documentdb%2Fdatabaseaccounts%27%0D%0A%7C%20project%20id%2C%20name%2C%20writeLocations%20%3D%20%28properties.writeLocations%29%0D%0A%7C%20mv-expand%20writeLocations%0D%0A%7C%20project%20id%2C%20name%2C%20writeLocation%20%3D%20tostring%28writeLocations.locationName%29%0D%0A%7C%20where%20writeLocation%20in%20%28%27East%20US%27%2C%20%27West%20US%27%29%0D%0A%7C%20summarize%20by%20id%2C%20name" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.documentdb%2Fdatabaseaccounts%27%0D%0A%7C%20project%20id%2C%20name%2C%20writeLocations%20%3D%20%28properties.writeLocations%29%0D%0A%7C%20mv-expand%20writeLocations%0D%0A%7C%20project%20id%2C%20name%2C%20writeLocation%20%3D%20tostring%28writeLocations.locationName%29%0D%0A%7C%20where%20writeLocation%20in%20%28%27East%20US%27%2C%20%27West%20US%27%29%0D%0A%7C%20summarize%20by%20id%2C%20name" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | join kind=leftouter (ResourceContainers | whe
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20join%20kind%3Dleftouter%20%28ResourceContainers%20%7C%20where%20type%3D%3D%27microsoft.resources%2Fsubscriptions%27%20%7C%20project%20SubName%3Dname%2C%20subscriptionId%29%20on%20subscriptionId%0D%0A%7C%20where%20type%20%3D%3D%20%27microsoft.keyvault%2Fvaults%27%0D%0A%7C%20project%20type%2C%20name%2C%20SubName" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20join%20kind%3Dleftouter%20%28ResourceContainers%20%7C%20where%20type%3D%3D%27microsoft.resources%2Fsubscriptions%27%20%7C%20project%20SubName%3Dname%2C%20subscriptionId%29%20on%20subscriptionId%0D%0A%7C%20where%20type%20%3D%3D%20%27microsoft.keyvault%2Fvaults%27%0D%0A%7C%20project%20type%2C%20name%2C%20SubName" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.sql/servers/database
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.sql%2Fservers%2Fdatabases%27%0D%0A%7C%20project%20databaseId%20%3D%20id%2C%20databaseName%20%3D%20name%2C%20elasticPoolId%20%3D%20tolower%28tostring%28properties.elasticPoolId%29%29%0D%0A%7C%20join%20kind%3Dleftouter%20%28%0D%0A%20%20%20%20Resources%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.sql%2Fservers%2Felasticpools%27%0D%0A%20%20%20%20%7C%20project%20elasticPoolId%20%3D%20tolower%28id%29%2C%20elasticPoolName%20%3D%20name%2C%20elasticPoolState%20%3D%20properties.state%29%0D%0Aon%20elasticPoolId%0D%0A%7C%20project-away%20elasticPoolId1" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.sql%2Fservers%2Fdatabases%27%0D%0A%7C%20project%20databaseId%20%3D%20id%2C%20databaseName%20%3D%20name%2C%20elasticPoolId%20%3D%20tolower%28tostring%28properties.elasticPoolId%29%29%0D%0A%7C%20join%20kind%3Dleftouter%20%28%0D%0A%20%20%20%20Resources%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.sql%2Fservers%2Felasticpools%27%0D%0A%20%20%20%20%7C%20project%20elasticPoolId%20%3D%20tolower%28id%29%2C%20elasticPoolName%20%3D%20name%2C%20elasticPoolState%20%3D%20properties.state%29%0D%0Aon%20elasticPoolId%0D%0A%7C%20project-away%20elasticPoolId1" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.compute/virtualmachi
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%0D%0A%7C%20extend%20nics%3Darray_length%28properties.networkProfile.networkInterfaces%29%20%0D%0A%7C%20mv-expand%20nic%3Dproperties.networkProfile.networkInterfaces%20%0D%0A%7C%20where%20nics%20%3D%3D%201%20or%20nic.properties.primary%20%3D~%20%27true%27%20or%20isempty%28nic%29%20%0D%0A%7C%20project%20vmId%20%3D%20id%2C%20vmName%20%3D%20name%2C%20vmSize%3Dtostring%28properties.hardwareProfile.vmSize%29%2C%20nicId%20%3D%20tostring%28nic.id%29%20%0D%0A%7C%20join%20kind%3Dleftouter%20%28%0D%0A%20%20%20%20Resources%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%0D%0A%20%20%20%20%7C%20extend%20ipConfigsCount%3Darray_length%28properties.ipConfigurations%29%20%0D%0A%20%20%20%20%7C%20mv-expand%20ipconfig%3Dproperties.ipConfigurations%20%0D%0A%20%20%20%20%7C%20where%20ipConfigsCount%20%3D%3D%201%20or%20ipconfig.properties.primary%20%3D~%20%27true%27%0D%0A%20%20%20%20%7C%20project%20nicId%20%3D%20id%2C%20publicIpId%20%3D%20tostring%28ipconfig.properties.publicIPAddress.id%29%29%0D%0Aon%20nicId%0D%0A%7C%20project-away%20nicId1%0D%0A%7C%20summarize%20by%20vmId%2C%20vmName%2C%20vmSize%2C%20nicId%2C%20publicIpId%0D%0A%7C%20join%20kind%3Dleftouter%20%28%0D%0A%20%20%20%20Resources%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fpublicipaddresses%27%0D%0A%20%20%20%20%7C%20project%20publicIpId%20%3D%20id%2C%20publicIpAddress%20%3D%20properties.ipAddress%29%0D%0Aon%20publicIpId%0D%0A%7C%20project-away%20publicIpId1" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%0D%0A%7C%20extend%20nics%3Darray_length%28properties.networkProfile.networkInterfaces%29%20%0D%0A%7C%20mv-expand%20nic%3Dproperties.networkProfile.networkInterfaces%20%0D%0A%7C%20where%20nics%20%3D%3D%201%20or%20nic.properties.primary%20%3D~%20%27true%27%20or%20isempty%28nic%29%20%0D%0A%7C%20project%20vmId%20%3D%20id%2C%20vmName%20%3D%20name%2C%20vmSize%3Dtostring%28properties.hardwareProfile.vmSize%29%2C%20nicId%20%3D%20tostring%28nic.id%29%20%0D%0A%7C%20join%20kind%3Dleftouter%20%28%0D%0A%20%20%20%20Resources%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%0D%0A%20%20%20%20%7C%20extend%20ipConfigsCount%3Darray_length%28properties.ipConfigurations%29%20%0D%0A%20%20%20%20%7C%20mv-expand%20ipconfig%3Dproperties.ipConfigurations%20%0D%0A%20%20%20%20%7C%20where%20ipConfigsCount%20%3D%3D%201%20or%20ipconfig.properties.primary%20%3D~%20%27true%27%0D%0A%20%20%20%20%7C%20project%20nicId%20%3D%20id%2C%20publicIpId%20%3D%20tostring%28ipconfig.properties.publicIPAddress.id%29%29%0D%0Aon%20nicId%0D%0A%7C%20project-away%20nicId1%0D%0A%7C%20summarize%20by%20vmId%2C%20vmName%2C%20vmSize%2C%20nicId%2C%20publicIpId%0D%0A%7C%20join%20kind%3Dleftouter%20%28%0D%0A%20%20%20%20Resources%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fpublicipaddresses%27%0D%0A%20%20%20%20%7C%20project%20publicIpId%20%3D%20id%2C%20publicIpAddress%20%3D%20properties.ipAddress%29%0D%0Aon%20publicIpId%0D%0A%7C%20project-away%20publicIpId1" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type == 'microsoft.compute/virtualmachi
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D%3D%20'microsoft.compute%2Fvirtualmachines'%0A%7C%20extend%0A%20%20%20%20JoinID%20%3D%20toupper(id)%2C%0A%20%20%20%20OSName%20%3D%20tostring(properties.osProfile.computerName)%2C%0A%20%20%20%20OSType%20%3D%20tostring(properties.storageProfile.osDisk.osType)%2C%0A%20%20%20%20VMSize%20%3D%20tostring(properties.hardwareProfile.vmSize)%0A%7C%20join%20kind%3Dleftouter(%0A%20%20%20%20Resources%0A%20%20%20%20%7C%20where%20type%20%3D%3D%20'microsoft.compute%2Fvirtualmachines%2Fextensions'%0A%20%20%20%20%7C%20extend%20%0A%20%20%20%20%20%20%20%20VMId%20%3D%20toupper(substring(id%2C%200%2C%20indexof(id%2C%20'%2Fextensions')))%2C%0A%20%20%20%20%20%20%20%20ExtensionName%20%3D%20name%0A)%20on%20%24left.JoinID%20%3D%3D%20%24right.VMId%0A%7C%20summarize%20Extensions%20%3D%20make_list(ExtensionName)%20by%20id%2C%20OSName%2C%20OSType%2C%20VMSize%0A%7C%20order%20by%20tolower(OSName)%20asc" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D%3D%20'microsoft.compute%2Fvirtualmachines'%0A%7C%20extend%0A%20%20%20%20JoinID%20%3D%20toupper(id)%2C%0A%20%20%20%20OSName%20%3D%20tostring(properties.osProfile.computerName)%2C%0A%20%20%20%20OSType%20%3D%20tostring(properties.storageProfile.osDisk.osType)%2C%0A%20%20%20%20VMSize%20%3D%20tostring(properties.hardwareProfile.vmSize)%0A%7C%20join%20kind%3Dleftouter(%0A%20%20%20%20Resources%0A%20%20%20%20%7C%20where%20type%20%3D%3D%20'microsoft.compute%2Fvirtualmachines%2Fextensions'%0A%20%20%20%20%7C%20extend%20%0A%20%20%20%20%20%20%20%20VMId%20%3D%20toupper(substring(id%2C%200%2C%20indexof(id%2C%20'%2Fextensions')))%2C%0A%20%20%20%20%20%20%20%20ExtensionName%20%3D%20name%0A)%20on%20%24left.JoinID%20%3D%3D%20%24right.VMId%0A%7C%20summarize%20Extensions%20%3D%20make_list(ExtensionName)%20by%20id%2C%20OSName%2C%20OSType%2C%20VMSize%0A%7C%20order%20by%20tolower(OSName)%20asc" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.storage/storageaccou
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.storage%2Fstorageaccounts%27%0D%0A%7C%20join%20kind%3Dinner%20%28%0D%0A%20%20%20%20ResourceContainers%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.resources%2Fsubscriptions%2Fresourcegroups%27%0D%0A%20%20%20%20%7C%20where%20tags%5B%27Key1%27%5D%20%3D~%20%27Value1%27%0D%0A%20%20%20%20%7C%20project%20subscriptionId%2C%20resourceGroup%29%0D%0Aon%20subscriptionId%2C%20resourceGroup%0D%0A%7C%20project-away%20subscriptionId1%2C%20resourceGroup1" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.storage%2Fstorageaccounts%27%0D%0A%7C%20join%20kind%3Dinner%20%28%0D%0A%20%20%20%20ResourceContainers%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.resources%2Fsubscriptions%2Fresourcegroups%27%0D%0A%20%20%20%20%7C%20where%20tags%5B%27Key1%27%5D%20%3D~%20%27Value1%27%0D%0A%20%20%20%20%7C%20project%20subscriptionId%2C%20resourceGroup%29%0D%0Aon%20subscriptionId%2C%20resourceGroup%0D%0A%7C%20project-away%20subscriptionId1%2C%20resourceGroup1" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.storage/storageaccou
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.storage%2Fstorageaccounts%27%0D%0A%7C%20join%20kind%3Dinner%20%28%0D%0A%20%20%20%20ResourceContainers%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.resources%2Fsubscriptions%2Fresourcegroups%27%0D%0A%20%20%20%20%7C%20mv-expand%20bagexpansion%3Darray%20tags%0D%0A%20%20%20%20%7C%20where%20isnotempty%28tags%29%0D%0A%20%20%20%20%7C%20where%20tags%5B0%5D%20%3D~%20%27key1%27%20and%20tags%5B1%5D%20%3D~%20%27value1%27%0D%0A%20%20%20%20%7C%20project%20subscriptionId%2C%20resourceGroup%29%0D%0Aon%20subscriptionId%2C%20resourceGroup%0D%0A%7C%20project-away%20subscriptionId1%2C%20resourceGroup1" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.storage%2Fstorageaccounts%27%0D%0A%7C%20join%20kind%3Dinner%20%28%0D%0A%20%20%20%20ResourceContainers%0D%0A%20%20%20%20%7C%20where%20type%20%3D~%20%27microsoft.resources%2Fsubscriptions%2Fresourcegroups%27%0D%0A%20%20%20%20%7C%20mv-expand%20bagexpansion%3Darray%20tags%0D%0A%20%20%20%20%7C%20where%20isnotempty%28tags%29%0D%0A%20%20%20%20%7C%20where%20tags%5B0%5D%20%3D~%20%27key1%27%20and%20tags%5B1%5D%20%3D~%20%27value1%27%0D%0A%20%20%20%20%7C%20project%20subscriptionId%2C%20resourceGroup%29%0D%0Aon%20subscriptionId%2C%20resourceGroup%0D%0A%7C%20project-away%20subscriptionId1%2C%20resourceGroup1" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "ResourceContainers | where type=='microsoft.resources/sub
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/ResourceContainers%0D%0A%7C%20where%20type%3D%3D%27microsoft.resources%2Fsubscriptions%2Fresourcegroups%27%20%7C%20project%20name%2C%20type%20%20%7C%20limit%205%0D%0A%7C%20union%20%20%28Resources%20%7C%20project%20name%2C%20type%20%7C%20limit%205%29" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/ResourceContainers%0D%0A%7C%20where%20type%3D%3D%27microsoft.resources%2Fsubscriptions%2Fresourcegroups%27%20%7C%20project%20name%2C%20type%20%20%7C%20limit%205%0D%0A%7C%20union%20%20%28Resources%20%7C%20project%20name%2C%20type%20%7C%20limit%205%29" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.network/networkinter
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%0A%7C%20project%20id%2C%20ipConfigurations%20%3D%20properties.ipConfigurations%0A%7C%20mvexpand%20ipConfigurations%0A%7C%20project%20id%2C%20subnetId%20%3D%20tostring%28ipConfigurations.properties.subnet.id%29%0A%7C%20parse%20kind%3Dregex%20subnetId%20with%20%27%2FvirtualNetworks%2F%27%20virtualNetwork%20%27%2Fsubnets%2F%27%20subnet%20%0A%7C%20project%20id%2C%20virtualNetwork%2C%20subnet" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%0A%7C%20project%20id%2C%20ipConfigurations%20%3D%20properties.ipConfigurations%0A%7C%20mvexpand%20ipConfigurations%0A%7C%20project%20id%2C%20subnetId%20%3D%20tostring%28ipConfigurations.properties.subnet.id%29%0A%7C%20parse%20kind%3Dregex%20subnetId%20with%20%27%2FvirtualNetworks%2F%27%20virtualNetwork%20%27%2Fsubnets%2F%27%20subnet%20%0A%7C%20project%20id%2C%20virtualNetwork%2C%20subnet" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type == 'microsoft.compute/virtualmachi
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%20%7C%20where%20type%20%3D%3D%20%27microsoft.compute%2Fvirtualmachines%27%20%7C%20summarize%20count%28%29%20by%20tostring%28properties.extended.instanceView.powerState.code%29" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%20%7C%20where%20type%20%3D%3D%20%27microsoft.compute%2Fvirtualmachines%27%20%7C%20summarize%20count%28%29%20by%20tostring%28properties.extended.instanceView.powerState.code%29" target="_blank">portal.azure.us</a>
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/starter.md
Search-AzGraph -Query "Resources | summarize count()" -UseTenantScope
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20summarize%20count%28%29" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20summarize%20count%28%29" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.keyvault/vaults' | c
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.keyvault%2Fvaults%27%0D%0A%7C%20count" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.keyvault%2Fvaults%27%0D%0A%7C%20count" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | project name, type, location | order by name
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20project%20name%2C%20type%2C%20location%0D%0A%7C%20order%20by%20name%20asc" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20project%20name%2C%20type%2C%20location%0D%0A%7C%20order%20by%20name%20asc" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | project name, location, type| where type =~ '
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20project%20name%2C%20location%2C%20type%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20order%20by%20name%20desc" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20project%20name%2C%20location%2C%20type%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20order%20by%20name%20desc" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Compute/virtualMachi
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20project%20name%2C%20properties.storageProfile.osDisk.osType%0D%0A%7C%20top%205%20by%20name%20desc" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20project%20name%2C%20properties.storageProfile.osDisk.osType%0D%0A%7C%20top%205%20by%20name%20desc" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Compute/virtualMachi
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20summarize%20count%28%29%20by%20tostring%28properties.storageProfile.osDisk.osType%29" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20summarize%20count%28%29%20by%20tostring%28properties.storageProfile.osDisk.osType%29" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Compute/virtualMachi
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20extend%20os%20%3D%20properties.storageProfile.osDisk.osType%0D%0A%7C%20summarize%20count%28%29%20by%20tostring%28os%29" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Compute%2FvirtualMachines%27%0D%0A%7C%20extend%20os%20%3D%20properties.storageProfile.osDisk.osType%0D%0A%7C%20summarize%20count%28%29%20by%20tostring%28os%29" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type contains 'storage' | distinct type
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20contains%20%27storage%27%20%7C%20distinct%20type" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20contains%20%27storage%27%20%7C%20distinct%20type" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type == 'microsoft.network/virtualnetwo
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.network%2Fvirtualnetworks%27%0A%7C%20extend%20subnets%20%3D%20properties.subnets%0A%7C%20mv-expand%20subnets%0A%7C%20project%20name%2C%20subnets.name%2C%20subnets.properties.addressPrefix%2C%20location%2C%20resourceGroup%2C%20subscriptionId" target="_blank">portal.Azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.network%2Fvirtualnetworks%27%0A%7C%20extend%20subnets%20%3D%20properties.subnets%0A%7C%20mv-expand%20subnets%0A%7C%20project%20name%2C%20subnets.name%2C%20subnets.properties.addressPrefix%2C%20location%2C%20resourceGroup%2C%20subscriptionId" target="_blank">portal.Azure.us</a>
Search-AzGraph -Query "Resources | where type contains 'publicIPAddresses' and i
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty%28properties.ipAddress%29%0D%0A%7C%20project%20properties.ipAddress%0D%0A%7C%20limit%20100" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty%28properties.ipAddress%29%0D%0A%7C%20project%20properties.ipAddress%0D%0A%7C%20limit%20100" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type contains 'publicIPAddresses' and i
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty%28properties.ipAddress%29%0D%0A%7C%20summarize%20count%20%28%29%20by%20subscriptionId" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty%28properties.ipAddress%29%0D%0A%7C%20summarize%20count%20%28%29%20by%20subscriptionId" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where tags.environment=~'internal' | project
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20tags.environment%3D~%27internal%27%0D%0A%7C%20project%20name" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20tags.environment%3D~%27internal%27%0D%0A%7C%20project%20name" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where tags.environment=~'internal' | project
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20tags.environment%3D~%27internal%27%0D%0A%7C%20project%20name%2C%20tags" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20tags.environment%3D~%27internal%27%0D%0A%7C%20project%20name%2C%20tags" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Storage/storageAccou
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Storage%2FstorageAccounts%27%0D%0A%7C%20where%20tags%5B%27tag%20with%20a%20space%27%5D%3D%3D%27Custom%20value%27" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%27Microsoft.Storage%2FstorageAccounts%27%0D%0A%7C%20where%20tags%5B%27tag%20with%20a%20space%27%5D%3D%3D%27Custom%20value%27" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "ResourceContainers | where isnotempty(tags) | project tag
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/ResourceContainers%20%0A%7C%20where%20isnotempty%28tags%29%0A%7C%20project%20tags%0A%7C%20mvexpand%20tags%0A%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%7C%20union%20%28%0A%20%20%20%20resources%0A%20%20%20%20%7C%20where%20isnotempty%28tags%29%0A%20%20%20%20%7C%20project%20tags%0A%20%20%20%20%7C%20mvexpand%20tags%0A%20%20%20%20%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%20%20%20%20%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%29%0A%7C%20distinct%20tagKey%2C%20tagValue%0A%7C%20where%20tagKey%20%21startswith%20%22hidden-%22" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/ResourceContainers%20%0A%7C%20where%20isnotempty%28tags%29%0A%7C%20project%20tags%0A%7C%20mvexpand%20tags%0A%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%7C%20union%20%28%0A%20%20%20%20resources%0A%20%20%20%20%7C%20where%20isnotempty%28tags%29%0A%20%20%20%20%7C%20project%20tags%0A%20%20%20%20%7C%20mvexpand%20tags%0A%20%20%20%20%7C%20extend%20tagKey%20%3D%20tostring%28bag_keys%28tags%29%5B0%5D%29%0A%20%20%20%20%7C%20extend%20tagValue%20%3D%20tostring%28tags%5BtagKey%5D%29%0A%29%0A%7C%20distinct%20tagKey%2C%20tagValue%0A%7C%20where%20tagKey%20%21startswith%20%22hidden-%22" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "Resources | where type =~ 'microsoft.network/networksecur
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%22microsoft.network%2Fnetworksecuritygroups%22%20and%20isnull%28properties.networkInterfaces%29%20and%20isnull%28properties.subnets%29%0D%0A%7C%20project%20name%2C%20resourceGroup%0D%0A%7C%20sort%20by%20name%20asc" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0D%0A%7C%20where%20type%20%3D~%20%22microsoft.network%2Fnetworksecuritygroups%22%20and%20isnull%28properties.networkInterfaces%29%20and%20isnull%28properties.subnets%29%0D%0A%7C%20project%20name%2C%20resourceGroup%0D%0A%7C%20sort%20by%20name%20asc" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "alertsmanagementresources | where type =~ 'microsoft.aler
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/alertsmanagementresources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.alertsmanagement%2Falerts%27%0D%0A%7C%20where%20todatetime%28properties.essentials.startDateTime%29%20%3E%3D%20ago%282h%29%20and%20todatetime%28properties.essentials.startDateTime%29%20%3C%20now%28%29%0D%0A%7C%20project%20Severity%20%3D%20tostring%28properties.essentials.severity%29%0D%0A%7C%20summarize%20AlertsCount%20%3D%20count%28%29%20by%20Severity" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/alertsmanagementresources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.alertsmanagement%2Falerts%27%0D%0A%7C%20where%20todatetime%28properties.essentials.startDateTime%29%20%3E%3D%20ago%282h%29%20and%20todatetime%28properties.essentials.startDateTime%29%20%3C%20now%28%29%0D%0A%7C%20project%20Severity%20%3D%20tostring%28properties.essentials.severity%29%0D%0A%7C%20summarize%20AlertsCount%20%3D%20count%28%29%20by%20Severity" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "alertsmanagementresources | where type =~ 'microsoft.aler
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/alertsmanagementresources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.alertsmanagement%2Falerts%27%0D%0A%7C%20where%20todatetime%28properties.essentials.startDateTime%29%20%3E%3D%20ago%282h%29%20and%20todatetime%28properties.essentials.startDateTime%29%20%3C%20now%28%29%0D%0A%7C%20project%20Severity%20%3D%20tostring%28properties.essentials.severity%29%2C%20AlertState%20%3D%20tostring%28properties.essentials.alertState%29%0D%0A%7C%20summarize%20AlertsCount%20%3D%20count%28%29%20by%20Severity%2C%20AlertState" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/alertsmanagementresources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.alertsmanagement%2Falerts%27%0D%0A%7C%20where%20todatetime%28properties.essentials.startDateTime%29%20%3E%3D%20ago%282h%29%20and%20todatetime%28properties.essentials.startDateTime%29%20%3C%20now%28%29%0D%0A%7C%20project%20Severity%20%3D%20tostring%28properties.essentials.severity%29%2C%20AlertState%20%3D%20tostring%28properties.essentials.alertState%29%0D%0A%7C%20summarize%20AlertsCount%20%3D%20count%28%29%20by%20Severity%2C%20AlertState" target="_blank">portal.azure.us</a>
Search-AzGraph -Query "alertsmanagementresources | where type =~ 'microsoft.aler
# [Portal](#tab/azure-portal) + - Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/alertsmanagementresources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.alertsmanagement%2Falerts%27%0D%0A%7C%20where%20todatetime%28properties.essentials.startDateTime%29%20%3E%3D%20ago%282h%29%20and%20todatetime%28properties.essentials.startDateTime%29%20%3C%20now%28%29%0D%0A%7C%20project%20Severity%20%3D%20tostring%28properties.essentials.severity%29%2C%0D%0AMonitorCondition%20%3D%20tostring%28properties.essentials.monitorCondition%29%2C%0D%0AObjectState%20%3D%20tostring%28properties.essentials.alertState%29%2C%0D%0AMonitorService%20%3D%20tostring%28properties.essentials.monitorService%29%2C%0D%0AAlertRuleId%20%3D%20tostring%28properties.essentials.alertRule%29%2C%0D%0ASignalType%20%3D%20tostring%28properties.essentials.signalType%29%2C%0D%0ATargetResource%20%3D%20tostring%28properties.essentials.targetResourceName%29%2C%0D%0ATargetResourceType%20%3D%20tostring%28properties.essentials.targetResourceName%29%2C%20id%0D%0A%7C%20summarize%20AlertsCount%20%3D%20count%28%29%20by%20Severity%2C%20MonitorService%20%2C%20TargetResourceType" target="_blank">portal.azure.com</a> - Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/alertsmanagementresources%0D%0A%7C%20where%20type%20%3D~%20%27microsoft.alertsmanagement%2Falerts%27%0D%0A%7C%20where%20todatetime%28properties.essentials.startDateTime%29%20%3E%3D%20ago%282h%29%20and%20todatetime%28properties.essentials.startDateTime%29%20%3C%20now%28%29%0D%0A%7C%20project%20Severity%20%3D%20tostring%28properties.essentials.severity%29%2C%0D%0AMonitorCondition%20%3D%20tostring%28properties.essentials.monitorCondition%29%2C%0D%0AObjectState%20%3D%20tostring%28properties.essentials.alertState%29%2C%0D%0AMonitorService%20%3D%20tostring%28properties.essentials.monitorService%29%2C%0D%0AAlertRuleId%20%3D%20tostring%28properties.essentials.alertRule%29%2C%0D%0ASignalType%20%3D%20tostring%28properties.essentials.signalType%29%2C%0D%0ATargetResource%20%3D%20tostring%28properties.essentials.targetResourceName%29%2C%0D%0ATargetResourceType%20%3D%20tostring%28properties.essentials.targetResourceName%29%2C%20id%0D%0A%7C%20summarize%20AlertsCount%20%3D%20count%28%29%20by%20Severity%2C%20MonitorService%20%2C%20TargetResourceType" target="_blank">portal.azure.us</a>
hdinsight-aks Flink Job Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-job-management.md
Title: Apache Flink® job management in HDInsight on AKS
-description: HDInsight on AKS provides a feature to manage and submit Apache Flink jobs directly through the Azure portal
+description: HDInsight on AKS provides a feature to manage and submit Apache Flink jobs directly through the Azure portal.
Previously updated : 09/07/2023 Last updated : 04/01/2024 # Apache Flink® job management in HDInsight on AKS clusters
Portal --> HDInsight on AKS Cluster Pool --> Flink Cluster --> Settings --> Flin
| Entry class | Entry class for job from which job execution starts. | | Yes |
| Args | Argument for main program of job. Separate all arguments with spaces. | | No |
| parallelism | Job Flink Parallelism. | 2 | Yes |
- | savepoint.directory | Savepoint directory for job. It is recommended that users should create a new directory for job savepoint in storage account. | `abfs://<container>@<account>/<deployment-ID>/savepoints` | No |
+ | savepoint.directory | Savepoint directory for the job. It's recommended that users create a new directory for the job savepoint in the storage account. | `abfs://<container>@<account>/<deployment-ID>/savepoints` | No |
+ Once the job is launched, the job status on the portal is **RUNNING**. -- **Stop:** Stop job did not require any parameter, user can stop the job by selecting the action.
+- **Stop:** Stopping a job doesn't require any parameters; the user can stop the job by selecting the action.
:::image type="image" source="./media/flink-job-management/stop-job.png" alt-text="Screenshot shows how user can stop job." border="true" lightbox="./media/flink-job-management/stop-job.png":::
HDInsight on AKS supports user friendly ARM Rest APIs to submit job and manage j
#### Base URL format for Rest API
-See following URL for rest API, users need to replace subscription, resource group, cluster pool, cluster name and HDInsight on AKS API version in this before using it.
+See the following URL for the REST API; users need to replace the subscription, resource group, cluster pool, cluster name, and HDInsight on AKS API version before using it.
`https://management.azure.com/subscriptions/{{USER_SUBSCRIPTION}}/resourceGroups/{{USER_RESOURCE_GROUP}}/providers/Microsoft.HDInsight/clusterpools/{{CLUSER_POOL}}/clusters/{{FLINK_CLUSTER}}/runjob?api-version={{API_VERSION}}` Using this REST API, users can initiate new jobs, stop jobs, start jobs, create savepoints, cancel jobs, and delete jobs. The current API_VERSION is 2023-06-01-preview.
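For readers who'd rather script these calls, a minimal Python sketch of invoking this endpoint could look like the following. The request body is a hypothetical illustration based on the parameter tables in this article, not a verified contract, and the sketch assumes the `requests` and `azure-identity` packages:

```python
import requests
from azure.identity import DefaultAzureCredential

subscription = "<USER_SUBSCRIPTION>"
resource_group = "<USER_RESOURCE_GROUP>"
cluster_pool = "<CLUSTER_POOL>"
flink_cluster = "<FLINK_CLUSTER>"
api_version = "2023-06-01-preview"

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.HDInsight"
    f"/clusterpools/{cluster_pool}/clusters/{flink_cluster}"
    f"/runjob?api-version={api_version}"
)

# Bearer token for the ARM endpoint, as described in the authentication note that follows.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Hypothetical body for starting an existing job from its last successful savepoint;
# the field names mirror the tables in this article, but the exact shape is an assumption.
body = {"jobType": "FlinkJob", "jobName": "<job-name>", "action": "START"}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())
```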
To authenticate the Flink ARM Rest API, users need to get the bearer token or acces
|entryClass | Entry class for job from which job execution starts. | | Yes |
| args | Argument for main program of job. Separate arguments with space. | | No |
| parallelism | Job Flink Parallelism. | 2 | Yes |
- | savepoint.directory | Savepoint directory for job. It is recommended that users should create a new directory for job savepoint in storage account. | `abfs://<container>@<account>/<deployment-ID>/savepoints`| No |
+ | savepoint.directory | Savepoint directory for the job. It's recommended that users create a new directory for the job savepoint in the storage account. | `abfs://<container>@<account>/<deployment-ID>/savepoints`| No |
+ Example:
To authenticate the Flink ARM Rest API, users need to get the bearer token or acces
| jobType | Type of Job. It should be "FlinkJob". | Yes |
| jobName | Job Name that is used for launching the job. | Yes |
| action | It should be "START". | Yes |
- | savePointName | Save point name to start the job. It is optional property, by default start operation take last successful savepoint. | No |
+ | savePointName | Savepoint name to start the job. It's an optional property; by default, the start operation takes the last successful savepoint. | No |
+ **Example:**
To authenticate the Flink ARM Rest API, users need to get the bearer token or acces
| jobName | Job Name that is used for launching the job. | | Yes |
| action | It should be "UPDATE" always for new job launch. | | Yes |
| args | Job JVM arguments | | No |
- | savePointName | Save point name to start the job. It is optional property, by default start operation will take last successful savepoint.| | No |
+ | savePointName | Savepoint name to start the job. It's an optional property; by default, the start operation takes the last successful savepoint. | | No |
+ Example:
hdinsight How To Custom Configure Hdinsight Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/how-to-custom-configure-hdinsight-autoscale.md
Following are a few configurations that can be tuned to custom configure HDInsight
|yarn.max.scale.up.increment | Maximum number of nodes to scale up in one go|200 | Hadoop/Spark/Interactive Query|It has been tested with 200 nodes. We don't recommend setting this value to more than 200. It can be set to less than 200 if the customer wants less aggressive scale up |
|yarn.max.scale.down.increment |Maximum number of nodes to scale down in one go | 50|Hadoop/Spark/Interactive Query|Can be set to up to 100 |
|nodemanager.recommission.enabled |Feature to enable recommissioning of decommissioning NMs before adding new nodes to the cluster|True |Hadoop/Spark load based autoscale |Disabling this feature can cause underutilization of the cluster. There can be nodes in decommissioning state, which have no containers to run but are waiting for applications to finish, even if there's more load in the cluster. **Note:** Applicable for images on **2304280205** or later|
-|UnderProvisioningDiagnoser.time.ms |The cluster which is under provisioned for time in milliseconds would trigger scaling up |180000 |Hadoop/Spark load based autoscaling |-|
-|OverProvisioningDiagnoser.time.ms |The cluster which is over provisioned for time in milliseconds would trigger scaling down |180000 |Hadoop/Spark load based autoscaling |-|
+|UnderProvisioningDiagnoser.time.ms |Time in milliseconds for which the cluster needs to be underprovisioned before a scale-up is triggered |180000 |Hadoop/Spark load based autoscaling |-|
+|OverProvisioningDiagnoser.time.ms |Time in milliseconds for which the cluster needs to be overprovisioned before a scale-down is triggered |180000 |Hadoop/Spark load based autoscaling |-|
|hdfs.decommission.enable |Decommission data nodes before triggering decommissioning of node managers. HDFS doesn't support any graceful decommission timeout; it's immediate |True | Hadoop/Spark load based autoscaling|Decommissioning datanodes before decommissioning nodemanagers ensures that a particular datanode isn't used for storing shuffle data.|
|scaling.recommission.cooldown.ms | Cooldown period after recommission during which no metrics are sampled|120000 |Hadoop/Spark load based autoscaling |This cooldown period ensures the cluster has some time to redistribute the load to the newly recommissioned `nodemanagers`. **Note:** Applicable for images on **2304280205** or later|
|scale.down.nodes.with.ms | Scale down nodes where an AM is running|false | Hadoop/Spark|Can be turned on if there are enough reattempts configured for the AM. Useful for cases where there are long running applications (for example, Spark Streaming) that can be killed to scale down the cluster if load has reduced. **Note:** Applicable for images on **2304280205** or later|
load-balancer Upgrade Basic Standard With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-with-powershell.md
PS C:\> Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -R
## Use the module
-1. Use `Connect-AzAccount` to connect to the required Microsoft Entra tenant and Azure subscription
+1. Use `Connect-AzAccount` to connect to Azure, specifying the Basic Load Balancer's subscription ID if you have more than one subscription.
```powershell
- PS C:\> Connect-AzAccount -Tenant <TenantId> -Subscription <SubscriptionId>
+ PS C:\> Connect-AzAccount -Subscription <SubscriptionId>
``` 2. Find the Load Balancer you wish to upgrade. Record its name and resource group name.
One way to get a list of the Basic Load Balancers needing to be migrated in your
Resources | where type == 'microsoft.network/loadbalancers' and sku.name == 'Basic' ```-
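If you prefer to run that same query from Python rather than the portal or PowerShell, a small sketch using the Resource Graph SDK (`azure-identity` plus `azure-mgmt-resourcegraph`) might look like this; the subscription ID is a placeholder, and the snippet is a general illustration rather than part of the upgrade module:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Same query as above: list Basic SKU load balancers.
query = (
    "Resources "
    "| where type == 'microsoft.network/loadbalancers' and sku.name == 'Basic'"
)

result = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=query))
print(result.total_records)
print(result.data)
```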
-We have also written a more complex query which assesses the readiness of each Basic Load Balancer for migration on most of the criteria this module checks during [validation](#example-validate-a-scenario). The Resource Graph query can be found in our [GitHub project](https://github.com/Azure/AzLoadBalancerMigration/blob/main/AzureBasicLoadBalancerUpgrade/utilities/migration_graph_query.txt) or opened in the [Azure Resource Graph Explorer](https://portal.azure.com/?#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.network%2Floadbalancers%27%20and%20sku.name%20%3D%3D%20%27Basic%27%0A%7C%20project%20fes%20%3D%20properties.frontendIPConfigurations%2C%20bes%20%3D%20properties.backendAddressPools%2C%5B%27id%27%5D%2C%5B%27tags%27%5D%2CsubscriptionId%2CresourceGroup%2Cname%0A%7C%20extend%20backendPoolCount%20%3D%20array_length%28bes%29%0A%7C%20extend%20internalOrExternal%20%3D%20iff%28isnotempty%28fes%29%2Ciff%28isnotempty%28fes%5B0%5D.properties.privateIPAddress%29%2C%27Internal%27%2C%27External%27%29%2C%27None%27%29%0A%20%20%20%20%7C%20join%20kind%3Dleftouter%20hint.strategy%3Dshuffle%20%28%0A%20%20%20%20%20%20%20%20resources%0A%20%20%20%20%20%20%20%20%7C%20where%20type%20%3D%3D%20%27microsoft.network%2Fpublicipaddresses%27%0A%20%20%20%20%20%20%20%20%7C%20where%20properties.publicIPAddressVersion%20%3D%3D%20%27IPv6%27%0A%20%20%20%20%20%20%20%20%7C%20extend%20publicIPv6LBId%20%3D%20tostring%28split%28properties.ipConfiguration.id%2C%27%2FfrontendIPConfigurations%2F%27%29%5B0%5D%29%0A%20%20%20%20%20%20%20%20%7C%20distinct%20publicIPv6LBId%0A%20%20%20%20%29%20on%20%24left.id%20%3D%3D%20%24right.publicIPv6LBId%0A%20%20%20%20%7C%20join%20kind%20%3D%20leftouter%20hint.strategy%3Dshuffle%20%28%0A%20%20%20%20%20%20%20%20resources%20%0A%20%20%20%20%20%20%20%20%7C%20where%20type%20%3D%3D%20%27microsoft.network%2Fnetworkinterfaces%27%20and%20isnotempty%28properties.virtualMachine.id%29%0A%20%20%20%20%20%20%20%20%7C%20extend%20vmNICHasNSG%20%3D%20isnotnull%28properties.networkSecurityGroup.id%29%0A%20%20%20%20%20%20%20%20%7C%20extend%20vmNICSubnetIds%20%3D%20tostring%28extract_all%28%27%28%2Fsubscriptions%2F%5Ba-f0-9-%5D%2B%3F%2FresourceGroups%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fproviders%2FMicrosoft.Network%2FvirtualNetworks%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fsubnets%2F%5Ba-zA-Z0-9-_%5D%2A%29%27%2Ctostring%28properties.ipConfigurations%29%29%29%0A%20%20%20%20%20%20%20%20%7C%20mv-expand%20ipConfigs%20%3D%20properties.ipConfigurations%0A%20%20%20%20%20%20%20%20%7C%20extend%20vmPublicIPId%20%3D%20extract%28%27%2Fsubscriptions%2F%5Ba-f0-9-%5D%2B%3F%2FresourceGroups%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fproviders%2FMicrosoft.Network%2FpublicIPAddresses%2F%5Ba-zA-Z0-9-_%5D%2A%27%2C0%2Ctostring%28ipConfigs%29%29%0A%20%20%20%20%20%20%20%20%7C%20where%20isnotempty%28ipConfigs.properties.loadBalancerBackendAddressPools%29%20%0A%20%20%20%20%20%20%20%20%7C%20mv-expand%20bes%20%3D%20ipConfigs.properties.loadBalancerBackendAddressPools%0A%20%20%20%20%20%20%20%20%7C%20extend%20nicLoadBalancerId%20%3D%20tostring%28split%28bes.id%2C%27%2FbackendAddressPools%2F%27%29%5B0%5D%29%0A%20%20%20%20%20%20%20%20%7C%20summarize%20vmNICsNSGStatus%20%3D%20make_set%28vmNICHasNSG%29%20by%20nicLoadBalancerId%2CvmPublicIPId%2CvmNICSubnetIds%0A%20%20%20%20%20%20%20%20%7C%20extend%20allVMNicsHaveNSGs%20%3D%20set_has_element%28vmNICsNSGStatus%2CFalse%29%0A%20%20%20%20%20%20%20%20%7C%20summarize%20publicIpCount%20%3D%20dcount%28vmPublicIPId%29%20by%20nicLoadBalancerId%2C%20allVMNicsHaveNSGs%2C%20vmNICSubnetIds%0A%20%20%20
%20%20%20%20%20%29%20on%20%24left.id%20%3D%3D%20%24right.nicLoadBalancerId%0A%20%20%20%20%20%20%20%20%7C%20join%20kind%20%3D%20leftouter%20%28%0A%20%20%20%20%20%20%20%20%20%20%20%20resources%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20where%20type%20%3D%3D%20%27microsoft.compute%2Fvirtualmachinescalesets%27%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20vmssSubnetIds%20%3D%20tostring%28extract_all%28%27%28%2Fsubscriptions%2F%5Ba-f0-9-%5D%2B%3F%2FresourceGroups%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fproviders%2FMicrosoft.Network%2FvirtualNetworks%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fsubnets%2F%5Ba-zA-Z0-9-_%5D%2A%29%27%2Ctostring%28properties.virtualMachineProfile.networkProfile.networkInterfaceConfigurations%29%29%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20mv-expand%20nicConfigs%20%3D%20properties.virtualMachineProfile.networkProfile.networkInterfaceConfigurations%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20vmssNicHasNSG%20%3D%20isnotnull%28properties.networkSecurityGroup.id%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20mv-expand%20ipConfigs%20%3D%20nicConfigs.properties.ipConfigurations%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20vmssHasPublicIPConfig%20%3D%20iff%28tostring%28ipConfigs%29%20matches%20regex%20%40%27publicIPAddressVersion%27%2Ctrue%2Cfalse%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20where%20isnotempty%28ipConfigs.properties.loadBalancerBackendAddressPools%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20mv-expand%20bes%20%3D%20ipConfigs.properties.loadBalancerBackendAddressPools%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20vmssLoadBalancerId%20%3D%20tostring%28split%28bes.id%2C%27%2FbackendAddressPools%2F%27%29%5B0%5D%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20summarize%20vmssNICsNSGStatus%20%3D%20make_set%28vmssNicHasNSG%29%20by%20vmssLoadBalancerId%2C%20vmssHasPublicIPConfig%2C%20vmssSubnetIds%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20allVMSSNicsHaveNSGs%20%3D%20set_has_element%28vmssNICsNSGStatus%2CFalse%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20distinct%20vmssLoadBalancerId%2C%20vmssHasPublicIPConfig%2C%20allVMSSNicsHaveNSGs%2C%20vmssSubnetIds%0A%20%20%20%20%20%20%20%20%29%20on%20%24left.id%20%3D%3D%20%24right.vmssLoadBalancerId%0A%7C%20extend%20subnetIds%20%3D%20set_difference%28todynamic%28coalesce%28vmNICSubnetIds%2CvmssSubnetIds%29%29%2Cdynamic%28%5B%5D%29%29%20%2F%2F%20return%20only%20unique%20subnet%20ids%0A%7C%20mv-expand%20subnetId%20%3D%20subnetIds%0A%7C%20extend%20subnetId%20%3D%20tostring%28subnetId%29%0A%7C%20project-away%20vmNICSubnetIds%2C%20vmssSubnetIds%2C%20subnetIds%0A%7C%20extend%20backendType%20%3D%20iff%28isnotempty%28bes%29%2Ciff%28isnotempty%28nicLoadBalancerId%29%2C%27VMs%27%2Ciff%28isnotempty%28vmssLoadBalancerId%29%2C%27VMSS%27%2C%27Empty%27%29%29%2C%27Empty%27%29%0A%7C%20extend%20lbHasIPv6PublicIP%20%3D%20iff%28isnotempty%28publicIPv6LBId%29%2Ctrue%2Cfalse%29%0A%7C%20project-away%20fes%2C%20bes%2C%20nicLoadBalancerId%2C%20vmssLoadBalancerId%2C%20publicIPv6LBId%2C%20subnetId%0A%7C%20extend%20vmsHavePublicIPs%20%3D%20iff%28publicIpCount%20%3E%200%2Ctrue%2Cfalse%29%0A%7C%20extend%20vmssHasPublicIPs%20%3D%20iff%28isnotempty%28vmssHasPublicIPConfig%29%2CvmssHasPublicIPConfig%2Cfalse%29%0A%7C%20extend%20warnings%20%3D%20dynamic%28%5B%5D%29%0A%7C%20extend%20errors%20%3D%20dynamic%28%5B%5D%29%0A%7C%20extend%20warnings%20%3D%20iff%28vmssHasPublicIPs%2Carray_concat%28warnings%2Cdynamic%28%5B%27VMSS%20instances%20have%20Public%20IPs%3A%20VMSS%20Public%20IPs%20will%20change%20during%20migration%27%2C%27VMSS%20instances
%20have%20Public%20IPs%3A%20NSGs%20will%20be%20required%20for%20internet%20access%20through%20VMSS%20instance%20public%20IPs%20once%20upgraded%20to%20Standard%20SKU%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28vmsHavePublicIPs%2Carray_concat%28warnings%2Cdynamic%28%5B%27VMs%20have%20Public%20IPs%3A%20NSGs%20will%20be%20required%20for%20internet%20access%20through%20VM%20public%20IPs%20once%20upgraded%20to%20Standard%20SKU%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28internalOrExternal%20%3D%3D%20%27Internal%27%20and%20not%28vmsHavePublicIPs%29%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27Internal%20Load%20Balancer%3A%20LB%20is%20internal%20and%20VMs%20do%20not%20have%20Public%20IPs.%20Unless%20internet%20traffic%20is%20already%20%20being%20routed%20through%20an%20NVA%2C%20VMs%20will%20have%20no%20internet%20connectivity%20post-migration%20without%20additional%20action.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28internalOrExternal%20%3D%3D%20%27Internal%27%20and%20not%28vmssHasPublicIPs%29%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27Internal%20Load%20Balancer%3A%20LB%20is%20internal%20and%20VMSS%20instances%20do%20not%20have%20Public%20IPs.%20Unless%20internet%20traffic%20is%20already%20being%20routed%20through%20an%20NVA%2C%20VMSS%20instances%20will%20have%20no%20internet%20connectivity%20post-migration%20without%20additional%20action.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28internalOrExternal%20%3D%3D%20%27External%27%20and%20backendPoolCount%20%3E%201%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27External%20Load%20Balancer%3A%20LB%20is%20external%20and%20has%20multiple%20backend%20pools.%20Outbound%20rules%20will%20not%20be%20created%20automatically.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28%28vmsHavePublicIPs%20or%20internalOrExternal%20%3D%3D%20%27External%27%29%20and%20not%28allVMNicsHaveNSGs%29%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27VMs%20Missing%20NSGs%3A%20Not%20all%20VM%20NICs%20or%20subnets%20have%20associated%20NSGs.%20An%20NSG%20will%20be%20created%20to%20allow%20load%20balanced%20traffic%2C%20but%20it%20is%20preferred%20that%20you%20create%20and%20associate%20an%20NSG%20before%20starting%20the%20migration.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28%28vmssHasPublicIPs%20or%20internalOrExternal%20%3D%3D%20%27External%27%29%20and%20not%28allVMSSNicsHaveNSGs%29%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27VMSS%20Missing%20NSGs%3A%20Not%20all%20VMSS%20NICs%20or%20subnets%20have%20associated%20NSGs.%20An%20NSG%20will%20be%20created%20to%20allow%20load%20balanced%20traffic%2C%20but%20it%20is%20preferred%20that%20you%20create%20and%20associate%20an%20NSG%20before%20starting%20the%20migration.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28bag_keys%28tags%29%20contains%20%27resourceType%27%20and%20tags%5B%27resourceType%27%5D%20%3D%3D%20%27Service%20Fabric%27%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27Service%20Fabric%20LB%3A%20LB%20appears%20to%20be%20in%20front%20of%20a%20Service%20Fabric%20Cluster.%20Unmanaged%20SF%20clusters%20may%20take%20an%20hour%20or%20more%20to%20migrate%3B%20managed%20are%20not%20supported%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warningCount%20%3D%20array_length%28warnings%29%0A%7C%20extend%20errors%20%3D%20iff%28%28internalOrExternal%20%3D%3D%20%27External%27%20and%20lbHasIPv6PublicIP%29%2Carray_concat%28errors%2Cdynamic%28%5B%27External%20Load%20Balancer%20has%2
0IPv6%3A%20LB%20is%20external%20and%20has%20an%20IPv6%20Public%20IP.%20Basic%20SKU%20IPv6%20public%20IPs%20cannot%20be%20upgraded%20to%20Standard%20SKU%27%5D%29%29%2Cerrors%29%0A%7C%20extend%20errors%20%3D%20iff%28%28id%20matches%20regex%20%40%27%2F%28kubernetes%7Ckubernetes-internal%29%5E%27%20or%20%28bag_keys%28tags%29%20contains%20%27aks-managed-cluster-name%27%29%29%2Carray_concat%28errors%2Cdynamic%28%5B%27AKS%20Load%20Balancer%3A%20Load%20balancer%20appears%20to%20be%20in%20front%20of%20a%20Kubernetes%20cluster%2C%20which%20is%20not%20supported%20for%20migration%27%5D%29%29%2Cerrors%29%0A%7C%20extend%20errorCount%20%3D%20array_length%28errors%29%0A%7C%20project%20id%2CinternalOrExternal%2Cwarnings%2Cerrors%2CwarningCount%2CerrorCount%2CsubscriptionId%2CresourceGroup%2Cname%0A%7C%20sort%20by%20errorCount%2CwarningCount%0A%7C%20project-away%20errorCount%2CwarningCount).
+We have also written a more complex query which assesses the readiness of each Basic Load Balancer for migration on most of the criteria this module checks during [validation](#example-validate-a-scenario). The Resource Graph query can be found in our [GitHub project](https://github.com/Azure/AzLoadBalancerMigration/blob/main/AzureBasicLoadBalancerUpgrade/utilities/migration_graph_query.txt) or opened in the [Azure Resource Graph Explorer](https://portal.azure.com/?#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.network%2Floadbalancers%27%20and%20sku.name%20%3D%3D%20%27Basic%27%0A%7C%20project%20fes%20%3D%20properties.frontendIPConfigurations%2C%20bes%20%3D%20properties.backendAddressPools%2C%5B%27id%27%5D%2C%5B%27tags%27%5D%2CsubscriptionId%2CresourceGroup%2Cname%0A%7C%20extend%20backendPoolCount%20%3D%20array_length%28bes%29%0A%7C%20extend%20internalOrExternal%20%3D%20iff%28isnotempty%28fes%29%2Ciff%28isnotempty%28fes%5B0%5D.properties.privateIPAddress%29%2C%27Internal%27%2C%27External%27%29%2C%27None%27%29%0A%20%20%20%20%7C%20join%20kind%3Dleftouter%20hint.strategy%3Dshuffle%20%28%0A%20%20%20%20%20%20%20%20resources%0A%20%20%20%20%20%20%20%20%7C%20where%20type%20%3D%3D%20%27microsoft.network%2Fpublicipaddresses%27%0A%20%20%20%20%20%20%20%20%7C%20where%20properties.publicIPAddressVersion%20%3D%3D%20%27IPv6%27%0A%20%20%20%20%20%20%20%20%7C%20extend%20publicIPv6LBId%20%3D%20tostring%28split%28properties.ipConfiguration.id%2C%27%2FfrontendIPConfigurations%2F%27%29%5B0%5D%29%0A%20%20%20%20%20%20%20%20%7C%20distinct%20publicIPv6LBId%0A%20%20%20%20%29%20on%20%24left.id%20%3D%3D%20%24right.publicIPv6LBId%0A%20%20%20%20%7C%20join%20kind%20%3D%20leftouter%20hint.strategy%3Dshuffle%20%28%0A%20%20%20%20%20%20%20%20resources%20%0A%20%20%20%20%20%20%20%20%7C%20where%20type%20%3D%3D%20%27microsoft.network%2Fnetworkinterfaces%27%20and%20isnotempty%28properties.virtualMachine.id%29%0A%20%20%20%20%20%20%20%20%7C%20extend%20vmNICHasNSG%20%3D%20isnotnull%28properties.networkSecurityGroup.id%29%0A%20%20%20%20%20%20%20%20%7C%20extend%20vmNICSubnetIds%20%3D%20tostring%28extract_all%28%27%28%2Fsubscriptions%2F%5Ba-f0-9-%5D%2B%3F%2FresourceGroups%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fproviders%2FMicrosoft.Network%2FvirtualNetworks%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fsubnets%2F%5Ba-zA-Z0-9-_%5D%2A%29%27%2Ctostring%28properties.ipConfigurations%29%29%29%0A%20%20%20%20%20%20%20%20%7C%20mv-expand%20ipConfigs%20%3D%20properties.ipConfigurations%0A%20%20%20%20%20%20%20%20%7C%20extend%20vmPublicIPId%20%3D%20extract%28%27%2Fsubscriptions%2F%5Ba-f0-9-%5D%2B%3F%2FresourceGroups%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fproviders%2FMicrosoft.Network%2FpublicIPAddresses%2F%5Ba-zA-Z0-9-_%5D%2A%27%2C0%2Ctostring%28ipConfigs%29%29%0A%20%20%20%20%20%20%20%20%7C%20where%20isnotempty%28ipConfigs.properties.loadBalancerBackendAddressPools%29%20%0A%20%20%20%20%20%20%20%20%7C%20mv-expand%20bes%20%3D%20ipConfigs.properties.loadBalancerBackendAddressPools%0A%20%20%20%20%20%20%20%20%7C%20extend%20nicLoadBalancerId%20%3D%20tostring%28split%28bes.id%2C%27%2FbackendAddressPools%2F%27%29%5B0%5D%29%0A%20%20%20%20%20%20%20%20%7C%20summarize%20vmNICsNSGStatus%20%3D%20make_set%28vmNICHasNSG%29%20by%20nicLoadBalancerId%2CvmPublicIPId%2CvmNICSubnetIds%0A%20%20%20%20%20%20%20%20%7C%20extend%20allVMNicsHaveNSGs%20%3D%20set_has_element%28vmNICsNSGStatus%2CFalse%29%0A%20%20%20%20%20%20%20%20%7C%20summarize%20publicIpCount%20%3D%20dcount%28vmPublicIPId%29%20by%20nicLoadBalancerId%2C%20allVMNicsHaveNSGs%2C%20vmNICSubnetIds%0A%20%20%20
%20%20%20%20%20%29%20on%20%24left.id%20%3D%3D%20%24right.nicLoadBalancerId%0A%20%20%20%20%20%20%20%20%7C%20join%20kind%20%3D%20leftouter%20%28%0A%20%20%20%20%20%20%20%20%20%20%20%20resources%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20where%20type%20%3D%3D%20%27microsoft.compute%2Fvirtualmachinescalesets%27%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20vmssSubnetIds%20%3D%20tostring%28extract_all%28%27%28%2Fsubscriptions%2F%5Ba-f0-9-%5D%2B%3F%2FresourceGroups%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fproviders%2FMicrosoft.Network%2FvirtualNetworks%2F%5Ba-zA-Z0-9-_%5D%2B%3F%2Fsubnets%2F%5Ba-zA-Z0-9-_%5D%2A%29%27%2Ctostring%28properties.virtualMachineProfile.networkProfile.networkInterfaceConfigurations%29%29%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20mv-expand%20nicConfigs%20%3D%20properties.virtualMachineProfile.networkProfile.networkInterfaceConfigurations%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20vmssNicHasNSG%20%3D%20isnotnull%28properties.networkSecurityGroup.id%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20mv-expand%20ipConfigs%20%3D%20nicConfigs.properties.ipConfigurations%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20vmssHasPublicIPConfig%20%3D%20iff%28tostring%28ipConfigs%29%20matches%20regex%20%40%27publicIPAddressVersion%27%2Ctrue%2Cfalse%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20where%20isnotempty%28ipConfigs.properties.loadBalancerBackendAddressPools%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20mv-expand%20bes%20%3D%20ipConfigs.properties.loadBalancerBackendAddressPools%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20vmssLoadBalancerId%20%3D%20tostring%28split%28bes.id%2C%27%2FbackendAddressPools%2F%27%29%5B0%5D%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20summarize%20vmssNICsNSGStatus%20%3D%20make_set%28vmssNicHasNSG%29%20by%20vmssLoadBalancerId%2C%20vmssHasPublicIPConfig%2C%20vmssSubnetIds%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20extend%20allVMSSNicsHaveNSGs%20%3D%20set_has_element%28vmssNICsNSGStatus%2CFalse%29%0A%20%20%20%20%20%20%20%20%20%20%20%20%7C%20distinct%20vmssLoadBalancerId%2C%20vmssHasPublicIPConfig%2C%20allVMSSNicsHaveNSGs%2C%20vmssSubnetIds%0A%20%20%20%20%20%20%20%20%29%20on%20%24left.id%20%3D%3D%20%24right.vmssLoadBalancerId%0A%7C%20extend%20subnetIds%20%3D%20set_difference%28todynamic%28coalesce%28vmNICSubnetIds%2CvmssSubnetIds%29%29%2Cdynamic%28%5B%5D%29%29%20%2F%2F%20return%20only%20unique%20subnet%20ids%0A%7C%20mv-expand%20subnetId%20%3D%20subnetIds%0A%7C%20extend%20subnetId%20%3D%20tostring%28subnetId%29%0A%7C%20project-away%20vmNICSubnetIds%2C%20vmssSubnetIds%2C%20subnetIds%0A%7C%20extend%20backendType%20%3D%20iff%28isnotempty%28bes%29%2Ciff%28isnotempty%28nicLoadBalancerId%29%2C%27VMs%27%2Ciff%28isnotempty%28vmssLoadBalancerId%29%2C%27VMSS%27%2C%27Empty%27%29%29%2C%27Empty%27%29%0A%7C%20extend%20lbHasIPv6PublicIP%20%3D%20iff%28isnotempty%28publicIPv6LBId%29%2Ctrue%2Cfalse%29%0A%7C%20project-away%20fes%2C%20bes%2C%20nicLoadBalancerId%2C%20vmssLoadBalancerId%2C%20publicIPv6LBId%2C%20subnetId%0A%7C%20extend%20vmsHavePublicIPs%20%3D%20iff%28publicIpCount%20%3E%200%2Ctrue%2Cfalse%29%0A%7C%20extend%20vmssHasPublicIPs%20%3D%20iff%28isnotempty%28vmssHasPublicIPConfig%29%2CvmssHasPublicIPConfig%2Cfalse%29%0A%7C%20extend%20warnings%20%3D%20dynamic%28%5B%5D%29%0A%7C%20extend%20errors%20%3D%20dynamic%28%5B%5D%29%0A%7C%20extend%20warnings%20%3D%20iff%28vmssHasPublicIPs%2Carray_concat%28warnings%2Cdynamic%28%5B%27VMSS%20instances%20have%20Public%20IPs%3A%20VMSS%20Public%20IPs%20will%20change%20during%20migration%27%2C%27VMSS%20instances
%20have%20Public%20IPs%3A%20NSGs%20will%20be%20required%20for%20internet%20access%20through%20VMSS%20instance%20public%20IPs%20once%20upgraded%20to%20Standard%20SKU%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28vmsHavePublicIPs%2Carray_concat%28warnings%2Cdynamic%28%5B%27VMs%20have%20Public%20IPs%3A%20NSGs%20will%20be%20required%20for%20internet%20access%20through%20VM%20public%20IPs%20once%20upgraded%20to%20Standard%20SKU%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28backendType%20%3D%3D%20%27VMs%27%20and%20internalOrExternal%20%3D%3D%20%27Internal%27%20and%20not%28vmsHavePublicIPs%29%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27Internal%20Load%20Balancer%3A%20LB%20is%20internal%20and%20VMs%20do%20not%20have%20Public%20IPs.%20Unless%20internet%20traffic%20is%20already%20%20being%20routed%20through%20an%20NVA%2C%20VMs%20will%20have%20no%20internet%20connectivity%20post-migration%20without%20additional%20action.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28backendType%20%3D%3D%20%27VMSS%27%20and%20internalOrExternal%20%3D%3D%20%27Internal%27%20and%20not%28vmssHasPublicIPs%29%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27Internal%20Load%20Balancer%3A%20LB%20is%20internal%20and%20VMSS%20instances%20do%20not%20have%20Public%20IPs.%20Unless%20internet%20traffic%20is%20already%20being%20routed%20through%20an%20NVA%2C%20VMSS%20instances%20will%20have%20no%20internet%20connectivity%20post-migration%20without%20additional%20action.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28internalOrExternal%20%3D%3D%20%27External%27%20and%20backendPoolCount%20%3E%201%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27External%20Load%20Balancer%3A%20LB%20is%20external%20and%20has%20multiple%20backend%20pools.%20Outbound%20rules%20will%20not%20be%20created%20automatically.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28backendType%20%3D%3D%20%27VMs%27%20and%20%28vmsHavePublicIPs%20or%20internalOrExternal%20%3D%3D%20%27External%27%29%20and%20not%28allVMNicsHaveNSGs%29%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27VMs%20Missing%20NSGs%3A%20Not%20all%20VM%20NICs%20or%20subnets%20have%20associated%20NSGs.%20An%20NSG%20will%20be%20created%20to%20allow%20load%20balanced%20traffic%2C%20but%20it%20is%20preferred%20that%20you%20create%20and%20associate%20an%20NSG%20before%20starting%20the%20migration.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28backendType%20%3D%3D%20%27VMSS%27%20and%20%28vmssHasPublicIPs%20or%20internalOrExternal%20%3D%3D%20%27External%27%29%20and%20not%28allVMSSNicsHaveNSGs%29%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27VMSS%20Missing%20NSGs%3A%20Not%20all%20VMSS%20NICs%20or%20subnets%20have%20associated%20NSGs.%20An%20NSG%20will%20be%20created%20to%20allow%20load%20balanced%20traffic%2C%20but%20it%20is%20preferred%20that%20you%20create%20and%20associate%20an%20NSG%20before%20starting%20the%20migration.%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warnings%20%3D%20iff%28%28bag_keys%28tags%29%20contains%20%27resourceType%27%20and%20tags%5B%27resourceType%27%5D%20%3D%3D%20%27Service%20Fabric%27%29%2Carray_concat%28warnings%2Cdynamic%28%5B%27Service%20Fabric%20LB%3A%20LB%20appears%20to%20be%20in%20front%20of%20a%20Service%20Fabric%20Cluster.%20Unmanaged%20SF%20clusters%20may%20take%20an%20hour%20or%20more%20to%20migrate%3B%20managed%20are%20not%20supported%27%5D%29%29%2Cwarnings%29%0A%7C%20extend%20warningCount%20%3D%20array_length%28warnings%29%0A%7C%20extend%20errors%2
0%3D%20iff%28%28internalOrExternal%20%3D%3D%20%27External%27%20and%20lbHasIPv6PublicIP%29%2Carray_concat%28errors%2Cdynamic%28%5B%27External%20Load%20Balancer%20has%20IPv6%3A%20LB%20is%20external%20and%20has%20an%20IPv6%20Public%20IP.%20Basic%20SKU%20IPv6%20public%20IPs%20cannot%20be%20upgraded%20to%20Standard%20SKU%27%5D%29%29%2Cerrors%29%0A%7C%20extend%20errors%20%3D%20iff%28%28id%20matches%20regex%20%40%27%2F%28kubernetes%7Ckubernetes-internal%29%5E%27%20or%20%28bag_keys%28tags%29%20contains%20%27aks-managed-cluster-name%27%29%29%2Carray_concat%28errors%2Cdynamic%28%5B%27AKS%20Load%20Balancer%3A%20Load%20balancer%20appears%20to%20be%20in%20front%20of%20a%20Kubernetes%20cluster%2C%20which%20is%20not%20supported%20for%20migration%27%5D%29%29%2Cerrors%29%0A%7C%20extend%20errorCount%20%3D%20array_length%28errors%29%0A%7C%20project%20id%2CinternalOrExternal%2Cwarnings%2Cerrors%2CwarningCount%2CerrorCount%2CsubscriptionId%2CresourceGroup%2Cname%0A%7C%20sort%20by%20errorCount%2CwarningCount%0A%7C%20project-away%20errorCount%2CwarningCount).
### Will this migration cause downtime to my application?
machine-learning Resnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/resnet.md
This classification algorithm is a supervised learning method, and requires a la
> [!NOTE] > This component doesn't support labeled datasets generated from *Data Labeling* in the studio; it only supports a labeled image directory generated from the [Convert to Image Directory](convert-to-image-directory.md) component.
-You can train the model by providing a model and a labeled image directory as inputs to [Train Pytorch Model](train-pytorch-model.md). The trained model can then be used to predict values for the new input examples using [Score Image Model](score-image-model.md).
+You can train the model by providing a model and a labeled image directory as inputs to [Train PyTorch Model](train-pytorch-model.md). The trained model can then be used to predict values for the new input examples using [Score Image Model](score-image-model.md).
### More about ResNet
Refer to [this paper](https://pytorch.org/vision/stable/models.html#torchvision.
4. For **Zero init residual**, specify whether to zero-initialize the last batch norm layer in each residual branch. If selected, the residual branch starts with zeros, and each residual block behaves like an identity (see the sketch after these steps). This can help with convergence at large batch sizes according to https://arxiv.org/abs/1706.02677.
-5. Connect the output of **ResNet** component, training and validation image dataset component to the [Train Pytorch Model](train-pytorch-model.md).
+5. Connect the output of **ResNet** component, training and validation image dataset component to the [Train PyTorch Model](train-pytorch-model.md).
6. Submit the pipeline.
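For readers who want to see the idea behind the **Zero init residual** option outside the designer, here's a tiny torchvision sketch; it illustrates the underlying flag only and isn't the component's own code:

```python
from torchvision.models import resnet50

# zero_init_residual=True zero-initializes the last BatchNorm layer in each
# residual block, so every block initially behaves like an identity mapping.
model = resnet50(zero_init_residual=True)
```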
After pipeline run is completed, to use the model for scoring, connect the [Trai
| Name | Type | Description |
| ---- | ---- | ----------- |
-| Untrained model | UntrainedModelDirectory | An untrained ResNet model that can be connected to Train Pytorch Model. |
+| Untrained model | UntrainedModelDirectory | An untrained ResNet model that can be connected to Train PyTorch Model. |
## Next steps
-See the [set of components available](component-reference.md) to Azure Machine Learning.
+See the [set of components available](component-reference.md) to Azure Machine Learning.
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image
To define the input data of a job that references the Web-based data, run:
-[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
By defining an `Input`, you create a reference to the data source location. The data remains in its existing location, so no extra storage cost is incurred.
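As a minimal sketch with the azure-ai-ml SDK, such an `Input` can be declared as shown below; the URL is a placeholder for wherever the Fashion-MNIST files are hosted, not the path used by the sample notebook:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Reference to web-based data; nothing is copied, so no extra storage cost is incurred.
fashion_ds = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://<storage-account>.blob.core.windows.net/<container>/fashion-mnist/",
)
```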
machine-learning How To Use Batch Model Openai Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-openai-embeddings.md
Model deployments in batch endpoints can only deploy registered models. You can
> * Add an environment variable `AZUREML_BI_TEXT_COLUMN` to optionally control which input field you want to generate embeddings for. > [!TIP]
- > By default, MLflow will use the first text column available in the input data to generate embeddings from. Use the environment variable `AZUREML_BI_TEXT_COLUMN` with the name of an existing column in the input dataset to change the column if needed. Leave it blank if the defaut behavior works for you.
+ > By default, MLflow will use the first text column available in the input data to generate embeddings from. Use the environment variable `AZUREML_BI_TEXT_COLUMN` with the name of an existing column in the input dataset to change the column if needed. Leave it blank if the default behavior works for you.
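To make the tip concrete, here's a small, hypothetical helper (not the deployment's actual scoring script) showing how an `AZUREML_BI_TEXT_COLUMN` override would take precedence over the first text column:

```python
import os
import pandas as pd

def pick_text_column(df: pd.DataFrame) -> str:
    """Prefer the AZUREML_BI_TEXT_COLUMN override if set; otherwise fall back to
    the first string-typed column in the input data. Illustrative sketch only."""
    override = os.environ.get("AZUREML_BI_TEXT_COLUMN")
    if override and override in df.columns:
        return override
    return next(c for c in df.columns if pd.api.types.is_object_dtype(df[c]))
```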
The scoring script looks as follows:
machine-learning Tutorial Feature Store Domain Specific Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-feature-store-domain-specific-language.md
- Title: "Tutorial 7: Develop a feature set using Domain Specific Language (preview)"-
-description: This is part 7 of the managed feature store tutorial series.
------- Previously updated : 03/29/2024--
-#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
--
-# Tutorial 7: Develop a feature set using Domain Specific Language (preview)
--
-An Azure Machine Learning managed feature store lets you discover, create, and operationalize features. Features serve as the connective tissue in the machine learning lifecycle, starting from the prototyping phase, where you experiment with various features. That lifecycle continues to the operationalization phase, where you deploy your models, and proceeds to the inference steps that look up feature data. For more information about feature stores, visit [feature store concepts](./concept-what-is-managed-feature-store.md).
-
-This tutorial describes how to develop a feature set using Domain Specific Language. The Domain Specific Language (DSL) for the managed feature store provides a simple and user-friendly way to define the most commonly used feature aggregations. With the feature store SDK, users can perform the most commonly used aggregations with a DSL *expression*. Aggregations that use the DSL *expression* ensure consistent results, compared with user-defined functions (UDFs). Additionally, those aggregations avoid the overhead of writing UDFs.
-
-This tutorial shows how to:
-
-> [!div class="checklist"]
-> * Create a new, minimal feature store workspace
-> * Locally develop and test a feature, through use of Domain Specific Language (DSL)
-> * Develop a feature set through use of User Defined Functions (UDFs) that perform the same transformations as a feature set created with DSL
-> * Compare the results of the feature sets created with DSL, and feature sets created with UDFs
-> * Register a feature store entity with the feature store
-> * Register the feature set created using DSL with the feature store
-> * Generate sample training data using the created features
-
-## Prerequisites
-
-> [!NOTE]
-> This tutorial uses an Azure Machine Learning notebook with **Serverless Spark Compute**.
-
-Before you proceed with this tutorial, make sure that you cover these prerequisites:
-
-1. An Azure Machine Learning workspace. If you don't have one, visit [Quickstart: Create workspace resources](./quickstart-create-resources.md?view-azureml-api-2) to learn how to create one.
-1. To perform the steps in this tutorial, your user account needs either the **Owner** or **Contributor** role to the resource group where the feature store will be created.
-
-## Set up
-
- This tutorial relies on the Python feature store core SDK (`azureml-featurestore`). This SDK is used for create, read, update, and delete (CRUD) operations, on feature stores, feature sets, and feature store entities.
-
- You don't need to explicitly install these resources for this tutorial, because in the set-up instructions shown here, the `conda.yml` file covers them.
-
- To prepare the notebook environment for development:
-
- 1. Clone the [examples repository - (azureml-examples)](https://github.com/azure/azureml-examples) to your local machine with this command:
-
- `git clone --depth 1 https://github.com/Azure/azureml-examples`
-
- You can also download a zip file from the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). At this page, first select the `code` dropdown, and then select `Download ZIP`. Then, unzip the contents into a folder on your local machine.
-
- 1. Upload the feature store samples directory to project workspace
- 1. Open Azure Machine Learning studio UI of your Azure Machine Learning workspace
- 1. Select **Notebooks** in left navigation panel
- 1. Select your user name in the directory listing
- 1. Select the ellipses (**...**), and then select **Upload folder**
- 1. Select the feature store samples folder from the cloned directory path: `azureml-examples/sdk/python/featurestore-sample`
-
- 1. Run the tutorial
-
- * Option 1: Create a new notebook, and execute the instructions in this document, step by step
- * Option 2: Open existing notebook `featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb`. You can keep this document open, and refer to it for more explanation and documentation links
-
- 1. To configure the notebook environment, you must upload the `conda.yml` file
-
- 1. Select **Notebooks** on the left navigation panel, and then select the **Files** tab
- 1. Navigate to the `env` directory (select **Users** > *your_user_name* > **featurestore_sample** > **project** > **env**), and then select the `conda.yml` file
- 1. Select **Download**
- 1. Select **Serverless Spark Compute** in the top navigation **Compute** dropdown. This operation might take one to two minutes. Wait for the status bar in the top to display the **Configure session** link
- 1. Select **Configure session** in the top status bar
- 1. Select **Settings**
- 1. Select **Apache Spark version** as `Spark version 3.3`
- 1. Optionally, increase the **Session timeout** (idle time) if you want to avoid frequent restarts of the serverless Spark session
- 1. Under **Configuration settings**, define *Property* `spark.jars.packages` and *Value* `com.microsoft.azure:azureml-fs-scala-impl:1.0.4`
- :::image type="content" source="./media/tutorial-feature-store-domain-specific-language/dsl-spark-jars-property.png" lightbox="./media/tutorial-feature-store-domain-specific-language/dsl-spark-jars-property.png" alt-text="This screenshot shows the Spark session property for a package that contains the jar file used by managed feature store domain-specific language.":::
- 1. Select **Python packages**
- 1. Select **Upload conda file**
- 1. Select the `conda.yml` you downloaded on your local device
- 1. Select **Apply**
-
- > [!TIP]
- > Except for this specific step, you must run all the other steps every time you start a new Spark session, or after the session times out.
-
- 1. This code cell sets up the root directory for the samples and starts the Spark session. It needs about 10 minutes to install all the dependencies and start the Spark session:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=setup-root-dir)]
-
-## Provision the necessary resources
-
- 1. Create a minimal feature store:
-
- Create a feature store in a region of your choice, from the Azure Machine Learning studio UI or with Azure Machine Learning Python SDK code.
-
- * Option 1: Create feature store from the Azure Machine Learning studio UI
-
- 1. Navigate to the feature store UI [landing page](https://ml.azure.com/featureStores)
- 1. Select **+ Create**
- 1. The **Basics** tab appears
- 1. Choose a **Name** for your feature store
- 1. Select the **Subscription**
- 1. Select the **Resource group**
- 1. Select the **Region**
- 1. Select **Apache Spark version** 3.3, and then select **Next**
- 1. The **Materialization** tab appears
- 1. Toggle **Enable materialization**
- 1. Select **Subscription** and **User identity** to **Assign user managed identity**
- 1. Select **From Azure subscription** under **Offline store**
- 1. Select **Store name** and **Azure Data Lake Gen2 file system name**, then select **Next**
- 1. On the **Review** tab, verify the displayed information and then select **Create**
-
- * Option 2: Create a feature store using the Python SDK
- Provide `featurestore_name`, `featurestore_resource_group_name`, and `featurestore_subscription_id` values, and execute this cell to create a minimal feature store:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-min-fs)]
-
- 1. Assign permissions to your user identity on the offline store:
-
- If feature data is materialized, then you must assign the **Storage Blob Data Reader** role to your user identity to read feature data from offline materialization store.
- 1. Open the [Azure ML global landing page](https://ml.azure.com/home)
- 1. Select **Feature stores** in the left navigation
- 1. You'll see the list of feature stores that you have access to. Select the feature store that you created above
- 1. Select the storage account link under **Account name** on the **Offline materialization store** card, to navigate to the ADLS Gen2 storage account for the offline store
- :::image type="content" source="./media/tutorial-feature-store-domain-specific-language/offline-store-link.png" lightbox="./media/tutorial-feature-store-domain-specific-language/offline-store-link.png" alt-text="This screenshot shows the storage account link for the offline materialization store on the feature store UI.":::
- 1. Visit [this resource](../role-based-access-control/role-assignments-portal.md) for more information about how to assign the **Storage Blob Data Reader** role to your user identity on the ADLS Gen2 storage account for offline store. Allow some time for permissions to propagate.
-
-## Available DSL expressions and benchmarks
-
- Currently, these aggregation expressions are supported:
- - Average - `avg`
- - Sum - `sum`
- - Count - `count`
- - Min - `min`
- - Max - `max`
-
- This table provides benchmarks that compare the performance of aggregations that use a DSL *expression* with aggregations that use a UDF, using a representative dataset of size 23.5 GB with the following attributes:
- - `numberOfSourceRows`: 348,244,374
- - `numberOfOfflineMaterializedRows`: 227,361,061
-
- |Function|*Expression*|UDF execution time|DSL execution time|
- |--||||
- |`get_offline_features(use_materialized_store=false)`|`sum`, `avg`, `count`|~2 hours|< 5 minutes|
- |`get_offline_features(use_materialized_store=true)`|`sum`, `avg`, `count`|~1.5 hours|< 5 minutes|
- |`materialize()`|`sum`, `avg`, `count`|~1 hour|< 15 minutes|
-
- > [!NOTE]
- > The `min` and `max` DSL expressions provide no performance improvement over UDFs. We recommend that you use UDFs for `min` and `max` transformations.
-
-## Create a feature set specification using DSL expressions
-
- 1. Execute this code cell to create a feature set specification, using DSL expressions and parquet files as source data.
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-dsl-parq-fset)]
-
- 1. This code cell defines the start and end times for the feature window.
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=define-feat-win)]
-
- 1. This code cell uses `to_spark_dataframe()` to get a dataframe in the defined feature window from the above feature set specification defined using DSL expressions:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=sparkdf-dsl-parq)]
-
- 1. Print some sample feature values from the feature set defined with DSL expressions:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-dsl-parq)]
-
-## Create a feature set specification using UDF
-
- 1. Create a feature set specification that uses UDF to perform the same transformations:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-udf-parq-fset)]
-
- This transformation code shows that the UDF defines the same transformations as the DSL expressions:
-
- ```python
- class TransactionFeatureTransformer(Transformer):
- def _transform(self, df: DataFrame) -> DataFrame:
- days = lambda i: i * 86400
- w_3d = (
- Window.partitionBy("accountID")
- .orderBy(F.col("timestamp").cast("long"))
- .rangeBetween(-days(3), 0)
- )
- w_7d = (
- Window.partitionBy("accountID")
- .orderBy(F.col("timestamp").cast("long"))
- .rangeBetween(-days(7), 0)
- )
- res = (
- df.withColumn("transaction_7d_count", F.count("transactionID").over(w_7d))
- .withColumn(
- "transaction_amount_7d_sum", F.sum("transactionAmount").over(w_7d)
- )
- .withColumn(
- "transaction_amount_7d_avg", F.avg("transactionAmount").over(w_7d)
- )
- .withColumn("transaction_3d_count", F.count("transactionID").over(w_3d))
- .withColumn(
- "transaction_amount_3d_sum", F.sum("transactionAmount").over(w_3d)
- )
- .withColumn(
- "transaction_amount_3d_avg", F.avg("transactionAmount").over(w_3d)
- )
- .select(
- "accountID",
- "timestamp",
- "transaction_3d_count",
- "transaction_amount_3d_sum",
- "transaction_amount_3d_avg",
- "transaction_7d_count",
- "transaction_amount_7d_sum",
- "transaction_amount_7d_avg",
- )
- )
- return res
-
- ```
-
- 1. Use `to_spark_dataframe()` to get a dataframe from the above feature set specification, defined using UDF:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=sparkdf-udf-parq)]
-
- 1. Compare the results and verify consistency between the results from the DSL expressions and the transformations performed with UDF. To verify, select one of the `accountID` values to compare the values in the two dataframes:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-dsl-acct)]
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-udf-acct)]
-
-## Export feature set specifications as YAML
-
- To register the feature set specification with the feature store, it must be saved in a specific format. To review the generated `transactions-dsl` feature set specification, open this file from the file tree, to see the specification: `featurestore/featuresets/transactions-dsl/spec/FeaturesetSpec.yaml`
-
- The feature set specification contains these elements:
-
- 1. `source`: Reference to a storage resource; in this case, a parquet file in a blob storage
- 1. `features`: List of features and their datatypes. If you provide transformation code, the code must return a dataframe that maps to the features and data types
- 1. `index_columns`: The join keys required to access values from the feature set
-
- For more information, read the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set specification YAML reference](./reference-yaml-featureset-spec.md) resources.
-
- As an extra benefit of persisting the feature set specification, it can be source controlled.
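Purely as an outline of those three elements, the sketch below expresses the shape of such a specification as a Python dict for illustration; the values are placeholders, not the contents of the generated `FeaturesetSpec.yaml`, and the exact schema may differ.

```python
# Illustrative outline only; the real artifact is the YAML file generated by the SDK.
featureset_spec_outline = {
    "source": {  # reference to the storage resource holding the source data
        "type": "parquet",
        "path": "abfs://<container>@<account>/<path-to-transactions>/*.parquet",
    },
    "features": [  # features and their data types, as produced by the transformation
        {"name": "transaction_3d_count", "type": "long"},
        {"name": "transaction_amount_3d_sum", "type": "double"},
        {"name": "transaction_amount_3d_avg", "type": "double"},
    ],
    "index_columns": [  # join keys used to look up values from the feature set
        {"name": "accountID", "type": "string"},
    ],
}
```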
-
- 1. Execute this code cell to write YAML specification file for the feature set, using parquet data source and DSL expressions:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=dump-dsl-parq-fset-spec)]
-
- 1. Execute this code cell to write a YAML specification file for the feature set, using UDF:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=dump-udf-parq-fset-spec)]
-
-## Initialize SDK clients
-
- The following steps of this tutorial use two SDKs.
-
- 1. Feature store CRUD SDK: The Azure Machine Learning (AzureML) SDK `MLClient` (package name `azure-ai-ml`), similar to the one used with Azure Machine Learning workspace. This SDK facilitates feature store CRUD operations
-
- - Create
- - Read
- - Update
- - Delete
-
- for feature store and feature set entities, because feature store is implemented as a type of Azure Machine Learning workspace
-
- 1. Feature store core SDK: This SDK (`azureml-featurestore`) facilitates feature set development and consumption:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=init-python-clients)]
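As a rough sketch, initializing the two clients might look like the following; treat the parameter names as assumptions, since the notebook cell referenced above is the authoritative version:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
from azureml.featurestore import FeatureStoreClient

credential = DefaultAzureCredential()

# CRUD client: a feature store is a kind of Azure Machine Learning workspace,
# so MLClient is scoped to the feature store itself.
fs_crud_client = MLClient(
    credential,
    subscription_id="<featurestore-subscription-id>",
    resource_group_name="<featurestore-resource-group>",
    workspace_name="<featurestore-name>",
)

# Core SDK client, used for feature set development and consumption.
featurestore = FeatureStoreClient(
    credential=credential,
    subscription_id="<featurestore-subscription-id>",
    resource_group_name="<featurestore-resource-group>",
    name="<featurestore-name>",
)
```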
-
-## Register `account` entity with the feature store
-
- Create an account entity that has a join key `accountID` of `string` type:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-account-entity)]
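A minimal sketch of what that entity definition can look like with the azure-ai-ml SDK follows; treat the class and operation names as assumptions, with the referenced notebook cell as the authoritative version:

```python
from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity

account_entity_config = FeatureStoreEntity(
    name="account",
    version="1",
    # accountID is the join key used to look up features for an account.
    index_columns=[DataColumn(name="accountID", type=DataColumnType.STRING)],
)

# fs_crud_client is the MLClient scoped to the feature store (see the previous section).
poller = fs_crud_client.feature_store_entities.begin_create_or_update(account_entity_config)
print(poller.result())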
-
-## Register the feature set with the feature store
-
- 1. Register the `transactions-dsl` feature set (that uses DSL) with the feature store, with offline materialization enabled, using the exported feature set specification:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-dsl-trans-fset)]
-
- 1. Materialize the feature set to persist the transformed feature data to the offline store:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=mater-dsl-trans-fset)]
-
- 1. Execute this code cell to track the progress of the materialization job:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=track-mater-job)]
-
- 1. Print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. The `get_offline_features()` method used to retrieve the training/inference data also uses the materialization store by default:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=lookup-trans-dsl-fset)]
-
-## Generate a training dataframe using the registered feature set
-
-### Load observation data
-
- Observation data is typically the core data used in training and inference steps. The observation data is then joined with the feature data to create a complete training data resource. Observation data is the data captured during the time of the event. In this case, it has core transaction data, including transaction ID, account ID, and transaction amount. Since this data is used for training, it also has the target variable appended (`is_fraud`).
-
- 1. First, explore the observation data:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=load-obs-data)]
-
- 1. Select features that would be part of the training data, and use the feature store SDK to generate the training data:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=select-features-dsl)]
-
- 1. The `get_offline_features()` function appends the features to the observation data with a point-in-time join. Display the training dataframe obtained from the point-in-time join:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=get-offline-features-dsl)]
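For illustration, selecting features and generating the training dataframe might look like the following minimal sketch; the feature names and the timestamp column are hypothetical, and `observation_data_df` is assumed to be the dataframe loaded in the previous step.

```python
from azureml.featurestore import get_offline_features

# Look up the registered feature set and pick the features to join (names are hypothetical).
transactions_featureset = featurestore.feature_sets.get("transactions-dsl", "1")
features = [
    transactions_featureset.get_feature("transaction_amount_7d_sum"),
    transactions_featureset.get_feature("transaction_amount_7d_avg"),
]

# Point-in-time join of the selected features onto the observation data.
training_df = get_offline_features(
    features=features,
    observation_data=observation_data_df,
    timestamp_column="timestamp",  # placeholder timestamp column name
)
training_df.show()
```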
-
-### Generate a training dataframe from feature sets using DSL and UDF
-
- 1. Register the `transactions-udf` feature set (that uses UDF) with the feature store, using the exported feature set specification. Enable offline materialization for this feature set while registering with the feature store:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-udf-trans-fset)]
-
- 1. Select features from the feature sets (created using DSL and UDF) that you would like to become part of the training data, and use the feature store SDK to generate the training data:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=select-features-dsl-udf)]
-
- 1. The function `get_offline_features()` appends the features to the observation data with a point-in-time join. Display the training dataframe obtained from the point-in-time join:
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=get-offline-features-dsl-udf)]
-
-The features are appended to the training data with a point-in-time join. The generated training data can be used for subsequent training and batch inferencing steps.
-
-## Clean up
-
-The [fifth tutorial in the series](./tutorial-develop-feature-set-with-custom-source.md#clean-up) describes how to delete the resources.
-
-## Next steps
-
-* [Part 2: Experiment and train models using features](./tutorial-experiment-train-models-using-features.md)
-* [Part 3: Enable recurrent materialization and run batch inference](./tutorial-enable-recurrent-materialization-run-batch-inference.md)
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
The appliance can be deployed using a couple of methods:
- For physical or virtualized servers on-premises or in any other cloud, you always deploy the appliance using a PowerShell installer script. Refer to the deployment steps [here](how-to-set-up-appliance-physical.md). - Download links are available in the tables below.
-> [!Note]
-> Don't install any other components, such as the **Microsoft Monitoring Agent (MMA)** or **Replication appliance**, on the same server hosting the Azure Migrate appliance. If you install the MMA agent, you can face problems like **"Multiple custom attributes of the same type found"**. It's recommended to have a dedicated server to deploy the appliance.
+ > [!Note]
+ > - Don't install any other components, such as the **Microsoft Monitoring Agent (MMA)** or **Replication appliance**, on the same server hosting the Azure Migrate appliance. If you install the MMA agent, you can face problems like **"Multiple custom attributes of the same type found"**. It's recommended to have a dedicated server to deploy the appliance.
+ > - Federal Information Processing Standards (FIPS) mode is not supported for appliance deployment.
## Appliance services
nat-gateway Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-gateway-resource.md
# Azure NAT Gateway resource
-This article describes the key components of the NAT gateway resource that enable it to provide highly secure, scalable and resilient outbound connectivity. Some of these components can be configured in your subscription through the Azure portal, Azure CLI, Azure PowerShell, Resource Manager templates or appropriate alternatives.
+This article describes the key components of the NAT gateway resource that enable it to provide highly secure, scalable, and resilient outbound connectivity. Some of these components can be configured in your subscription through the Azure portal, Azure CLI, Azure PowerShell, Resource Manager templates, or appropriate alternatives.
## NAT Gateway architecture
A NAT gateway can be attached to multiple subnets within a virtual network to pr
The following subnet configurations can't be used with a NAT gateway:
-* A subnet can't be attached to more than one NAT gateway. The NAT gateway becomes the default route to the internet for a subnet, only one NAT gateway can serve as the default route.
+* When NAT gateway is attached to a subnet, it assumes the default route to the internet. Only one NAT gateway can serve as the default route to the internet for a subnet.
* A NAT gateway can't be attached to subnets from different virtual networks.
The following subnet configurations can't be used with a NAT gateway:
## Static public IP addresses
-A NAT gateway can be associated with static public IP addresses or public IP prefixes for providing outbound connectivity. NAT Gateway supports IPv4 addresses. A NAT gateway can use public IP addresses or prefixes in any combination up to a total of 16 IP addresses. If you assign a public IP prefix, the entire public IP prefix is used. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. NAT gateway will groom all traffic to the range of IP addresses of the prefix.
+A NAT gateway can be associated with static public IP addresses or public IP prefixes for providing outbound connectivity. NAT Gateway supports IPv4 addresses. A NAT gateway can use public IP addresses or prefixes in any combination up to a total of 16 IP addresses. If you assign a public IP prefix, the entire public IP prefix is used. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. NAT gateway grooms all traffic to the range of IP addresses of the prefix.
* A NAT gateway can't be used with IPv6 public IP addresses or prefixes.
The connection flow may not exist if:
* The sender, either from the Azure network side or from the public internet side, sent traffic after the connection dropped.
-A TCP reset packet is sent only upon detecting traffic on the dropped connection flow. This operation means a TCP reset packet may not be sent right away after a connection flow has dropped.
+A TCP reset packet is sent only upon detecting traffic on the dropped connection flow. This operation means a TCP reset packet may not be sent right away after a connection flow drops.
The system sends a TCP reset packet in response to detecting traffic on a nonexisting connection flow, regardless of whether the traffic originates from the Azure network side or the public internet side.
The following table provides information about when a TCP port becomes available
|||| | TCP FIN | After a connection closes by a TCP FIN packet, a 65-second timer is activated that holds down the SNAT port. The SNAT port is available for reuse after the timer ends. | 65 seconds | | TCP RST | After a connection closes by a TCP RST packet (reset), a 16-second timer is activated that holds down the SNAT port. When the timer ends, the port is available for reuse. | 16 seconds |
-| TCP half open | During connection establishment where one connection endpoint is waiting for acknowledgment from the other endpoint, a 30-second timer is activated. If no traffic is detected, the connection closes. Once the connection has closed, the source port is available for reuse to the same destination endpoint. | 30 seconds |
+| TCP half open | During connection establishment where one connection endpoint is waiting for acknowledgment from the other endpoint, a 30-second timer is activated. If no traffic is detected, the connection closes. Once the connection closes, the source port is available for reuse to the same destination endpoint. | 30 seconds |
For UDP traffic, after a connection closes, the port is in hold down for 65 seconds before it's available for reuse.
For UDP traffic, after a connection closes, the port is in hold down for 65 seco
## Bandwidth
-Each NAT gateway can provide up to 50 Gbps of throughput. This data throughput includes data processed both outbound and inbound (response) through a NAT gateway resource. You can split your deployments into multiple subnets and assign each subnet or group of subnets to a NAT gateway to scale out.
+Each NAT gateway can provide up to a total of 50 Gbps of throughput. Data throughput rate limiting is split between outbound and inbound (response) data. Data throughput is rate limited at 25 Gbps for outbound and 25 Gbps for inbound (response) data per NAT gateway resource. You can split your deployments into multiple subnets and assign each subnet or group of subnets to a NAT gateway to scale out.
## Performance A NAT gateway can support up to 50,000 concurrent connections per public IP address **to the same destination endpoint** over the internet for TCP and UDP. The NAT gateway can process 1M packets per second and scale up to 5M packets per second.
-The total number of connections that a NAT gateway can support at any given time is up to 2 million. While it's possible that the NAT gateway can exceed 2 million connections, you have increased risk of connection failures.
+The total number of connections that a NAT gateway can support at any given time is up to 2 million. If NAT gateway exceeds 2 million connections, you will see a decline in your datapath availability and new connections will fail.
## Limitations
The total number of connections that a NAT gateway can support at any given time
- NAT Gateway doesn't support Public IP addresses with routing configuration type **internet**. To see a list of Azure services that do support routing configuration **internet** on public IPs, see [supported services for routing over the public internet](/azure/virtual-network/ip-services/routing-preference-overview#supported-services). -- Public IPs with DDoS protection enabled are not supported with NAT gateway. See [DDoS limitations](/azure/ddos-protection/ddos-protection-sku-comparison#limitations) for more information.
+- Public IPs with DDoS protection enabled aren't supported with NAT gateway. For more information, see [DDoS limitations](/azure/ddos-protection/ddos-protection-sku-comparison#limitations).
## Next steps
operator-nexus Howto Configure Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md
The following parameters for isolation domains are optional.
| Parameter|Description|Example|Required| ||||| | `redistributeConnectedSubnet` | Advertise connected subnets default value is True |True | |
-| `redistributeStaticRoutes` |Advertise Static Routes can have value of true/False. Defualt Value is False | False | |
+| `redistributeStaticRoutes` |Advertise Static Routes can have value of true/False. Default Value is False | False | |
| `aggregateRouteConfiguration`|List of Ipv4 and Ipv6 route configurations | | | | `connectedSubnetRoutePolicy` | Route Policy Configuration for IPv4 or Ipv6 L3 ISD connected subnets. Refer to help file for using correct syntax | | |
postgresql Concepts Networking Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private-link.md
The same public service instance can be referenced by multiple private endpoints
- **Global reach: Connect privately to services running in other regions.** The consumer's virtual network could be in region A and it can connect to services behind Private Link in region B.
-## Use Cases for Private Link with Azure Database for PostgreSQL flexible server in Preview
+## Use Cases for Private Link with Azure Database for PostgreSQL flexible server
Clients can connect to the private endpoint from the same VNet, peered VNet in same region or across regions, or via [VNet-to-VNet connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases.
postgresql Concepts Networking Ssl Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-ssl-tls.md
System.setProperty("javax.net.ssl.trustStorePassword","password");
``` 6. Replace the original root CA pem file with the combined root CA file and restart your application/client.
+For more information on configuring client certificates with the PostgreSQL JDBC driver, see this [documentation](https://jdbc.postgresql.org/documentation/ssl/).
+ > [!NOTE] > Azure Database for PostgreSQL - Flexible server doesn't support [certificate based authentication](https://www.postgresql.org/docs/current/auth-cert.html) at this time.
For Azure App services, connecting to Azure Database for PostgreSQL, we can have
If you're trying to connect to the Azure Database for PostgreSQL using applications hosted in Azure Kubernetes Services (AKS) and pinning certificates, it's similar to access from a dedicated customers host environment. Refer to the steps [here](../../aks/ingress-tls.md).
+## Testing SSL/TLS Connectivity
+
+Before trying to access your SSL-enabled server from a client application, make sure you can connect to it via psql. You should see output like the following if you've established an SSL connection.
++
+*psql (14.5)*
+*SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)*
+*Type "help" for help.*
+++ ## Cipher Suites A **cipher suite** is a set of cryptographic algorithms. TLS/SSL protocols use algorithms from a cipher suite to create keys and encrypt information.
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. Azure Database
| Australia Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Australia Central 2 *| :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Australia Southeast | (v3/v4/v5 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Australia Southeast | :heavy_check_mark: (v3/v4/v5 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Brazil South | :heavy_check_mark: (v3 only) | :x: $ | :heavy_check_mark: | :x: | | Brazil Southeast * | :heavy_check_mark: (v3 only) | :x: $ | :heavy_check_mark: | :x: | | Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 3/21/2024 Last updated : 4/1/2024 # Release notes - Azure Database for PostgreSQL - Flexible Server
Last updated 3/21/2024
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant to Azure Database for PostgreSQL flexible server.
+## Release: March 2024
+* Public preview of [Major Version Upgrade Support for PostgreSQL 16](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL flexible server.
+ ## Release: February 2024 * Support for [minor versions](./concepts-supported-versions.md) 16.1, 15.5, 14.10, 13.13, 12.17, 11.22 <sup>$</sup> * General availability of [Major Version Upgrade logs](./concepts-major-version-upgrade.md#major-version-upgrade-logs)
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Azure Route Server Keepalive timer is 60 seconds and the Hold timer is 180 secon
Azure Route Server supports ***NO_ADVERTISE*** BGP community. If a network virtual appliance (NVA) advertises routes with this community string to the route server, the route server doesn't advertise it to other peers including the ExpressRoute gateway. This feature can help reduce the number of routes sent from Azure Route Server to ExpressRoute.
+### When a VNet peering is created between your hub and spoke VNet, does this cause a BGP soft reset between Azure Route Server and its peered NVAs?
+
+Yes. If a VNet peering is created between your hub and spoke VNet, Azure Route Server will perform a BGP soft reset by sending route refresh requests to all its peered NVAs. If the NVAs do not support BGP route refresh, then Azure Route Server will perform a BGP hard reset with the peered NVAs, which may cause connectivity disruption for traffic traversing the NVAs.
+ ### What Autonomous System Numbers (ASNs) can I use? You can use your own public ASNs or private ASNs in your network virtual appliance (NVA). You can't use ASNs reserved by Azure or IANA.
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Previously updated : 01/22/2024 Last updated : 04/01/2024
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- April 1, 2024: Reference the considerations section for sizing HANA shared file system in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md), [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md), [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md), and [Azure Files NFS for SAP](planning-guide-storage-azure-files.md)
- March 18, 2024: Added considerations for sizing the HANA shared file system in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - February 07, 2024: Clarified disk allocation when using PPGs to bind availability set in specific Availability Zone in [Configuration options for optimal network latency with SAP applications](./proximity-placement-scenarios.md#combine-availability-sets-and-availability-zones-with-proximity-placement-groups)-- February 01, 2024: Added guidance for [SAP front-end printing to Universal Print](./universal-print-sap-frontend.md).
+- February 01, 2024: Added guidance for [SAP front-end printing to Universal Print](./universal-print-sap-frontend.md)
- January 24, 2024: Split [SAP RISE integration documentation](./rise-integration.md) into multiple segments for improved legibility, additional overview information added. - January 22, 2024: Changes in all high availability documentation to include guidelines for setting the ΓÇ£probeThresholdΓÇ¥ property to 2 in the load balancerΓÇÖs health probe configuration. - January 21, 2024: Change recommendations around LARGEPAGES in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
sap Hana Vm Operations Netapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations-netapp.md
keywords: 'SAP, Azure, ANF, HANA, Azure NetApp Files, snapshot'
Previously updated : 08/02/2023 Last updated : 04/01/2024
To meet the SAP minimum throughput requirements for data and log, and according
| /hana/logbackup | 3 x RAM | 3 x RAM | v3 or v4.1 | | /hana/backup | 2 x RAM | 2 x RAM | v3 or v4.1 |
-For all volumes, NFS v4.1 is highly recommended
+For all volumes, NFS v4.1 is highly recommended.
+Review carefully the [considerations for sizing **/hana/shared** file system](./hana-vm-operations-storage.md#considerations-for-the-hana-shared-file-system), as an appropriately sized **/hana/shared** volume contributes to the system's stability.
The sizes for the backup volumes are estimations. Exact requirements need to be defined based on workload and operation processes. For backups, you could consolidate many volumes for different SAP HANA instances to one (or two) larger volumes, which could have a lower service level of ANF.
sap Hana Vm Premium Ssd V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
Previously updated : 11/15/2023 Last updated : 04/01/2024
For the **/hana/log** volume. the configuration would look like:
For the other volumes, the configuration would look like:
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/shared | /root volume | /usr/sap |
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/shared<sup>2</sup> | /root volume | /usr/sap |
| | | | | | | | | -- | | M32ts | 192 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 | | M32ls | 256 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 |
For the other volumes, the configuration would look like:
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 1 x P30 | 1 x P10 | 1 x P6 | | M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps |1 x P30 | 1 x P10 | 1 x P6 |
-<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
-
+<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>2</sup> Review carefully the [considerations for sizing **/hana/shared** file system](./hana-vm-operations-storage.md#considerations-for-the-hana-shared-file-system)
Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires higher volumes for **/hana/data** and **/hana/log**, you need to increase the number of Azure premium storage VHDs. Sizing a volume with more VHDs than listed increases the IOPS and I/O throughput within the limits of the Azure virtual machine type.
You may want to use Azure Ultra disk storage instead of Azure premium storage on
For the other volumes, including **/hana/log** on Ultra disk, the configuration could look like:
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS | /hana/shared | /root volume | /usr/sap |
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS | /hana/shared<sup>1</sup> | /root volume | /usr/sap |
| | | | | | | | | -- | | E20ds_v4 | 160 GiB | 480 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 | | E20(d)s_v5 | 160 GiB | 750 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
For the other volumes, including **/hana/log** on Ultra disk, the configuration
| E64(d)s_v5 | 512 GiB | 1,735 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 | | E96(d)s_v5 | 672 GiB | 2,600 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+<sup>1</sup> Review carefully the [considerations for sizing **/hana/shared** file system](./hana-vm-operations-storage.md#considerations-for-the-hana-shared-file-system)
## Cost conscious solution with Azure premium storage So far, the Azure premium storage solution described in this document in section [Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines](#solutions-with-premium-storage-and-azure-write-accelerator-for-azure-m-series-virtual-machines) were meant for SAP HANA production supported scenarios. One of the characteristics of production supportable configurations is the separation of the volumes for SAP HANA data and redo log into two different volumes. Reason for such a separation is that the workload characteristics on the volumes are different. And that with the suggested production configurations, different type of caching or even different types of Azure block storage could be necessary. For non-production scenarios, some of the considerations taken for production systems may not apply to more low end non-production systems. As a result the HANA data and log volume could be combined. Though eventually with some culprits, like eventually not meeting certain throughput or latency KPIs that are required for production systems. Another aspect to reduce costs in such environments can be the usage of [Azure Standard SSD storage](./planning-guide-storage.md#azure-standard-ssd-storage). Keep in mind that choosing Standard SSD or Standard HDD Azure storage has impact on your single VM SLAs as documented in the article [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines).
So far, the Azure premium storage solution described in this document in section
A less costly alternative for such configurations could look like:
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data and /hana/log<br /> striped with LVM or MDADM | /hana/shared | /root volume | /usr/sap | comments |
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data and /hana/log<br /> striped with LVM or MDADM | /hana/shared<sup>3</sup> | /root volume | /usr/sap | comments |
| | | | | | | | -- | | DS14v2 | 112 GiB | 768 MB/s | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> | | E16v3 | 128 GiB | 384 MB/s | 4 x P6 | 1 x E10 | 1 x E6 | 1 x E6 | VM type not HANA certified <br /> won't achieve less than 1ms storage latency<sup>1</sup> |
A less costly alternative for such configurations could look like:
<sup>2</sup> The VM family supports [Azure Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md), but there's a potential that the IOPS limit of Write accelerator could limit the disk configurations IOPS capabilities
+<sup>3</sup> Review carefully the [considerations for sizing **/hana/shared** file system](./hana-vm-operations-storage.md#considerations-for-the-hana-shared-file-system)
+ When combining the data and log volume for SAP HANA, the disks building the striped volume shouldn't have read cache or read/write cache enabled. There are VM types listed that aren't certified with SAP and as such not listed in the so called [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). Feedback of customers was that those non-listed VM types were used successfully for some non-production tasks.
sap Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage, Premium SSD v2'
Previously updated : 11/17/2023 Last updated : 04/01/2024
Configuration for SAP **/hana/data** volume:
For the **/hana/log** volume. the configuration would look like:
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | Max VM IOPS | **/hana/log** capacity | **/hana/log** throughput | **/hana/log** IOPS | **/hana/shared** capacity <br />using default IOPS <br /> and throughput |
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | Max VM IOPS | **/hana/log** capacity | **/hana/log** throughput | **/hana/log** IOPS | **/hana/shared**<sup>2</sup> capacity <br />using default IOPS <br /> and throughput |
| | | | | | | | | E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB | | E20(d)s_v5 | 160 GiB | 750 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
For the **/hana/log** volume. the configuration would look like:
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB | | M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB |
-<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>2</sup> Review carefully the [considerations for sizing **/hana/shared** file system](./hana-vm-operations-storage.md#considerations-for-the-hana-shared-file-system)
Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires higher volumes for **/hana/data** and **/hana/log**, you need to increase either IOPS, and/or throughput on the individual disks you're using.
sap Planning Guide Storage Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage-azure-files.md
Previously updated : 04/26/2023 Last updated : 04/01/2024
For SAP workloads, the supported uses of Azure Files shares are:
- sapmnt volume for a distributed SAP system - transport directory for SAP landscape-- /hana/shared for HANA scale-out
+- /hana/shared for HANA scale-out. Review carefully the [considerations for sizing **/hana/shared** file system](./hana-vm-operations-storage.md#considerations-for-the-hana-shared-file-system), as an appropriately sized **/hana/shared** volume contributes to the system's stability
- file interface between your SAP landscape and other applications > [!NOTE]
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
Debug sessions work with all generally available [indexer data sources](search-d
+ For custom skills, a user-assigned managed identity isn't supported for a debug session connection to Azure Storage. As stated in the prerequisites, you can use a system managed identity, or specify a full access connection string that includes a key. For more information, see [Connect a search service to other Azure resources using a managed identity](search-howto-managed-identities-data-sources.md).
+The portal doesn't support customer-managed key encryption (CMK), which means that portal experiences like debug sessions can't have CMK-encrypted connection strings or other encrypted metadata. If your search service is configured for [CMK enforcement](search-security-manage-encryption-keys.md#6set-up-policy), debug sessions won't work.
+ ## Create a debug session 1. Sign in to the [Azure portal](https://portal.azure.com) and find your search service.
service-bus-messaging Message Deferral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-deferral.md
Last updated 06/08/2023
When a queue or subscription client receives a message that it's willing to process, but the processing isn't currently possible because of special circumstances, it has the option of "deferring" retrieval of the message to a later point. The message remains in the queue or subscription, but it's set aside. > [!NOTE]
-> Deferred messages won't be automatically moved to the dead-letter queue [after they expire](./service-bus-dead-letter-queues.md#time-to-live). This behavior is by design.
+> Deferred messages aren't automatically moved to the dead-letter queue when they expire. This behavior is by design. A deferred message is checked for the [expired condition](service-bus-dead-letter-queues.md#time-to-live) only when a client app attempts to receive it by using the receive API and its sequence number; if the message is already expired, it's moved to the dead-letter queue at that point. An expired message is moved to a dead-letter subqueue only when the dead-letter feature is enabled for the entity (queue or subscription).
## Sample scenarios
-Deferral is a feature created specifically for workflow processing scenarios. Workflow frameworks may require certain operations to be processed in a particular order. They may have to postpone processing of some received messages until prescribed prior work that's informed by other messages has been completed.
+Deferral is a feature created specifically for workflow processing scenarios. Workflow frameworks might require certain operations to be processed in a particular order. They might have to postpone processing of some received messages until prescribed prior work that's informed by other messages has been completed.
-A simple illustrative example is an order processing sequence in which a payment notification from an external payment provider appears in a system before the matching purchase order has been propagated from the store front to the fulfillment system. In that case, the fulfillment system might defer processing the payment notification until there's an order with which to associate it. In rendezvous scenarios, where messages from different sources drive a workflow forward, the real-time execution order may indeed be correct, but the messages reflecting the outcomes may arrive out of order.
+A simple illustrative example is an order processing sequence in which a payment notification from an external payment provider appears in a system before the matching purchase order has been propagated from the store front to the fulfillment system. In that case, the fulfillment system might defer processing the payment notification until there's an order with which to associate it. In rendezvous scenarios, where messages from different sources drive a workflow forward, the real-time execution order might indeed be correct, but the messages reflecting the outcomes might arrive out of order.
Ultimately, deferral aids in reordering messages from the arrival order into an order in which they can be processed, while leaving those messages safely in the message store for which processing needs to be postponed. If a message can't be processed because a particular resource for handling that message is temporarily unavailable but message processing shouldn't be summarily suspended, a way to put that message on the side for a few minutes is to remember the sequence number in a [scheduled message](message-sequencing.md) to be posted in a few minutes, and re-retrieve the deferred message when the scheduled message arrives. If a message handler depends on a database for all operations and that database is temporarily unavailable, it shouldn't use deferral, but rather suspend receiving messages altogether until the database is available again. ## Retrieving deferred messages
-Deferred messages remain in the main queue along with all other active messages (unlike dead-letter messages that live in a subqueue), but they can no longer be received using the regular receive operations. Deferred messages can be discovered via [message browsing](message-browsing.md) if an application loses track of them.
+Deferred messages remain in the main queue along with all other active messages (unlike dead-letter messages that live in a subqueue), but they can no longer be received using the regular receive operations. Deferred messages can be discovered via [message browsing or peeking](message-browsing.md) if an application loses track of them.
+
+To retrieve a deferred message, its owner is responsible for remembering the **sequence number** as it defers it. Any receiver that knows the sequence number of a deferred message can later receive the message by using receive methods that take the sequence number as a parameter. For more information about sequence numbers, see [Message sequencing and timestamps](message-sequencing.md).
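As an illustration of this pattern, the following minimal sketch uses the `azure-servicebus` Python package to defer messages and later retrieve them by sequence number; the connection string, queue name, and the decision to defer every received message are placeholders for real application logic.

```python
from azure.servicebus import ServiceBusClient

conn_str = "<service-bus-connection-string>"  # placeholder
queue_name = "<queue-name>"                   # placeholder

with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_receiver(queue_name=queue_name) as receiver:
        deferred_sequence_numbers = []
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            # Processing isn't possible yet: defer the message and remember its sequence number.
            deferred_sequence_numbers.append(msg.sequence_number)
            receiver.defer_message(msg)

        # Later, once the prerequisite work is done, retrieve the deferred messages
        # explicitly by sequence number and settle them.
        if deferred_sequence_numbers:
            for msg in receiver.receive_deferred_messages(deferred_sequence_numbers):
                receiver.complete_message(msg)
```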
-To retrieve a deferred message, its owner is responsible for remembering the sequence number as it defers it. Any receiver that knows the sequence number of a deferred message can later receive the message by using receive methods that take the sequence number as a parameter. For more information about sequence numbers, see [Message sequencing and timestamps](message-sequencing.md).
## Next steps Try the samples in the language of your choice to explore Azure Service Bus features.
service-bus-messaging Message Transfers Locks Settlement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-transfers-locks-settlement.md
If the application produces bursts of messages, illustrated here with a plain lo
With an assumed 70-millisecond Transmission Control Protocol (TCP) roundtrip latency distance from an on-premises site to Service Bus and giving just 10 ms for Service Bus to accept and store each message, the following loop takes up at least 8 seconds, not counting payload transfer time or potential route congestion effects: ```csharp
-for (int i = 0; i < 100; i++)
+for (int i = 0; i < 10; i++)
{
- // creating the message omitted for brevity
- await client.SendAsync(…);
+ // creating the message omitted for brevity
+ await sender.SendMessageAsync(message);
} ```
With the same assumptions as for the prior loop, the total overlapped execution
```csharp var tasks = new List<Task>();
-for (int i = 0; i < 100; i++)
+for (int i = 0; i < 10; i++)
{
- tasks.Add(client.SendAsync(…));
+ tasks.Add(sender.SendMessageAsync(message));
} await Task.WhenAll(tasks); ```
Semaphores, as shown in the following code snippet in C#, are synchronization ob
var semaphore = new SemaphoreSlim(10); var tasks = new List<Task>();
-for (int i = 0; i < 100; i++)
+for (int i = 0; i < 10; i++)
{
- await semaphore.WaitAsync();
+ await semaphore.WaitAsync();
- tasks.Add(client.SendAsync(…).ContinueWith((t)=>semaphore.Release()));
+ tasks.Add(sender.SendMessageAsync(message).ContinueWith((t)=>semaphore.Release()));
} await Task.WhenAll(tasks); ```
await Task.WhenAll(tasks);
Applications should **never** initiate an asynchronous send operation in a "fire and forget" manner without retrieving the outcome of the operation. Doing so can load the internal and invisible task queue up to memory exhaustion, and prevent the application from detecting send errors: ```csharp
-for (int i = 0; i < 100; i++)
+for (int i = 0; i < 10; i++)
{-
- client.SendAsync(message); // DON'T DO THIS
+ sender.SendMessageAsync(message); // DON'T DO THIS
} ```
-With a low-level AMQP client, Service Bus also accepts "pre-settled" transfers. A pre-settled transfer is a fire-and-forget operation for which the outcome, either way, isn't reported back to the client and the message is considered settled when sent. The lack of feedback to the client also means that there's no actionable data available for diagnostics, which means that this mode doesn't qualify for help via Azure support.
+With a low-level AMQP client, Service Bus also accepts "presettled" transfers. A presettled transfer is a fire-and-forget operation for which the outcome, either way, isn't reported back to the client and the message is considered settled when sent. The lack of feedback to the client also means that there's no actionable data available for diagnostics, which means that this mode doesn't qualify for help via Azure support.
## Settling receive operations
service-bus-messaging Topic Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/topic-filters.md
All rules **without actions** are combined using an `OR` condition and result in
Each rule **with an action** produces a copy of the message. This message will have a property called `RuleName` where the value is the name of the matching rule. The action can add or update properties, or delete properties from the original message to produce a message on the subscription.
-Consider the following scenario:
--- Subscription has five rules.-- Two rules contain actions.-- Three rules don't contain actions.-
-In this example, if you send one message that matches all five rules, you get three messages on the subscription. That's two messages for two rules with actions and one message for three rules without actions.
+Consider the following scenario where a subscription has five rules: two rules with actions and the other three without actions. In this example, if you send one message that matches all five rules, you get three messages on the subscription. That's two messages for two rules with actions and one message for three rules without actions.
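For illustration, rules with and without actions might be created with the `azure-servicebus` Python administration client as in the following minimal sketch; the connection string, topic, subscription, rule names, and filter expressions are placeholders.

```python
from azure.servicebus.management import (
    ServiceBusAdministrationClient,
    SqlRuleAction,
    SqlRuleFilter,
)

conn_str = "<service-bus-connection-string>"  # placeholder
topic = "<topic-name>"                        # placeholder
subscription = "<subscription-name>"          # placeholder

with ServiceBusAdministrationClient.from_connection_string(conn_str) as admin_client:
    # Rule with an action: each matching message produces a copy with an extra property.
    admin_client.create_rule(
        topic, subscription, "red-with-action",
        filter=SqlRuleFilter("color = 'red'"),
        action=SqlRuleAction("SET priority = 'high'"),
    )
    # Rule without an action: contributes to the OR'ed set of match conditions.
    admin_client.create_rule(
        topic, subscription, "blue-no-action",
        filter=SqlRuleFilter("color = 'blue'"),
    )
```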
Each newly created topic subscription has an initial default subscription rule. If you don't explicitly specify a filter condition for the rule, the applied filter is the **true** filter that enables all messages to be selected into the subscription. The default rule has no associated annotation action.
+> [!NOTE]
+> This article applies to non-JMS scenarios. For JMS scenarios, use [message selectors](java-message-service-20-entities.md#message-selectors).
+ ## Filters Service Bus supports three types of filters:
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
If you want to find a list of all the available Service Fabric runtime versions
| 10.0 CU3<br>10.0.2226.9590 | 9.0 CU10<br>9.0.1553.9590 | 9.0 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 | | 10.0 CU1<br>10.0.1949.9590 | 9.0 CU10<br>9.0.1553.9590 | 9.0 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 | | 10.0 RTO<br>10.0.1816.9590 | 9.0 CU10<br>9.0.1553.9590 | 9.0 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
-| 9.1 CU9<br>9.1.2277.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
-| 9.1 CU7<br>9.1.1993.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
-| 9.1 CU6<br>9.1.1851.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
-| 9.1 CU5<br>9.1.1833.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
-| 9.1 CU4<br>9.1.1799.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
-| 9.1 CU3<br>9.1.1653.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
-| 9.1 CU2<br>9.1.1583.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
-| 9.1 CU1<br>9.1.1436.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
-| 9.1 RTO<br>9.1.1390.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | April 30, 2024 |
+| 9.1 CU9<br>9.1.2277.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
+| 9.1 CU7<br>9.1.1993.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
+| 9.1 CU6<br>9.1.1851.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
+| 9.1 CU5<br>9.1.1833.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
+| 9.1 CU4<br>9.1.1799.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
+| 9.1 CU3<br>9.1.1653.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
+| 9.1 CU2<br>9.1.1583.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
+| 9.1 CU1<br>9.1.1436.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
+| 9.1 RTO<br>9.1.1390.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | September 30, 2024 |
| 9.0 CU12<br>9.0.1672.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | January 1, 2024 | | 9.0 CU11<br>9.0.1569.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU10<br>9.0.1553.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |
site-recovery Failover Failback Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview.md
- Title: About failover and failback in Azure Site - Classic
-description: Learn about failover and failback in Azure Site Recovery - Classic
- Previously updated : 06/30/2021----
-# About on-premises disaster recovery failover/failback - Classic
-
-[Azure Site Recovery](site-recovery-overview.md) contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business apps up and running during planned and unplanned outages. Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VMs). Disaster recovery includes replication, failover, and recovery of various workloads.
-
-> [!IMPORTANT]
-> This article provides an overview of failover and failback during disaster recovery of on-premises machines to Azure with [Azure Site Recovery](site-recovery-overview.md) - Classic.
-><br>
-> For information about failover and failback in Azure Site Recovery Modernized release, [see this article](failover-failback-overview-modernized.md).
-
-## Recovery stages
-
-Failover and failback in Site Recovery has four stages:
--- **Stage 1: Fail over from on-premises**: After setting up replication to Azure for on-premises machines, when your on-premises site goes down, you fail those machines over to Azure. After failover, Azure VMs are created from replicated data.-- **Stage 2: Reprotect Azure VMs**: In Azure, you reprotect the Azure VMs so that they start replicating back to the on-premises site. The on-premises VM (if available) is turned off during reprotection, to help ensure data consistency.-- **Stage 3: Fail over from Azure**: When your on-premises site is running as normal again, you run another failover, this time to fail back Azure VMs to your on-premises site. You can fail back to the original location from which you failed over, or to an alternate location.-- **Stage 4: Reprotect on-premises machines**: After failing back, again enable replication of the on-premises machines to Azure.-
-## Failover
-
-You perform a failover as part of your business continuity and disaster recovery (BCDR) strategy.
--- As a first step in your BCDR strategy, you replicate your on-premises machines to Azure on an ongoing basis. Users access workloads and apps running on the on-premises source machines.-- If the need arises, for example if there's an outage on-premises, you fail the replicating machines over to Azure. Azure VMs are created using the replicated data.-- For business continuity, users can continue accessing apps on the Azure VMs.-
-Failover is a two-phase activity:
--- **Failover**: The failover that creates and brings up an Azure VM using the selected recovery point.-- **Commit**: After failover you verify the VM in Azure:
- - You can then commit the failover to the selected recovery point, or select a different point for the commit.
- - After committing the failover, the recovery point can't be changed.
--
-## Connect to Azure after failover
-
-To connect to the Azure VMs created after failover using RDP/SSH, there are a number of requirements.
-
-**Failover** | **Location** | **Actions**
- | |
-**Azure VM running Windows** | On the on-premises machine before failover | **Access over the internet**: Enable RDP. Make sure that TCP and UDP rules are added for **Public**, and that RDP is allowed for all profiles in **Windows Firewall** > **Allowed Apps**.<br/><br/> **Access over site-to-site VPN**: Enable RDP on the machine. Check that RDP is allowed in the **Windows Firewall** -> **Allowed apps and features**, for **Domain and Private** networks.<br/><br/> Make sure the operating system SAN policy is set to **OnlineAll**. [Learn more](https://support.microsoft.com/kb/3031135).<br/><br/> Make sure there are no Windows updates pending on the VM when you trigger a failover. Windows Update might start when you fail over, and you won't be able to log onto the VM until updates are done.
-**Azure VM running Windows** | On the Azure VM after failover | [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> The network security group rules on the failed over VM (and the Azure subnet to which it is connected) must allow incoming connections to the RDP port.<br/><br/> Check **Boot diagnostics** to verify a screenshot of the VM. If you can't connect, check that the VM is running, and review [troubleshooting tips](https://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx).
-**Azure VM running Linux** | On the on-premises machine before failover | Ensure that the Secure Shell service on the VM is set to start automatically on system boot.<br/><br/> Check that firewall rules allow an SSH connection to it.
-**Azure VM running Linux** | On the Azure VM after failover | The network security group rules on the failed over VM (and the Azure subnet to which it is connected) need to allow incoming connections to the SSH port.<br/><br/> [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> Check **Boot diagnostics** for a screenshot of the VM.<br/><br/>
-
-## Types of failover
-
-Site Recovery provides different failover options.
-
-**Failover** | **Details** | **Recovery** | **Workflow**
- | | |
-**Test failover** | Used to run a drill that validates your BCDR strategy, without any data loss or downtime.| Creates a copy of the VM in Azure, with no impact on ongoing replication, or on your production environment. | 1. Run a test failover on a single VM, or on multiple VMs in a recovery plan.<br/><br/> 2. Select a recovery point to use for the test failover.<br/><br/> 3. Select an Azure network in which the Azure VM will be located when it's created after failover. The network is only used for the test failover.<br/><br/> 4. Verify that the drill worked as expected. Site Recovery automatically cleans up VMs created in Azure during the drill.
-**Planned failover-Hyper-V** | Usually used for planned downtime.<br/><br/> Source VMs are shut down. The latest data is synchronized before initiating the failover. | Zero data loss for the planned workflow. | 1. Plan a downtime maintenance window and notify users.<br/><br/> 2. Take user-facing apps offline.<br/><br/> 3. Initiate a planned failover with the latest recovery point. The failover doesn't run if the machine isn't shut down, or if errors are encountered.<br/><br/> 4. After the failover, check that the replica Azure VM is active in Azure.<br/><br/> 5. Commit the failover to finish up. The commit action deletes all recovery points.
-**Failover-Hyper-V** | Usually run if there's an unplanned outage, or the primary site isn't available.<br/><br/> Optionally shut down the VM, and synchronize final changes before initiating the failover. | Minimal data loss for apps. | 1. Initiate your BCDR plan. <br/><br/> 2. Initiate a failover. Specify whether Site Recovery should shut down the VM and synchronize/replicate the latest changes before triggering the failover.<br/><br/> 3. You can fail over to a number of recovery point options, summarized in the table below.<br/><br/> If you don't enable the option to shut down the VM, or if Site Recovery can't shut it down, the latest recovery point is used.<br/>The failover runs even if the machine can't be shut down.<br/><br/> 4. After failover, you check that the replica Azure VM is active in Azure.<br/> If required, you can select a different recovery point from the retention window of 24 hours.<br/><br/> 5. Commit the failover to finish up. The commit action deletes all available recovery points.
-**Failover-VMware** | Usually run if there's an unplanned outage, or the primary site isn't available.<br/><br/> Optionally specify that Site Recovery should try to trigger a shutdown of the VM, and to synchronize and replicate final changes before initiating the failover. | Minimal data loss for apps. | 1. Initiate your BCDR plan. <br/><br/> 2. Initiate a failover from Site Recovery. Specify whether Site Recovery should try to trigger VM shutdown and synchronize before running the failover.<br/> The failover runs even if the machines can't be shut down.<br/><br/> 3. After the failover, check that the replica Azure VM is active in Azure. <br/>If required, you can select a different recovery point from the retention window of 72 hours.<br/><br/> 5. Commit the failover to finish up. The commit action deletes all recovery points.<br/> For Windows VMs, Site Recovery disables the VMware tools during failover.
-
-## Failover processing
-
-In some scenarios, failover requires additional processing that takes around 8 to 10 minutes to complete. You might notice longer test failover times for:
-
-* VMware VMs running a Mobility service version older than 9.8.
-* Physical servers.
-* VMware Linux VMs.
-* Hyper-V VMs protected as physical servers.
-* VMware VMs that don't have the DHCP service enabled.
-* VMware VMs that don't have the following boot drivers: storvsc, vmbus, storflt, intelide, atapi.
-
-## Recovery point options
-
-During failover, you can select a number of recovery point options.
-
-**Option** | **Details**
- |
-**Latest (lowest RPO)** | This option provides the lowest recovery point objective (RPO). It first processes all the data that has been sent to Site Recovery service, to create a recovery point for each VM, before failing over to it. This recovery point has all the data replicated to Site Recovery when the failover was triggered.
-**Latest processed** | This option fails over VMs to the latest recovery point processed by Site Recovery. To see the latest recovery point for a specific VM, check **Latest Recovery Points** in the VM settings. This option provides a low RTO (Recovery Time Objective), because no time is spent processing unprocessed data.
-**Latest app-consistent** | This option fails over VMs to the latest application-consistent recovery point processed by Site Recovery, if app-consistent recovery points are enabled. Check the latest recovery point in the VM settings.
-**Latest multi-VM processed** | This option is available for recovery plans with one or more VMs that have multi-VM consistency enabled. VMs with the setting enabled fail over to the latest common multi-VM consistent recovery point. Any other VMs in the plan fail over to the latest processed recovery point.
-**Latest multi-VM app-consistent** | This option is available for recovery plans with one or more VMs that have multi-VM consistency enabled. VMs that are part of a replication group fail over to the latest common multi-VM application-consistent recovery point. Other VMs fail over to their latest application-consistent recovery point.
-**Custom** | Use this option to fail over a specific VM to a particular recovery point in time. This option isn't available for recovery plans.
-
-> [!NOTE]
-> Recovery points can't be migrated to another Recovery Services vault.
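If you script failovers, the recovery point choice maps to a parameter on the failover cmdlet. A rough sketch with the Az.RecoveryServices PowerShell module (names in angle brackets are placeholders, the vault context is assumed to already be set, and a single protection container is assumed):

```azurepowershell
# Illustrative only: assumes Set-AzRecoveryServicesAsrVaultContext has already been run.
$fabric    = Get-AzRecoveryServicesAsrFabric -FriendlyName "<fabric-name>"
$container = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric | Select-Object -First 1
$item      = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container -FriendlyName "<vm-name>"

# List the available recovery points and pick the most recent one (a custom recovery point).
$recoveryPoint = Get-AzRecoveryServicesAsrRecoveryPoint -ReplicationProtectedItem $item |
    Sort-Object -Property RecoveryPointTime -Descending |
    Select-Object -First 1

# Fail over to that specific recovery point.
Start-AzRecoveryServicesAsrUnplannedFailoverJob -ReplicationProtectedItem $item -Direction PrimaryToRecovery -RecoveryPoint $recoveryPoint
```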
-
-## Reprotection/failback
-
-After failover to Azure, the replicated Azure VMs are in an unprotected state.
-
-- As a first step to failing back to your on-premises site, you need to start the Azure VMs replicating to on-premises. The reprotection process depends on the type of machines you failed over.
-- After machines are replicating from Azure to on-premises, you can run a failover from Azure to your on-premises site.
-- After machines are running on-premises again, you can enable replication so that they replicate to Azure for disaster recovery.
-
-Failback works as follows:
-
-- To fail back, a VM needs at least one recovery point. In a recovery plan, all VMs in the plan need at least one recovery point.
-- We recommend that you use the **Latest** recovery point to fail back (this is a crash-consistent point).
- - There is an app-consistent recovery point option. In this case, a single VM recovers to its latest available app-consistent recovery point. For a recovery plan with a replication group, each replication group recovers to its common available recovery point.
- - App-consistent recovery points can be behind in time, and there might be loss in data.
-- During failover from Azure to the on-premises site, Site Recovery shuts down the Azure VMs. When you commit the failover, Site Recovery removes the failed back Azure VMs in Azure.
-
-
-## VMware/physical reprotection/failback
-
-To reprotect and fail back VMware machines and physical servers from Azure to on-premises, you need a failback infrastructure, and there are a number of requirements.
-
-- **Temporary process server in Azure**: To fail back from Azure, you set up an Azure VM to act as a process server to handle replication from Azure. You can delete this VM after failback finishes.
-- **VPN connection**: To fail back, you need a VPN connection (or ExpressRoute) from the Azure network to the on-premises site.
-- **Separate master target server**: By default, the master target server that was installed with the configuration server on the on-premises VMware VM handles failback. If you need to fail back large volumes of traffic, set up a separate on-premises master target server for this purpose.
-- **Failback policy**: To replicate back to your on-premises site, you need a failback policy. This policy is automatically created when you create a replication policy from on-premises to Azure.
- - This policy is automatically associated with the configuration server.
- - You can't edit this policy.
- - Policy values: RPO threshold - 15 minutes; Recovery point retention - 24 Hours; App-consistent snapshot frequency - 60 minutes.
-
-Learn more about VMware/physical reprotection and failback:
-- [Review](vmware-azure-reprotect.md#before-you-begin) additional requirements for reprotection and failback.
-- [Deploy](vmware-azure-prepare-failback.md#deploy-a-process-server-in-azure) a process server in Azure.
-- [Deploy](vmware-azure-prepare-failback.md#deploy-a-separate-master-target-server) a separate master target server.
-
-When you reprotect Azure VMs to on-premises, you can specify that you want to fail back to the original location, or to an alternate location.
-
-- **Original location recovery**: This fails back from Azure to the same source on-premises machine if it exists. In this scenario, only changes are replicated back to on-premises.
-- **Alternate location recovery**: If the on-premises machine doesn't exist, you can fail back from Azure to an alternate location. When you reprotect the Azure VM to on-premises, the on-premises machine is created. Full data replication occurs from Azure to on-premises.
-
-
-## Hyper-V reprotection/failback
-
-To reprotect and fail back Hyper-V VMs from Azure to on-premises:
-
-- You can only fail back Hyper-V VMs replicating using a storage account. Failback of Hyper-V VMs that replicate using managed disks isn't supported.
-- On-premises Hyper-V hosts (or System Center VMM if used) should be connected to Azure.
-- You run a planned failback from Azure to on-premises.
-- No specific components need to be set up for Hyper-V VM failback.
-- During planned failover, you can select options to synchronize data before failback:
- - **Synchronize data before failover**: This option minimizes downtime for virtual machines as it synchronizes machines without shutting them down.
- - Phase 1: Takes a snapshot of the Azure VM and copies it to the on-premises Hyper-V host. The machine continues running in Azure.
- - Phase 2: Shuts down the Azure VM so that no new changes occur there. The final set of delta changes is transferred to the on-premises server and the on-premises VM is started up.
- - **Synchronize data during failover only**: This option is faster because we expect that most of the disk has changed, and thus don't perform checksum calculations. It performs a download of the disk. We recommend that you use this option if the VM has been running in Azure for a while (a month or more), or if the on-premises VM has been deleted.
-
-[Learn more](hyper-v-azure-failback.md) about Hyper-V reprotection and failback.
-
-When you reprotect Azure VMs to on-premises, you can specify that you want to fail back to the original location, or to an alternate location.
-
-- **Original location recovery**: This fails back from Azure to the same source on-premises machine if it exists. In this scenario, you select one of the synchronization options described in the previous procedure.
-- **Alternate location recovery**: If the on-premises machine doesn't exist, you can fail back from Azure to an alternate location. When you reprotect the Azure VM to on-premises, the on-premises machine is created. With this option, we recommend that you select the option to synchronize data before failover.
-- [Review](hyper-v-azure-failback.md) the requirements and limitations for location failback.
-
-
-After failing back to the on-premises site, you enable **Reverse Replicate** to start replicating the VM to Azure, completing the cycle.
----
-## Next steps
-- Fail over [specific VMware VMs](vmware-azure-tutorial-failover-failback.md).
-- Fail over [specific Hyper-V VMs](hyper-v-azure-failover-failback-tutorial.md).
-- [Create](site-recovery-create-recovery-plans.md) a recovery plan.
-- Fail over [VMs in a recovery plan](site-recovery-failover.md).
-- [Prepare for](vmware-azure-failback.md) VMware reprotection and failback.
-- Fail back [Hyper-V VMs](hyper-v-azure-failback.md).
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
Previously updated : 02/07/2024 Last updated : 04/01/2024
The following table summarizes the available attributes by source:
| | [Blob index tags [Keys]](#blob-index-tags-keys) | Index tags on a blob resource (keys); available only for storage accounts where hierarchical namespace is not enabled |
| | [Blob index tags [Values in key]](#blob-index-tags-values-in-key) | Index tags on a blob resource (values in key); available only for storage accounts where hierarchical namespace is not enabled |
| | [Blob prefix](#blob-prefix) | Allowed prefix of blobs to be listed |
+| | [List blob include](#list-blob-include) | Information that can be included with listing operations, such as metadata, snapshots, or versions |
| | [Snapshot](#snapshot) | The Snapshot identifier for the Blob snapshot |
| | [Version ID](#version-id) | The version ID of the versioned blob; available only for storage accounts where hierarchical namespace is not enabled |
| **Resource** | | |
The following table summarizes the available attributes by source:
| | [Blob index tags [Values in key]](#blob-index-tags-values-in-key) | Index tags on a blob resource (values in key) |
| | [Blob path](#blob-path) | Path of a virtual directory, blob, folder or file resource |
| | [Container name](#container-name) | Name of a storage container or file system |
+| | [Container metadata](#container-metadata) | Metadata key/value pair associated with a container |
| | [Encryption scope name](#encryption-scope-name) | Name of the encryption scope used to encrypt data |
| | [Is current version](#is-current-version) | Whether the resource is the current version of the blob |
| | [Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled) | Whether hierarchical namespace is enabled on the storage account |
The following table summarizes the available attributes by source:
> | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'`<br/>[Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers) |
+### Container metadata
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Container metadata |
+> | **Description** | Metadata key/value pair associated with a container.<br/>Use when you want to check specific metadata for a container. *Currently in preview.* |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/metadata` |
+> | **Attribute source** | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/metadata:testKey] StringEquals 'testValue'`<br/>[Example: Read blobs in a container with specific metadata](storage-auth-abac-examples.md#example-read-blobs-in-container-with-specific-metadata)<br/>[Example: Write or delete blobs in container with specific metadata](storage-auth-abac-examples.md#example-write-or-delete-blobs-in-container-with-specific-metadata) |
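The condition only matches when the container actually carries that metadata pair. As a quick illustration of how the `testKey`/`testValue` pair used above might be set with Azure CLI before testing the condition (the container and account names are placeholders):

```azurecli
az storage container metadata update \
    --name <container-name> \
    --metadata testKey=testValue \
    --account-name <storage-account-name> \
    --auth-mode login
```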
+
### Encryption scope name

> [!div class="mx-tdCol2BreakAll"]
The following table summarizes the available attributes by source:
> | **Examples** | `@Environment[isPrivateLink] BoolEquals true`<br/>[Example: Require private link access to read blobs with high sensitivity](storage-auth-abac-examples.md#example-require-private-link-access-to-read-blobs-with-high-sensitivity) |
> | **Learn more** | [Use private endpoints for Azure Storage](../common/storage-private-endpoints.md) |
+### List blob include
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | List blob include |
+> | **Description** | Information that can be included with a [List Blobs](/rest/api/storageservices/list-blobs) operation, such as metadata, snapshots, or versions.<br/>Use when you want to allow or restrict values for the `include` parameter when calling the [List Blobs](/rest/api/storageservices/list-blobs) operation.<br/>*Currently in preview. Available only for storage accounts where hierarchical namespace is not enabled.* |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:include` |
+> | **Attribute source** | [Request](../../role-based-access-control/conditions-format.md#request-attributes) |
+> | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
+> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:include] ForAllOfAnyValues:StringEqualsIgnoreCase {'metadata', 'snapshots', 'versions'}`<br/>`@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:include] ForAllOfAllValues:StringNotEquals {'metadata'}`<br/>[Example: Allow list blob operation to include blob metadata, snapshots, or versions](storage-auth-abac-examples.md#example-allow-list-blob-operation-to-include-blob-metadata-snapshots-or-versions)<br/>[Example: Restrict list blob operation to not include blob metadata](storage-auth-abac-examples.md#example-restrict-list-blob-operation-to-not-include-blob-metadata) |
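For reference, the `include` values being evaluated are the ones passed on the List Blobs REST call itself. A request that asks for metadata, snapshots, and versions looks roughly like this (account and container names are placeholders):

```
GET https://<storage-account>.blob.core.windows.net/<container>?restype=container&comp=list&include=metadata,snapshots,versions
```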
+
### Private endpoint

> [!div class="mx-tdCol2BreakAll"]
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Previously updated : 02/08/2024 Last updated : 04/01/2024 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
Use the following table to quickly locate an example that fits your ABAC scenari
| [Read or list blobs in named containers with a path](#example-read-or-list-blobs-in-named-containers-with-a-path) | | | blob prefix | container name</br> blob path |
| [Write blobs in named containers with a path](#example-write-blobs-in-named-containers-with-a-path) | | | | container name</br> blob path |
| [Read blobs with a blob index tag and a path](#example-read-blobs-with-a-blob-index-tag-and-a-path) | | | | tags</br> blob path |
+| [Read blobs in container with specific metadata](#example-read-blobs-in-container-with-specific-metadata) | | | | container metadata |
+| [Write or delete blobs in container with specific metadata](#example-write-or-delete-blobs-in-container-with-specific-metadata) | | | | container metadata |
| [Read only current blob versions](#example-read-only-current-blob-versions) | | | | isCurrentVersion |
| [Read current blob versions and a specific blob version](#example-read-current-blob-versions-and-a-specific-blob-version) | | | versionId | isCurrentVersion |
| [Delete old blob versions](#example-delete-old-blob-versions) | | | versionId | |
| [Read current blob versions and any blob snapshots](#example-read-current-blob-versions-and-any-blob-snapshots) | | | snapshot | isCurrentVersion |
+| [Allow list blob operation to include blob metadata, snapshots, or versions](#example-allow-list-blob-operation-to-include-blob-metadata-snapshots-or-versions) | | | list blob include | |
+| [Restrict list blob operation to not include blob metadata](#example-restrict-list-blob-operation-to-not-include-blob-metadata) | | | list blob include | |
| [Read only storage accounts with hierarchical namespace enabled](#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) | | | | isHnsEnabled |
| [Read blobs with specific encryption scopes](#example-read-blobs-with-specific-encryption-scopes) | | | | Encryption scope name |
| [Read or write blobs in named storage account with specific encryption scope](#example-read-or-write-blobs-in-named-storage-account-with-specific-encryption-scope) | | | | Storage account name</br> Encryption scope name |
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logs/Alp
+## Blob container metadata
+
+### Example: Read blobs in container with specific metadata
+
+This condition allows users to read blobs in blob containers with a specific metadata key/value pair.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
+
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Container metadata](storage-auth-abac-attributes.md#container-metadata) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {metadataValue} |
++
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
+
+**Storage Blob Data Reader**
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/metadata:testKey] StringEquals 'testValue'
+ )
+)
+
+```
++
+# [PowerShell](#tab/azure-powershell)
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "( `
+ ( `
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}) `
+ ) `
+ OR `
+ ( `
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/metadata:testKey] StringEquals 'testValue' `
+ ) `
+)"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
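The snippet above assumes the `$scope`, `$roleDefinitionName`, and `$userObjectID` variables are already populated. A minimal sketch of how those values might be set (the subscription, resource group, storage account, and user principal name are placeholders):

```azurepowershell
# Placeholder values; substitute your own subscription, resource group, storage account, and user.
$subscriptionId     = "<subscription-id>"
$resourceGroup      = "<resource-group>"
$storageAccountName = "<storage-account>"

$scope              = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Storage/storageAccounts/$storageAccountName"
$roleDefinitionName = "Storage Blob Data Reader"
$userObjectID       = (Get-AzADUser -UserPrincipalName "user@contoso.com").Id
```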
+++
+### Example: Write or delete blobs in container with specific metadata
+
+This condition allows users to write or delete blobs in blob containers with a specific metadata key/value pair.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | |
+
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Delete a blob](storage-auth-abac-attributes.md#delete-a-blob) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Container metadata](storage-auth-abac-attributes.md#container-metadata) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {metadataValue} |
++
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
+
+**Storage Blob Data Contributor**
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/metadata:testKey] StringEquals 'testValue'
+ )
+)
+
+```
++
+# [PowerShell](#tab/azure-powershell)
+
+Here's how to add this condition using Azure PowerShell.
+
+```azurepowershell
+$condition = "( `
+ ( `
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) `
+ AND `
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'}) `
+ ) `
+ OR `
+ ( `
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/metadata:testKey] StringEquals 'testValue' `
+ ) `
+)"
+$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
+$testRa.Condition = $condition
+$testRa.ConditionVersion = "2.0"
+Set-AzRoleAssignment -InputObject $testRa -PassThru
+```
+++

## Blob versions or blob snapshots

This section includes examples showing how to restrict access to objects based on the blob version or snapshot.
Currently no example provided.
+### Example: Allow list blob operation to include blob metadata, snapshots, or versions
+
+This condition allows a user to list blobs in a container and include metadata, snapshot, and version information. The [List blobs include](storage-auth-abac-attributes.md#list-blob-include) attribute is available for storage accounts where hierarchical namespace isn't enabled.
+
+> [!NOTE]
+>[List blobs include](storage-auth-abac-attributes.md#list-blob-include) is a request attribute, and works by allowing or restricting values in the `include` parameter when calling the [List Blobs](/rest/api/storageservices/list-blobs) operation. The values in the `include` parameter are compared against the values specified in the condition using [cross product comparison operators](/azure/role-based-access-control/conditions-format#cross-product-comparison-operators). If the comparison evaluates to true, the `List Blobs` request is allowed. If the comparison evaluates to false, the `List Blobs` request is denied.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
++
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [List blobs](storage-auth-abac-attributes.md#list-blobs)|
+> | Attribute source | Request |
+> | Attribute | [List blobs include](storage-auth-abac-attributes.md#list-blob-include) |
+> | Operator | [ForAllOfAnyValues:StringEqualsIgnoreCase](../../role-based-access-control/conditions-format.md#forallofanyvalues) |
+> | Value | {'metadata', 'snapshots', 'versions'} |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
+
+**Storage Blob Data Reader**
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:include] ForAllOfAnyValues:StringEqualsIgnoreCase {'metadata', 'snapshots', 'versions'}
+ )
+)
+```
+
+In this example, the condition restricts the read action when the suboperation is `Blob.List`. This means that a [List Blobs](/rest/api/storageservices/list-blobs) operation is further evaluated against the expression that checks the `include` values, but all other read actions are allowed.
++
+# [PowerShell](#tab/azure-powershell)
+
+Currently no example provided.
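Although no official PowerShell example is published for this condition yet, the pattern of the earlier examples in this article should carry over. A sketch only (unvalidated), reusing the `$scope`, `$roleDefinitionName`, and `$userObjectID` variables from those examples:

```azurepowershell
$condition = "( `
 ( `
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'}) `
 ) `
 OR `
 ( `
  @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:include] ForAllOfAnyValues:StringEqualsIgnoreCase {'metadata', 'snapshots', 'versions'} `
 ) `
)"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru
```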
+++
+### Example: Restrict list blob operation to not include blob metadata
+
+This condition restricts a user from listing blobs when metadata is included in the request. The [List blobs include](storage-auth-abac-attributes.md#list-blob-include) attribute is available for storage accounts where hierarchical namespace isn't enabled.
+
+> [!NOTE]
+>[List blobs include](storage-auth-abac-attributes.md#list-blob-include) is a request attribute, and works by allowing or restricting values in the `include` parameter when calling the [List Blobs](/rest/api/storageservices/list-blobs) operation. The values in the `include` parameter are compared against the values specified in the condition using [cross product comparison operators](/azure/role-based-access-control/conditions-format#cross-product-comparison-operators). If the comparison evaluates to true, the `List Blobs` request is allowed. If the comparison evaluates to false, the `List Blobs` request is denied.
+
+You must add this condition to any role assignments that include the following action.
+
+> [!div class="mx-tableFixed"]
+> | Action | Notes |
+> | | |
+> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
++
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [List blobs](storage-auth-abac-attributes.md#list-blobs)|
+> | Attribute source | Request |
+> | Attribute | [List blobs include](storage-auth-abac-attributes.md#list-blob-include) |
+> | Operator | [ForAllOfAllValues:StringNotEquals](../../role-based-access-control/conditions-format.md#forallofallvalues) |
+> | Value | {'metadata'} |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
+
+**Storage Blob Data Reader**
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:include] ForAllOfAllValues:StringNotEquals {'metadata'}
+ )
+)
+```
+
+In this example, the condition restricts the read action when the suboperation is `Blob.List`. This means that a [List Blobs](/rest/api/storageservices/list-blobs) operation is further evaluated against the expression that checks the `include` values, but all other read actions are allowed.
++
+# [PowerShell](#tab/azure-powershell)
+
+Currently no example provided.
+++

## Hierarchical namespace

This section includes examples showing how to restrict access to objects based on whether hierarchical namespace is enabled for a storage account.
storage Troubleshoot Container Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/troubleshoot-container-storage.md
az aks update -n <cluster-name> -g <resource-group> --enable-azure-container-sto
### Can't set storage pool type to NVMe
-If you try to install Azure Container Storage with ephemeral disk, specifically with local NVMe on a cluster where the virtual machine (VM) SKU doesn't have NVMe drives, you get the following error message: *Cannot set --storage-pool-option as NVMe as none of the node pools can support ephemeral NVMe disk*.
+If you try to install Azure Container Storage with Ephemeral Disk, specifically with local NVMe on a cluster where the virtual machine (VM) SKU doesn't have NVMe drives, you get the following error message: *Cannot set --storage-pool-option as NVMe as none of the node pools can support ephemeral NVMe disk*.
To remediate, create a node pool with a VM SKU that has NVMe drives and try again. See [storage optimized VMs](../../virtual-machines/sizes-storage.md).
If you're trying to create an Elastic SAN storage pool, you might see the messag
### No block devices found
-If you see this message, you're likely trying to create an ephemeral disk storage pool on a cluster where the VM SKU doesn't have NVMe drives.
+If you see this message, you're likely trying to create an Ephemeral Disk storage pool on a cluster where the VM SKU doesn't have NVMe drives.
To remediate, create a node pool with a VM SKU that has NVMe drives and try again. See [storage optimized VMs](../../virtual-machines/sizes-storage.md).
If you created an Elastic SAN storage pool, you might not be able to delete the
To resolve this, sign in to the [Azure portal](https://portal.azure.com?azure-portal=true) and select **Resource groups**. Locate the resource group that AKS created (the resource group name starts with **MC_**). Select the SAN resource object within that resource group. Manually remove all volumes and volume groups. Then retry deleting the resource group that includes your AKS cluster.
+## Troubleshoot persistent volume issues
+
+### Can't create persistent volumes from ephemeral disk storage pools
+Because ephemeral disks (local NVMe and Temp SSD) are ephemeral and not durable, we enforce the use of [Kubernetes Generic Ephemeral Volumes](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes). If you try to create a persistent volume claim using an ephemeral disk pool, you'll see the following error: *Error from server (Forbidden): error when creating "eph-pvc.yaml": admission webhook "pvc.acstor.azure.com" denied the request: only generic ephemeral volumes are allowed in unreplicated ephemeralDisk storage pools*.
+
+If you need a persistent volume, one with a lifecycle independent of any individual pod that uses it, Azure Container Storage supports replication for NVMe. You can create a storage pool with replication enabled and create persistent volumes from it. See [Create storage pool with volume replication](use-container-storage-with-local-disk.md#optional-create-storage-pool-with-volume-replication-nvme-only) for guidance. Because ephemeral disk storage pools consume all the available NVMe disks, you must delete any existing ephemeral disk storage pools before creating a new storage pool with replication enabled. If you don't need persistence, you can create a generic ephemeral volume instead.
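If a generic ephemeral volume fits your workload, the volume is declared inline in the pod spec rather than as a standalone PVC. A minimal sketch of the `volumes` section of a pod spec, assuming your storage pool created a storage class named `acstor-ephemeraldisk-nvme`:

```yml
# Sketch only: inline generic ephemeral volume referencing an Azure Container Storage storage class.
# Adjust the storage class name and requested size to match your storage pool.
volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: acstor-ephemeraldisk-nvme
          resources:
            requests:
              storage: 1Gi
```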
+ ## See also - [Azure Container Storage FAQ](container-storage-faq.md)
storage Use Container Storage With Local Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md
[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Ephemeral Disk as back-end storage for your Kubernetes workloads. At the end, you'll have a pod that's using either local NVMe or temp SSD as its storage. > [!IMPORTANT]
-> Local disks are ephemeral, meaning that they're created on the local virtual machine (VM) storage and not saved to an Azure storage service. Data will be lost on these disks if you stop/deallocate your VM.
+> Local disks are ephemeral, meaning that they're created on the local virtual machine (VM) storage and not saved to an Azure storage service. Data will be lost on these disks if you stop/deallocate your VM. You can only create [Kubernetes generic ephemeral volumes](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes) from an Ephemeral Disk storage pool. If you want to create a persistent volume, you have to enable [replication for your storage pool](#optional-create-storage-pool-with-volume-replication-nvme-only).
## Prerequisites
Run `kubectl get sc` to display the available storage classes. You should see a
> [!IMPORTANT] > Don't use the storage class that's marked **internal**. It's an internal storage class that's needed for Azure Container Storage to work.
-## Create a persistent volume claim
-
-A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class.
+## Deploy a pod with a generic ephemeral volume
-1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`.
-
-1. Paste in the following code and save the file. The PVC `name` value can be whatever you want.
-
- ```yml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: ephemeralpvc
- spec:
- accessModes:
- - ReadWriteOnce
- storageClassName: acstor-ephemeraldisk # replace with the name of your storage class if different
- resources:
- requests:
- storage: 100Gi
- ```
-
-1. Apply the YAML manifest file to create the PVC.
-
- ```azurecli-interactive
- kubectl apply -f acstor-pvc.yaml
- ```
-
- You should see output similar to:
-
- ```output
- persistentvolumeclaim/ephemeralpvc created
- ```
-
- You can verify the status of the PVC by running the following command:
-
- ```azurecli-interactive
- kubectl describe pvc ephemeralpvc
- ```
-
-Once the PVC is created, it's ready for use by a pod.
-
-## Deploy a pod and attach a persistent volume
-
-Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim.
+Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, that uses a generic ephemeral volume.
1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`.

1. Paste in the following code and save the file.
-
   ```yml
   kind: Pod
   apiVersion: v1
Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
   spec:
     nodeSelector:
       acstor.azure.com/io-engine: acstor
- volumes:
- - name: ephemeralpv
- persistentVolumeClaim:
- claimName: ephemeralpvc
   containers:
     - name: fio
       image: nixery.dev/shell/fio
Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
- "1000000" volumeMounts: - mountPath: "/volume"
- name: ephemeralpv
+ name: ephemeralvolume
+ volumes:
+ - name: ephemeralvolume
+ ephemeral:
+ volumeClaimTemplate:
+ metadata:
+ labels:
+ type: my-ephemeral-volume
+ spec:
+ accessModes: [ "ReadWriteOnce" ]
+ storageClassName: "acstor-ephemeraldisk-nvme" # replace with the name of your storage class if different
+ resources:
+ requests:
+ storage: 1Gi
   ```

1. Apply the YAML manifest file to deploy the pod.
Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
pod/fiopod created ```
-1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod:
+1. Check that the pod is running and that the ephemeral volume claim has been bound successfully to the pod:
   ```azurecli-interactive
   kubectl describe pod fiopod
- kubectl describe pvc ephemeralpvc
+ kubectl describe pvc fiopod-ephemeralvolume
   ```

1. Check fio testing to see its current status:
Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
You've now deployed a pod that's using Ephemeral Disk as its storage, and you can use it for your Kubernetes workloads.
-## Detach and reattach a persistent volume
-
-To detach a persistent volume, delete the pod that the persistent volume is attached to. Replace `<pod-name>` with the name of the pod, for example **fiopod**.
-
-```azurecli-interactive
-kubectl delete pods <pod-name>
-```
-
-To reattach a persistent volume, simply reference the persistent volume claim name in the YAML manifest file as described in [Deploy a pod and attach a persistent volume](#deploy-a-pod-and-attach-a-persistent-volume).
-
-To check which persistent volume a persistent volume claim is bound to, run `kubectl get pvc <persistent-volume-claim-name>`.
## Expand a storage pool

You can expand storage pools backed by local NVMe or temp SSD to scale up quickly and without downtime. Shrinking storage pools isn't currently supported.
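Storage pools are Kubernetes custom resources in the `acstor` namespace (the `sp` resources used elsewhere in this article), so one way to expand a pool is to patch its requested size. This is a sketch only: the pool name and new size are placeholders, and the `spec.resources.requests.storage` field path is an assumption about the storage pool spec.

```azurecli-interactive
kubectl patch sp <storage-pool-name> -n acstor --type merge -p '{"spec": {"resources": {"requests": {"storage": "2Ti"}}}}'
```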
kubectl delete sp -n acstor <storage-pool-name>
## Optional: Create storage pool with volume replication (NVMe only)
-Applications that use local NVMe can leverage storage replication for improved resiliency. Replication isn't currently supported for local SSD.
+Applications that use local NVMe can leverage storage replication for improved resiliency. Replication isn't currently supported for temp SSD.
Azure Container Storage currently supports three-replica and five-replica configurations. If you specify three replicas, you must have at least three nodes in your AKS cluster. If you specify five replicas, you must have at least five nodes. Follow these steps to create a storage pool using local NVMe with replication.
+> [!NOTE]
+> Because Ephemeral Disk storage pools consume all the available NVMe disks, you must delete any existing Ephemeral Disk local NVMe storage pools before creating a new storage pool with replication.
+
1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`.
+
1. Paste in the following code and save the file. The storage pool **name** value can be whatever you want. Set replicas to 3 or 5.

   ```yml
Follow these steps to create a storage pool using local NVMe with replication.
kubectl describe sp <storage-pool-name> -n acstor ```
-When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`. Now you can [display the available storage classes](#display-the-available-storage-classes) and [create a persistent volume claim](#create-a-persistent-volume-claim).
+When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`. Now you can [display the available storage classes](#display-the-available-storage-classes) and create a persistent volume claim.
+
+## Create a persistent volume claim
+
+A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class.
+
+1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`.
+
+1. Paste in the following code and save the file. The PVC `name` value can be whatever you want.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: ephemeralpvc
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: acstor-ephemeraldisk-nvme # replace with the name of your storage class if different
+ resources:
+ requests:
+ storage: 100Gi
+ ```
+
+1. Apply the YAML manifest file to create the PVC.
+
+ ```azurecli-interactive
+ kubectl apply -f acstor-pvc.yaml
+ ```
+
+ You should see output similar to:
+
+ ```output
+ persistentvolumeclaim/ephemeralpvc created
+ ```
+
+ You can verify the status of the PVC by running the following command:
+
+ ```azurecli-interactive
+ kubectl describe pvc ephemeralpvc
+ ```
+
+Once the PVC is created, it's ready for use by a pod.
+
+## Deploy a pod and attach a persistent volume
+
+Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim.
+
+1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`.
+
+1. Paste in the following code and save the file.
+
+ ```yml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: fiopod
+ spec:
+ nodeSelector:
+ acstor.azure.com/io-engine: acstor
+ volumes:
+ - name: ephemeralpv
+ persistentVolumeClaim:
+ claimName: ephemeralpvc
+ containers:
+ - name: fio
+ image: nixery.dev/shell/fio
+ args:
+ - sleep
+ - "1000000"
+ volumeMounts:
+ - mountPath: "/volume"
+ name: ephemeralpv
+ ```
+
+1. Apply the YAML manifest file to deploy the pod.
+
+ ```azurecli-interactive
+ kubectl apply -f acstor-pod.yaml
+ ```
+
+ You should see output similar to the following:
+
+ ```output
+ pod/fiopod created
+ ```
+
+1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod:
+
+ ```azurecli-interactive
+ kubectl describe pod fiopod
+ kubectl describe pvc ephemeralpvc
+ ```
+
+1. Check fio testing to see its current status:
+
+ ```azurecli-interactive
+ kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
+ ```
+
+You've now deployed a pod that's using Ephemeral Disk as its storage, and you can use it for your Kubernetes workloads.
+
+## Detach and reattach a persistent volume
+
+To detach a persistent volume, delete the pod that the persistent volume is attached to. Replace `<pod-name>` with the name of the pod, for example **fiopod**.
+
+```azurecli-interactive
+kubectl delete pods <pod-name>
+```
+
+To reattach a persistent volume, simply reference the persistent volume claim name in the YAML manifest file as described in [Deploy a pod and attach a persistent volume](#deploy-a-pod-and-attach-a-persistent-volume).
+
+To check which persistent volume a persistent volume claim is bound to, run `kubectl get pvc <persistent-volume-claim-name>`.
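For example, to print just the name of the persistent volume bound to the `ephemeralpvc` claim created earlier:

```azurecli-interactive
kubectl get pvc ephemeralpvc -o jsonpath='{.spec.volumeName}'
```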
## See also
storage Geo Redundant Storage For Large File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md
description: Azure Files geo-redundancy for large file shares significantly impr
Previously updated : 03/29/2024 Last updated : 04/01/2024
Azure Files geo-redundancy for large file shares is generally available in the m
| Southeast Asia | GA | | Sweden Central | GA | | Sweden South | GA |
-| Switzerland North | Preview |
-| Switzerland West | Preview |
+| Switzerland North | GA |
+| Switzerland West | GA |
| UAE Central | GA | | UAE North | GA | | UK South | GA |
stream-analytics Azure Cosmos Db Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-cosmos-db-output.md
Title: Azure Cosmos DB output from Azure Stream Analytics description: This article describes how to output data from Azure Stream Analytics to Azure Cosmos DB.--++ Last updated 12/13/2021
stream-analytics Azure Data Explorer Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-data-explorer-managed-identity.md
Title: Use managed identities to access Azure Data Explorer from an Azure Stream Analytics job description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to an Azure Data Explorer output.--++ Last updated 10/27/2022
stream-analytics Azure Database Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-database-explorer-output.md
Title: Azure Data Explorer output from Azure Stream Analytics description: This article describes using Azure Data Explorer as an output for Azure Stream Analytics.--++ Last updated 06/01/2023
stream-analytics Azure Functions Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-functions-output.md
Title: Azure Functions output from Azure Stream Analytics description: This article describes Azure functions as output for Azure Stream Analytics.--++ Last updated 05/28/2021
stream-analytics Azure Synapse Analytics Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-synapse-analytics-output.md
Title: Azure Synapse Analytics output from Azure Stream Analytics description: This article describes Azure Synapse Analytics as output for Azure Stream Analytics.--++ Last updated 08/25/2020
stream-analytics Blob Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-output-managed-identity.md
Title: Authenticate blob output with Managed Identity Azure Stream Analytics description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to Azure Blob storage output.--++ Last updated 09/16/2022
stream-analytics Confluent Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/confluent-kafka-input.md
Title: Stream data from confluent cloud Kafka with Azure Stream Analytics description: Learn about how to set up an Azure Stream Analytics job as a consumer from confluent cloud kafka--++
stream-analytics Confluent Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/confluent-kafka-output.md
Title: Stream data from Azure Stream Analytics into confluent cloud kafka description: Learn about how to set up an Azure Stream Analytics job as a producer to confluent cloud kafka--++
stream-analytics Cosmos Db Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cosmos-db-managed-identity.md
Title: Use managed identities to access Azure Cosmos DB from an Azure Stream Analytics job description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to an Azure Cosmos DB output.--++ Last updated 10/20/2023
stream-analytics Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/data-protection.md
Title: Data protection in Azure Stream Analytics description: This article explains how to encrypt your private data used by an Azure Stream Analytics job.--++ Last updated 03/13/2023
stream-analytics Event Hubs Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-hubs-managed-identity.md
Title: Use managed identities to access Event Hubs from an Azure Stream Analytics job description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to Azure Event Hubs input and output.--++ Last updated 05/15/2023
stream-analytics Geospatial Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/geospatial-scenarios.md
Title: Geofencing and geospatial aggregation with Azure Stream Analytics description: This article describes how to use Azure Stream Analytics for geofencing and geospatial aggregation. --++ Last updated 04/02/2019
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
Title: Stream data from Azure Stream Analytics into Kafka description: Learn about setting up Azure Stream Analytics as a producer to kafka--++ Last updated 02/20/2024
stream-analytics Postgresql Database Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/postgresql-database-output.md
Title: Azure Database for PostgreSQL output from Azure Stream Analytics description: This article describes using Azure Database for PostgreSQL as output for Azure Stream Analytics.--++ Last updated 05/12/2023
stream-analytics Power Bi Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/power-bi-output.md
Title: Power BI output from Azure Stream Analytics description: This article describes how to output data from Azure Stream Analytics to Power BI.--++ Last updated 07/20/2023
stream-analytics Powerbi Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/powerbi-output-managed-identity.md
Title: Use Managed Identity to authenticate your Azure Stream Analytics job to P
description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to Power BI output. --++ Last updated 08/16/2023
stream-analytics Service Bus Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-managed-identity.md
Title: Use managed identities to access Service Bus from an Azure Stream Analytics job description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to an Azure Service Bus output.--++ Last updated 07/20/2023
stream-analytics Service Bus Queues Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-queues-output.md
Title: Service Bus queues output from Azure Stream Analytics description: This article describes Service Bus queues as output for Azure Stream Analytics.--++ Last updated 09/23/2020
stream-analytics Service Bus Topics Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-topics-output.md
Title: Service Bus topics output from Azure Stream Analytics description: This article describes Service Bus topics as output for Azure Stream Analytics.--++ Last updated 09/23/2020
stream-analytics Sql Database Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-output.md
Title: Azure SQL Database output from Azure Stream Analytics description: This article describes Azure SQL Database as output for Azure Stream Analytics.--++ Last updated 07/21/2022
stream-analytics Start Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/start-job.md
Title: How to start an Azure Stream Analytics job description: This article describes how to start a Stream Analytics job from Azure portal, PowerShell, and Visual Studio.--++ Last updated 04/03/2019
stream-analytics Stream Analytics Add Inputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-add-inputs.md
Title: Understand inputs for Azure Stream Analytics description: This article describes the concept of inputs in an Azure Stream Analytics job, comparing streaming input to reference data input. --++ Last updated 02/26/2024
stream-analytics Stream Analytics Clean Up Your Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-clean-up-your-job.md
Title: Clean up your Azure Stream Analytics job description: This article shows you different methods for deleting your Azure Stream Analytics jobs.--++ Last updated 06/21/2019
stream-analytics Stream Analytics Compatibility Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-compatibility-level.md
Title: Azure Stream Analytics compatibility levels description: Learn how to set a compatibility level for an Azure Stream Analytics job and major changes in the latest compatibility level--++ Last updated 03/18/2021
stream-analytics Stream Analytics Define Inputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-inputs.md
Title: Stream data as input into Azure Stream Analytics description: Learn about setting up a data connection in Azure Stream Analytics. Inputs include a data stream from events, and also reference data.--++ Last updated 01/25/2024
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
Title: Stream data from Kafka into Azure Stream Analytics description: Learn about setting up Azure Stream Analytics as a consumer from Kafka--++ Last updated 02/20/2024
stream-analytics Stream Analytics Define Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-outputs.md
Title: Outputs from Azure Stream Analytics description: This article describes data output options available for Azure Stream Analytics.--++ Last updated 01/25/2024
stream-analytics Stream Analytics Geospatial Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-geospatial-functions.md
Title: Introduction to Azure Stream Analytics geospatial functions description: This article describes geospatial functions that are used in Azure Stream Analytics jobs. --++ Last updated 12/06/2018
stream-analytics Stream Analytics Get Started With Azure Stream Analytics To Process Data From Iot Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-get-started-with-azure-stream-analytics-to-process-data-from-iot-devices.md
Title: Process real-time IoT data streams with Azure Stream Analytics description: IoT sensor tags and data streams with stream analytics and real-time data processing--++ Last updated 08/15/2023
stream-analytics Stream Analytics High Frequency Trading https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-high-frequency-trading.md
Title: High-frequency trading using Azure Stream Analytics description: How to perform linear regression model training and scoring in an Azure Stream Analytics job.--++ Last updated 03/16/2021
stream-analytics Stream Analytics Login Credentials Inputs Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-login-credentials-inputs-outputs.md
Title: Rotate login credentials in Azure Stream Analytics jobs description: This article describes how to update the credentials of inputs and output sinks in Azure Stream Analytics jobs.--++ Last updated 06/21/2019
stream-analytics Stream Analytics Managed Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-managed-identities-overview.md
Title: Managed identities for Azure Stream Analytics description: This article describes managed identities for Azure Stream Analytics.--++ Last updated 10/27/2022
stream-analytics Stream Analytics Output Error Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-output-error-policy.md
Title: Output error policies in Azure Stream Analytics description: Learn about the output error handling policies available in Azure Stream Analytics.--++ Last updated 05/30/2021
stream-analytics Stream Analytics Parsing Protobuf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-parsing-protobuf.md
Title: Parse Protobuf description: This article describes how to use Azure Stream Analytics with Protobuf as data input. --++ Last updated 11/20/2023
stream-analytics Stream Analytics Previews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-previews.md
Title: Azure Stream Analytics preview features description: This article lists the Azure Stream Analytics features that are currently in preview--++ Last updated 06/10/2022
stream-analytics Stream Analytics Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-powershell.md
Title: Quickstart - Create a Stream Analytics job using Azure PowerShell description: This quickstart demonstrates how to use the Azure PowerShell module to deploy and run an Azure Stream Analytics job.--++ Last updated 06/07/2023
stream-analytics Stream Analytics Quick Create Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-vs.md
Title: Quickstart - Create an Azure Stream Analytics job using Visual Studio description: This quickstart shows you how to get started by creating a Stream Analytics job, configuring inputs and outputs, and defining a query with Visual Studio. Last updated 06/07/2023
stream-analytics Stream Analytics Threshold Based Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-threshold-based-rules.md
Title: Configurable threshold-based rules in Azure Stream Analytics description: This article describes how to use reference data to achieve an alerting solution that has configurable threshold-based rules in Azure Stream Analytics. Last updated 04/30/2018
stream-analytics Stream Analytics Twitter Sentiment Analysis Trends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-twitter-sentiment-analysis-trends.md
Title: Social media analysis with Azure Stream Analytics description: This article describes how to use Stream Analytics for social media analysis using the Twitter client API. Step-by-step guidance from event generation to data on a live dashboard.
stream-analytics Stream Analytics User Assigned Managed Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-user-assigned-managed-identity-overview.md
Title: User-assigned managed identities for Azure Stream Analytics description: This article describes configuring user-assigned managed identities for Azure Stream Analytics. Last updated 08/15/2023
stream-analytics Streaming Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/streaming-technologies.md
Title: Choose a real-time and stream processing solution on Azure description: Learn how to choose the right real-time analytics and stream processing technology to build your application on Azure. Last updated 01/29/2024
stream-analytics Table Storage Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/table-storage-output.md
Title: Table storage output from Azure Stream Analytics description: This article describes Azure Table storage as output for Azure Stream Analytics. Last updated 08/25/2020
virtual-desktop Troubleshoot Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-teams.md
This article describes known issues and limitations for Teams on Azure Virtual D
## Known issues and limitations
-Using Teams in a virtualized environment is different from using Teams in a non-virtualized environment. For more information about the limitations of Teams in virtualized environments, check out [Teams for Virtualized Desktop Infrastructure](/microsoftteams/teams-for-vdi#known-issues-and-limitations).
+Using Teams in a virtualized environment is different from using Teams in a nonvirtualized environment. For more information about the limitations of Teams in virtualized environments, check out [Teams for Virtualized Desktop Infrastructure](/microsoftteams/teams-for-vdi#known-issues-and-limitations).
-### Client deployment, installation, and setup
+### Client deployment, installation, and set up
- With per-machine installation, Teams on VDI isn't automatically updated the same way non-VDI Teams clients are. To update the client, you'll need to update the VM image by installing a new MSI.
- Media optimization for Teams is only supported for the Remote Desktop client on machines running Windows 10 or later or macOS 10.14 or later.
- Use of explicit HTTP proxies defined on the client endpoint device should work, but isn't supported.
- Zoom in/zoom out of chat windows isn't supported.
+- Media optimization isn't supported for Teams running as a RemoteApp on macOS endpoints.
### Calls and meetings
Using Teams in a virtualized environment is different from using Teams in a non-
- If you've opened a window overlapping the window you're currently sharing during a meeting, the contents of the shared window that are covered by the overlapping window won't update for meeting users.
- If you're sharing admin windows for programs like Windows Task Manager, meeting participants may see a black area where the presenter toolbar or call monitor is located.
- Switching tenants can result in call-related issues such as screen sharing not rendering correctly. You can mitigate these issues by restarting your Teams client.
-- Teams does not support the ability to be on a native Teams call and a Teams call in the Azure Virtual Desktop session simultaneously while connected to a HID device.
+- Teams doesn't support the ability to be on a native Teams call and a Teams call in the Azure Virtual Desktop session simultaneously while connected to a HID device.
For Teams known issues that aren't related to virtualized environments, see [Support Teams in your organization](/microsoftteams/known-issues).
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
In this article, you learn how to install and uninstall Network Watcher Agent fo
The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
- You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. This article requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
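As a quick illustration of the local option mentioned above, the following Azure PowerShell sketch checks for the Az module and signs in. It is a minimal example of the prerequisite steps, not the article's exact procedure; in Cloud Shell the module check and sign-in aren't needed.

```powershell
# List the installed versions of the Az module (preinstalled in Cloud Shell).
Get-Module -ListAvailable Az

# Sign in interactively before running the Network Watcher Agent cmdlets.
Connect-AzAccount
```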
# [**Azure CLI**](#tab/cli)
In this article, you learn how to install and uninstall Network Watcher Agent fo
- Azure PowerShell or Azure CLI installed locally to deploy the template (a minimal deployment sketch follows this list).
- - You can [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure.
+ - You can [install Azure PowerShell](/powershell/azure/install-azure-powershell) to run the cmdlets. Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure.
- - You can [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. Use [az login](/cli/azure/reference-index#az-login) command to sign in to Azure.
+ - You can [install Azure CLI](/cli/azure/install-azure-cli) to run the commands. Use [az login](/cli/azure/reference-index#az-login) command to sign in to Azure.
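To make the template-deployment prerequisite above concrete, here is a minimal Azure PowerShell sketch. The resource group name and template file path are hypothetical placeholders rather than values taken from the article.

```powershell
# Sign in, then deploy an ARM template to an existing resource group (placeholder names).
Connect-AzAccount
New-AzResourceGroupDeployment -ResourceGroupName 'myResourceGroup' -TemplateFile './azuredeploy.json'
```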
virtual-machines Network Watcher Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-windows.md
In this article, you learn how to install and uninstall Network Watcher Agent fo
The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
- You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. This article requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
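When running the cmdlets locally across several subscriptions, signing in and then pinning the session to a specific subscription helps avoid targeting the wrong one. This is a general Azure PowerShell pattern rather than a step from the article, and the subscription ID below is a placeholder.

```powershell
# Sign in, then select the subscription that contains the target VM (placeholder ID).
Connect-AzAccount
Set-AzContext -Subscription '00000000-0000-0000-0000-000000000000'
```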
# [**Azure CLI**](#tab/cli)
In this article, you learn how to install and uninstall Network Watcher Agent fo
- Azure PowerShell or Azure CLI installed locally to deploy the template.
- - You can [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure.
+ - You can [install Azure PowerShell](/powershell/azure/install-azure-powershell) to run the cmdlets. Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure.
- - You can [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. Use [az login](/cli/azure/reference-index#az-login) command to sign in to Azure.
+ - You can [install Azure CLI](/cli/azure/install-azure-cli) to run the commands. Use [az login](/cli/azure/reference-index#az-login) command to sign in to Azure.
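For orientation, the agent installation this article walks through can also be expressed as a single Azure PowerShell call. The resource names below are placeholders, and the publisher, extension type, and version strings are the commonly documented values for the Windows agent; verify them against the article before relying on this sketch.

```powershell
# Install the Network Watcher Agent extension on an existing Windows VM (placeholder names; verify extension values).
Set-AzVMExtension -ResourceGroupName 'myResourceGroup' `
    -VMName 'myVM' `
    -Name 'AzureNetworkWatcherExtension' `
    -Location 'eastus' `
    -Publisher 'Microsoft.Azure.NetworkWatcher' `
    -ExtensionType 'NetworkWatcherAgentWindows' `
    -TypeHandlerVersion '1.4'
```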