Updates from: 11/11/2024 02:05:00
Service Microsoft Docs article Related commit history on GitHub Change details
api-center Build Register Apis Vscode Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/build-register-apis-vscode-extension.md
description: API developers can use the Azure API Center extension for Visual Studio Code to build and register APIs in their API center inventory.
Previously updated : 10/16/2024 Last updated : 11/08/2024
API developers in your organization can build and register APIs in your [API center](overview.md) inventory by using the Azure API Center extension for Visual Studio Code. API developers can:

* Add an existing API to an API center as a one-time operation, or integrate a development pipeline to register APIs as part of a CI/CD workflow.
-* Generate OpenAPI specification files from API code using GitHub Copilot, and register the API to an API center.
+* Use GitHub Copilot to generate new OpenAPI specs from API code.
+* Use natural language prompts with the API Center plugin for GitHub Copilot for Azure to create new OpenAPI specs.
API developers can also take advantage of features in the extension to [discover and consume APIs](discover-apis-vscode-extension.md) in the API center and ensure [API governance](govern-apis-vscode-extension.md).
The following Visual Studio Code extensions are needed for the specified scenarios:
* [GitHub Actions](https://marketplace.visualstudio.com/items?itemName=GitHub.vscode-github-actions) - to register APIs using a CI/CD pipeline with GitHub Actions
* [Azure Pipelines](https://marketplace.visualstudio.com/items?itemName=ms-azure-devops.azure-pipelines) - to register APIs using a CI/CD pipeline with Azure Pipelines
* [GitHub Copilot](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot) - to generate OpenAPI specification files from API code
+* [GitHub Copilot for Azure](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azure-github-copilot) - to generate OpenAPI specification files using the Azure API Center Plugin for GitHub Copilot for Azure
[!INCLUDE [vscode-extension-setup](includes/vscode-extension-setup.md)]
The following steps register an API in your API center with a CI/CD pipeline. Wi
Learn more about setting up a [GitHub Actions workflow](register-apis-github-actions.md) to register APIs with your API center.
-## Generate OpenAPI specification file from API code
+## Generate OpenAPI spec from API code
-Use the power of GitHub Copilot with the Azure API Center extension for Visual Studio Code to create an OpenAPI specification file from your API code. Right-click on the API code, select **Copilot** from the options, and select **Generate API documentation**. GitHub Copilot creates an OpenAPI specification file.
+Use the power of [GitHub Copilot](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot) with the Azure API Center extension for Visual Studio Code to create an OpenAPI specification file from your API code. Right-click on the API code, select **Copilot** from the options, and select **Generate API documentation**. GitHub Copilot creates an OpenAPI specification file.
> [!NOTE]
> This feature is available in the pre-release version of the API Center extension.
Use the power of GitHub Copilot with the Azure API Center extension for Visual S
After generating the OpenAPI specification file and checking for accuracy, you can register the API with your API center using the **Azure API Center: Register API** command.
+## Generate OpenAPI spec using natural language prompts
+
+The API Center plugin for [GitHub Copilot for Azure](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azure-github-copilot) helps you design new APIs starting from natural language prompts. With AI assistance, quickly generate an OpenAPI spec for API development that complies with your organization's standards.
+
+> [!NOTE]
+> This feature is available in the pre-release version of the API Center extension.
+
+1. If desired, set an active API style guide. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Set API Style Guide**, make a selection, and press **Enter**.
+
+ If no style guide is set, the default `spectral:oas` ruleset is used.
+1. In the chat panel, make a request in natural language to the `@azure` agent to describe what the API does. Example:
+
+ ```vscode
+ @azure Generate OpenAPI spec: An API that allows customers to pay for an order using various payment methods such as cash, checks, credit cards, and debit cards.
+ ```
+
+ The agent responds with an OpenAPI specification document.
+
+ :::image type="content" source="media/build-register-apis-vscode-extension/generate-api-specification.png" alt-text="Screenshot showing how to use @azure extension to generate an OpenAPI spec from a prompt.":::
+1. Review the generated output for accuracy and compliance with your API style guide. Refine the prompt if needed to regenerate.
+
+ > [!TIP]
+ > Effective prompts focus on an API's business requirements rather than implementation details. Shorter prompts sometimes work better than longer ones.
+1. When it meets your requirements, save the generated OpenAPI specification to a file.
+1. Register the API with your API center. Select the **Register your API in API Center** button in the chat panel, or run the **Azure API Center: Register API** command from the Command Palette, and follow the prompts.
## Related content
app-service Ase Multi Tenant Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/ase-multi-tenant-comparison.md
App Service Environment v3 tends to be more expensive than the public multitenan
|Pricing |[Pay per instance](overview.md#pricing)|[Pay per instance](../../app-service/overview-hosting-plans.md)|
|Reserved instances|[Available](overview.md#pricing)|[Available](../../app-service/overview-hosting-plans.md)|
|Savings plans|[Available](overview.md#pricing)|[Available](../../app-service/overview-hosting-plans.md)|
-|Availability zone pricing|[There's a minimum charge of 18 cores.](overview.md#pricing) There's no added charge for availability zone support if you have 18 or more cores across your App Service plan instances. If you have fewer than 18 cores across your App Service plans in the zone redundant App Service Environment, the difference between 18 cores and the sum of the cores from the running instance count is charged as Windows I1v2 instances.|[Three instance minimum enforced per App Service plan](../../reliability/reliability-app-service.md#pricing).|
+|Availability zone pricing|[There's a minimum charge of 18 cores.](overview.md#pricing) There's no added charge for availability zone support if you have 18 or more cores across your App Service plan instances. If you have fewer than 18 cores across your App Service plans in the zone redundant App Service Environment, the difference between 18 cores and the sum of the cores from the running instance count is charged as Windows I1v2 instances.|[Three instance minimum enforced per App Service plan](../../reliability/reliability-app-service.md#cost).|
### Frequently asked questions
app-service Network Secure Outbound Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/network-secure-outbound-traffic-azure-firewall.md
Outbound traffic from your app is now routed through the integrated virtual netw
1. Navigate to the firewall's overview page and select its firewall policy.
1. In the firewall policy page, from the left navigation, select **Application Rules** > **Add a rule collection**.
-1. In **Rules**, add a network rule with the App Service subnet as the source address, and specify an FQDN destination. In the screenshot below, the destination FQDN is set to `api.my-ip.io`.
+1. In **Rules**, add a network rule with the App Service subnet as the source address, and specify an FQDN destination. In the screenshot below, the destination FQDN is set to `contoso.com`.
:::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/config-azfw-policy-app-rule.png" alt-text="Screenshot of configuring an Azure Firewall policy rule.":::
Outbound traffic from your app is now routed through the integrated virtual netw
An easy way to verify your configuration is to use the `curl` command from your app's SCM debug console to test the outbound connection.

1. In a browser, navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole`.
-1. In the console, run `curl -s <protocol>://<fqdn-address>` with a URL that matches the application rule you configured, To continue example in the previous screenshot, you can use **curl -s https://api.my-ip.io/ip**. The following screenshot shows a successful response from the API, showing the public IP address of your App Service app.
+1. In the console, run `curl -s <protocol>://<fqdn-address>` with a URL that matches an application rule you configured. The following screenshot shows an example of a successful response from an API, which returns an IP address.
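   For example, to test against the `contoso.com` rule shown earlier (substitute the FQDN from your own application rule):

   ```bash
   # Request the allowed destination; traffic to an FQDN without a matching
   # application rule is denied by the firewall.
   curl -s https://contoso.com
   ```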
:::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/verify-outbound-traffic-fw-allow-rule.png" alt-text="Screenshot of verifying successful outbound traffic by using the curl command in the SCM debug console.":::
application-gateway How To Backend Mtls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-backend-mtls-gateway-api.md
See the following figure:
Apply the following deployment.yaml file on your cluster to create a sample web application and deploy sample secrets to demonstrate backend mutual authentication (mTLS).

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/end-to-end-ssl-with-backend-mtls/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/https-scenario/end-to-end-ssl-with-backend-mtls/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To End To End Tls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-end-to-end-tls-gateway-api.md
Application Gateway for Containers enables end-to-end TLS for improved privacy a
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/end-to-end-tls/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/https-scenario/end-to-end-tls/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To End To End Tls Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-end-to-end-tls-ingress-api.md
Application Gateway for Containers enables end-to-end TLS for improved privacy a
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/end-to-end-tls/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/https-scenario/end-to-end-tls/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Frontend Mtls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-frontend-mtls-gateway-api.md
The revoked client certificate flow shows a client presenting a revoked certific
Apply the following deployment.yaml file on your cluster to create a sample web application and deploy sample secrets to demonstrate frontend mutual authentication (mTLS).

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Header Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-header-rewrite-gateway-api.md
The following figure illustrates a request with a specific user agent being rewr
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate the header rewrite.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Header Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-header-rewrite-ingress-api.md
The following figure illustrates an example of a request with a specific user ag
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate the header rewrite.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Multiple Site Hosting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-gateway-api.md
Application Gateway for Containers enables multi-site hosting by allowing you to
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Multiple Site Hosting Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-ingress-api.md
Application Gateway for Containers enables multi-site hosting by allowing you to
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Path Header Query String Routing Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-path-header-query-string-routing-gateway-api.md
Application Gateway for Containers enables traffic routing based on URL path, qu
1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
3. Deploy sample HTTP application
- Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.
```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Ssl Offloading Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-gateway-api.md
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Ssl Offloading Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-ingress-api.md
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```
-
+
This command creates the following on your cluster:

- a namespace called `test-infra`
- one service called `echo` in the `test-infra` namespace
application-gateway How To Traffic Splitting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-traffic-splitting-gateway-api.md
Application Gateway for Containers enables you to set weights and shift traffic
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate traffic splitting / weighted round robin support.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Url Redirect Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-redirect-gateway-api.md
The following figure illustrates an example of a request destined for _contoso.c
Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Url Redirect Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-redirect-ingress-api.md
The following figure illustrates an example of a request destined for _contoso.c
Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities.

```bash
- kubectl apply -f kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Url Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md
The following figure illustrates an example of a request destined for _contoso.c
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate traffic splitting / weighted round robin support.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Url Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-ingress-api.md
The following figure illustrates a request destined for _contoso.com/shop_ being
3. Deploy sample HTTP application:<br>
   Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.

```bash
- kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/refs/heads/main/articles/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
Previously updated : 08/15/2024 Last updated : 10/31/2024
Download the latest release of the binary for [Linux](https://aka.ms/azacsnap-li
For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page.
+## Oct-2024
+
+### AzAcSnap 10a (Build: 1B79BA*)
+
+AzAcSnap 10a is being released with the following fixes and improvements:
+
+- Fixes and Improvements:
+ - Allow a configurable wait timeout for Microsoft SQL Server. This helps you increase the timeout for slow-responding systems (the default and minimum value is 30 seconds).
+ - Added a global override variable `MSSQL_CMD_TIMEOUT_SECS`, set in either the `.azacsnaprc` file or as an environment variable to the required wait timeout in seconds; see the sketch after this list. For configuration details, refer to the [global override settings to control AzAcSnap behavior](azacsnap-tips.md#global-override-settings-to-control-azacsnap-behavior).
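+
+A minimal sketch of both approaches, assuming a 60-second timeout and the key=value convention of the `.azacsnaprc` file:
+
```bash
# Option 1: set the override as an environment variable for the session running azacsnap.
export MSSQL_CMD_TIMEOUT_SECS=60

# Option 2: persist the override in the .azacsnaprc file in the directory azacsnap runs from.
echo "MSSQL_CMD_TIMEOUT_SECS=60" >> .azacsnaprc
```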
+
+Download the binary of [AzAcSnap 10a for Linux](https://aka.ms/azacsnap-10a-linux)([signature file](https://aka.ms/azacsnap-10a-linux-signature)) or [AzAcSnap 10a for Windows](https://aka.ms/azacsnap-10a-windows).
+
## Jul-2024

### AzAcSnap 10 (Build: 1B55F1*)
azure-netapp-files Azure Netapp Files Performance Metrics Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-metrics-volumes.md
Previously updated : 05/08/2023 Last updated : 10/31/2024

# Performance benchmark test recommendations for Azure NetApp Files
This article provides benchmark testing recommendations for volume performance a
To understand the performance characteristics of an Azure NetApp Files volume, you can use the open-source tool [FIO](https://github.com/axboe/fio) to run a series of benchmarks to simulate various workloads. FIO can be installed on both Linux and Windows-based operating systems. It is an excellent tool to get a quick snapshot of both IOPS and throughput for a volume.

> [!IMPORTANT]
-> Azure NetApp Files does *not* recommend using the `dd` utility as a baseline benchmarking tool. You should use an actual application workload, workload simulation, and benchmarking and analyzing tools (for example, Oracle AWR with Oracle, or the IBM equivalent for DB2) to establish and analyze optimal infrastructure performance. Tools such as FIO, vdbench, and iometer have their places in determining virtual machines to storage limits, matching the parameters of the test to the actual application workload mixtures for most useful results. However, it is always best to test with the real-world application.
+> Azure NetApp Files does *not* recommend using the `dd` utility as a baseline benchmarking tool. You should use an actual application workload, workload simulation, and benchmarking and analyzing tools (for example, Oracle AWR with Oracle, or the IBM equivalent for Db2) to establish and analyze optimal infrastructure performance. Tools such as FIO, vdbench, and iometer have their places in determining virtual machines to storage limits, matching the parameters of the test to the actual application workload mixtures for most useful results. However, it is always best to test with the real-world application.
-### VM instance sizing
+### Virtual machine (VM) instance sizing
For best results, ensure that you're using a VM instance that is appropriately sized to perform the tests. The following examples use a Standard_D32s_v3 instance. For more information about VM instance sizes, see [Sizes for Windows virtual machines in Azure](/azure/virtual-machines/sizes?toc=%2fazure%2fvirtual-network%2ftoc.json) for Windows-based VMs, and [Sizes for Linux virtual machines in Azure](/azure/virtual-machines/sizes?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) for Linux-based VMs.
Follow the Getting started section in the SSB README file to install for the pla
### FIO
-Flexible I/O Tester (FIO) is a free and open-source disk I/O tool used both for benchmark and stress/hardware verification.
+Flexible I/O Tester (FIO) is a free and open-source disk I/O tool used both for benchmark and stress/hardware verification. FIO is available in binary format for both Linux and Windows.
-FIO is available in binary format for both Linux and Windows.
-
-#### Installation of FIO
-
-Follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) to install for the platform of your choice.
-
-#### FIO examples for IOPS
-
-The FIO examples in this section use the following setup:
-* VM instance size: D32s_v3
-* Capacity pool service level and size: Premium / 50 TiB
-* Volume quota size: 48 TiB
-
-The following examples show the FIO random reads and writes.
-
-##### FIO: 8k block size 100% random reads
-
-`fio --name=8krandomreads --rw=randread --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
-
-##### FIO: 8k block size 100% random writes
-
-`fio --name=8krandomwrites --rw=randwrite --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
-
-##### Benchmark results
-
-For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
-
-#### FIO examples for bandwidth
-
-The examples in this section show the FIO sequential reads and writes.
-
-##### FIO: 64k block size 100% sequential reads
-
-`fio --name=64kseqreads --rw=read --direct=1 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
-
-##### FIO: 64k block size 100% sequential writes
-
-`fio --name=64kseqwrites --rw=write --direct=1 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
-
-##### Benchmark results
-
-For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
+For more information, see [Understand Azure NetApp Files testing methodology](testing-methodology.md).
## Volume metrics
You can access Azure NetApp Files counters by using REST API calls. See [Support
The following example shows a GET URL for viewing logical volume size: `#get ANF volume usage`
-`curl -X GET -H "Authorization: Bearer TOKENGOESHERE" -H "Content-Type: application/json" https://management.azure.com/subscriptions/SUBIDGOESHERE/resourceGroups/RESOURCEGROUPGOESHERE/providers/Microsoft.NetApp/netAppAccounts/ANFACCOUNTGOESHERE/capacityPools/ANFPOOLGOESHERE/Volumes/ANFVOLUMEGOESHERE/providers/microsoft.insights/metrics?api-version=2018-01-01&metricnames=VolumeLogicalSize`
+`curl -X GET -H "Authorization: Bearer TOKENGOESHERE" -H "Content-Type: application/json" https://management.azure.com/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.NetApp/netAppAccounts/<AzureNetAppFilesAccount>/capacityPools/<CapacityPool>/Volumes/<volume>/providers/microsoft.insights/metrics?api-version=2018-01-01&metricnames=VolumeLogicalSize`
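
A bearer token for this call can be generated with the Azure CLI; a minimal sketch, assuming you're already signed in with `az login`:

```bash
# Request an ARM access token and print only the token string.
az account get-access-token --resource https://management.azure.com --query accessToken --output tsv
```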
## Next steps

- [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md)
- [Performance benchmarks for Linux](performance-benchmarks-linux.md)
+- [Understand Azure NetApp Files testing methodology](testing-methodology.md)
azure-netapp-files Data Protection Disaster Recovery Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/data-protection-disaster-recovery-options.md
Using snapshot technology, you can replicate your Azure NetApp Files across desi
- Data availability and redundancy for remote data processing and user access
- Efficient storage-based data replication without load on compute infrastructure
-To learn more, see [How volumes and snapshots are replicated cross-region for DR](snapshots-introduction.md#how-volumes-and-snapshots-are-replicated-cross-region-for-dr). To get started with cross-region replication, see [Create cross-region replication for Azure NetApp Files](cross-region-replication-create-peering.md).
+To learn more, see [How volumes and snapshots are replicated cross-region for DR](snapshots-introduction.md#how-volumes-and-snapshots-are-replicated-cross-region-for-disaster-recovery). To get started with cross-region replication, see [Create cross-region replication for Azure NetApp Files](cross-region-replication-create-peering.md).
## Cross-zone replication
azure-netapp-files Large Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes.md
Large volumes allow workloads to extend beyond the current limitations of regula
| Volume type | Primary use cases |
| - | -- |
| Regular volumes | <ul><li>General file shares</li><li>SAP HANA and databases (Oracle, SQL Server, Db2, and others)</li><li>VDI/Azure VMware Service</li><li>Capacities less than 50 TiB</li></ul> |
-| Large volumes | <ul><li>General file shares</li><li>High file count or high metadata workloads (such as electronic design automation, software development, FSI)</li><li>High capacity workloads (such as AI/ML/LLP, oil & gas, media, healthcare images, backup, and archives)</li><li>Large-scale workloads (many client connections such as FSLogix profiles)</li><li>High performance workloads</li><li>Capacity quotas between 50 TiB and 1 PiB</li></ul> |
+| Large volumes | <ul><li>General file shares</li><li>High file count or high metadata workloads (such as electronic design automation, software development, financial services)</li><li>High capacity workloads (such as AI/ML/LLP, oil & gas, media, healthcare images, backup, and archives)</li><li>Large-scale workloads (many client connections such as FSLogix profiles)</li><li>High performance workloads</li><li>Capacity quotas between 50 TiB and 1 PiB</li></ul> |
## More information

* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
* [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
+* [Understand workload types in Azure NetApp Files](workload-types.md)
azure-netapp-files Performance Benchmarks Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-linux.md
Previously updated : 03/24/2024 Last updated : 11/08/2024

# Azure NetApp Files regular volume performance benchmarks for Linux

This article describes performance benchmarks Azure NetApp Files delivers for Linux with a [regular volume](azure-netapp-files-understand-storage-hierarchy.md#volumes).
-## Linux scale-out
-This section describes performance benchmarks of Linux workload throughput and workload IOPS.
+## Whole file streaming workloads (scale-out benchmark tests)
-### Linux workload throughput
+The intent of a scale-out test is to show the performance of an Azure NetApp Files volume when scaling out (or increasing) the number of clients generating simultaneous workload to the same volume. These tests are generally able to push a volume to the edge of its performance limits and are indicative of workloads such as media rendering, AI/ML, and others that utilize large compute farms to perform work.
-This graph represents a 64 kibibyte (KiB) sequential workload and a 1 TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s pure sequential writes and ~4,500 MiB/s pure sequential reads.
+## High I/OP scale-out benchmark configuration
-The graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
+These benchmarks used the following:
+- A single Azure NetApp Files 100-TiB regular volume with a 1-TiB dataset using the Ultra performance tier
+- [FIO (with and without setting randrepeat=0)](testing-methodology.md)
+- 4-KiB and 8-KiB block sizes
+- 6 D32s_v5 virtual machines running RHEL 9.3
+- NFSv3
+- [Manual QoS](manage-manual-qos-capacity-pool.md)
+- Mount options: rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg
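+
+For reference, a mount command using these options might look like the following sketch (the volume IP address, export path, and mount point are placeholders):
+
```bash
# Mount the Azure NetApp Files volume over NFSv3 with 8 parallel TCP connections (nconnect=8).
sudo mount -t nfs -o rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg \
    10.0.0.4:/testvol /mnt/testvol
```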
-![Linux workload throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-workload-throughput.png)
+## High throughput scale-out benchmark configuration
-### Linux workload IOPS
+These benchmarks used the following:
-The following graph represents a 4-KiB random workload and a 1 TiB working set. The graph shows that an Azure NetApp Files volume can handle between ~130,000 pure random writes and ~460,000 pure random reads.
+- A single Azure NetApp Files regular volume with a 1-TiB dataset using the Ultra performance tier
+- [FIO (with and without setting randrepeat=0)](testing-methodology.md)
+- 64-KiB and 256-KiB block size
+- 6 D32s_v5 virtual machines running RHEL 9.3
+- NFSv3
+- [Manual QoS](manage-manual-qos-capacity-pool.md)
+- Mount options: rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg
-This graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
+## Parallel network connection (`nconnect`) benchmark configuration
-![Linux workload IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-workload-iops.png)
+These benchmarks used the following:
+- A single Azure NetApp Files regular volume with a 1-TiB dataset using the Ultra performance tier
+- FIO (with and without setting randrepeat=0)
+- 4-KiB and 64-KiB wsize/rsize
+- A single D32s_v4 virtual machine running RHEL 9.3
+- NFSv3 with and without `nconnect`
+- Mount options: rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg
-## Linux scale-up
+## Scale-up benchmark tests
-The graphs in this section show the validation testing results for the client-side mount option with NFSv3. For more information, see [`nconnect` section of Linux mount options](performance-linux-mount-options.md#nconnect).
+The scale-up test's intent is to show the performance of an Azure NetApp Files volume when scaling up (or increasing) the number of jobs generating simultaneous workload across multiple TCP connections on a single client to the same volume (such as with [`nconnect`](performance-linux-mount-options.md#nconnect)).
-The graphs compare the advantages of `nconnect` to a non-`connected` mounted volume. In the graphs, FIO generated the workload from a single D32s_v4 instance in the us-west2 Azure region using a 64-KiB sequential workload ΓÇô the largest I/O size supported by Azure NetApp Files at the time of the testing represented here. Azure NetApp Files now supports larger I/O sizes. For more information, see [`rsize` and `wsize` section of Linux mount options](performance-linux-mount-options.md#rsize-and-wsize).
+Without `nconnect`, these workloads can't push the limits of a volume's maximum performance, since the client can't generate enough I/O or network throughput. These tests are generally indicative of what a single user's experience might be in workloads such as media rendering, databases, AI/ML, and general file shares.
-### Linux read throughput
+## High I/OP scale-out benchmarks
-The following graphs show 64-KiB sequential reads of ~3,500 MiB/s reads with `nconnect`, roughly 2.3X non-`nconnect`.
+The following benchmarks show the performance achieved for Azure NetApp Files with a high I/OP workload using:
-![Linux read throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-read-throughput.png)
+- 32 clients
+- 4-KiB and 8-KiB random reads and writes
+- 1-TiB dataset
+- Read/write ratios as follows: 100%:0%, 90%:10%, 80%:20%, and so on
+- With and without filesystem caching involved (using `randrepeat=0` in FIO)
-### Linux write throughput
+For more information, see [Testing methodology](testing-methodology.md).
-The following graphs show sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. The sequential write volume upper limit is approximately 1,500 MiB/s; the D32s_v4 instance egress limit is also approximately 1,500 MiB/s.
+## Results: 4 KiB, random, client caching included
-![Linux write throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-write-throughput.png)
+In this benchmark, FIO ran without the `randrepeat` option to randomize data, so an indeterminate amount of caching came into play. This configuration results in slightly better overall performance numbers than tests run without caching, because the entire I/O stack is utilized.
-### Linux read IOPS
+In the following graph, testing shows that an Azure NetApp Files regular volume can handle between approximately 130,000 pure random 4-KiB writes and approximately 460,000 pure random 4-KiB reads during this benchmark. The read-write mix for the workload was adjusted by 10% for each run.
-The following graphs show 4-KiB random reads of ~200,000 read IOPS with `nconnect`, roughly 3X non-`nconnect`.
+As the read-write I/OP mix increases towards write-heavy, the total I/OPS decrease.
-![Linux read IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-read-iops.png)
-### Linux write IOPS
+## Results: 4 KiB, random, client caching excluded
-The following graphs show 4-KiB random writes of ~135,000 write IOPS with `nconnect`, roughly 3X non-`nconnect`.
+In this benchmark, FIO was run with the setting `randrepeat=0` to randomize data, reducing the caching influence on performance. This resulted in an approximately 8% reduction in write I/OPS and an approximately 17% reduction in read I/OPS, but the resulting numbers are more representative of what the storage can actually do.
-![Linux write IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-write-iops.png)
+In the following graph, testing shows that an Azure NetApp Files regular volume can handle between approximately 120,000 pure random 4-KiB writes and approximately 388,000 pure random 4-KiB reads. The read-write mix for the workload was adjusted by 25% for each run.
-## Next steps
+As the read-write I/OP mix increases towards write-heavy, the total I/OPS decrease.
-- [Azure NetApp Files: Getting the Most Out of Your Cloud Storage](https://cloud.netapp.com/hubfs/Resources/ANF%20PERFORMANCE%20TESTING%20IN%20TEMPLATE.pdf?hsCtaTracking=f2f560e9-9d13-4814-852d-cfc9bf736c6a%7C764e9d9c-9e6b-4549-97ec-af930247f22f)+
+## Results: 8 KiB, random, client caching excluded
+
+Larger read and write sizes result in fewer total I/OPS, as more data can be sent with each operation. An 8-KiB read and write size was used to more accurately simulate what most modern applications use. For instance, many EDA applications utilize 8-KiB reads and writes.
+
+In this benchmark, FIO ran with `randrepeat=0` to randomize data so the client caching impact was reduced. In the following graph, testing shows that an Azure NetApp Files regular volume can handle between approximately 111,000 pure random 8-KiB writes and approximately 293,000 pure random 8-KiB reads. The read-write mix for the workload was adjusted by 25% for each run.
+
+As the read-write I/OP mix increases towards write-heavy, the total I/OPS decrease.
+## Side-by-side comparisons
+
+To illustrate how caching can influence the performance benchmark tests, the following graph shows total I/OPS for 4-KiB tests with and without caching mechanisms in place. As shown, caching provides a slight performance boost, with fairly consistent I/OPS trending.
+## Specific offset, streaming random read/write workloads: scale-up tests using parallel network connections (`nconnect`)
+
+The following tests show a high I/OP benchmark using a single client with 4-KiB random workloads and a 1-TiB dataset. The workload mix generated uses a different I/O depth each time. To boost the performance for a single client workload, the [`nconnect` mount option](performance-linux-mount-options.md#nconnect) was used to improve parallelism in comparison to client mounts without the `nconnect` mount option.
+
+When using a standard TCP connection that provides only a single path to the storage, fewer total operations are sent per second than when a mount is able to leverage more TCP connections (such as with `nconnect`) per mount point. When using `nconnect`, the total latency for the operations is generally lower. These tests are also run with `randrepeat=0` to intentionally avoid caching. For more information on this option, see [Testing methodology](testing-methodology.md).
+
+### Results: 4 KiB, random, with and without `nconnect`, caching excluded
+
+The following graphs show a side-by-side comparison of 4-KiB reads and writes with and without `nconnect` to highlight the performance improvements seen when using `nconnect`: higher overall I/OPS, lower latency.
+## High throughput benchmarks
+
+The following benchmarks show the performance achieved for Azure NetApp Files with a high throughput workload.
+
+High throughput workloads are more sequential in nature and often are read/write heavy with low metadata overhead. Throughput is generally more important than I/OPS. These workloads typically leverage larger read/write sizes (64 KiB to 256 KiB), which generate higher latencies than smaller read/write sizes, since larger payloads naturally take longer to process.
+
+Examples of high throughput workloads include:
+
+- Media repositories
+- High performance compute
+- AI/ML/LLP
+
+The following tests show a high throughput benchmark using both 64-KiB and 256-KiB sequential workloads and a 1-TiB dataset. The workload mix generated decreases a set percentage at a time and demonstrates what you can expect when using varying read/write ratios (for instance, 100%:0%, 90%:10%, 80%:20%, and so on).
+
+### Results: 64 KiB sequential I/O, caching included
+
+In this benchmark, FIO ran using looping logic that more aggressively populated the cache, so an indeterminate amount of caching influenced the results. This results in slightly better overall performance numbers than tests run without caching.
+
+In the graph below, testing shows that an Azure NetApp Files regular volume can handle between approximately 4,500 MiB/s pure sequential 64-KiB reads and approximately 1,600 MiB/s pure sequential 64-KiB writes. The read-write mix for the workload was adjusted by 10% for each run.
+### Results: 64 KiB sequential I/O, caching excluded
+
+In this benchmark, FIO ran using looping logic that less aggressively populated the cache, so client caching didn't influence the results. This configuration results in slightly better write performance numbers, but lower read numbers than tests with caching.
+
+In the following graph, testing demonstrates that an Azure NetApp Files regular volume can handle between approximately 3,600 MiB/s pure sequential 64-KiB reads and approximately 2,400 MiB/s pure sequential 64-KiB writes. During the tests, a 50/50 mix showed total throughput on par with a pure sequential read workload.
+
+The read-write mix for the workload was adjusted by 25% for each run.
+### Results: 256 KiB sequential I/O, caching excluded
+
+In this benchmark, FIO ran using looping logic that less aggressively populated the cache, so caching didn't influence the results. This configuration results in slightly lower write performance numbers than the 64-KiB tests, but higher read numbers than the same 64-KiB tests run without caching.
+
+In the graph below, testing shows that an Azure NetApp Files regular volume can handle between approximately 3,500 MiB/s pure sequential 256-KiB reads and approximately 2,500 MiB/s pure sequential 256-KiB writes. During the tests, a 50/50 mix showed that total throughput peaked higher than a pure sequential read workload.
+
+The read-write mix for the workload was adjusted in 25% increments for each run.
+### Side-by-side comparison
+
+To better show how caching can influence the performance benchmark tests, the following graph shows total MiB/s for 64-KiB tests with and without caching mechanisms in place. Caching provides an initial slight performance boost for total MiB/s because caching generally improves reads more than writes. As the read/write mix changes, the total throughput without caching exceeds the results that utilize client caching.
+## Parallel network connections (`nconnect`)
+
+The following tests show a high I/OP benchmark using a single client with 64-KiB random workloads and a 1-TiB dataset. The workload mix generated uses a different I/O depth each time. To boost the performance for a single client workload, the `nconnect` mount option was leveraged for better parallelism in comparison to client mounts that didn't use the `nconnect` mount option. These tests were run only with caching excluded.
+
+### Results: 64 KiB, sequential, caching excluded, with and without `nconnect`
+
+The following graphs show the results of a scale-up test when reading and writing in 4-KiB chunks on an NFSv3 mount on a single client with and without parallelization of operations (`nconnect`). The graphs show that as the I/O depth grows, the I/OPS also increase. But when using a standard TCP connection that provides only a single path to the storage, fewer total operations are sent per second than when a mount is able to leverage more TCP connections per mount point. In addition, the total latency for the operations is generally lower when using `nconnect`.
+### Side-by-side comparison (with and without `nconnect`)
+
+The following graphs show a side-by-side comparison of 64-KiB sequential reads and writes with and without `nconnect` to highlight the performance improvements seen when using `nconnect`: higher overall throughput, lower latency.
+## More information
+
+- [Testing methodology](testing-methodology.md)
azure-netapp-files Snapshots Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-introduction.md
Previously updated : 06/03/2024 Last updated : 11/08/2024

# How Azure NetApp Files snapshots work
-This article explains how Azure NetApp Files snapshots work. Azure NetApp Files snapshot technology delivers stability, scalability, and faster recoverability, with no impact to performance. It provides the foundation for data protection solutions, including single-file restores, volume restores and clones, cross-region replication, and long-term retention.
+This article explains how Azure NetApp Files snapshots work. Azure NetApp Files snapshot technology delivers stability, scalability, and faster recoverability, with no impact to performance. Snapshots provide the foundation for data protection solutions, including single-file restores, volume restores and clones, cross-region replication, cross-zone replication, and long-term retention.
-For steps about using volume snapshots, see [Manage snapshots by using Azure NetApp Files](azure-netapp-files-manage-snapshots.md). For considerations about snapshot management in cross-region replication, see [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md).
+To create volume snapshots, see [Manage snapshots using Azure NetApp Files](azure-netapp-files-manage-snapshots.md). For considerations about snapshot management in cross-region replication, see [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md). For cross-zone replication, see [Requirements and considerations for using cross-zone replication](cross-zone-replication-requirements-considerations.md).
## What volume snapshots are
You can use several methods to create and maintain snapshots:
* Snapshot policies, via the [Azure portal](snapshots-manage-policy.md), [REST API](/rest/api/netapp/snapshotpolicies), [Azure CLI](/cli/azure/netappfiles/snapshot/policy), or [PowerShell](/powershell/module/az.netappfiles/new-aznetappfilessnapshotpolicy) tools
* Application consistent snapshot tooling, like [AzAcSnap](azacsnap-introduction.md)
-## How volumes and snapshots are replicated cross-region for DR
+## How volumes and snapshots are replicated cross-region for disaster recovery
-Azure NetApp Files supports [cross-region replication](cross-region-replication-introduction.md) for disaster-recovery (DR) purposes. Azure NetApp Files cross-region replication uses SnapMirror technology. Only changed blocks are sent over the network in a compressed, efficient format. After a cross-region replication is initiated between volumes, the entire volume contents (that is, the actual stored data blocks) are transferred only once. This operation is called a *baseline transfer*. After the initial transfer, only changed blocks (as captured in snapshots) are transferred. The result is an asynchronous 1:1 replica of the source volume, including all snapshots. This behavior follows a full and incremental-forever replication mechanism. This technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time. You can achieve a smaller Recovery Point Objective (RPO), because more snapshots can be created and transferred more frequently with minimal data transfers. Further, it takes away the need for host-based replication mechanisms, avoiding virtual machine and software license cost.
+Azure NetApp Files supports [cross-region replication](cross-region-replication-introduction.md) for disaster-recovery (DR) purposes and [cross-zone replication](cross-zone-replication-introduction.md) for business continuity. Azure NetApp Files cross-region replication and cross-zone replication both use SnapMirror technology. Only changed blocks are sent over the network in a compressed, efficient format. After replication is initiated between volumes, the entire volume contents (that is, the actual stored data blocks) are transferred only once. This operation is called a *baseline transfer*. After the initial transfer, only changed blocks (as captured in snapshots) are transferred. The result is an asynchronous 1:1 replica of the source volume, including all snapshots. This behavior follows a full and incremental-forever replication mechanism. This technology minimizes the amount of data required for replication, therefore saving data transfer costs. It also shortens the replication time. You can achieve a smaller Recovery Point Objective (RPO), because more snapshots can be created and transferred more frequently with minimal data transfers. Further, it takes away the need for host-based replication mechanisms, avoiding virtual machine and software license cost.
-The following diagram shows snapshot traffic in cross-region replication scenarios:
+The following diagram shows snapshot traffic in replication scenarios:
-[ ![Diagram that shows snapshot traffic in cross-region replication scenarios](./media/snapshots-introduction/snapshot-traffic-cross-region-replication.png)](./media/snapshots-introduction/snapshot-traffic-cross-region-replication.png#lightbox)
+[ ![Diagram that shows snapshot traffic in replication scenarios.](./media/snapshots-introduction/snapshot-traffic-replication.png)](./media/snapshots-introduction/snapshot-traffic-replication.png#lightbox)
## How snapshots can be vaulted for long-term retention and cost savings
Vaulted snapshot history is managed automatically by the applied snapshot policy
* [Manage snapshots by using Azure NetApp Files](azure-netapp-files-manage-snapshots.md)
* [Monitor volume and snapshot metrics](azure-netapp-files-metrics.md#volumes)
+* [Recommendations for using availability zones and regions](/azure/well-architected/reliability/regions-availability-zones)
+* [Azure Well-Architected Framework perspective on Azure NetApp Files](/azure/well-architected/service-guides/azure-netapp-files)
* [Restore individual files using single-file snapshot restore](snapshots-restore-file-single.md) * [Restore a file from a snapshot using a client](snapshots-restore-file-client.md) * [Troubleshoot snapshot policies](troubleshoot-snapshot-policies.md)
azure-netapp-files Testing Methodology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/testing-methodology.md
+
+ Title: Understand performance testing methodology in Azure NetApp Files
+description: Learn how Azure NetApp Files benchmark tests are conducted.
+Last updated : 10/31/2024
+# Understand performance testing methodology in Azure NetApp Files
+
+The benchmark tool used in these tests is called [Flexible I/O Tester (FIO)](https://fio.readthedocs.io/en/latest/fio_doc.html).
+
+When testing the edges of performance limits for storage, workload generation must be **highly parallelized** to achieve the maximum results possible.
+
+That means:
+- one to many clients
+- multiple CPUs
+- multiple threads
+- performing I/O to multiple files
+- multi-threaded network connections (such as `nconnect`)
+
+The end goal is to push the storage system as far as it can go before operations must begin to wait for other operations to finish. Use of a single client traversing a single network flow, or reading/writing from/to a single file (for instance, using `dd` or `diskspd` on a single client) doesn't deliver results indicative of Azure NetApp Files' capability. Instead, these setups show the performance of a single file, which generally trends with line speed and/or the Azure NetApp Files [QoS settings](azure-netapp-files-understand-storage-hierarchy.md#qos_types).
+
+In addition, caching must be minimized as much as possible to achieve accurate, representative results of what the storage can accomplish. However, caching is a legitimate tool that modern applications rely on for best performance. These tests therefore cover scenarios with some caching and scenarios with caching bypassed for random I/O workloads, using FIO options to randomize the workload (specifically, `randrepeat=0` to prevent caching on the storage and [directio](performance-linux-direct-io.md) to prevent client caching).
+
+## About Flexible I/O tester
+
+Flexible I/O tester (FIO) is an open source workload generation tool commonly used for storage benchmarking due to its ease of use and flexibility in defining workload patterns. For information about its use with Azure NetApp Files, see [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md).
+
+### Installation of FIO
+
+Follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) to install for the platform of your choice.
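+
+For example, on Debian or Ubuntu systems (an assumption; package names and managers vary by distribution), the installation might look like this:
+
+```bash
+# Sketch only: install FIO from the distribution package repositories.
+sudo apt-get update && sudo apt-get install -y fio
+```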
+
+### FIO examples for IOPS
+
+The FIO examples in this section use the following setup:
+* VM instance size: D32s_v3
+* Capacity pool service level and size: Premium / 50 TiB
+* Volume quota size: 48 TiB
+
+The following examples show the FIO random reads and writes.
+
+#### FIO: 8k block size 100% random reads
+
+`fio --name=8krandomreads --rw=randread --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
+
+#### FIO: 8k block size 100% random writes
+
+`fio --name=8krandomwrites --rw=randwrite --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
+
+#### Benchmark results
+
+For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
+
+### FIO examples for bandwidth
+
+The examples in this section show the FIO sequential reads and writes.
+
+#### FIO: 64k block size 100% sequential reads
+
+`fio --name=64kseqreads --rw=read --direct=1 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
+
+#### FIO: 64k block size 100% sequential writes
+
+`fio --name=64kseqwrites --rw=write --direct=1 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
+
+#### Benchmark results
+
+For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
+
+## Caching with FIO
+
+FIO can be run with specific options to control how a performance benchmark reads and writes files. In the benchmark tests with caching excluded, the FIO flag `randrepeat=0` was used to avoid caching by running a truly random workload rather than a repeated pattern.
+
+**[`randrepeat`](https://fio.readthedocs.io/en/latest/fio_doc.html#i-o-type)**
+
+By default, when `randrepeat` isn't defined, the FIO tool sets the value to "true," meaning that the data produced in the files isn't truly random. Thus, filesystem caches are utilized, which improves the overall performance numbers of the workload.
+
+In earlier benchmarks for Azure NetApp Files, `randrepeat` wasn't defined, so some filesystem caching was implemented. In more up-to-date tests, this option is set to "0" (false) to ensure there is adequate randomness in the data to avoid filesystem caches in the Azure NetApp Files service. This modification results in slightly lower overall numbers, but is a more accurate representation of what the storage service is capable of when caching is bypassed.
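+
+As a concrete sketch (the job name is illustrative and not taken from the published benchmarks), the earlier 8k random read example can be run with both client and storage caching minimized as follows:
+
+```bash
+# Sketch only: 8k random reads with caching minimized.
+# --direct=1 bypasses the client page cache; --randrepeat=0 avoids a repeatable
+# random pattern so storage-side caches aren't artificially warmed.
+fio --name=8krandomreads-nocache --rw=randread --direct=1 --randrepeat=0 \
+    --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G \
+    --runtime=600 --group_reporting
+```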
+
+## Next steps
+
+* [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md)
+* [Azure NetApp Files regular volume performance benchmarks for Linux](performance-benchmarks-linux.md)
+* [Azure NetApp Files large volume performance benchmarks for Linux](performance-large-volumes-linux.md)
+
azure-netapp-files Workload Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/workload-types.md
+
+ Title: Understand workload types in Azure NetApp Files
+description: Choose the correct volume type depending on your Azure NetApp Files workload.
+ Last updated : 10/31/2024
+# Understand workload types in Azure NetApp Files
+
+When considering use cases for cloud storage, industry silos can often be broken down into workload types, since there can be commonalities across industries in specific workloads. For instance, a media workload can have a similar workload profile to an AI/ML training set with heavy sequential reads and writes.
+
+Azure NetApp Files is well suited for any type of workload, from low to high I/O and low to high throughput, and from home directories to electronic design automation (EDA). Learn about the different workload types and develop an understanding of which Azure NetApp Files [volume types](azure-netapp-files-understand-storage-hierarchy.md) are best suited for those workloads.
+
+For more information, see [Understand large volumes in Azure NetApp Files](large-volumes.md).
+
+## Workload types
+
+* **Specific offset, streaming random read/write workloads:** Online transaction processing (OLTP) databases are typical here. A signature of an OLTP workload is a dependence on random reads to find the desired file offset (such as a database table row) and on write performance against a small number of files. With this type of workload, tens of thousands to hundreds of thousands of I/O operations are common. Application vendors and database administrators typically have specific latency targets for these workloads. In most cases, Azure NetApp Files regular volumes are best suited for this workload.
+
+* **Whole file streaming workloads:** Examples include post-production media rendering of media repositories, high-performance computing suites such as those seen in computer-aided engineering/design suites (for example, computational fluid dynamics), oil and gas suites, and machine learning fine-tuning frameworks. A hallmark of this type of workload is larger files read or written in a continuous manner. For these workloads, storage throughput is the most critical attribute as it has the biggest impact on time to completion. Latency sensitivity is common here as workloads typically use a fixed amount of concurrency, thus throughput is determined by latency. Workloads typical of post-production are latency sensitive to the degree that framerate is only achieved when specific latency values are met. Both Azure NetApp Files regular volumes and Azure NetApp Files large volumes are appropriate for these workloads, with large volumes providing [more capacity](azure-netapp-files-resource-limits.md) and [higher file count possibilities](maxfiles-concept.md).
++
+* **Metadata rich, high file count workloads:** Examples include software development, EDA, and financial services (FSI) applications. In these workloads, typically millions of smaller files are created and then read, written, or listed independently. In high file count workloads, remote procedure calls (RPCs) other than read and write typically represent the majority of I/O. I/O rate (IOPS) is typically the most important attribute for these workloads. Latency is often less important, as concurrency can be controlled by scaling out at the application. Some customers have latency expectations of 1 ms, while others might expect 10 ms. As long as the I/O rate is achieved, so is satisfaction. This type of workload is ideally suited for _Azure NetApp Files large volumes_.
+
+For more information on EDA workloads in Azure NetApp Files, see [Benefits of using Azure NetApp Files for Electronic Design Automation](solutions-benefits-azure-netapp-files-electronic-design-automation.md).
+
+## More information
+
+* [General performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md)
+* [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md)
+* [Azure NetApp Files regular volume performance benchmarks for Linux](performance-benchmarks-linux.md)
+* [Azure NetApp Files large volume performance benchmarks for Linux](performance-large-volumes-linux.md)
+* [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md)
+* [Oracle database performance on Azure NetApp Files multiple volumes](performance-oracle-multiple-volumes.md)
+* [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](performance-azure-vmware-solution-datastore.md)
azure-sql-edge Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/configure.md
Your Azure SQL Edge configuration changes and database files are persisted in th
The first option is to mount a directory on your host as a data volume in your container. To do that, use the `docker run` command with the `-v <host directory>:/var/opt/mssql` flag. This allows the data to be restored between container executions. ```bash
-docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 -v <host directory>/data:/var/opt/mssql/data -v <host directory>/log:/var/opt/mssql/log -v <host directory>/secrets:/var/opt/mssql/secrets -d mcr.microsoft.com/azure-sql-edge
+docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<password>' -p 1433:1433 -v <host directory>/data:/var/opt/mssql/data -v <host directory>/log:/var/opt/mssql/log -v <host directory>/secrets:/var/opt/mssql/secrets -d mcr.microsoft.com/azure-sql-edge
``` ```PowerShell
-docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" -p 1433:1433 -v <host directory>/data:/var/opt/mssql/data -v <host directory>/log:/var/opt/mssql/log -v <host directory>/secrets:/var/opt/mssql/secrets -d mcr.microsoft.com/azure-sql-edge
+docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" -p 1433:1433 -v <host directory>/data:/var/opt/mssql/data -v <host directory>/log:/var/opt/mssql/log -v <host directory>/secrets:/var/opt/mssql/secrets -d mcr.microsoft.com/azure-sql-edge
``` This technique also enables you to share and view the files on the host outside of Docker.
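For example, after the container starts, you can verify the persisted files directly from the host. This is a sketch only; the paths are whatever you supplied for `<host directory>`:

```bash
# Sketch only: list the persisted database and log files on the host (paths are placeholders).
ls <host directory>/data <host directory>/log
```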
This technique also enables you to share and view the files on the host outside
The second option is to use a data volume container. You can create a data volume container by specifying a volume name instead of a host directory with the `-v` parameter. The following example creates a shared data volume named **sqlvolume**. ```bash
-docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 -v sqlvolume:/var/opt/mssql -d mcr.microsoft.com/azure-sql-edge
+docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<password>' -p 1433:1433 -v sqlvolume:/var/opt/mssql -d mcr.microsoft.com/azure-sql-edge
``` ```PowerShell
-docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" -p 1433:1433 -v sqlvolume:/var/opt/mssql -d mcr.microsoft.com/azure-sql-edge
+docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" -p 1433:1433 -v sqlvolume:/var/opt/mssql -d mcr.microsoft.com/azure-sql-edge
``` > [!NOTE]
azure-sql-edge Deploy Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-kubernetes.md
Create an SA password in the Kubernetes cluster. Kubernetes can manage sensitive
The following command creates a password for the SA account: ```azurecli
- kubectl create secret generic mssql --from-literal=MSQL_SA_PASSWORD="MyC0m9l&xP@ssw0rd" -n <namespace name>
+    kubectl create secret generic mssql --from-literal=MSSQL_SA_PASSWORD="<password>" -n <namespace name>
``` Replace `<password>` with a complex password.
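To confirm the secret was stored as expected, you can read it back and decode it. This is a sketch only; the secret and key names match the command above, and the namespace is a placeholder:

```bash
# Sketch only: print the base64-decoded password held in the mssql secret.
kubectl get secret mssql -n <namespace name> -o jsonpath='{.data.MSSQL_SA_PASSWORD}' | base64 --decode
```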
azure-sql-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/troubleshoot.md
If the SQL Edge container fails to run, try the following tests:
- If you get an error such as `failed to create endpoint CONTAINER_NAME on network bridge. Error starting proxy: listen tcp 0.0.0.0:1433 bind: address already in use.`, you're attempting to map the container port 1433 to a port that is already in use. This can happen if you're running SQL Edge locally on the host machine. It can also happen if you start two SQL Edge containers and try to map them both to the same host port. If this happens, use the `-p` parameter to map the container port 1433 to a different host port. For example: ```bash
- sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge-developer.
+    sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=<password>' -p 1434:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge-developer
``` - If you get an error such as `Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.30tdout=1&tail=all: dial unix /var/run/docker.sock: connect: permission denied` when trying to start a container, then add your user to the docker group in Ubuntu. Then sign out and sign back in again, as this change affects new sessions.
bastion Session Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/session-recording.md
In this section, you set up and specify the container for session recordings.
1. Within the storage account, create a **Container**. This is the container you'll use to store your Bastion session recordings. We recommend that you create an exclusive container for session recordings. For steps, see [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). 1. On the page for your storage account, in the left pane, expand **Settings**. Select **Resource sharing (CORS)**.
-1. Create a new policy under Blob service.
- * For **Allowed origins**, type `HTTPS://` followed by the DNS name of your bastion.
- * For **Allowed Methods**, select GET.
- * For **Max Age**, use ***86400***.
- * You can leave the other fields blank.
-
- :::image type="content" source="./media/session-recording/service.png" alt-text="Screenshot shows the Resource sharing page for Blob service configuration." lightbox="./media/session-recording/service.png":::
-1. **Save** your changes at the top of the page.
+1. Create a new policy under **Blob service** using the values in the following table, and then save your changes at the top of the page.
+
+| Name | Value |
+|--|--|
+| Allowed origins | `https://` followed by the full DNS name of your bastion, starting with `bst-`. Keep in mind, these values are case-sensitive. |
+| Allowed methods | GET |
+| Allowed headers | * |
+| Exposed headers | * |
+| Max age | 86400 |
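+
+If you prefer to script this step, a rough Azure CLI equivalent is sketched below; `<storage-account>` and the bastion DNS name are placeholders you must supply:
+
+```bash
+# Sketch only: create the same CORS rule on the Blob service from the CLI.
+az storage cors add \
+    --account-name <storage-account> \
+    --services b \
+    --methods GET \
+    --origins "https://<bastion-dns-name>" \
+    --allowed-headers "*" \
+    --exposed-headers "*" \
+    --max-age 86400
+```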
+
+## Add or update the SAS URL
batch Pool Endpoint Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/pool-endpoint-configuration.md
Title: Configure node endpoints in Azure Batch pool
-description: How to configure or disable access to SSH or RDP ports on compute nodes in an Azure Batch pool.
+description: How to configure node endpoints such as access to SSH or RDP ports on compute nodes in an Azure Batch pool.
Previously updated : 06/13/2024 Last updated : 11/08/2024
-# Configure or disable remote access to compute nodes in an Azure Batch pool
+# Configure remote access to compute nodes in an Azure Batch pool
-By default, Batch allows a [node user](/rest/api/batchservice/computenode/adduser) with network connectivity to connect externally to a compute node in a Batch pool. For example, a user can connect by Remote Desktop (RDP) on port 3389 to a compute node in a Windows pool. Similarly, by default, a user can connect by Secure Shell (SSH) on port 22 to a compute node in a Linux pool.
+If configured, you can allow a [node user](/rest/api/batchservice/computenode/adduser) with network connectivity to connect
+externally to a compute node in a Batch pool. For example, a user can connect by Remote Desktop (RDP) on port 3389 to a
+compute node in a Windows pool. Similarly, a user can connect by Secure Shell (SSH) on port 22 to a compute
+node in a Linux pool.
-In your environment, you might need to restrict or disable these default external access settings. You can modify these settings by using the Batch APIs to set the [PoolEndpointConfiguration](/rest/api/batchservice/pool/add#poolendpointconfiguration) property.
+> [!TIP]
+> As of API version `2024-07-01`, Batch no longer automatically maps common remote access ports for SSH and RDP.
+> If you wish to allow remote access to your Batch compute nodes with pools created with API version `2024-07-01` or later,
+> then you must manually configure the pool endpoint configuration to enable such access.
-## About the pool endpoint configuration
-The endpoint configuration consists of one or more [network address translation (NAT) pools](/rest/api/batchservice/pool/add#inboundnatpool) of frontend ports. (Do not confuse a NAT pool with the Batch pool of compute nodes.) You set up each NAT pool to override the default connection settings on the pool's compute nodes.
+In your environment, you might need to enable, restrict, or disable external access settings, or configure any other ports
+you need, on the Batch pool. You can modify these settings by using the Batch APIs to set the
+[PoolEndpointConfiguration](/rest/api/batchservice/pool/add#poolendpointconfiguration) property.
+
+## Batch pool endpoint configuration
+The endpoint configuration consists of one or more [network address translation (NAT) pools](/rest/api/batchservice/pool/add#inboundnatpool)
+of frontend ports. Don't confuse a NAT pool with the Batch pool of compute nodes. You set up each NAT pool to override
+the default connection settings on the pool's compute nodes.
Each NAT pool configuration includes one or more [network security group (NSG) rules](/rest/api/batchservice/pool/add#networksecuritygrouprule). Each NSG rule allows or denies certain network traffic to the endpoint. You can choose to allow or deny all traffic, traffic identified by a [service tag](../virtual-network/network-security-groups-overview.md#service-tags) (such as "Internet"), or traffic from specific IP addresses or subnets.
Each NAT pool configuration includes one or more [network security group (NSG) r
* The pool endpoint configuration is part of the pool's [network configuration](/rest/api/batchservice/pool/add#networkconfiguration). The network configuration can optionally include settings to join the pool to an [Azure virtual network](batch-virtual-network.md). If you set up the pool in a virtual network, you can create NSG rules that use address settings in the virtual network. * You can configure multiple NSG rules when you configure a NAT pool. The rules are checked in the order of priority. Once a rule applies, no more rules are tested for matching.
+## Example: Allow RDP traffic from a specific IP address
-## Example: Deny all RDP traffic
-
-The following C# snippet shows how to configure the RDP endpoint on compute nodes in a Windows pool to deny all network traffic. The endpoint uses a frontend pool of ports in the range *60000 - 60099*.
+The following C# snippet shows how to configure the RDP endpoint on compute nodes in a Windows pool to allow RDP access only from IP address *198.51.100.7*. The second NSG rule denies traffic that doesn't match the IP address.
```csharp using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Common;
namespace AzureBatch { public void SetPortsPool()
- {
+ {
pool.NetworkConfiguration = new NetworkConfiguration {
- EndpointConfiguration = new PoolEndpointConfiguratio(new InboundNatPool[]
+ EndpointConfiguration = new PoolEndpointConfiguration(new InboundNatPool[]
{
- new InboundNatPool("RDP", InboundEndpointProtocol.Tcp, 3389, 60000, 60099, new NetworkSecurityGroupRule[]
+ new InboundNatPool("RDP", InboundEndpointProtocol.Tcp, 3389, 7500, 8000, new NetworkSecurityGroupRule[]
{
- new NetworkSecurityGroupRule(162, NetworkSecurityGroupRuleAccess.Deny, "*"),
+            new NetworkSecurityGroupRule(179, NetworkSecurityGroupRuleAccess.Allow, "198.51.100.7"),
+ new NetworkSecurityGroupRule(180, NetworkSecurityGroupRuleAccess.Deny, "*")
})
- })
+ })
        };
    }
}
```
-## Example: Deny all SSH traffic from the internet
+## Example: Allow SSH traffic from a specific subnet
-The following Python snippet shows how to configure the SSH endpoint on compute nodes in a Linux pool to deny all internet traffic. The endpoint uses a frontend pool of ports in the range *4000 - 4100*.
+The following Python snippet shows how to configure the SSH endpoint on compute nodes in a Linux pool to allow access only from the subnet *192.168.1.0/24*. The second NSG rule denies traffic that doesn't match the subnet.
```python from azure.batch import models as batchmodels
class AzureBatch(object):
network_security_group_rules=[ batchmodels.NetworkSecurityGroupRule( priority=170,
- access=batchmodels.NetworkSecurityGroupRuleAccess.deny,
- source_address_prefix='Internet'
+ access='allow',
+ source_address_prefix='192.168.1.0/24'
+ ),
+ batchmodels.NetworkSecurityGroupRule(
+ priority=175,
+ access='deny',
+ source_address_prefix='*'
) ] )
class AzureBatch(object):
) ```
-## Example: Allow RDP traffic from a specific IP address
-The following C# snippet shows how to configure the RDP endpoint on compute nodes in a Windows pool to allow RDP access only from IP address *198.51.100.7*. The second NSG rule denies traffic that does not match the IP address.
+
+## Example: Deny all RDP traffic
+
+The following C# snippet shows how to configure the RDP endpoint on compute nodes in a Windows pool to deny all network traffic. The endpoint uses a frontend pool of ports in the range *60000 - 60099*.
+
+> [!NOTE]
+> As of Batch API version `2024-07-01`, port 3389 typically associated with RDP is no longer mapped by default.
+> Creating an explicit deny rule is no longer required if access is not needed from the Internet for Batch pools
+> created with this API version or later. You may still need to specify explicit deny rules to restrict access
+> from other sources.
```csharp using Microsoft.Azure.Batch;
namespace AzureBatch
{ pool.NetworkConfiguration = new NetworkConfiguration {
- EndpointConfiguration = new PoolEndpointConfiguration(new InboundNatPool[]
+            EndpointConfiguration = new PoolEndpointConfiguration(new InboundNatPool[]
{
- new InboundNatPool("RDP", InboundEndpointProtocol.Tcp, 3389, 7500, 8000, new NetworkSecurityGroupRule[]
- {
- new NetworkSecurityGroupRule(179, NetworkSecurityGroupRuleAccess.Allow, "198.51.100.7"),
- new NetworkSecurityGroupRule(180, NetworkSecurityGroupRuleAccess.Deny, "*")
+ new InboundNatPool("RDP", InboundEndpointProtocol.Tcp, 3389, 60000, 60099, new NetworkSecurityGroupRule[]
+ {
+ new NetworkSecurityGroupRule(162, NetworkSecurityGroupRuleAccess.Deny, "*"),
})
- })
+ })
        };
    }
}
```
-## Example: Allow SSH traffic from a specific subnet
+## Example: Deny all SSH traffic from the internet
+
+The following Python snippet shows how to configure the SSH endpoint on compute nodes in a Linux pool to deny all internet traffic. The endpoint uses a frontend pool of ports in the range *4000 - 4100*.
-The following Python snippet shows how to configure the SSH endpoint on compute nodes in a Linux pool to allow access only from the subnet *192.168.1.0/24*. The second NSG rule denies traffic that does not match the subnet.
+> [!NOTE]
+> As of Batch API version `2024-07-01`, port 22 typically associated with SSH is no longer mapped by default.
+> Creating an explicit deny rule is no longer required if access is not needed from the Internet for Batch pools
+> created with this API version or later. You may still need to specify explicit deny rules to restrict access
+> from other sources.
```python from azure.batch import models as batchmodels
class AzureBatch(object):
network_security_group_rules=[ batchmodels.NetworkSecurityGroupRule( priority=170,
- access='allow',
- source_address_prefix='192.168.1.0/24'
- ),
- batchmodels.NetworkSecurityGroupRule(
- priority=175,
- access='deny',
- source_address_prefix='*'
+ access=batchmodels.NetworkSecurityGroupRuleAccess.deny,
+ source_address_prefix='Internet'
) ] )
class AzureBatch(object):
## Next steps

- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
-- For more information about NSG rules in Azure, see [Filter network traffic with network security groups](../virtual-network/network-security-groups-overview.md).
+- For more information about NSG rules in Azure, see [Filter network traffic with network security groups](../virtual-network/network-security-groups-overview.md).
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
- devx-track-azurecli - ignite-2023 Previously updated : 09/09/2024 Last updated : 11/07/2024
-zone_pivot_groups: container-apps-code-to-cloud-segmemts
- # Quickstart: Build and deploy from local source code to Azure Container Apps This article demonstrates how to build and deploy a microservice to Azure Container Apps from local source code using the programming language of your choice. In this quickstart, you create a backend web API service that returns a static collection of music albums.
-> [!NOTE]
-> This sample application is available in two versions. One version where the source contains a Dockerfile. The other version has no Dockerfile. Select the version that best reflects your source code. If you are new to containers, select the **No Dockerfile** option at the top.
- The following screenshot shows the output from the album API service you deploy. :::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint.":::
To complete this project, you need the following items:
| Requirement | Instructions |
|--|--|
| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml?tabs=current) for details. |
+| Git | Install [Git](https://git-scm.com/downloads). |
| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+## Setup
-## Create environment variables
-
-Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
+To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process.
# [Bash](#tab/bash)
-Define the following variables in your bash shell.
-
-```azurecli
-export RESOURCE_GROUP="album-containerapps"
-export LOCATION="canadacentral"
-export ENVIRONMENT="env-album-containerapps"
-export API_NAME="album-api"
+```bash
+az login
```
-# [Azure PowerShell](#tab/azure-powershell)
-
-Define the following variables in your PowerShell console.
+# [PowerShell](#tab/powershell)
```powershell
-$RESOURCE_GROUP="album-containerapps"
-$LOCATION="canadacentral"
-$ENVIRONMENT="env-album-containerapps"
-$API_NAME="album-api"
+az login
```
-## Get the sample code
+To ensure you're running the latest version of the CLI, run the upgrade command.
-Download and extract the API sample application in the language of your choice.
+# [Bash](#tab/bash)
+```bash
+az upgrade
+```
-# [C#](#tab/csharp)
+# [PowerShell](#tab/powershell)
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-csharp/zip/refs/heads/main) to your machine.
+```powershell
+az upgrade
+```
-Extract the download and change into the *containerapps-albumapi-csharp-main/src* folder.
+
+Next, install or update the Azure Container Apps extension for the CLI.
-# [Java](#tab/java)
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-java/zip/refs/heads/main) to your machine.
+# [Bash](#tab/bash)
-Extract the download and change into the *containerapps-albumapi-java-main* folder.
+```bash
+az extension add --name containerapp --upgrade --allow-preview true
+```
+# [PowerShell](#tab/powershell)
-# [JavaScript](#tab/javascript)
+```powershell
+az extension add --name containerapp --upgrade --allow-preview true
+```
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-javascript/zip/refs/heads/main) to your machine.
+
-Extract the download and change into the *containerapps-albumapi-javascript-main/src* folder.
+Now that the current extension is installed, register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces.
+# [Bash](#tab/bash)
-# [Python](#tab/python)
+```bash
+az provider register --namespace Microsoft.App
+az provider register --namespace Microsoft.OperationalInsights
+```
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-python/zip/refs/heads/main) to your machine.
+# [PowerShell](#tab/powershell)
-Extract the download and change into the *containerapps-albumapi-python-main/src* folder.
+```powershell
+az provider register --namespace Microsoft.App
+az provider register --namespace Microsoft.OperationalInsights
+```
+
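+
+Provider registration can take a few minutes. As a quick sketch for checking progress from the Bash tab, you can query the registration state until it reports `Registered`:
+
+```bash
+# Sketch only: show the current registration state of the Microsoft.App provider.
+az provider show --namespace Microsoft.App --query registrationState --output tsv
+```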
-# [Go](#tab/go)
+## Create environment variables
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-go/zip/refs/heads/main) to your machine.
+Now that your CLI setup is complete, you can define the environment variables that are used throughout this article.
+# [Bash](#tab/bash)
-Extract the download and navigate into the *containerapps-albumapi-go-main/src* folder.
+Define the following variables in your bash shell.
+```bash
+export RESOURCE_GROUP="album-containerapps"
+export LOCATION="canadacentral"
+export ENVIRONMENT="env-album-containerapps"
+export API_NAME="album-api"
+```
+# [PowerShell](#tab/powershell)
+Define the following variables in your PowerShell console.
-# [C#](#tab/csharp)
+```powershell
+$RESOURCE_GROUP="album-containerapps"
+$LOCATION="canadacentral"
+$ENVIRONMENT="env-album-containerapps"
+$API_NAME="album-api"
+```
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-csharp/zip/refs/heads/buildpack) to your machine.
+
-Extract the download and change into the *containerapps-albumapi-csharp-buildpack/src* folder.
+## Get the sample code
+Run the following command to clone the sample application in the language of your choice and change into the project source folder.
-# [Java](#tab/java)
+# [C#](#tab/csharp)
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-java/zip/refs/heads/buildpack) to your machine.
+```bash
+git clone https://github.com/azure-samples/containerapps-albumapi-csharp.git
+cd containerapps-albumapi-csharp/src
+```
-Extract the download and change into the *containerapps-albumapi-java-buildpack* folder.
+# [Java](#tab/java)
-> [!NOTE]
-> The Java Buildpack uses [Maven](https://maven.apache.org/what-is-maven.html) with default settings to build your application. Alternatively, you can the [use `--build-env-vars` parameter to configure the image build from source code](java-build-environment-variables.md).
+```bash
+git clone https://github.com/azure-samples/containerapps-albumapi-java.git
+cd containerapps-albumapi-java
+```
# [JavaScript](#tab/javascript)
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-javascript/zip/refs/heads/buildpack) to your machine.
-
-Extract the download and change into the *containerapps-albumapi-javascript-buildpack/src* folder.
+```bash
+git clone https://github.com/azure-samples/containerapps-albumapi-javascript.git
+cd containerapps-albumapi-javascript/src
+```
# [Python](#tab/python)
-[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-python/zip/refs/heads/buildpack) to your machine.
-
-Extract the download and change into the *containerapps-albumapi-python-buildpack/src* folder.
+```bash
+git clone https://github.com/azure-samples/containerapps-albumapi-python.git
+cd containerapps-albumapi-python/src
+```
# [Go](#tab/go)
-Azure Container Apps cloud build doesn't currently support Buildpacks for Go.
-
+```bash
+git clone https://github.com/azure-samples/containerapps-albumapi-go.git
+cd containerapps-albumapi-go/src
+```
## Build and deploy the container app
-Build and deploy your first container app with the `containerapp up` command. This command will:
+First, run the following command to create the resource group that will contain the resources you create in this quickstart.
-- Create the resource group
-- Create an Azure Container Registry
-- Build the container image and push it to the registry
-- Create the Container Apps environment with a Log Analytics workspace
-- Create and deploy the container app using the built container image
+# [Bash](#tab/bash)
-- Create the resource group
-- Create a default registry as part of your environment
-- Detect the language and runtime of your application and build the image using the appropriate Buildpack
-- Push the image into the Azure Container Apps default registry
-- Create the Container Apps environment with a Log Analytics workspace
-- Create and deploy the container app using the built container image
+```bash
+az group create --name $RESOURCE_GROUP --location $LOCATION
+```
+# [PowerShell](#tab/powershell)
-The `up` command uses the Dockerfile in the root of the repository to build the container image. The `EXPOSE` instruction in the Dockerfile defined the target port, which is the port used to send ingress traffic to the container.
+```powershell
+az group create --name $RESOURCE_GROUP --location $LOCATION
+```
+
-If the `up` command doesn't find a Dockerfile, it automatically uses Buildpacks to turn your application source into a runnable container. Since the Buildpack is trying to run the build on your behalf, you need to tell the `up` command which port to send ingress traffic to.
+Build and deploy your first container app with the `containerapp up` command. This command will:
+- Create the resource group
+- Create an Azure Container Registry
+- Build the container image and push it to the registry
+- Create the Container Apps environment with a Log Analytics workspace
+- Create and deploy the container app using the built container image
+The `up` command uses the Dockerfile in the project folder to build the container image. The `EXPOSE` instruction in the Dockerfile defines the target port, which is the port used to send ingress traffic to the container.
-In the following code example, the `.` (dot) tells `containerapp up` to run in the current directory of the extracted sample API application.
+In the following code example, the `.` (dot) tells `containerapp up` to run in the current directory of the project that also contains the Dockerfile.
# [Bash](#tab/bash) -
-```azurecli
-az containerapp up \
- --name $API_NAME \
- --location $LOCATION \
- --environment $ENVIRONMENT \
- --source .
-```
--
-```azurecli
+```bash
az containerapp up \ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
--location $LOCATION \ --environment $ENVIRONMENT \
- --ingress external \
- --target-port 8080 \
--source . ```
-> [!IMPORTANT]
-> In order to deploy your container app to an existing resource group, include `--resource-group yourResourceGroup` to the `containerapp up` command.
---
-# [Azure PowerShell](#tab/azure-powershell)
--
-```powershell
-az containerapp up `
- --name $API_NAME `
- --resource-group $RESOURCE_GROUP `
- --location $LOCATION `
- --environment $ENVIRONMENT `
- --source .
-```
+# [PowerShell](#tab/powershell)
```powershell az containerapp up `
az containerapp up `
--resource-group $RESOURCE_GROUP ` --location $LOCATION ` --environment $ENVIRONMENT `
- --ingress external `
- --target-port 8080 `
--source . ``` --
+> [!NOTE]
+> If the command returns an error with the message "AADSTS50158: External security challenge not satisfied", run `az login --scope https://graph.microsoft.com//.default` to log in with the required permissions and then run the `az containerapp up` command again.
## Verify deployment
-Copy the FQDN to a web browser. From your web browser, go to the `/albums` endpoint of the FQDN.
+Locate the container app's URL in the output of the `az containerapp up` command. Navigate to the URL in your browser. Add `/albums` to the end of the URL to see the response from the API.
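+
+For example, you can also query the endpoint from a shell. This is a sketch; replace the placeholder with the fully qualified domain name (FQDN) from the command output:
+
+```bash
+# Sketch only: call the albums endpoint (the FQDN is a placeholder).
+curl https://<your-container-app-fqdn>/albums
+```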
:::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint.":::
If you're not going to continue on to the [Deploy a frontend](communicate-betwee
# [Bash](#tab/bash)
-```azurecli
+```bash
az group delete --name $RESOURCE_GROUP ```
-# [Azure PowerShell](#tab/azure-powershell)
+# [PowerShell](#tab/powershell)
```powershell az group delete --name $RESOURCE_GROUP
cost-management-billing Direct Ea Billing Invoice Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-billing-invoice-documents.md
The transactions file is a CSV file that includes the same information as the in
| Extended Amount | The quantity multiplied by the unit price. | | Commitment Usage | The amount of monetary commitment that has been used. | | Net Amount | The extended amount minus the commitment usage. |
-| Tax Rate | The tax rate applicable to the product based on the country of billing. |
+| Tax Rate | The tax rate applicable to the product based on the country/region of billing. |
| Tax Amount | The net amount multiplied by tax rate. |
| Total | The sum of the net amount and tax amount. |
| Is Third Party | Indicates whether the product or service is a third-party product. |

## Related content

-- Learn how to download your Direct EA billing invoice documents at [View your Azure usage summary details and download reports for direct EA enrollments](direct-ea-azure-usage-charges-invoices.md).
+- Learn how to download your Direct EA billing invoice documents at [View your Azure usage summary details and download reports for direct EA enrollments](direct-ea-azure-usage-charges-invoices.md).
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-ordered.md
Perform the following steps in the Azure portal to order a device.
|Subscription | Select an EA, CSP, or Azure sponsorship subscription for Data Box service. <br> The subscription is linked to your billing account. | |Resource group | Select an existing resource group. <br> A resource group is a logical container for the resources that can be managed or deployed together. | |Source Azure region | Select the Azure region where your data currently is. |
- |Destination country | Select the country where you want to ship the device. |
+ |Destination country | Select the country/region where you want to ship the device. |
![Select your Data Box settings](media/data-box-deploy-export-ordered/azure-data-box-export-order-data-box-settings.png)
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-ordered.md
To order a device, perform the following steps:
|street-address2| The secondary address information, such as apartment number or building number. | "Building 123" | |city| The city to which the device is shipped. | "Redmond" | |state-or-province| The state to which the device is shipped.| "WA" |
- |country| The country to which the device is shipped. | "United States" |
+ |country| The country/region to which the device is shipped. | "United States" |
|postal-code| The zip code or postal code associated with the shipping address.| "98052"| |company-name| The name of your company you work for.| "Contoso, LTD" | |storage account| The Azure Storage account from where you want to import data.| "mystorageaccount"|
To order a device, perform the following steps:
2. In your command-prompt of choice or terminal, run [az data box job create](/cli/azure/databox/job#az-databox-job-create) to create your Azure Data Box order. ```azurecli
- az databox job create --resource-group <resource-group> --name <order-name> --location <azure-location> --sku <databox-device-type> --contact-name <contact-name> --phone <phone-number> --email-list <email-list> --street-address1 <street-address-1> --street-address2 <street-address-2> --city "contact-city" --state-or-province <state-province> --country <country> --postal-code <postal-code> --company-name <company-name> --storage-account "storage-account"
+    az databox job create --resource-group <resource-group> --name <order-name> --location <azure-location> --sku <databox-device-type> --contact-name <contact-name> --phone <phone-number> --email-list <email-list> --street-address1 <street-address-1> --street-address2 <street-address-2> --city "contact-city" --state-or-province <state-province> --country <country> --postal-code <postal-code> --company-name <company-name> --storage-account "storage-account"
``` The following sample command illustrates the command's usage:
Do the following steps using Azure PowerShell to order a device:
|StreetAddress3| The tertiary address information. | | |City [Required]| The city to which the device is shipped. | "Redmond" | |StateOrProvinceCode [Required]| The state to which the device is shipped.| "WA" |
- |CountryCode [Required]| The country to which the device is shipped. | "United States" |
+ |CountryCode [Required]| The country/region to which the device is shipped. | "United States" |
|PostalCode [Required]| The zip code or postal code associated with the shipping address.| "98052"| |CompanyName| The name of your company you work for.| "Contoso, LTD" | |StorageAccountResourceId [Required]| The Azure Storage account ID from where you want to import data.| &lt;AzstorageAccount&gt;.id |
databox Data Box Heavy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-overview.md
Data Box Heavy is designed to move massive amounts of data to Azure with little
Previous releases of Data Box, Data Box Disk, and Data Box Heavy didn't support cross-region data transfer. With the exception of transfers both originating and terminating between the United Kingdom (UK) and the European Union (EU), data couldn't cross commerce boundaries.
-Data Box cross-region data transfer capabilities, now in preview, support offline seamless cross-region data transfers between many regions. This capability allows you to copy your data from a local source and transfer it to a destination within a different country, region, or boundary. It's important to note that the Data Box device isn't shipped across commerce boundaries. Instead, it's transported to an Azure data center within the originating country or region. Data transfer between the source country and the destination region takes place using the Azure network and incurs no additional cost.
+Data Box cross-region data transfer capabilities, now in preview, support offline seamless cross-region data transfers between many regions. This capability allows you to copy your data from a local source and transfer it to a destination within a different country, region, or boundary. It's important to note that the Data Box device isn't shipped across commerce boundaries. Instead, it's transported to an Azure data center within the originating country or region. Data transfer between the source country/region and the destination region takes place using the Azure network and incurs no additional cost.
Although cross-region data transfer doesn't incur additional costs, the functionality is currently in preview and subject to change. Note, too, that some data transfer scenarios take place over large geographic areas. Higher than normal latencies might be encountered during such transfers. Cross-region transfers are currently supported between the following countries and regions:
-| Source Country | Destination Region |
+| Source Country/Region | Destination Region |
|-|| | US<sup>1</sup> | EU<sup>2</sup> | | EU<sup>2</sup> | US<sup>1</sup> |
databox Data Box Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-overview.md
Data Box can transfer data based on the region in which service is deployed, the
## Data resiliency
-The Data Box service is geographical in nature and has a single active deployment in one region within each country or commerce boundary. For data resiliency, a passive instance of the service is maintained in a different region, usually within the same country or commerce boundary. In a few cases, the paired region is outside the country or commerce boundary.
+The Data Box service is geographical in nature and has a single active deployment in one region within each country/region or commerce boundary. For data resiliency, a passive instance of the service is maintained in a different region, usually within the same country/region or commerce boundary. In a few cases, the paired region is outside the country/region or commerce boundary.
In the extreme event of any Azure region being affected by a disaster, the Data Box service will be made available through the corresponding paired region. Both ongoing and new orders will be tracked and fulfilled through the service via the paired region. Failover is automatic, and is handled by Microsoft.
-For regions paired with a region within the same country or commerce boundary, no action is required. Microsoft is responsible for recovery, which could take up to 72 hours.
+For regions paired with a region within the same country/region or commerce boundary, no action is required. Microsoft is responsible for recovery, which could take up to 72 hours.
For regions that don't have a paired region within the same geographic or commerce boundary, the customer will be notified to create a new Data Box order from a different, available region and copy their data to Azure in the new region. New orders would be required for the Brazil South, Southeast Asia, and East Asia regions.
expressroute Expressroute Howto Reset Peering Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-reset-peering-portal.md
You can reset the Microsoft peering and the Azure private peering on an ExpressR
:::image type="content" source="./media/expressroute-howto-reset-peering-portal/expressroute-circuit.png" alt-text="Screenshot that shows choosing a peering in the ExpressRoute circuit overview.":::
-1. Clear the **Enable Peering** check box, and then select **Save** to disable the peering configuration.
+1. Uncheck the **Enable IPv4 Peering** or **Enable IPv6 Peering** check box, and then select **Save** to disable the peering configuration.
:::image type="content" source="./media/expressroute-howto-reset-peering-portal/disable-peering.png" alt-text="Screenshot that shows clearing the Enable Peering check box.":::
-1. Select the **Enable Peering** check box, and then select **Save** to re-enable the peering configuration.
+1. Select the **Enable IPv4 Peering** or **Enable IPv6 Peering** check box, and then select **Save** to re-enable the peering configuration.
## Next steps
firewall Tutorial Firewall Deploy Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal-policy.md
Deploy the firewall into the VNet.
| Choose a virtual network | Select **Use existing**, and then select **Test-FW-VN**. | | Public IP address | Select **Add new**, and enter **fw-pip** for the **Name**. |
+1. Clear the **Enable Firewall Management NIC** check box.
5. Accept the other default values, then select **Next: Tags**.
1. Select **Next: Review + create**.
1. Review the summary, and then select **Create** to create the firewall.
firmware-analysis Quickstart Upload Firmware Using Azure Command Line Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/quickstart-upload-firmware-using-azure-command-line-interface.md
The output of this command includes a `name` property, which is your firmware ID
3. Upload your firmware image to Azure Storage. Replace `pathToFile` with the path to your firmware image on your local machine. ```azurecli
- az storage blob upload -f pathToFile --blob-url %sasURL%
+ az storage blob upload -f "pathToFile" --blob-url %sasURL%
``` Here's an example workflow of how you could use these commands to create and upload a firmware image. To learn more about using variables in CLI commands, visit [How to use variables in Azure CLI commands](/cli/azure/azure-cli-variables?tabs=bash):
governance Definition Structure Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure-basics.md
The following Resource Provider modes are currently supported as a [preview](htt
- `Microsoft.ManagedHSM.Data` for managing [Managed Hardware Security Module (HSM)](/azure/key-vault/managed-hsm/azure-policy) keys using Azure Policy. - `Microsoft.DataFactory.Data` for using Azure Policy to deny [Azure Data Factory](../../../data-factory/introduction.md) outbound traffic domain names not specified in an allowlist. This Resource Provider mode is enforcement only and doesn't report compliance in public preview. - `Microsoft.MachineLearningServices.v2.Data` for managing [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning) model deployments. This Resource Provider mode reports compliance for newly created and updated components. During public preview, compliance records remain for 24 hours. Model deployments that exist before these policy definitions are assigned don't report compliance.
+- `Microsoft.LoadTestService.Data` for restricting [Azure Load Testing](../../../load-testing/how-to-use-azure-policy.md) instances to private endpoints.
> [!NOTE] > Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component-level.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/overview.md
The de-identification service (preview) offers two ways to interact with the RES
## Input requirements and service limits
-The de-identification service (preview) is designed to receive unstructured text. To de-identify data stored in the FHIR&reg; service, see [Export deidentified data](/azure/healthcare-apis/fhir/deidentified-export).
+The de-identification service (preview) is designed to receive unstructured text. To de-identify data stored in the FHIR&reg; service, see [Export de-identified data](/azure/healthcare-apis/fhir/deidentified-export).
The following service limits are applicable during preview: - Requests can't exceed 50 KB.
When you choose to store documents in Azure Blob Storage, you are charged based
An AI system includes the technology, the people who use it, the people affected by it, and the environment where you deploy it. Read the transparency note for the de-identification service (preview) to learn about responsible AI use and deployment in your systems.
-## Related content
+## Next steps
-[De-identification quickstart](quickstart.md)
+> [!div class="nextstepaction"]
+> [Quickstart: Deploy the de-identification service (preview)](quickstart.md)
-[Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=%2Fazure%2Fai-services%2Flanguage-service%2Fcontext%2Fcontext)
-
-[Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=%2Fazure%2Fai-services%2Flanguage-service%2Fcontext%2Fcontext)
+- [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=%2Fazure%2Fai-services%2Flanguage-service%2Fcontext%2Fcontext)
+- [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=%2Fazure%2Fai-services%2Flanguage-service%2Fcontext%2Fcontext)
healthcare-apis Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/quickstart-bicep.md
+
+ Title: "Quickstart: deploy the Azure Health Data Services de-identification service with Bicep"
+description: "Quickstart: deploy the Azure Health Data Services de-identification service with Bicep."
+ Last updated : 11/06/2024
+# Quickstart: Deploy the Azure Health Data Services de-identification service (preview) with Bicep
+
+In this quickstart, you use a Bicep definition to deploy a de-identification service (preview).
++
+If your environment meets the prerequisites and you're familiar with using Bicep, select the
+**Deploy to Azure** button. The template opens in the Azure portal.
++
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from
+[Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/deidentification-service-create/).
++
+The following Azure resources are defined in the Bicep file:
+
+- [Microsoft.HealthDataAIServices/deidServices](/azure/templates)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as `main.bicep` to your local computer.
+
+1. Deploy the Bicep file by using either Azure CLI or Azure PowerShell, replacing `<deid-service-name>` with a name for your de-identification service.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ This command requires Azure CLI version 2.6 or later. You can check the currently installed version by running `az --version`.
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters deidServiceName=<deid-service-name>
+ ```
+
+ # [Azure PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -deidServiceName "<deid-service-name>"
+ ```
++
+## Review deployed resources
+
+Use the Azure portal, the Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az resource list --resource-group exampleRG
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When you no longer need the resources, use the Azure portal, the Azure CLI, or Azure PowerShell to delete the resource group.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az group delete --name exampleRG
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Azure Health De-identification client library for .NET](quickstart-sdk-net.md)
healthcare-apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/quickstart.md
After you complete the configuration, you can deploy the de-identification servi
If you no longer need them, delete the resource group and de-identification service (preview). To do so, select the resource group and select **Delete**.
-## Related content
+## Next steps
> [!div class="nextstepaction"] > [Tutorial: Configure Azure Storage to de-identify documents](configure-storage.md)
iot Concepts Iot Device Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-iot-device-selection.md
Previously updated : 04/04/2024 Last updated : 11/08/2024 # IoT device selection list
Following is a comparison table of MCUs in alphabetical order. The list isn't no
| - | - | - | -| - | - | - | - | - | - | - | - | - | - | - | | [Azure Sphere MT3620 Dev Kit](https://aka.ms/IotDeviceList/Sphere) | ~$40 - $100 | Highly secure applications | C/C++, VS Code, VS | 500 MHz & 200 MHz | MT3620 (tri-core--1 x Cortex A7, 2 x Cortex M4) | 4-MB RAM + 2 x 64-KB RAM | Certifications: CE/FCC/MIC/RoHS | 4 x Digital IO, 1 x I2S, 4 x ADC, 1 x RTC | - | Dual-band 802.11 b/g/n with antenna diversity | - | 5 V | 1. [Azure Sphere Samples Gallery](https://github.com/Azure/azure-sphere-gallery#azure-sphere-gallery), 2. [Azure Sphere Weather Station](https://www.hackster.io/gatoninja236/azure-sphere-weather-station-d5a2bc)| N/A | | [Adafruit HUZZAH32 – ESP32 Feather Board](https://aka.ms/IotDeviceList/AdafruitFeather) | ~$20 - $25 | Monitoring; Beginner IoT; Home automation | Arduino IDE, VS Code | 240 MHz | 32-Bit ESP32 (dual-core Tensilica LX6) | 4 MB SPI Flash, 520 KB SRAM | Hall sensor, 10x capacitive touch IO pins, 50+ add-on boards | 3 x UARTs, 3 x SPI, 2 x I2C, 12 x ADC inputs, 2 x I2S Audio, 2 x DAC | - | 802.11b/g/n HT40 Wi-Fi transceiver, baseband, stack and LWIP, Bluetooth and BLE | √ | 3.3 V | 1. [Scientific freezer monitor](https://www.hackster.io/adi-azulay/azure-edge-impulse-scientific-freezer-monitor-5448ee), 2. [Azure IoT SDK Arduino samples](https://github.com/Azure/azure-sdk-for-c-arduino) | [Arduino Uno WiFi Rev 2 (~$50 - $60)](https://aka.ms/IotDeviceList/ArduinoUnoWifi) |
-| [Arduino Nano 33 BLE Sense](https://aka.ms/IotDeviceList/ArduinoNanoBLE) | ~$30 - $35 | Monitoring; ML; Game controller; Beginner IoT | Arduino IDE, VS Code | 64 MHz | 32-bit Nordic nRF52840 (Cortex M4F) | 1 MB Flash, 256 KB SRAM | 9-axis inertial sensor, Humidity and temp sensor, Barometric sensor, Microphone, Gesture, proximity, light color and light intensity sensor | 14 x Digital IO, 1 x UART, 1 x SPI, 1 x I2C, 8 x ADC input | - | Bluetooth and BLE | - | 3.3 V – 21 V | 1. [Connect Nano BLE to Azure IoT Hub](https://create.arduino.cc/projecthub/Arduino_Genuino/securely-connecting-an-arduino-nb-1500-to-azure-iot-hub-af6470), 2. [Monitor beehive with Azure Functions](https://www.hackster.io/clementchamayou/how-to-monitor-a-beehive-with-arduino-nano-33ble-bluetooth-eabc0d) | [Seeed XIAO BLE sense (~$15 - $20)](https://aka.ms/IotDeviceList/SeeedXiao) |
| [Arduino Nano RP2040 Connect](https://aka.ms/IotDeviceList/ArduinoRP2040Nano) | ~$20 - $25 | Remote control; Monitoring | Arduino IDE, VS Code, C/C++, MicroPython | 133 MHz | 32-bit RP2040 (dual-core Cortex M0+) | 16 MB Flash, 264-kB RAM | Microphone, Six-axis IMU with AI capabilities | 22 x Digital IO, 20 x PWM, 8 x ADC | - | WiFi, Bluetooth | - | 3.3 V | - | [Adafruit Feather RP2040 (NOTE: also need a FeatherWing for WiFi)](https://aka.ms/IotDeviceList/AdafruitRP2040) |
| [ESP32-S2 Saola-1](https://aka.ms/IotDeviceList/ESPSaola) | ~$10 - $15 | Home automation; Beginner IoT; ML; Monitoring; Mesh networking | Arduino IDE, Circuit Python, ESP IDF | 240 MHz | 32-bit ESP32-S2 (single-core Xtensa LX7) | 128 kB Flash, 320 kB SRAM, 16 kB SRAM (RTC) | 14 x capacitive touch IO pins, Temp sensor | 43 x Digital pins, 8 x PWM, 20 x ADC, 2 x DAC | Serial LCD, Parallel PCD | Wi-Fi 802.11 b/g/n (802.11n up to 150 Mbps) | - | 3.3 V | 1. [Secure face detection with Azure ML](https://www.hackster.io/achindra/microsoft-azure-machine-learning-and-face-detection-in-iot-2de40a), 2. [Azure Cost Monitor](https://www.hackster.io/jenfoxbot/azure-cost-monitor-31811a) | [ESP32-DevKitC (~$10 - $15)](https://aka.ms/IotDeviceList/ESPDevKit) |
| [Wio Terminal (Seeed Studio)](https://aka.ms/IotDeviceList/WioTerminal) | ~$40 - $50 | Monitoring; Home Automation; ML | Arduino IDE, VS Code, MicroPython, ArduPy | 120 MHz | 32-bit ATSAMD51 (single-core Cortex-M4F) | 4 MB SPI Flash, 192-kB RAM | On-board screen, Microphone, IMU, buzzer, microSD slot, light sensor, IR emitter, Raspberry Pi GPIO mount (as child device) | 26 x Digital Pins, 5 x PWM, 9 x ADC | 2.4" 320x240 Color LCD | dual-band 2.4 GHz/5 GHz (Realtek RTL8720DN) | - | 3.3 V | [Monitor plants with Azure IoT](https://github.com/microsoft/IoT-For-Beginners/tree/main/2-farm/lessons/4-migrate-your-plant-to-the-cloud) | [Adafruit FunHouse (~$30 - $40)](https://aka.ms/IotDeviceList/AdafruitFunhouse) |
iot Tutorial Devkit Stm B L475e Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-devkit-stm-b-l475e-iot-hub.md
ms.devlang: c Previously updated : 06/11/2024 Last updated : 11/08/2024 #Customer intent: As a device builder, I want to see a working IoT device sample connecting to IoT Hub and sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
You can use the **Termite** app to monitor communication and confirm that your d
SUCCESS: Connected to IoT Hub
```

> [!IMPORTANT]
- > If the DNS client initialization fails and notifies you that the Wi-Fi firmware is out of date, you'll need to update the Wi-Fi module firmware. Download and install the [Inventek ISM 43362 Wi-Fi module firmware update](https://www.st.com/resource/en/utilities/inventek_fw_updater.zip). Then press the **Reset** button on the device to recheck your connection, and continue with this tutorial.
+ > If the DNS client initialization fails and notifies you that the Wi-Fi firmware is out of date, you'll need to update the Wi-Fi module firmware. Download and install the Inventek ISM 43362 Wi-Fi module firmware update from [STMicroelectronics](https://www.st.com/). Then press the **Reset** button on the device to recheck your connection, and continue with this tutorial.
Keep Termite open to monitor device output in the following steps.
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
Previously updated : 10/23/2024 Last updated : 11/08/2024
Using Azure DNS, you can host and resolve public domains, manage DNS resolution
### <a name="nat"></a>NAT Gateway
-Virtual Network NAT(network address translation) simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines.
+NAT Gateway simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines.
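+
+For example, the following Azure CLI sketch associates a NAT gateway and a static public IP with an existing subnet. The resource names, and the existing virtual network `MyVnet` with subnet `MySubnet`, are illustrative assumptions:
+
+```azurecli
+az network public-ip create --resource-group MyResourceGroup --name MyNatIp --sku Standard
+az network nat gateway create --resource-group MyResourceGroup --name MyNatGateway --public-ip-addresses MyNatIp
+az network vnet subnet update --resource-group MyResourceGroup --vnet-name MyVnet --name MySubnet --nat-gateway MyNatGateway
+```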
For more information, see [What is Azure NAT gateway?](../../virtual-network/nat-gateway/nat-overview.md)

:::image type="content" source="./media/networking-overview/flow-map.png" alt-text="Diagram of virtual network NAT gateway.":::
Azure DDoS Protection consists of two tiers:
:::image type="content" source="./media/networking-overview/ddos-protection-overview-architecture.png" alt-text="Diagram of the reference architecture for a DDoS protected PaaS web application.":::
+### <a name="container-security"></a> Container network security
+
+Container network security is part of [Advanced Container Networking Services (ACNS)](/azure/aks/advanced-container-networking-services-overview). It provides enhanced control over AKS network security. With features like fully qualified domain name (FQDN) filtering, clusters using Azure CNI Powered by Cilium can implement FQDN-based network policies to achieve a Zero Trust security architecture in AKS.
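+
+For example, you can enable Advanced Container Networking Services on an existing cluster with the Azure CLI. This is a minimal sketch; the cluster and resource group names are illustrative, the cluster is assumed to already use Azure CNI Powered by Cilium, and the `--enable-acns` flag requires a recent Azure CLI version:
+
+```azurecli
+az aks update --resource-group MyResourceGroup --name MyAksCluster --enable-acns
+```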
## <a name="management"></a>Network management and monitoring

This section describes network management and monitoring services in Azure: Network Watcher, Azure Monitor, and Azure Virtual Network Manager.
This section describes network management and monitoring services in Azure - Net
[Azure Monitor](/azure/azure-monitor/overview?toc=%2fazure%2fnetworking%2ftoc.json) maximizes the availability and performance of your applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on.

### <a name="avnm"></a>Azure Virtual Network Manager

[Azure Virtual Network Manager](../../virtual-network-manager/overview.md) is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define [network groups](../../virtual-network-manager/concept-network-groups.md) to identify and logically segment your virtual networks. Then you can determine the [connectivity](../../virtual-network-manager/concept-connectivity-configuration.md) and [security configurations](../../virtual-network-manager/concept-security-admins.md) you want and apply them across all the selected virtual networks in network groups at once.

:::image type="content" source="../../virtual-network-manager/media/create-virtual-network-manager-portal/virtual-network-manager-resources-diagram.png" alt-text="Diagram of resources deployed for a mesh virtual network topology with Azure virtual network manager.":::
+### <a name="container-monitoring"></a> Container network observability
+
+Container network observability is part of [Advanced Container Networking Services (ACNS)](/azure/aks/advanced-container-networking-services-overview). ACNS uses Hubble's control plane to provide comprehensive visibility into AKS networking and performance. It offers real-time, detailed insights across node-level, pod-level, TCP, and DNS metrics, ensuring thorough monitoring of your network infrastructure.
++

## Next steps

- Create your first virtual network, and connect a few virtual machines to it, by completing the steps in the [Create your first virtual network](../../virtual-network/quick-create-portal.md?toc=%2fazure%2fnetworking%2ftoc.json) article.
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Title: Reliability in Azure App Service
-description: Find out about reliability in Azure App Service
+description: Find out about reliability in Azure App Service, including availability zones and multi-region deployments.
-+ Previously updated : 09/26/2023 Last updated : 11/08/2024
+zone_pivot_groups: app-service-sku
# Reliability in Azure App Service
-This article describes reliability support in [Azure App Service](../app-service/overview.md), and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For a more detailed overview of reliability principles in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+This article describes reliability support in [Azure App Service](../app-service/overview.md), covering both intra-regional resiliency with [availability zones](#availability-zone-support) and information on [multi-region deployments](#multi-region-support).
-Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends; and adds the power of Microsoft Azure to your application, such as:
+Resiliency is a shared responsibility between you and Microsoft, so this article also covers ways for you to build a resilient solution that meets your needs.
-- Security-- Load balancing-- Autoscaling-- Automated management
+Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. Azure App Service adds the power of Microsoft Azure to your application, with capabilities for security, load balancing, autoscaling, and automated management. To explore how Azure App Service can bolster the reliability and resiliency of your application workload, see [Why use App Service?](../app-service/overview.md#why-use-app-service)
-To explore how Azure App Service can bolster the reliability and resiliency of your application workload, see [Why use App Service?](../app-service/overview.md#why-use-app-service)
+When you deploy Azure App Service, you can create multiple instances of an [App Service plan](/azure/app-service/overview-hosting-plans), which represents the compute workers that run your application code. Although the platform makes an effort to deploy the instances across different fault domains, it doesn't automatically spread the instances across availability zones.
+## Production deployment recommendations
+For production deployments, you should:
-## Availability zone support
--
-Azure App Service can be deployed across [availability zones (AZ)](../reliability/availability-zones-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
-
-When you configure App Service as zone redundant, the platform automatically spreads the instances of the Azure App Service plan across three zones in the selected region.
-
-Instance spreading with a zone-redundant deployment is determined inside the following rules, even as the app scales in and out:
--- The minimum App Service Plan instance count is three. -- If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. -- Any instance counts beyond 3*N are spread across the remaining one or two zones.-
-Availability zone support is a property of the App Service plan. App Service plans can be created on managed multi-tenant environment or dedicated environment using App Service Environment v3. To Learn more regarding App Service Environment v3, see [App Service Environment v3 overview](../app-service/environment/overview.md).
-
-For App Services that aren't configured to be zone redundant, VM instances are not zone resilient and can experience downtime during an outage in any zone in that region.
-
-For information on enterprise deployment architecture, see [High availability enterprise deployment using App Service Environment](/azure/architecture/web-apps/app-service-environment/architectures/ase-high-availability-deployment).
-
-### Prerequisites
-
-The current requirements/limitations for enabling availability zones are:
--- Both Windows and Linux are supported.--- Availability zones are only supported on the newer App Service footprint. Even if you're using one of the supported regions, you'll receive an error if availability zones aren't supported for your resource group. To ensure your workloads land on a stamp that supports availability zones, you may need to create a new resource group, App Service plan, and App Service.--- Your App Services plan must be one of the following plans that support availability zones:-
- - In a multi-tenant environment using App Service Premium v2 or Premium v3 plans.
- - In a dedicated environment using App Service Environment v3, which is used with Isolated v2 App Service plans.
-- For dedicated environments, your App Service Environment must be v3.
+- Use Premium v3 App Service plans.
+- [Enable zone redundancy](#availability-zone-support), which requires your App Service plan to use a minimum of three instances.
- >[!IMPORTANT]
- >[App Service Environment v2 and v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). App Service Environment v3 is easier to use and runs on more powerful infrastructure. To learn more about App Service Environment v3, see [App Service Environment overview](../app-service/environment/overview.md). If you're currently using App Service Environment v2 or v1 and you want to upgrade to v3, please follow the [steps in this article](../app-service/environment/migration-alternatives.md) to migrate to the new version.
-
-- Minimum instance count of three zones is enforced. The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three. -- Availability zones can only be specified when creating a **new** App Service plan. A pre-existing App Service plan can't be converted to use availability zones.
-
-- The following regions support Azure App Services running on multi-tenant environments:
- - Australia East
- - Brazil South
- - Canada Central
- - Central India
- - Central US
- - East Asia
- - East US
- - East US 2
- - France Central
- - Germany West Central
- - Israel Central
- - Italy North
- - Japan East
- - Korea Central
- - Mexico Central
- - North Europe
- - Norway East
- - Poland Central
- - Qatar Central
- - South Africa North
- - South Central US
- - Southeast Asia
- - Spain Central
- - Sweden Central
- - Switzerland North
- - UAE North
- - UK South
- - West Europe
- - West US 2
- - West US 3
- - Microsoft Azure operated by 21Vianet - China North 3
- - Azure Government - US Gov Virginia
-- To see which regions support availability zones for App Service Environment v3, see [Regions](../app-service/environment/overview.md#regions).
+## Transient faults
-### Create a resource with availability zone enabled
+Transient faults are short, intermittent failures in components. They occur frequently in a distributed environment like the cloud, and they're a normal part of operations. They correct themselves after a short period of time. It's important that your applications handle transient faults, usually by retrying affected requests.
-#### To deploy a multi-tenant zone-redundant App Service
+All cloud-hosted applications should follow Azure's transient fault handling guidance when communicating with any cloud-hosted APIs, databases, and other components. To learn more about handling transient faults, see [Recommendations for handling transient faults](/azure/well-architected/reliability/handle-transient-faults).
-# [Azure CLI](#tab/cli)
+Although Microsoft-provided SDKs usually handle transient faults for you, you host your own applications on Azure App Service, so you also need to consider how to avoid causing transient faults. Make sure that you:
-To enable availability zones using the Azure CLI, include the `--zone-redundant` parameter when you create your App Service plan. You can also include the `--number-of-workers` parameter to specify capacity. If you don't specify a capacity, the platform defaults to three. Capacity should be set based on the workload requirement, but no less than three. A good rule of thumb to choose capacity is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
-
-```azurecli
-az appservice plan create --resource-group MyResourceGroup --name MyPlan --sku P1v2 --zone-redundant --number-of-workers 6
-```
-
-> [!TIP]
-> To decide instance capacity, you can use the following calculation:
->
-> Since the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances.
->
-
-# [Azure portal](#tab/portal)
---
-To create an App Service with availability zones using the Azure portal, enable the zone redundancy option during the "Create Web App" or "Create App Service Plan" experiences.
--
-The capacity/number of workers/instance count can be changed once the App Service Plan is created by navigating to the **Scale out (App Service plan)** settings.
----
-# [Azure Resource Manager (ARM)](#tab/arm)
---
-The only changes needed in an Azure Resource Manager template to specify an App Service with availability zones are the ***zoneRedundant*** property (required) and optionally the App Service plan instance count (***capacity***) on the [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?tabs=json) resource. The ***zoneRedundant*** property should be set to ***true*** and ***capacity*** should be set based on the same conditions described previously.
-
-The Azure Resource Manager template snippet below shows the new ***zoneRedundant*** property and ***capacity*** specification.
-
-```json
-"resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2018-02-01",
- "name": "your-appserviceplan-name-here",
- "location": "West US 3",
- "sku": {
- "name": "P1v3",
- "tier": "PremiumV3",
- "size": "P1v3",
- "family": "Pv3",
- "capacity": 3
- },
- "kind": "app",
- "properties": {
- "zoneRedundant": true
- }
- }
-]
-```
-
+- **Deploy multiple instances of your plan.** Azure App Service performs automated updates and other forms of maintenance on instances of your plan. If an instance becomes unhealthy, the service can automatically replace that instance with a new healthy instance. During the replacement process, there can be a short period of time where the previous instance is unavailable and a new instance isn't yet ready to serve traffic. You can mitigate the impact of this behavior by deploying multiple instances of your App Service plan.
-#### Deploy a zone-redundant App Service using a dedicated environment
+- **Use deployment slots.** Azure App Service [deployment slots](/azure/app-service/deploy-staging-slots) allow for zero-downtime deployments of your applications. Use deployment slots to minimize the impact of deployments and configuration changes on your users. Using deployment slots also reduces the likelihood that your application restarts, which can cause transient faults. For an example, see the sketch after this list.
-To learn how to create an App Service Environment v3 on the Isolated v2 plan, see [Create an App Service Environment](../app-service/environment/creation.md).
+- **Avoid scaling up or down.** Instead, select a tier and instance size that meet your performance requirements under typical load. Only scale out instances to handle changes in traffic volume. Consider that scaling up and down may trigger an application restart.
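+
+As an example of the deployment slots recommendation, the following Azure CLI sketch creates a staging slot and then swaps it into production. The app and resource group names are illustrative, and deployment slots require a Standard tier plan or higher:
+
+```azurecli
+az webapp deployment slot create --resource-group MyResourceGroup --name MyWebApp --slot staging
+az webapp deployment slot swap --resource-group MyResourceGroup --name MyWebApp --slot staging --target-slot production
+```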
-#### Troubleshooting
-
-|Error message |Description |Recommendation |
-|||-|
-|Zone redundancy is not available for resource group 'RG-NAME'. Please deploy app service plan 'ASP-NAME' to a new resource group. |Availability zones are only supported on the newer App Service footprint. Even if you're using one of the supported regions, you'll receive an error if availability zones aren't supported for your resource group. |To ensure your workloads land on a stamp that supports availability zones, create a new resource group, App Service plan, and App Service. |
-
-### Fault tolerance
-
-To prepare for availability zone failure, you should over-provision capacity of service to ensure that the solution can tolerate 1/3 loss of capacity and continue to function without degraded performance during zone-wide outages. Since the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances.
-
-### Zone down experience
-
-Traffic is routed to all of your available App Service instances. In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances and spread traffic as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](/azure/azure-monitor/autoscale/autoscale-overview) and that your autoscale instance count specification doesn't need to be a multiple of three.
-
->[!NOTE]
->There's no guarantee that requests for additional instances in a zone-down scenario will succeed. The back filling of lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section.
-
-Applications that are deployed in an App Service plan that has availability zones enabled will continue to run and serve traffic even if other zones in the same region suffer an outage. However it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted from an outage in other Availability Zones. Zone redundancy for App Service plans only ensures continued uptime for deployed applications.
-
-When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones#zone-balancing). An App Service plan will be "balanced" if each zone has either the same number of VMs, or +/- one VM in all of the other zones used by the App Service plan.
-
-### Availability zone migration
-
-You cannot migrate existing App Service instances or environment resources from non-availability zone support to availability zone support. To get support for availability zones, you'll need to [create your resources with availability zones enabled](#create-a-resource-with-availability-zone-enabled).
-
-### Pricing
-
-For multi-tenant environments using App Service Premium v2 or Premium v3 plans, there's no additional cost associated with enabling availability zones as long as you have three or more instances in your App Service plan. You'll be charged based on your App Service plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances. App Service Environment v3 has a different pricing model for availability zones. For pricing information for App Service Environment v3, see [Pricing](../app-service/environment/overview.md#pricing).
-
-## Cross-region disaster recovery and business continuity
--
-This section covers some common strategies for web apps deployed to App Service.
-
-When you create a web app in App Service and choose an Azure region during resource creation, it's a single-region app. When the region becomes unavailable during a disaster, your application also becomes unavailable. If you create an identical deployment in a secondary Azure region using a multi-region geography architecture, your application becomes less susceptible to a single-region disaster, which guarantees business continuity. Any data replication across the regions lets you recover your last application state.
-
-For IT, business continuity plans are largely driven by Recovery Time Objective (RTO) and Recovery Point Objective (RPO). For more information on RTO and RPO, see [Recovery objectives](./disaster-recovery-overview.md#recovery-objectives).
-
-Normally, maintaining an SLA around RTO is impractical for regional disasters, and you would typically design your disaster recovery strategy around RPO alone (i.e. focus on recovering data and not on minimizing interruption). With Azure, however, it's not only practical but can even be straightforward to deploy App Service for automatic geo-failovers. This lets you disaster-proof your applications further by taking care of both RTO and RPO.
-
-Depending on your desired RTO and RPO metrics, three disaster recovery architectures are commonly used for both App Service multitenant and App Service Environments. Each architecture is described in the following table:
-
-|Metric| [Active-Active](#active-active-architecture) | [Active-Passive](#active-passive-architecture) | [Passive/Cold](#passive-cold-architecture)|
-|-|-|-|-|
-|RTO| Real-time or seconds| Minutes| Hours |
-|RPO| Real-time or seconds| Minutes| Hours |
-|Cost | $$$| $$| $|
-|Scenarios| Mission-critical apps| High-priority apps| Low-priority apps|
-|Ability to serve multi-region user traffic| Yes| Yes/maybe| No|
-|Code deployment | CI/CD pipelines preferred| CI/CD pipelines preferred| Backup and restore |
-|Creation of new App Service resources during downtime | Not required | Not required| Required |
--
->[!NOTE]
->Your application most likely depends on other data services in Azure, such as Azure SQL Database and Azure Storage accounts. It's recommended that you develop disaster recovery strategies for each of these dependent Azure Services as well. For SQL Database, see [Active geo-replication for Azure SQL Database](/azure/azure-sql/database/active-geo-replication-overview). For Azure Storage, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
---
-### Disaster recovery in multi-region geography
-
-There are multiple ways to replicate your web apps content and configurations across Azure regions in an active-active or active-passive architecture, such as using [App service backup and restore](../app-service/manage-backup.md). However, backup and restore create point-in-time snapshots and eventually lead to web app versioning challenges across regions. See the following table below for a comparison between back and restore guidance vs. diaster recovery guidance:
--
-To avoid the limitations of backup and restore methods, configure your CI/CD pipelines to deploy code to both Azure regions. Consider using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [GitHub Actions](https://docs.github.com/actions). For more information, see [Continuous deployment to Azure App Service](../app-service/deploy-continuous-deployment.md).
+## Availability zone support
+Azure App Service can be configured as *zone redundant*, which means that the instances of your App Service plan are spread across multiple [availability zones](../reliability/availability-zones-overview.md). Spreading instances across multiple zones helps your production workloads achieve resiliency and reliability. Availability zone support is a property of the App Service plan.
-#### Outage detection, notification, and management
+Instance spreading with a zone-redundant deployment follows these rules, even as the app scales in and out:
-- It's recommended that you set up monitoring and alerts for your web apps to for timely notifications during a disaster. For more information, see [Application Insights availability tests](/azure/azure-monitor/app/availability-overview).
+- The minimum App Service plan instance count is three.
+- If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly.
+- Any instance counts beyond 3*N are spread across the remaining one or two zones. For example, if you specify five instances, two zones receive two instances each and the third zone receives one.
-- To manage your application resources in Azure, use an infrastructure-as-Code (IaC) mechanism. In a complex deployment across multiple regions, to manage the regions independently and to keep the configuration synchronized across regions in a reliable manner requires a predictable, testable, and repeatable process. Consider an IaC tool such as [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) or [Terraform](/azure/developer/terraform/overview).
+When the App Service platform allocates instances for a zone-redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure virtual machine scale sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones#zone-balancing). An App Service plan is "balanced" if each zone has either the same number of VMs as the other zones used by the plan, or one more or one fewer VM.
+For App Service plans that aren't configured as zone redundant, VM instances are not resilient to availability zone failures. They can experience downtime during an outage in any zone in that region.
-#### Set up disaster recovery and outage detection
+### Requirements
-To prepare for disaster recovery in a multi-region geography, you can use either an active-active or active-passive architecture.
-##### Active-Active architecture
+- You must use either the [Premium v2 or Premium v3 plan types](/azure/app-service/overview-hosting-plans).
+- Availability zones are only supported on the newer App Service footprint (the underlying deployment stamp). Even if you're using a supported region, you'll receive an error if availability zones aren't supported for your resource group. To ensure that your workloads land on a stamp that supports availability zones, you might need to create a new resource group, App Service plan, and App Service.
-In active-active disaster recovery architecture, identical web apps are deployed in two separate regions and Azure Front door is used to route traffic to both the active regions.
+- You must deploy a minimum of three instances of your plan.
-With this example architecture:
+### Regions supported
-- Identical App Service apps are deployed in two separate regions, including pricing tier and instance count. -- Public traffic directly to the App Service apps is blocked. -- Azure Front Door is used to route traffic to both the active regions.-- During a disaster, one of the regions becomes offline, and Azure Front Door routes traffic exclusively to the region that remains online. The RTO during such a geo-failover is near-zero.-- Application files should be deployed to both web apps with a CI/CD solution. This ensures that the RPO is practically zero. -- If your application actively modifies the file system, the best way to minimize RPO is to only write to a [mounted Azure Storage share](../app-service/configure-connect-to-azure-storage.md) instead of writing directly to the web app's */home* content share. Then, use the Azure Storage redundancy features ([GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) or [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage)) for your mounted share, which has an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
+Zone-redundant App Service plans can be deployed in [any region that supports availability zones](./availability-zones-service-support.md#azure-regions-with-availability-zone-support).
-Steps to create an active-active architecture for your web app in App Service are summarized as follows:
-1. Create two App Service plans in two different Azure regions. Configure the two App Service plans identically.
-1. Create two instances of your web app, with one in each App Service plan.
+To see which regions support availability zones for App Service Environment v3, see [Regions](../app-service/environment/overview.md#regions).
-1. Create an Azure Front Door profile with:
- - An endpoint.
- - Two origin groups, each with a priority of 1. The equal priority tells Azure Front Door to route traffic to both regions equally (thus active-active).
- - A route.
-1. [Limit network traffic to the web apps only from the Azure Front Door instance](../app-service/app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance).
+### Considerations
-1. Setup and configure all other back-end Azure service, such as databases, storage accounts, and authentication providers.
+Applications that are deployed in a zone-redundant App Service plan continue to run and serve traffic even if multiple zones in the region suffer an outage. However, it's possible that non-runtime behaviors, including App Service plan scaling, application creation, application configuration, and application publishing, may still be impacted during an availability zone outage. Zone redundancy for App Service plans only ensures continued uptime for deployed applications.
-1. Deploy code to both the web apps with [continuous deployment](../app-service/deploy-continuous-deployment.md).
+### Cost
-[Tutorial: Create a highly available multi-region app in Azure App Service](../app-service/tutorial-multi-region-app.md) shows you how to set up an *active-passive* architecture. The same steps with minimal changes (setting priority to "1" for both origins in the origin group in Azure Front Door) give you an active-active architecture.
+When you're using App Service Premium v2 or Premium v3 plans, there's no additional cost associated with enabling availability zones as long as you have three or more instances in your App Service plan. You'll be charged based on your App Service plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform enforces a minimum instance count of three and charges you for those three instances.
-##### Active-passive architecture
-In this disaster recovery approach, identical web apps are deployed in two separate regions and Azure Front door is used to route traffic to one region only (the *active* region).
+App Service Environment v3 has a specific pricing model for zone redundancy. For pricing information for App Service Environment v3, see [Pricing](../app-service/environment/overview.md#pricing).
-With this example architecture:
-- Identical App Service apps are deployed in two separate regions.
+### Configure availability zone support
-- Public traffic directly to the App Service apps is blocked. -- Azure Front Door is used to route traffic to the primary region.
+To use zone redundancy, you must use a supported App Service plan type.
-- To save cost, the secondary App Service plan is configured to have fewer instances and/or be in a lower pricing tier. There are three possible approaches:
- - **Preferred** The secondary App Service plan has the same pricing tier as the primary, with the same number of instances or fewer. This approach ensures parity in both feature and VM sizing for the two App Service plans. The RTO during a geo-failover only depends on the time to scale out the instances.
- - **Less preferred** The secondary App Service plan has the same pricing tier type (such as PremiumV3) but smaller VM sizing, with lesser instances. For example, the primary region may be in P3V3 tier while the secondary region is in P1V3 tier. This approach still ensures feature parity for the two App Service plans, but the lack of size parity may require a manual scale-up when the secondary region becomes the active region. The RTO during a geo-failover depends on the time to both scale up and scale out the instances.
+To deploy a new zone-redundant Azure App Service plan, select the *Zone redundant* option when you deploy the plan.
- - **Least-preferred** The secondary App Service plan has a different pricing tier than the primary and lesser instances. For example, the primary region may be in P3V3 tier while the secondary region is in S1 tier. Make sure that the secondary App Service plan has all the features your application needs in order to run. Differences in features availability between the two may cause delays to your web app recovery. The RTO during a geo-failover depends on the time to both scale up and scale out the instances.
-- Autoscale is configured on the secondary region in the event the active region becomes inactive. It's advisable to have similar autoscale rules in both active and passive regions. -- During a disaster, the primary region becomes inactive, and the secondary region starts receiving traffic and becomes the active region.
+To deploy a new zone-redundant Azure App Service Environment, see [Create an App Service Environment](/azure/app-service/environment/creation).
-- Once the secondary region becomes active, the network load triggers preconfigured autoscale rules to scale out the secondary web app. -- You may need to scale up the pricing tier for the secondary region manually, if it doesn't already have the needed features to run as the active region. For example, [autoscaling requires Standard tier or higher](https://azure.microsoft.com/pricing/details/app-service/windows/). -- When the primary region is active again, Azure Front Door automatically directs traffic back to it, and the architecture is back to active-passive as before.
+Zone redundancy can only be configured when creating a new App Service plan. If you have an existing App Service plan that isn't zone-redundant, you need to replace it with a new zone-redundant plan. You can't convert an existing App Service plan to use availability zones. Similarly, you can't disable zone redundancy on an existing App Service plan.
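+
+For example, the following Azure CLI sketch creates a new zone-redundant Premium v3 plan with three instances. The resource names are illustrative:
+
+```azurecli
+az appservice plan create --resource-group MyResourceGroup --name MyPlan --sku P1v3 --zone-redundant --number-of-workers 3
+```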
-- Application files should be deployed to both web apps with a CI/CD solution. This ensures that the RPO is practically zero. -- If your application actively modifies the file system, the best way to minimize RPO is to only write to a [mounted Azure Storage share](../app-service/configure-connect-to-azure-storage.md) instead of writing directly to the web app's */home* content share. Then, use the Azure Storage redundancy features ([GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) or [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage)) for your mounted share, which has an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
+### Capacity planning and management
+To prepare for availability zone failure, over-provision the capacity of your App Service plan so that the solution can tolerate the loss of one third of its capacity and continue to function without degraded performance during a zone-wide outage. Because the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply your peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances.
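+
+For example, if your peak workload requires four instances, the following Azure CLI sketch scales an existing plan to the six instances needed to tolerate the loss of a zone. The resource names are illustrative:
+
+```azurecli
+az appservice plan update --resource-group MyResourceGroup --name MyPlan --number-of-workers 6
+```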
-Steps to create an active-passive architecture for your web app in App Service are summarized as follows:
+### Traffic routing between zones
-1. Create two App Service plans in two different Azure regions. The secondary App Service plan may be provisioned using one of the approaches mentioned previously.
-1. Configure autoscaling rules for the secondary App Service plan so that it scales to the same instance count as the primary when the primary region becomes inactive.
-1. Create two instances of your web app, with one in each App Service plan.
-1. Create an Azure Front Door profile with:
- - An endpoint.
- - An origin group with a priority of 1 for the primary region.
- - A second origin group with a priority of 2 for the secondary region. The difference in priority tells Azure Front Door to prefer the primary region when it's online (thus active-passive).
- - A route.
-1. [Limit network traffic to the web apps only from the Azure Front Door instance](../app-service/app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance).
-1. Setup and configure all other back-end Azure service, such as databases, storage accounts, and authentication providers.
-1. Deploy code to both the web apps with [continuous deployment](../app-service/deploy-continuous-deployment.md).
+During normal operations, traffic is routed between all of your available App Service plan instances across all availability zones.
-[Tutorial: Create a highly available multi-region app in Azure App Service](../app-service/tutorial-multi-region-app.md) shows you how to set up an *active-passive* architecture.
+### Zone-down experience
-##### Passive-cold architecture
+**Detection and response:** The App Service platform is responsible for detecting a failure in an availability zone and responding. You don't need to do anything to initiate a zone failover.
-Use a passive/cold architecture to create and maintain regular backups of your web apps in an Azure Storage account that's located in another region.
+**Active requests:** When an availability zone is unavailable, any requests in progress that are connected to an App Service plan instance in the faulty availability zone are terminated and need to be retried.
-With this example architecture:
+**Traffic rerouting:** When a zone is unavailable, Azure App Service detects the lost instances from that zone. It automatically attempts to find new replacement instances. Then, it spreads traffic across the new instances as needed.
-- A single web app is deployed to a single region.
+If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale also issues a request to App Service to add more instances.
-- The web app is regularly backed up to an Azure Storage account in the same region.
+>[!NOTE]
+> [Autoscale behavior is independent of App Service platform behavior](/azure/azure-monitor/autoscale/autoscale-overview). Your autoscale instance count specification doesn't need to be a multiple of three.
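+
+For example, the following Azure CLI sketch creates an autoscale setting for an App Service plan with a minimum of three instances and a maximum that isn't a multiple of three. The resource names are illustrative:
+
+```azurecli
+az monitor autoscale create --resource-group MyResourceGroup --resource MyPlan --resource-type Microsoft.Web/serverfarms --name MyAutoscaleSetting --min-count 3 --max-count 10 --count 3
+```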
-- The cross-region replication of your backups depends on the data redundancy configuration in the Azure storage account. You should set your Azure Storage account as [GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) if possible. GZRS offers both synchronous zone redundancy within a region and asynchronous in a secondary region. If GZRS isn't available, configure the account as [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage). Both GZRS and GRS have an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
+> [!IMPORTANT]
> There's no guarantee that requests for additional instances in a zone-down scenario succeed. The backfilling of lost instances occurs on a best-effort basis. If you need guaranteed capacity when an availability zone is lost, you should create and configure your App Service plans to account for losing a zone. You can do that by [overprovisioning the capacity of your App Service plan](#capacity-planning-and-management).
-- To ensure that you can retrieve backups when the storage account's primary region becomes unavailable, [**enable read only access to secondary region**](../storage/common/storage-redundancy.md#read-access-to-data-in-the-secondary-region) (making the storage account **RA-GZRS** or **RA-GRS**, respectively). For more information on designing your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](../storage/common/geo-redundant-design.md).
+### Failback
-- During a disaster in the web app's region, you must manually deploy all required App Service dependent resources by using the backups from the Azure Storage account, most likely from the secondary region with read access. The RTO may be hours or days.
+When the availability zone recovers, Azure App Service automatically creates instances in the recovered availability zone, removes any temporary instances created in the other availability zones, and routes traffic between your instances as normal.
-- To minimize RTO, it's highly recommended that you have a comprehensive playbook outlining all the steps required to restore your web app backup to another Azure Region.
+### Testing for zone failures
-Steps to create a passive-cold region for your web app in App Service are summarized as follows:
+The Azure App Service platform manages traffic routing, failover, and failback for zone-redundant App Service plans. Because this feature is fully managed, you don't need to initiate or validate availability zone failure processes.
-1. Create an Azure storage account in the same region as your web app. Choose Standard performance tier and select redundancy as Geo-redundant storage (GRS) or Geo-Zone-redundant storage (GZRS).
+## Multi-region support
-1. Enable RA-GRS or RA-GZRS (read access for the secondary region).
+Azure App Service is a single-region service. If the region becomes unavailable, your application is also unavailable.
-1. [Configure custom backup](../app-service/manage-backup.md) for your web app. You may decide to set a schedule for your web app backups, such as hourly.
+### Alternative multi-region solutions
-1. Verify that the web app backup files can be retrieved the secondary region of your storage account.
+To make your application less susceptible to a single-region failure, you need to deploy it to multiple regions. To do this, you should:
->[!TIP]
->Aside from Azure Front Door, Azure provides other load balancing options, such as Azure Traffic Manager. For a comparison of the various options, see [Load-balancing options - Azure Architecture Center](/azure/architecture/guide/technology-choices/load-balancing-overview).
+- Deploy instances of your application in each region.
+- Configure load balancing and failover policies.
+- Replicate your data across the regions so that you can recover your last application state.
-### Disaster recovery in single-region geography
+For example architectures that illustrate this approach, see:
-If your web app's region doesn't have GZRS or GRS storage or if you are in an [Azure region that isn't one of a regional pair](cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair), you'll need to utilize zone-redundant storage (ZRS) or locally redundant storage (LRS) to create a similar architecture. For example, you can manually create a secondary region for the storage account as follows:
+- [Reference architecture: Highly available multi-region web application](/azure/architecture/web-apps/app-service/architectures/multi-region).
+- [Multi-region App Service apps for disaster recovery](/azure/architecture/web-apps/guides/multi-region-app-service/multi-region-app-service).
+To follow along with a tutorial that creates a multi-region app, see [Tutorial: Create a highly available multi-region app in Azure App Service](/azure/app-service/tutorial-multi-region-app).
-Steps to create a passive-cold region without GRS and GZRS are summarized as follows:
-1. Create an Azure storage account in the same region of your web app. Choose Standard performance tier and select redundancy as zone-redundant storage (ZRS).
-1. [Configure custom backup](../app-service/manage-backup.md) for your web app. You may decide to set a schedule for your web app backups, such as hourly.
+For an example approach that illustrates this architecture, see [High availability enterprise deployment using App Service Environment](/azure/architecture/web-apps/app-service-environment/architectures/ase-high-availability-deployment).
-1. Verify that the web app backup files can be retrieved the secondary region of your storage account.
-1. Create a second Azure storage account in a different region. Choose Standard performance tier and select redundancy as locally redundant storage (LRS).
+## Backups
-1. By using a tool like [AzCopy](../storage/common/storage-use-azcopy-v10.md#use-in-a-script), replicate your custom backup (Zip, XML and log files) from primary region to the secondary storage. For example:
+When you use Basic tier or higher, you can back up your App Service app to a file by using the [App Service backup and restore capabilities](../app-service/manage-backup.md). This feature is useful if it's hard to redeploy your code, or if you store state on disk. However, for most solutions, you shouldn't rely on App Service backups, and should instead use the other methods described in this article to support your resiliency requirements.
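+
+For example, the following Azure CLI sketch takes an on-demand backup to an Azure Storage container. The app name, resource group, and SAS URL are illustrative placeholders:
+
+```azurecli
+az webapp config backup create --resource-group MyResourceGroup --webapp-name MyWebApp --backup-name MyBackup --container-url "<storage-container-sas-url>"
+```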
- ```
- azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>'
- ```
- You can use [Azure Automation with a PowerShell Workflow runbook](../automation/learn/automation-tutorial-runbook-textual.md) to run your replication script [on a schedule](../automation/shared-resources/schedules.md). Make sure that the replication schedule follows a similar schedule to the web app backups.
-## Next steps
-- [Tutorial: Create a highly available multi-region app in Azure App Service](/azure/app-service/tutorial-multi-region-app)-- [Reliability in Azure](/azure/availability-zones/overview)
+## Service-level agreement (SLA)
+The service-level agreement (SLA) for Azure App Service describes the expected availability of the service. It also describes the conditions that must be met to achieve that availability expectation. To understand those conditions, it's important that you review the [Service Level Agreements (SLA) for Online Services](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services).
+When you deploy a zone-redundant App Service plan, the uptime percentage defined in the SLA increases.
+## Related content
+- [Reliability in Azure](./overview.md)
security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/overview.md
This article provides a comprehensive look at the security available with Azure.
Azure is a public cloud service platform that supports a broad selection of operating systems, programming languages, frameworks, tools, databases, and devices. It can run Linux containers with Docker integration; build apps with JavaScript, Python, .NET, PHP, Java, and Node.js; build back-ends for iOS, Android, and Windows devices.
-Azure public cloud services support the same technologies millions of developers and IT professionals already rely on and trust. When you build on, or migrate IT assets to, a public cloud service provider you are relying on that organization's abilities to protect your applications and data with the services and the controls they provide to manage the security of your cloud-based assets.
+Azure public cloud services support the same technologies millions of developers and IT professionals already rely on and trust. When you build on or migrate IT assets to a public cloud service provider, you rely on that organization's ability to protect your applications and data. They provide services and controls to manage the security of your cloud-based assets.
-AzureΓÇÖs infrastructure is designed from facility to applications for hosting millions of customers simultaneously, and it provides a trustworthy foundation upon which businesses can meet their security requirements.
+Azure's infrastructure is meticulously crafted from the ground up, encompassing everything from physical facilities to applications, to securely host millions of customers simultaneously. This robust foundation empowers businesses to confidently meet their security requirements.
In addition, Azure provides you with a wide array of configurable security options and the ability to control them so that you can customize security to meet the unique requirements of your organization's deployments. This document helps you understand how Azure security capabilities can help you fulfill these requirements.
In addition, Azure provides you with a wide array of configurable security optio
## Summary of Azure security capabilities
-Depending on the cloud service model, there is variable responsibility for who is responsible for managing the security of the application or service. There are capabilities available in the Azure Platform to assist you in meeting these responsibilities through built-in features, and through partner solutions that can be deployed into an Azure subscription.
+Depending on the cloud service model, responsibility for managing the security of the application or service varies. Capabilities are available in the Azure Platform to assist you in meeting these responsibilities through built-in features, and through partner solutions that can be deployed into an Azure subscription.
-The built-in capabilities are organized in six functional areas: Operations, Applications, Storage, Networking, Compute, and Identity. Additional detail on the features and capabilities available in the Azure Platform in these six areas are provided through summary information.
+The built-in capabilities are organized into six functional areas: Operations, Applications, Storage, Networking, Compute, and Identity. More detail on the features and capabilities available in the Azure Platform in these six areas is provided through summary information.
## Operations
This section provides additional information regarding key features in security
### Microsoft Sentinel
-[Microsoft Sentinel](../../sentinel/overview.md) is a scalable, cloud-native, security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for attack detection, threat visibility, proactive hunting, and threat response.
+[Microsoft Sentinel](../../sentinel/overview.md) is a scalable, cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise. Microsoft Sentinel provides a single solution for attack detection, threat visibility, proactive hunting, and threat response.
### Microsoft Defender for Cloud
-[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) helps you prevent, detect, and respond to threats with increased visibility into and control over the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.
+[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) helps you prevent, detect, and respond to threats with increased visibility into and control over the security of your Azure resources. Microsoft Defender for Cloud provides integrated security monitoring and policy management across your Azure subscriptions. Microsoft Defender for Cloud helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.
-In addition, Defender for Cloud helps with security operations by providing you a single dashboard that surfaces alerts and recommendations that can be acted upon immediately. Often, you can remediate issues with a single click within the Defender for Cloud console.
+In addition, Defender for Cloud helps with security operations by providing you with a single dashboard that surfaces alerts and recommendations that can be acted upon immediately. Often, you can remediate issues with a single selection within the Defender for Cloud console.
### Azure Resource Manager
[Azure Resource Manager](../../azure-resource-manager/management/overview.md) enables you to work with the resources in your solution as a group. You can deploy, update, or delete all the resources for your solution in a single, coordinated operation. You use an [Azure Resource Manager template](../../azure-resource-manager/templates/overview.md) for deployment and that template can work for different environments such as testing, staging, and production. Resource Manager provides security, auditing, and tagging features to help you manage your resources after deployment.
-Azure Resource Manager template-based deployments help improve the security of solutions deployed in Azure because standard security control settings and can be integrated into standardized template-based deployments. This reduces the risk of security configuration errors that might take place during manual deployments.
+Azure Resource Manager template-based deployments help improve the security of solutions deployed in Azure because standard security control settings can be integrated into standardized template-based deployments. Templates reduce the risk of security configuration errors that might take place during manual deployments.
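
To make that template reuse concrete, here's a minimal sketch of driving a template deployment with the Azure SDK for Python (`azure-mgmt-resource`). The subscription ID, resource group, and `secure-baseline.json` template file are illustrative assumptions, not values from this article.

```python
# Hypothetical sketch: deploying a reviewed ARM template programmatically.
# All names below are placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Load a template whose security settings (NSG rules, diagnostics, and so on)
# were reviewed once and are now reused for every environment.
with open("secure-baseline.json") as f:
    template = json.load(f)

poller = client.deployments.begin_create_or_update(
    "my-resource-group",
    "baseline-deployment",
    Deployment(
        properties=DeploymentProperties(mode="Incremental", template=template)
    ),
)
print(poller.result().properties.provisioning_state)
```

Because the same template runs in testing, staging, and production, a security setting fixed in the template is fixed everywhere it's deployed.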
### Application Insights
-[Application Insights](/azure/azure-monitor/app/app-insights-overview) is an extensible Application Performance Management (APM) service for web developers. With Application Insights, you can monitor your live web applications and automatically detect performance anomalies. It includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your apps. It monitors your application all the time it's running, both during testing and after you've published or deployed it.
+[Application Insights](/azure/azure-monitor/app/app-insights-overview) is a flexible Application Performance Management (APM) service designed for web developers. It enables you to monitor your live web applications and automatically detect performance issues. With powerful analytics tools, you can diagnose problems and gain insights into user interactions with your apps. Application Insights monitors your application continuously, from development through testing and into production.
-Application Insights creates charts and tables that show you, for example, what times of day you get most users, how responsive the app is, and how well it is served by any external services that it depends on.
+Application Insights generates insightful charts and tables that reveal peak user activity times, app responsiveness, and the performance of any external services it relies on.
-If there are crashes, failures or performance issues, you can search through the telemetry data in detail to diagnose the cause. And the service sends you emails if there are any changes in the availability and performance of your app. Application Insight thus becomes a valuable security tool because it helps with the availability in the confidentiality, integrity, and availability security triad.
+If there are crashes, failures, or performance issues, you can search through the data in detail to diagnose the cause. And the service sends you emails if there are any changes in the availability and performance of your app. Application Insights thus becomes a valuable security tool because it supports availability in the confidentiality, integrity, and availability security triad.
### Azure Monitor
If there are crashes, failures or performance issues, you can search through the
### Azure Monitor logs
-[Azure Monitor logs](/azure/azure-monitor/logs/log-query-overview) provides an IT management solution for both on-premises and third-party cloud-based infrastructure (such as AWS) in addition to Azure resources. Data from Azure Monitor can be routed directly to Azure Monitor logs so you can see metrics and logs for your entire environment in one place.
+[Azure Monitor logs](/azure/azure-monitor/logs/log-query-overview) provides an IT management solution for both on-premises and non-Microsoft cloud-based infrastructure (such as Amazon Web Services) in addition to Azure resources. Data from Azure Monitor can be routed directly to Azure Monitor logs so you can see metrics and logs for your entire environment in one place.
Azure Monitor logs can be a useful tool in forensic and other security analysis, as the tool enables you to quickly search through large amounts of security-related entries with a flexible query approach. In addition, on-premises [firewall and proxy logs can be exported into Azure and made available for analysis using Azure Monitor logs.](/azure/azure-monitor/agents/agent-windows)
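
As a rough illustration of that flexible query approach, the following sketch uses the `azure-monitor-query` Python library to search a Log Analytics workspace. The workspace ID and the example KQL (counting failed Windows sign-in events from the `SecurityEvent` table) are assumptions for illustration.

```python
# Hypothetical sketch: querying security-related entries in Azure Monitor logs.
# The workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count failed sign-in events (event ID 4625) per hour over the last day.
query = """
SecurityEvent
| where EventID == 4625
| summarize failures = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
"""

response = client.query_workspace(
    "<log-analytics-workspace-id>", query, timespan=timedelta(days=1)
)
for table in response.tables:
    for row in table.rows:
        print(row)
```

### Azure Advisor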
-[Azure Advisor](/azure/advisor/advisor-overview) is a personalized cloud consultant that helps you to optimize your Azure deployments. It analyzes your resource configuration and usage telemetry. It then recommends solutions to help improve the [performance](/azure/advisor/advisor-performance-recommendations), [security](/azure/advisor/advisor-security-recommendations), and [reliability](/azure/advisor/advisor-high-availability-recommendations) of your resources while looking for opportunities to [reduce your overall Azure spend](/azure/advisor/advisor-cost-recommendations). Azure Advisor provides security recommendations, which can significantly improve your overall security posture for solutions you deploy in Azure. These recommendations are drawn from security analysis performed by [Microsoft Defender for Cloud.](../../security-center/security-center-introduction.md)
+[Azure Advisor](/azure/advisor/advisor-overview) is a personalized cloud consultant that helps you to optimize your Azure deployments. It analyzes your resource configuration and usage data. It then recommends solutions to help improve the [performance](/azure/advisor/advisor-performance-recommendations), [security](/azure/advisor/advisor-security-recommendations), and [reliability](/azure/advisor/advisor-high-availability-recommendations) of your resources while looking for opportunities to [reduce your overall Azure spend](/azure/advisor/advisor-cost-recommendations). Azure Advisor provides security recommendations, which can significantly improve your overall security posture for solutions you deploy in Azure. These recommendations are drawn from security analysis performed by [Microsoft Defender for Cloud.](../../security-center/security-center-introduction.md)
## Applications
The section provides additional information regarding key features in applicatio
### Penetration Testing
-We don't perform [penetration testing](./pen-testing.md) of your application for you, but we do understand that you want and need to perform testing on your own applications. That's a good thing, because when you enhance the security of your applications you help make the entire Azure ecosystem more secure. While notifying Microsoft of pen testing activities is no longer required customers must still comply with the [Microsoft Cloud Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
+We don't perform [penetration testing](./pen-testing.md) of your application for you, but we do understand that you want and need to perform testing on your own applications. While notifying Microsoft of pen testing activities is no longer required, customers must still comply with the [Microsoft Cloud Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
### Web Application firewall
The web application firewall (WAF) in [Azure Application Gateway](../../applicat
### Layered Security Architecture
-Since [App Service Environments](../../app-service/environment/app-service-app-service-environment-intro.md) provide an isolated runtime environment deployed into an [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), developers can create a layered security architecture providing differing levels of network access for each application tier. A common desire is to hide API back-ends from general Internet access, and only allow APIs to be called by upstream web apps. [Network Security groups (NSGs)](../../virtual-network/virtual-network-vnet-plan-design-arm.md) can be used on Azure Virtual Network subnets containing App Service Environments to restrict public access to API applications.
+Since [App Service Environments](../../app-service/environment/app-service-app-service-environment-intro.md) provide an isolated runtime environment deployed into an [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), developers can create a layered security architecture providing differing levels of network access for each application tier. It's common to hide API back-ends from general Internet access, and only permit APIs to be called by upstream web apps. [Network Security groups (NSGs)](../../virtual-network/virtual-network-vnet-plan-design-arm.md) can be used on Azure Virtual Network subnets containing App Service Environments to restrict public access to API applications.
-### Web server diagnostics and application diagnostics
-[App Service web apps](../../app-service/troubleshoot-diagnostic-logs.md) provide diagnostic functionality for logging information from both the web server and the web application. These are logically separated into web server diagnostics and application diagnostics. Web server includes two major advances in diagnosing and troubleshooting sites and applications.
+[App Service web apps](../../app-service/troubleshoot-diagnostic-logs.md) offer robust diagnostic capabilities for capturing logs from both the web server and the web application. These diagnostics are categorized into web server diagnostics and application diagnostics. Web server diagnostics include significant advancements for diagnosing and troubleshooting sites and applications.
The first new feature is real-time state information about application pools, worker processes, sites, application domains, and running requests. The second new advantage is the detailed trace events that track a request throughout the complete request-and-response process.
-To enable the collection of these trace events, IIS 7 can be configured to automatically capture full trace logs, in XML format, for any particular request based on elapsed time or error response codes.
+To enable the collection of these trace events, IIS 7 can be configured to automatically capture comprehensive trace logs in XML format for specific requests. The collection can be based on elapsed time or error response codes.
## Storage
The section provides additional information regarding key features in Azure storage security and summary information about these capabilities.
You can secure your storage account with [Azure role-based access control (Azure
A [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) provides delegated access to resources in your storage account. With a SAS, you can grant a client limited permissions to objects in your storage account for a specified period and with a specified set of permissions, without having to share your account access keys.
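
For instance, here's a minimal sketch of issuing a short-lived, read-only SAS for a single blob with the `azure-storage-blob` Python library. The account, container, and blob names are placeholders, not values from this article.

```python
# Hypothetical sketch: delegated, time-limited access without sharing keys.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="<storage-account>",
    container_name="reports",
    blob_name="q3-summary.pdf",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),  # read-only delegation
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),  # short lifetime
)

# The client gets only this URL, never the account key itself.
url = f"https://<storage-account>.blob.core.windows.net/reports/q3-summary.pdf?{sas_token}"
print(url)
```

### Encryption in Transit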
-Encryption in transit is a mechanism of protecting data when it is transmitted across networks. With Azure Storage, you can secure data using:
+Encryption in transit is a mechanism of protecting data when it's transmitted across networks. With Azure Storage, you can secure data using:
+
- [Transport-level encryption](../../storage/blobs/security-recommendations.md), such as HTTPS when you transfer data into or out of Azure Storage.
- [Wire encryption](../../storage/blobs/security-recommendations.md), such as [SMB 3.0 encryption](../../storage/blobs/security-recommendations.md) for [Azure File shares](../../storage/files/storage-dotnet-how-to-use-files.md).
-- Client-side encryption, to encrypt the data before it is transferred into storage and to decrypt the data after it is transferred out of storage.
+- Client-side encryption, to encrypt the data before it's transferred into storage and to decrypt the data after it's transferred out of storage.
### Encryption at rest
-For many organizations, data encryption at rest is a mandatory step towards data privacy, compliance, and data sovereignty. There are three Azure storage security features that provide encryption of data that is "at rest":
+For many organizations, data encryption at rest is a mandatory step towards data privacy, compliance, and data sovereignty. There are three Azure storage security features that provide encryption of data that is at rest:
- [Storage Service Encryption](../../storage/common/storage-service-encryption.md) allows you to request that the storage service automatically encrypt data when writing it to Azure Storage.
For many organizations, data encryption at rest is a mandatory step towards data
[Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/fileservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) is a mechanism that allows domains to give each other permission for accessing each other's resources. The User Agent sends extra headers to ensure that the JavaScript code loaded from a certain domain is allowed to access resources located at another domain. The latter domain then replies with extra headers allowing or denying the original domain access to its resources.
-Azure storage services now support CORS so that once you set the CORS rules for the service, a properly authenticated request made against the service from a different domain is evaluated to determine whether it is allowed according to the rules you have specified.
+Azure storage services now support CORS so that once you set the CORS rules for the service, a properly authenticated request made against the service from a different domain is evaluated to determine whether it's allowed according to the rules you have specified.
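
As a rough sketch of what setting those rules can look like, the following uses the `azure-storage-blob` Python library to add one CORS rule to the Blob service. The allowed origin and the connection string are assumed placeholders.

```python
# Hypothetical sketch: one CORS rule for the Blob service.
from azure.storage.blob import BlobServiceClient, CorsRule

service = BlobServiceClient.from_connection_string("<connection-string>")

rule = CorsRule(
    allowed_origins=["https://contoso.example"],  # domain allowed to call in
    allowed_methods=["GET", "HEAD"],
    allowed_headers=["x-ms-*"],
    exposed_headers=["x-ms-*"],
    max_age_in_seconds=3600,  # how long browsers may cache the preflight reply
)

# Authenticated cross-origin requests are then evaluated against this rule.
service.set_service_properties(cors=[rule])
```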
## Networking
Network access control is the act of limiting connectivity to and from specific
#### Network Security Groups
-A [Network Security Group (NSG)](../../virtual-network/virtual-network-vnet-plan-design-arm.md#security) is a basic stateful packet filtering firewall and it enables you to control access based on a 5-tuple. NSGs do not provide application layer inspection or authenticated access controls. They can be used to control traffic moving between subnets within an Azure Virtual Network and traffic between an Azure Virtual Network and the Internet.
+A [Network Security Group (NSG)](../../virtual-network/virtual-network-vnet-plan-design-arm.md#security) is a basic stateful packet filtering firewall and it enables you to control access based on a five-tuple. NSGs don't provide application layer inspection or authenticated access controls. They can be used to control traffic moving between subnets within an Azure Virtual Network and traffic between an Azure Virtual Network and the Internet.
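
To make the five-tuple concrete, here's a hedged sketch that creates a single NSG rule with the `azure-mgmt-network` Python library. The subscription, resource group, NSG name, and address prefixes are assumptions for illustration.

```python
# Hypothetical sketch: an NSG rule defined by the classic five-tuple.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow HTTPS from the app subnet to the web subnet; the five-tuple is the
# protocol plus the source/destination address and port.
rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="10.0.1.0/24",
    source_port_range="*",
    destination_address_prefix="10.0.2.0/24",
    destination_port_range="443",
    access="Allow",
    direction="Inbound",
    priority=200,
)

client.security_rules.begin_create_or_update(
    "my-resource-group", "web-tier-nsg", "allow-https-from-app-subnet", rule
).result()
```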
#### Azure Firewall
Azure Firewall is offered in two SKUs: Standard and Premium. [Azure Firewall Sta
The ability to control routing behavior on your Azure Virtual Networks is a critical network security and access control capability. For example, if you want to make sure that all traffic to and from your Azure Virtual Network goes through that virtual security appliance, you need to be able to control and customize routing behavior. You can do this by configuring User-Defined Routes in Azure.
-[User-Defined Routes](../../virtual-network/virtual-networks-udr-overview.md#custom-routes) allow you to customize inbound and outbound paths for traffic moving into and out of individual virtual machines or subnets to ensure the most secure route possible. [Forced tunneling](../../vpn-gateway/vpn-gateway-forced-tunneling-rm.md) is a mechanism you can use to ensure that your services are not allowed to initiate a connection to devices on the Internet.
+[User-Defined Routes](../../virtual-network/virtual-networks-udr-overview.md#custom-routes) allow you to customize inbound and outbound paths for traffic moving into and out of individual virtual machines or subnets to ensure the most secure route possible. [Forced tunneling](../../vpn-gateway/vpn-gateway-forced-tunneling-rm.md) is a mechanism you can use to ensure that your services aren't allowed to initiate a connection to devices on the Internet.
This is different from accepting incoming connections and then responding to them. Front-end web servers need to respond to requests from Internet hosts, and so Internet-sourced traffic is allowed inbound to these web servers and the web servers can respond.
Forced tunneling is commonly used to force outbound traffic to the Internet to g
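
As an illustration of both ideas, the following sketch uses the `azure-mgmt-network` Python library to create a route table with one user-defined route that sends all Internet-bound traffic to a virtual appliance. Every name and IP address is an assumed placeholder.

```python
# Hypothetical sketch: forced tunneling via a user-defined default route.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

route_table = RouteTable(
    location="eastus",
    routes=[
        Route(
            name="force-through-appliance",
            address_prefix="0.0.0.0/0",        # all Internet-bound traffic
            next_hop_type="VirtualAppliance",  # hand off to the appliance
            next_hop_ip_address="10.0.3.4",
        )
    ],
)

client.route_tables.begin_create_or_update(
    "my-resource-group", "forced-tunnel-rt", route_table
).result()
# The route table still needs to be associated with a subnet to take effect.
```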
#### Virtual Network Security Appliances
-While Network Security Groups, User-Defined Routes, and forced tunneling provide you a level of security at the network and transport layers of the [OSI model](https://en.wikipedia.org/wiki/OSI_model), there may be times when you want to enable security at higher levels of the stack. You can access these enhanced network security features by using an Azure partner network security appliance solution. You can find the most current Azure partner network security solutions by visiting the [Azure Marketplace](https://azure.microsoft.com/marketplace/) and searching for "security" and "network security."
+While Network Security Groups, User-Defined Routes, and forced tunneling provide you with a level of security at the network and transport layers of the [OSI model](https://en.wikipedia.org/wiki/OSI_model), there might be times when you want to enable security at higher levels of the stack. You can access these enhanced network security features by using an Azure partner network security appliance solution. You can find the most current Azure partner network security solutions by visiting the [Azure Marketplace](https://azure.microsoft.com/marketplace/) and searching for **security** and **network security**.
### Azure Virtual Network
-An Azure virtual network (VNet) is a representation of your own network in the cloud. It is a logical isolation of the Azure network fabric dedicated to your subscription. You can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. You can segment your VNet into subnets and place Azure IaaS virtual machines (VMs) and/or [Cloud services (PaaS role instances)](../../cloud-services/cloud-services-choose-me.md) on Azure Virtual Networks.
+An Azure virtual network (VNet) is a representation of your own network in the cloud. It's a logical isolation of the Azure network fabric dedicated to your subscription. You can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. You can segment your VNet into subnets and place Azure IaaS virtual machines (VMs) and/or [Cloud services (PaaS role instances)](../../cloud-services/cloud-services-choose-me.md) on Azure Virtual Networks.
Additionally, you can connect the virtual network to your on-premises network using one of the [connectivity options](../../vpn-gateway/index.yml) available in Azure. In essence, you can expand your network to Azure, with complete control over IP address blocks and the benefit of the enterprise scale Azure provides.
Azure networking supports various secure remote access scenarios. Some of these
### Azure Virtual Network Manager
-[Azure Virtual Network Manager](../../virtual-network-manager/overview.md) provides a centralized solution for protecting your virtual networks at scale. It uses [security admin rules](../../virtual-network-manager/concept-security-admins.md) to centrally define and enforce security policies for your virtual networks across your entire organization. Security admin rules takes precedence over network security group(NSGs) rules and are applied on the virtual network. This allows organizations to enforce core policies with security admin rules, while still enabling downstream teams to tailor NSGs according to their specific needs at the subnet and NIC levels. Depending on the needs of your organization, you can use **Allow**, **Deny**, or **Always Allow** rule actions to enforce security policies.
+[Azure Virtual Network Manager](../../virtual-network-manager/overview.md) provides a centralized solution for protecting your virtual networks at scale. It uses [security admin rules](../../virtual-network-manager/concept-security-admins.md) to centrally define and enforce security policies for your virtual networks across your entire organization. Security admin rules take precedence over network security group (NSG) rules and are applied on the virtual network. This allows organizations to enforce core policies with security admin rules, while still enabling downstream teams to tailor NSGs according to their specific needs at the subnet and NIC levels. Depending on the needs of your organization, you can use **Allow**, **Deny**, or **Always Allow** rule actions to enforce security policies.
| Rule Action | Description |
|-|-|
-| **Allow** | Allows the specified traffic by default. Downstream NSGs still receive this traffic and may deny it.|
-| **Always Allow** | Always allow the specified traffic, regardless of other rules with lower priority or NSGs. This can be used to ensure that monitoring agent, domain controller, or management traffic is not blocked. |
-| **Deny** | Block the specified traffic. Downstream NSGs will not evaluate this traffic after being denied by a security admin rule, ensuring your high-risk ports for existing and new virtual networks are protected by default. |
+| **Allow** | Allows the specified traffic by default. Downstream NSGs still receive this traffic and might deny it.|
+| **Always Allow** | Always allow the specified traffic, regardless of other rules with lower priority or NSGs. This can be used to ensure that monitoring agent, domain controller, or management traffic isn't blocked. |
+| **Deny** | Block the specified traffic. Downstream NSGs won't evaluate this traffic after being denied by a security admin rule, ensuring your high-risk ports for existing and new virtual networks are protected by default. |
In Azure Virtual Network Manager, [network groups](../../virtual-network-manager/concept-network-groups.md) allow you to group virtual networks together for centralized management and enforcement of security policies. Network groups are a logical grouping of virtual networks based on your needs from a topology and security perspective. You can manually update the virtual network membership of your network groups, or you can [define conditional statements with Azure Policy](../../virtual-network-manager/concept-azure-policy-integration.md) to dynamically update network group membership.
Microsoft Azure [ExpressRoute](../../expressroute/expressroute-introduction.md)
![Express Route](./media/overview/azure-security-figure-1.png)
-With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Microsoft 365, and CRM Online. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a co-location facility.
+With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Microsoft 365, and CRM Online. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a colocation facility.
-ExpressRoute connections do not go over the public Internet and thus can be considered more secure than VPN-based solutions. This allows ExpressRoute connections to offer more reliability, faster speeds, lower latencies, and higher security than typical connections over the Internet.
+ExpressRoute connections don't go over the public Internet and thus can be considered more secure than VPN-based solutions. This allows ExpressRoute connections to offer more reliability, faster speeds, lower latencies, and higher security than typical connections over the Internet.
### Application Gateway
Microsoft [Azure Application Gateway](../../application-gateway/overview.md) pro
![Application Gateway](./media/overview/azure-security-figure-2.png)
-It allows you to optimize web farm productivity by offloading CPU intensive TLS termination to the Application Gateway (also known as "TLS offload" or "TLS bridging"). It also provides other Layer 7 routing capabilities including round-robin distribution of incoming traffic, cookie-based session affinity, URL path-based routing, and the ability to host multiple websites behind a single Application Gateway. Azure Application Gateway is a layer-7 load balancer.
+It allows you to optimize web farm productivity by offloading CPU intensive TLS termination to the Application Gateway (also known as **TLS offload** or **TLS bridging**). It also provides other Layer 7 routing capabilities including round-robin distribution of incoming traffic, cookie-based session affinity, URL path-based routing, and the ability to host multiple websites behind a single Application Gateway. Azure Application Gateway is a layer-7 load balancer.
It provides failover and performance-routing of HTTP requests between different servers, whether they're in the cloud or on-premises.
Web Application Firewall is a feature of [Azure Application Gateway](../../appli
- Detection of common application misconfigurations (for example, Apache and IIS)
-A centralized web application firewall to protect against web attacks makes security management much simpler and gives better assurance to the application against the threats of intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location versus securing each of individual web applications. Existing application gateways can be converted to an application gateway with web application firewall easily.
+A centralized web application firewall to protect against web attacks makes security management simpler and gives better assurance to the application against the threats of intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location, versus securing each individual web application. Existing application gateways can easily be converted to an application gateway with a web application firewall.
### Traffic Manager
Traffic Manager provides a range of traffic-routing methods to suit different ap
### Azure Load Balancer
-[Azure Load Balancer](../../load-balancer/load-balancer-overview.md) delivers high availability and network performance to your applications. It is a Layer 4 (TCP, UDP) load balancer that distributes incoming traffic among healthy instances of services defined in a load-balanced set. Azure Load Balancer can be configured to:
+[Azure Load Balancer](../../load-balancer/load-balancer-overview.md) delivers high availability and network performance to your applications. It's a Layer 4 (TCP, UDP) load balancer that distributes incoming traffic among healthy instances of services defined in a load-balanced set. Azure Load Balancer can be configured to:
- Load balance incoming Internet traffic to virtual machines. This configuration is known as [public load balancing](../../load-balancer/components.md#frontend-ip-configurations).
Traffic Manager provides a range of traffic-routing methods to suit different ap
### Internal DNS
-You can manage the list of DNS servers used in a VNet in the Management Portal, or in the network configuration file. Customer can add up to 12 DNS servers for each VNet. When specifying DNS servers, it's important to verify that you list customer's DNS servers in the correct order for customer's environment. DNS server lists do not work round-robin. They are used in the order that they are specified. If the first DNS server on the list is able to be reached, the client uses that DNS server regardless of whether the DNS server is functioning properly or not. To change the DNS server order for customer's virtual network, remove the DNS servers from the list and add them back in the order that customer wants. DNS supports the availability aspect of the "CIA" security triad.
+You can manage the list of DNS servers used in a VNet in the Management Portal, or in the network configuration file. Customers can add up to 12 DNS servers for each VNet. When specifying DNS servers, it's important to verify that you list your DNS servers in the correct order for your environment. DNS server lists don't work round-robin. They're used in the order that they're specified. If the first DNS server on the list can be reached, the client uses that DNS server regardless of whether the DNS server is functioning properly or not. To change the DNS server order for your virtual network, remove the DNS servers from the list and add them back in the order that you want. DNS supports the availability aspect of the "CIA" security triad.
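
Because the list is used strictly in order, configuring it is a matter of supplying the servers in the order you want. Here's a minimal sketch with the `azure-mgmt-network` Python library; all names and addresses are assumed placeholders, and the update overwrites the VNet's existing DNS settings.

```python
# Hypothetical sketch: setting an ordered custom DNS server list on a VNet.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import AddressSpace, DhcpOptions, VirtualNetwork

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

vnet = VirtualNetwork(
    location="eastus",
    address_space=AddressSpace(address_prefixes=["10.0.0.0/16"]),
    # Servers are tried in list order, not round-robin, so order matters.
    dhcp_options=DhcpOptions(dns_servers=["10.0.0.4", "10.0.0.5"]),
)

client.virtual_networks.begin_create_or_update(
    "my-resource-group", "my-vnet", vnet
).result()
```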
### Azure DNS
You can enable the following diagnostic log categories for NSGs:
[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) continuously analyzes the security state of your Azure resources for network security best practices. When Defender for Cloud identifies potential security vulnerabilities, it creates [recommendations](../../security-center/security-center-recommendations.md) that guide you through the process of configuring the needed controls to harden and protect your resources.
+### Advanced Container Networking Services (ACNS)
+
+[Advanced Container Networking Services (ACNS)](/azure/security/fundamentals/overview#networking) is a comprehensive suite designed to elevate the operational efficiency of your Azure Kubernetes Service (AKS) clusters. It provides advanced security and observability features, addressing the complexities of managing microservices infrastructure at scale.
+
+These features are divided into two main pillars:
+
+- **Security**: For clusters using Azure CNI Powered by Cilium, network policies include fully qualified domain name (FQDN) filtering, which reduces the complexity of maintaining network policy configuration.
+
+- **Observability**: This feature of the Advanced Container Networking Services suite brings the power of Hubble's control plane to both Cilium and non-Cilium Linux data planes, providing enhanced visibility into networking and performance.
+
## Compute
The section provides additional information regarding key features in this area and summary information about these capabilities.
### Azure confidential computing
-[Azure confidential computing](../../confidential-computing/overview-azure-products.md) provides the final, missing piece, of the data protection protection puzzle. It allows you to keep your data encrypted at all times. While at rest, when in motion through the network, and now, even while loaded in memory and in use. Additionally, by making [Remote Attestion](/azure/attestation/overview) possible, it allows you to cryptographically verify that the VM you provision has booted securely and is configured correctly, prior to unlocking your data.
+[Azure confidential computing](../../confidential-computing/overview-azure-products.md) provides the final missing piece of the data protection puzzle. It allows you to keep your data encrypted at all times: at rest, in motion through the network, and now even while loaded in memory and in use. Additionally, by making [Remote Attestation](/azure/attestation/overview) possible, it allows you to cryptographically verify that the VM you deploy booted securely and is configured correctly, before unlocking your data.
The spectrum of options ranges from enabling "lift and shift" scenarios of existing applications to full control of security features. For Infrastructure as a Service (IaaS), you can use [confidential virtual machines powered by AMD SEV-SNP](../../confidential-computing/confidential-vm-overview.md) or confidential application enclaves for virtual machines that run [Intel Software Guard Extensions (SGX)](../../confidential-computing/application-development.md). For Platform as a Service, we have multiple [container based](../../confidential-computing/choose-confidential-containers-offerings.md) options, including integrations with [Azure Kubernetes Service (AKS)](../../confidential-computing/confidential-nodes-aks-overview.md).
The spectrum of option ranges from enabling "lift and shift" scenarios of existi
With Azure IaaS, you can use antimalware software from security vendors such as Microsoft, Symantec, Trend Micro, McAfee, and Kaspersky to protect your virtual machines from malicious files, adware, and other threats. [Microsoft Antimalware](antimalware.md) for Azure Cloud Services and Virtual Machines is a protection capability that helps identify and remove viruses, spyware, and other malicious software. Microsoft Antimalware provides configurable alerts when known malicious or unwanted software attempts to install itself or run on your Azure systems. Microsoft Antimalware can also be deployed using Microsoft Defender for Cloud.
### Hardware Security Module
-Encryption and authentication do not improve security unless the keys themselves are protected. You can simplify the management and security of your critical secrets and keys by storing them in [Azure Key Vault](/azure/key-vault/general/overview). Key Vault provides the option to store your keys in hardware Security modules (HSMs) certified to [FIPS 140 validated](/azure/key-vault/keys/about-keys#compliance) standards. Your SQL Server encryption keys for backup or [transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) can all be stored in Key Vault with any keys or secrets from your applications. Permissions and access to these protected items are managed through [Microsoft Entra ID](../../active-directory/index.yml).
+Encryption and authentication don't improve security unless the keys themselves are protected. You can simplify the management and security of your critical secrets and keys by storing them in [Azure Key Vault](/azure/key-vault/general/overview). Key Vault provides the option to store your keys in hardware security modules (HSMs) certified to [FIPS 140 validated](/azure/key-vault/keys/about-keys#compliance) standards. Your SQL Server encryption keys for backup or [transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) can all be stored in Key Vault with any keys or secrets from your applications. Permissions and access to these protected items are managed through [Microsoft Entra ID](../../active-directory/index.yml).
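
As a brief sketch of both halves (an HSM-backed key plus an application secret), the following uses the `azure-keyvault-keys` and `azure-keyvault-secrets` Python libraries. The vault URL and names are assumptions, and HSM-protected keys require a Premium-tier vault.

```python
# Hypothetical sketch: an HSM-protected key and a secret in one vault.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
vault_url = "https://<your-vault>.vault.azure.net"

# hardware_protected=True requests an HSM-backed key (RSA-HSM).
keys = KeyClient(vault_url=vault_url, credential=credential)
tde_key = keys.create_rsa_key("sql-tde-key", size=2048, hardware_protected=True)
print(tde_key.id)

# Application secrets live beside the keys, behind the same access control.
secrets = SecretClient(vault_url=vault_url, credential=credential)
secrets.set_secret("sql-connection-string", "<connection-string>")
```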
### Virtual machine backup
[Azure Backup](../../backup/backup-overview.md) is a solution that protects your application data with zero capital investment and minimal operating costs. Application errors can corrupt your data, and human errors can introduce bugs into your applications that can lead to security issues. With Azure Backup, your virtual machines running Windows and Linux are protected.
### Azure Site Recovery
-An important part of your organization's [business continuity/disaster recovery (BCDR)](../../availability-zones/cross-region-replication-azure.md) strategy is figuring out how to keep corporate workloads and apps up and running when planned and unplanned outages occur. [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) helps orchestrate replication, failover, and recovery of workloads and apps so that they are available from a secondary location if your primary location goes down.
+An important part of your organization's [business continuity/disaster recovery (BCDR)](../../availability-zones/cross-region-replication-azure.md) strategy is figuring out how to keep corporate workloads and apps up and running when planned and unplanned outages occur. [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) helps orchestrate replication, failover, and recovery of workloads and apps so that they're available from a secondary location if your primary location goes down.
### SQL VM TDE
Transparent data encryption (TDE) and column level encryption (CLE) are SQL Server encryption features. This form of encryption requires you to manage and store the cryptographic keys used for encryption. The Azure Key Vault (AKV) service is designed to improve the security and management of these keys in a secure and highly available location. The SQL Server Connector enables SQL Server to use these keys from Azure Key Vault.
-If you are running SQL Server with on-premises machines, there are steps you can follow to access Azure Key Vault from your on-premises SQL Server instance. But for SQL Server in Azure VMs, you can save time by using the Azure Key Vault Integration feature. With a few Azure PowerShell cmdlets to enable this feature, you can automate the configuration necessary for a SQL VM to access your key vault.
+If you're running SQL Server with on-premises machines, there are steps you can follow to access Azure Key Vault from your on-premises SQL Server instance. But for SQL Server in Azure VMs, you can save time by using the Azure Key Vault Integration feature. With a few Azure PowerShell cmdlets to enable this feature, you can automate the configuration necessary for a SQL VM to access your key vault.
### VM Disk Encryption
[Azure Disk Encryption for Linux VMs](/azure/virtual-machines/linux/disk-encryption-overview) and [Azure Disk Encryption for Windows VMs](/azure/virtual-machines/linux/disk-encryption-overview) help you encrypt your IaaS virtual machine disks. Disk encryption applies the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and the data disks. The solution is integrated with Azure Key Vault to help you control and manage the disk-encryption keys and secrets in your Key Vault subscription. The solution also ensures that all data on the virtual machine disks is encrypted at rest in your Azure storage.
### Virtual networking
-Virtual machines need network connectivity. To support that requirement, Azure requires virtual machines to be connected to an Azure Virtual Network. An Azure Virtual Network is a logical construct built on top of the physical Azure network fabric. Each logical [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) is isolated from all other Azure Virtual Networks. This isolation helps ensure that network traffic in your deployments is not accessible to other Microsoft Azure customers.
+Virtual machines need network connectivity. To support that requirement, Azure requires virtual machines to be connected to an Azure Virtual Network. An Azure Virtual Network is a logical construct built on top of the physical Azure network fabric. Each logical [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) is isolated from all other Azure Virtual Networks. This isolation helps ensure that network traffic in your deployments isn't accessible to other Microsoft Azure customers.
### Patch Updates
Patch Updates provide the basis for finding and fixing potential problems and simplify the software update management process, both by reducing the number of software updates you must deploy in your enterprise and by increasing your ability to monitor compliance.
Securing systems, applications, and data begins with identity-based access contr
### Secure Identity
Microsoft uses multiple security practices and technologies across its products and services to manage identity and access.

-- [Multi-Factor Authentication](https://azure.microsoft.com/services/multi-factor-authentication/) requires users to use multiple methods for access, on-premises and in the cloud. It provides strong authentication with a range of easy verification options, while accommodating users with a simple sign-in process.
+- [Multifactor authentication](https://azure.microsoft.com/services/multi-factor-authentication/) requires users to use multiple methods for access, on-premises and in the cloud. It provides strong authentication with a range of easy verification options, while accommodating users with a simple sign-in process.
-- [Microsoft Authenticator](https://aka.ms/authenticator) provides a user-friendly Multi-Factor Authentication experience that works with both Microsoft Entra ID and Microsoft accounts, and includes support for wearables and fingerprint-based approvals.
+- [Microsoft Authenticator](https://aka.ms/authenticator) provides a user-friendly multifactor authentication experience that works with both Microsoft Entra ID and Microsoft accounts, and includes support for wearables and fingerprint-based approvals.
- [Password policy enforcement](../../active-directory/authentication/concept-sspr-policy.md) increases the security of traditional passwords by imposing length and complexity requirements, forced periodic rotation, and account lockout after failed authentication attempts.
Microsoft uses multiple security practices and technologies across its products
- [Integrated identity management (hybrid identity)](../../active-directory/hybrid/plan-hybrid-identity-design-considerations-overview.md) enables you to maintain control of users' access across internal datacenters and cloud platforms, creating a single user identity for authentication and authorization to all resources.

### Secure Apps and data
-[Microsoft Entra ID](https://azure.microsoft.com/services/active-directory/), a comprehensive identity and access management cloud solution, helps secure access to data in applications on site and in the cloud, and simplifies the management of users and groups. It combines core directory services, advanced identity governance, security, and application access management, and makes it easy for developers to build policy-based identity management into their apps. To enhance your Microsoft Entra ID, you can add paid capabilities using the Microsoft Entra Basic, Premium P1, and Premium P2 editions.
+[Microsoft Entra ID](https://azure.microsoft.com/services/active-directory/), a comprehensive identity and access management cloud solution, helps secure access to data in applications on site and in the cloud, and simplifies the management of users and groups. It combines core directory services, advanced identity governance, security, and application access management, and makes it easy for developers to build policy-based identity management into their apps. To enhance your Microsoft Entra ID, you can add paid capabilities using the Microsoft Entra Basic, Premium P1, and Premium P2 editions.
| Free / Common Features | Basic Features | Premium P1 Features | Premium P2 Features | Microsoft Entra join – Windows 10 only related features |
| :- | :- | :- | :- | :- |
-| [Directory Objects](../../active-directory/fundamentals/active-directory-whatis.md), [User/Group Management (add/update/delete)/ User-based provisioning, Device registration](../../active-directory/fundamentals/active-directory-whatis.md), [single sign-on (SSO)](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Change for cloud users](../../active-directory/fundamentals/active-directory-whatis.md), [Connect (Sync engine that extends on-premises directories to Microsoft Entra ID)](../../active-directory/fundamentals/active-directory-whatis.md), [Security / Usage Reports](../../active-directory/fundamentals/active-directory-whatis.md) | [Group-based access management / provisioning](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Reset for cloud users](../../active-directory/fundamentals/active-directory-whatis.md), [Company Branding (Logon Pages/Access Panel customization)](../../active-directory/fundamentals/active-directory-whatis.md), [Application Proxy](../../active-directory/fundamentals/active-directory-whatis.md), [SLA 99.9%](../../active-directory/fundamentals/active-directory-whatis.md) | [Self-Service Group and app Management/Self-Service application additions/Dynamic Groups](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Reset/Change/Unlock with on-premises write-back](../../active-directory/fundamentals/active-directory-whatis.md), [Multi-Factor Authentication (Cloud and On-premises (MFA Server))](../../active-directory/fundamentals/active-directory-whatis.md), [MIM CAL + MIM Server](../../active-directory/fundamentals/active-directory-whatis.md), [Cloud App Discovery](../../active-directory/fundamentals/active-directory-whatis.md), [Connect Health](../../active-directory/fundamentals/active-directory-whatis.md), [Automatic password rollover for group accounts](../../active-directory/fundamentals/active-directory-whatis.md)| [Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md), [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md)| [Join a device to Microsoft Entra ID, Desktop SSO, Microsoft Passport for Microsoft Entra ID, Administrator BitLocker recovery](../../active-directory/fundamentals/active-directory-whatis.md), [MDM auto-enrollment, Self-Service BitLocker recovery, Additional local administrators to Windows 10 devices via Microsoft Entra join](../../active-directory/fundamentals/active-directory-whatis.md)|
+| [Directory Objects](../../active-directory/fundamentals/active-directory-whatis.md), [User/Group Management (add/update/delete)/ User-based provisioning, Device registration](../../active-directory/fundamentals/active-directory-whatis.md), [single sign-on (SSO)](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Change for cloud users](../../active-directory/fundamentals/active-directory-whatis.md), [Connect (Sync engine that extends on-premises directories to Microsoft Entra ID)](../../active-directory/fundamentals/active-directory-whatis.md), [Security / Usage Reports](../../active-directory/fundamentals/active-directory-whatis.md) | [Group-based access management / provisioning](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Reset for cloud users](../../active-directory/fundamentals/active-directory-whatis.md), [Company Branding (sign in Pages/Access Panel customization)](../../active-directory/fundamentals/active-directory-whatis.md), [Application Proxy](../../active-directory/fundamentals/active-directory-whatis.md), [SLA 99.9%](../../active-directory/fundamentals/active-directory-whatis.md) | [Self-Service Group and app Management/Self-Service application additions/Dynamic Groups](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Reset/Change/Unlock with on-premises write-back](../../active-directory/fundamentals/active-directory-whatis.md), [multifactor authentication (Cloud and On-premises (MFA Server))](../../active-directory/fundamentals/active-directory-whatis.md), [MIM CAL + MIM Server](../../active-directory/fundamentals/active-directory-whatis.md), [Cloud App Discovery](../../active-directory/fundamentals/active-directory-whatis.md), [Connect Health](../../active-directory/fundamentals/active-directory-whatis.md), [Automatic password rollover for group accounts](../../active-directory/fundamentals/active-directory-whatis.md)| [Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md), [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md)| [Join a device to Microsoft Entra ID, Desktop SSO, Microsoft Passport for Microsoft Entra ID, Administrator BitLocker recovery](../../active-directory/fundamentals/active-directory-whatis.md), [MDM autoenrollment, Self-Service BitLocker recovery, extra local administrators to Windows 10 devices via Microsoft Entra join](../../active-directory/fundamentals/active-directory-whatis.md)|
- [Cloud App Discovery](/cloud-app-security/set-up-cloud-discovery) is a premium feature of Microsoft Entra ID that enables you to identify cloud applications that are used by the employees in your organization.
Microsoft uses multiple security practices and technologies across its products
- [Microsoft Entra Domain Services](https://azure.microsoft.com/services/active-directory-ds/) enables you to join Azure VMs to a domain without the need to deploy domain controllers. Users sign in to these VMs by using their corporate Active Directory credentials, and can seamlessly access resources.

-- [Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) is a highly available, global identity management service for consumer-facing apps that can scale to hundreds of millions of identities and integrate across mobile and web platforms. Your customers can sign in to all your apps through customizable experiences that use existing social media accounts, or you can create new standalone credentials.
+- [Microsoft Entra B2C](https://azure.microsoft.com/services/active-directory-b2c/) is a highly available, global identity management service for consumer-facing apps that can scale to hundreds of millions of identities and integrate across mobile and web platforms. Your customers can sign in to all your apps through customizable experiences that use existing social media accounts, or you can create new standalone credentials.
- [Microsoft Entra B2B Collaboration](../../active-directory/external-identities/what-is-b2b.md) is a secure partner integration solution that supports your cross-company relationships by enabling partners to access your corporate applications and data selectively by using their self-managed identities.
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 07/23/2024 Last updated : 11/08/2024
Azure Files and Azure File Sync are updated regularly to offer new features and
## What's new in 2024
+### 2024 quarter 4 (October, November, December)
+
+#### Azure File Sync v19 release
+
+The Azure File Sync v19 release improves performance and security, and adds support for Windows Server 2025:
+- Faster server provisioning and improved disaster recovery for Azure File Sync server endpoints
+- Sync performance improvements
+- Preview: Managed Identity support for Azure File Sync service and servers
+- Azure File Sync agent support for Windows Server 2025
+
+To learn more, see the [Azure File Sync release notes](../file-sync/file-sync-release-notes.md#version-19100).
+
+
### 2024 quarter 3 (July, August, September)

#### Soft delete for NFS Azure file shares is generally available
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
description: Learn how to use continuous integration and continuous delivery (CI
Previously updated : 01/25/2024 Last updated : 11/06/2024

# Continuous integration and delivery for an Azure Synapse Analytics workspace

Continuous integration (CI) is the process of automating the build and testing of code every time a team member commits a change to version control. Continuous delivery (CD) is the process of building, testing, configuring, and deploying from multiple testing or staging environments to a production environment.
-In an Azure Synapse Analytics workspace, CI/CD moves all entities from one environment (development, test, production) to another environment. Promoting your workspace to another workspace is a two-part process. First, use an [Azure Resource Manager template (ARM template)](../../azure-resource-manager/templates/overview.md) to create or update workspace resources (pools and workspace). Then, migrate artifacts like SQL scripts and notebooks, Spark job definitions, pipelines, datasets, and other artifacts by using **Synapse Workspace Deployment** tools in Azure DevOps or on GitHub.
+In an Azure Synapse Analytics workspace, CI/CD moves all entities from one environment (development, test, production) to another environment. Promoting your workspace to another workspace is a two-part process. First, use an [Azure Resource Manager template (ARM template)](../../azure-resource-manager/templates/overview.md) to create or update workspace resources (pools and workspace). Then, migrate artifacts like SQL scripts and notebooks, Spark job definitions, pipelines, datasets, and other artifacts by using **Synapse Workspace Deployment** tools in Azure DevOps or on GitHub.
This article outlines how to use an Azure DevOps release pipeline and GitHub Actions to automate the deployment of an Azure Synapse workspace to multiple environments.

## Prerequisites
-To automate the deployment of an Azure Synapse workspace to multiple environments, the following prerequisites and configurations must be in place. Note that you may choose to use **either** Azure DevOps **or** GitHub, according to your preference or existing setup.
-
+To automate the deployment of an Azure Synapse workspace to multiple environments, the following prerequisites and configurations must be in place. You can choose to use **either** Azure DevOps **or** GitHub, according to your preference or existing setup.
### Azure DevOps
-If you are using Azure DevOps:
+If you're using Azure DevOps:
- Prepare an Azure DevOps project for running the release pipeline.
- [Grant any users who will check in code Basic access at the organization level](/azure/devops/organizations/accounts/add-organization-users?view=azure-devops&tabs=preview-page&preserve-view=true), so they can see the repository.
- Grant Owner permission to the Azure Synapse repository.
- Make sure that you've created a self-hosted Azure DevOps VM agent or use an Azure DevOps hosted agent.
- Grant permissions to [create an Azure Resource Manager service connection for the resource group](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml&preserve-view=true).
- A Microsoft Entra administrator must [install the Azure DevOps Synapse Workspace Deployment Agent extension in the Azure DevOps organization](/azure/devops/marketplace/install-extension).
- Create or nominate an existing service account for the pipeline to run as. You can use a personal access token instead of a service account, but your pipelines won't work after the user account is deleted.

### GitHub
-If you are using GitHub:
+If you're using GitHub:
-- Create a GitHub repository that contains the Azure Synapse workspace artifacts and the workspace template.
+- Create a GitHub repository that contains the Azure Synapse workspace artifacts and the workspace template.
- Make sure that you've created a self-hosted runner or use a GitHub-hosted runner.

<a name='azure-active-directory'></a>

### Microsoft Entra ID

-- If you're using a service principal, in Microsoft Entra ID, create a service principal to use for deployment.
+- If you're using a service principal, in Microsoft Entra ID, create a service principal to use for deployment.
- If you're using a managed identity, enable the system-assigned managed identity on your VM in Azure as the agent or runner, and then add it to Azure Synapse Studio as Synapse admin.
- Use the Microsoft Entra admin role to complete these actions.
If you are using GitHub:
- Set up a blank workspace to deploy to:

  1. Create a new Azure Synapse workspace.
- 2. Grant the service principal the following permissions to the new Synapse workspace:
+ 1. Grant the service principal the following permissions to the new Synapse workspace:
     - Microsoft.Synapse/workspaces/integrationruntimes/write
     - Microsoft.Synapse/workspaces/operationResults/read
     - Microsoft.Synapse/workspaces/read
- 3. In the workspace, don't configure the Git repository connection.
- 4. In the Azure Synapse workspace, go to **Studio** > **Manage** > **Access Control**.
- 4. In the Azure Synapse workspace, go to Studio > Manage > Access Control. Assign the "Synapse Artifact Publisher" to the service principal. If the deployment pipeline will need to deploy managed private endpoints, then assign the "Synapse Administrator" instead.
- 5. When you use linked services whose connection information is stored in Azure Key Vault, it is recommended to keep separate key vaults for different environments. You can also configure separate permission levels for each key vault. For example, you might not want your team members to have permissions to production secrets. If you follow this approach, we recommend that you to keep the same secret names across all stages. If you keep the same secret names, you don't need to parameterize each connection string across CI/CD environments because the only thing that changes is the key vault name, which is a separate parameter.
+ 1. In the workspace, don't configure the Git repository connection.
+ 1. In the Azure Synapse workspace, go to **Studio** > **Manage** > **Access Control**. Assign the "Synapse Artifact Publisher" role to the service principal. If the deployment pipeline will need to deploy managed private endpoints, then assign the "Synapse Administrator" role instead.
+ 1. When you use linked services whose connection information is stored in Azure Key Vault, it's recommended to keep separate key vaults for different environments. You can also configure separate permission levels for each key vault. For example, you might not want your team members to have permissions to production secrets. If you follow this approach, we recommend that you keep the same secret names across all stages. If you keep the same secret names, you don't need to parameterize each connection string across CI/CD environments because the only thing that changes is the key vault name, which is a separate parameter, as shown in the sketch that follows this list.
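With that convention, each release stage overrides only the key vault name. Here's a minimal sketch, assuming a workspace parameter named `KeyVaultName` (a hypothetical name) and the Synapse workspace deployment task that's described later in this article:

```yaml
# Minimal sketch: only the key vault name changes per stage; the secret
# names inside each vault stay identical, so connection strings need no
# per-stage parameterization. "KeyVaultName" is a hypothetical parameter name.
- task: Synapse workspace deployment@2
  inputs:
    operation: 'deploy'
    TargetWorkspaceName: 'contoso-synapse-prod'
    OverrideArmParameters: >
      -KeyVaultName kv-contoso-prod
```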
### Other prerequisites
-
+
- Spark pools and self-hosted integration runtimes aren't created in a workspace deployment task. If you have a linked service that uses a self-hosted integration runtime, manually create the runtime in the new workspace.
- If the items in the development workspace are attached to specific pools, make sure that you create or parameterize the same names for the pools in the target workspace in the parameter file.
- If your provisioned SQL pools are paused when you attempt to deploy, the deployment might fail.
-For more information, see [CI/CD in Azure Synapse Analytics Part 4 - The release pipeline](https://techcommunity.microsoft.com/t5/data-architecture-blog/ci-cd-in-azure-synapse-analytics-part-4-the-release-pipeline/ba-p/2034434).
-
+For more information, see [CI/CD in Azure Synapse Analytics Part 4 - The release pipeline](https://techcommunity.microsoft.com/t5/data-architecture-blog/ci-cd-in-azure-synapse-analytics-part-4-the-release-pipeline/ba-p/2034434).
## Set up a release pipeline in Azure DevOps
-In this section, you'll learn how to deploy an Azure Synapse workspace in Azure DevOps.
+In this section, you'll learn how to deploy an Azure Synapse workspace in Azure DevOps.
1. In [Azure DevOps](https://dev.azure.com/), open the project you created for the release.

1. On the left menu, select **Pipelines** > **Releases**.
- :::image type="content" source="media/create-release-pipeline.png" alt-text="Screenshot that shows selecting Pipelines and then Releases on the Azure DevOps menu.":::
-
+ :::image type="content" source="media/create-release-pipeline.png" alt-text="Screenshot that shows selecting Pipelines and then Releases on the Azure DevOps menu.":::
+
1. Select **New pipeline**. If you have existing pipelines, select **New** > **New release pipeline**.

1. Select the **Empty job** template.
In this section, you'll learn how to deploy an Azure Synapse workspace in Azure
:::image type="content" source="media/release-creation-arm-template-branch.png" lightbox="media/release-creation-arm-template-branch.png" alt-text="Screenshot that shows setting the resource ARM template branch.":::
-1. For the artifacts **Default branch**, select the repository [publish branch](source-control.md#configure-publishing-settings) or other non-publish branches which include Synapse artifacts. By default, the publish branch is `workspace_publish`. For the **Default version**, select **Latest from default branch**.
+1. For the artifacts **Default branch**, select the repository [publish branch](source-control.md#configure-publishing-settings) or other nonpublish branches that include Synapse artifacts. By default, the publish branch is `workspace_publish`. For the **Default version**, select **Latest from default branch**.
:::image type="content" source="media/release-creation-publish-branch.png" alt-text="Screenshot that shows setting the artifacts branch.":::
-### Set up a stage task for an ARM template to create and update a resource
+### Set up a stage task for an ARM template to create and update a resource
If you have an ARM template that deploys a resource, such as an Azure Synapse workspace, a Spark and SQL pool, or a key vault, add an Azure Resource Manager deployment task to create or update those resources:
If you have an ARM template that deploys a resource, such as an Azure Synapse wo
   :::image type="content" source="media/pools-resource-deploy.png" lightbox="media/pools-resource-deploy.png" alt-text="Screenshot that shows the workspace and pools deployment.":::
-1. For **Override template parameters**, select **…**, and then enter the parameter values you want to use for the workspace.
+1. For **Override template parameters**, select **…**, and then enter the parameter values you want to use for the workspace.
1. For **Deployment mode**, select **Incremental**.
-1. (Optional) Add **Azure PowerShell** for the grant and update the workspace role assignment. If you use a release pipeline to create an Azure Synapse workspace, the pipeline's service principal is added as the default workspace admin. You can run PowerShell to grant other accounts access to the workspace.
+1. (Optional) Add an **Azure PowerShell** task to grant and update the workspace role assignment. If you use a release pipeline to create an Azure Synapse workspace, the pipeline's service principal is added as the default workspace admin. You can run PowerShell to grant other accounts access to the workspace.
   :::image type="content" source="media/release-creation-grant-permission.png" lightbox="media/release-creation-grant-permission.png" alt-text="Screenshot that demonstrates running a PowerShell script to grant permissions.":::

> [!WARNING]
> In complete deployment mode, resources in the resource group that aren't specified in the new ARM template are *deleted*. For more information, see [Azure Resource Manager deployment modes](../../azure-resource-manager/templates/deployment-modes.md).
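If you build the release as a YAML pipeline instead of the classic editor, the same stage can be expressed with the built-in Azure Resource Manager deployment task. A sketch, assuming the ARM template and parameter file are available as pipeline artifacts (the service connection name and file paths are placeholders):

```yaml
# Sketch: create or update the workspace and pools from an ARM template.
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: '<service connection>'   # placeholder
    subscriptionId: '<subscription id>'
    action: 'Create Or Update Resource Group'
    resourceGroupName: '<resource group>'
    location: '<region>'
    templateLocation: 'Linked artifact'
    csmFile: '$(System.DefaultWorkingDirectory)/<artifact>/workspace.json'             # placeholder path
    csmParametersFile: '$(System.DefaultWorkingDirectory)/<artifact>/parameters.json'  # placeholder path
    overrideParameters: '-name <target workspace name>'
    deploymentMode: 'Incremental'   # Complete mode deletes unlisted resources; see the warning above
```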
-### Set up a stage task for Azure Synapse artifacts deployment
+### Set up a stage task for Azure Synapse artifacts deployment
-Use the [Synapse workspace deployment](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy) extension to deploy other items in your Azure Synapse workspace. Items that you can deploy include datasets, SQL scripts and notebooks, spark job definitions, integration runtime, data flow, credentials, and other artifacts in workspace.
+Use the [Synapse workspace deployment](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy) extension to deploy other items in your Azure Synapse workspace. Items that you can deploy include datasets, SQL scripts and notebooks, Spark job definitions, integration runtimes, data flows, credentials, and other artifacts in the workspace.
-#### Install and add deployment extension
+#### Install and add deployment extension
1. Search for and get the extension from [Visual Studio Marketplace](https://marketplace.visualstudio.com/azuredevops).

   :::image type="content" source="media/get-extension-marketplace.png" alt-text="Screenshot that shows the Synapse workspace deployment extension as it appears in Visual Studio Marketplace.":::
-1. Select the Azure DevOps organization in which you want to install the extension.
+1. Select the Azure DevOps organization in which you want to install the extension.
:::image type="content" source="media/install-extension.png" alt-text="Screenshot that shows selecting an organization in which to install the Synapse workspace deployment extension.":::
-1. Make sure that the Azure DevOps pipeline's service principal has been granted the Subscription permission and is assigned as the Synapse workspace admin for the workspace.
+1. Make sure that the Azure DevOps pipeline's service principal has been granted the Subscription permission and is assigned as the Synapse workspace admin for the workspace.
1. To create a new task, search for **Synapse workspace deployment**, and then select **Add**.

   :::image type="content" source="media/add-extension-task.png" alt-text="Screenshot that shows searching for Synapse workspace deployment to create a task.":::
-#### Configure the deployment task
+#### Configure the deployment task
-The deployment task supports 3 types of operations, validate only, deploy and validate and deploy.
+The deployment task supports three types of operations: validate only, deploy, and validate and deploy.
> [!NOTE]
- > This workspace deployment extension in is not backward compatible. Please make sure that the latest version is installed and used. You can read the release note in [overview](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy&ssr=false#overview)in Azure DevOps and the [latest version](https://github.com/marketplace/actions/synapse-workspace-deployment) in GitHub action.
+ > This workspace deployment extension is not backward compatible. Make sure that the latest version is installed and used. You can read the release notes in the [overview](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy&ssr=false#overview) in Azure DevOps and the [latest version](https://github.com/marketplace/actions/synapse-workspace-deployment) in GitHub Actions.
-**Validate** is to validate the Synapse artifacts in non-publish branch with the task and generate the workspace template and parameter template file. The validation operation only works in the YAML pipeline. The sample YAML file is as below:
+**Validate** validates the Synapse artifacts in a nonpublish branch with the task and generates the workspace template and parameter template files. The validation operation works only in a YAML pipeline. Here's the sample YAML file:
```yaml
pool:
The deployment task supports 3 types of operations, validate only, deploy and v
      operation: 'validate'
      ArtifactsFolder: '$(System.DefaultWorkingDirectory)/ArtifactFolder'
      TargetWorkspaceName: '<target workspace name>'
-```
+```
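Putting those inputs together, a complete validate stage might look like the following sketch (the pool image and trigger settings are assumptions):

```yaml
# Sketch: validate artifacts from a nonpublish branch and generate the
# workspace and parameter template files on the pipeline agent.
pool:
  vmImage: 'ubuntu-latest'   # assumed agent image

trigger: none

steps:
  - task: Synapse workspace deployment@2
    inputs:
      operation: 'validate'
      ArtifactsFolder: '$(System.DefaultWorkingDirectory)/ArtifactFolder'
      TargetWorkspaceName: '<target workspace name>'
```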
-**Validate and deploy** can be used to directly deploy the workspace from non-publish branch with the artifact root folder.
+**Validate and deploy** can be used to directly deploy the workspace from a nonpublish branch with the artifact root folder.
> [!NOTE]
> The deployment task needs to download dependency JS files from the endpoint **web.azuresynapse.net** when the operation type is **Validate** or **Validate and deploy**. Ensure the endpoint **web.azuresynapse.net** is allowed if network policies are enabled on the VM.
-The validate and deploy operation works in both classic and YAML pipeline. The sample YAML file is as below:
+The validate and deploy operation works in both classic and YAML pipelines. Here's the sample YAML file:
```yaml
pool:
The validate and deploy operation works in both classic and YAML pipeline. The s
      OverrideArmParameters: >
        -key1 value1
        -key2 value2
-```
+```
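Assembled the same way, a validate and deploy stage might look like this sketch (the `validateDeploy` operation value and the resource group and service connection input names are assumptions based on the extension's Marketplace documentation):

```yaml
# Sketch: validate artifacts from the current branch and deploy them
# directly to the target workspace in one operation.
steps:
  - task: Synapse workspace deployment@2
    inputs:
      operation: 'validateDeploy'                # assumed operation value
      ArtifactsFolder: '$(System.DefaultWorkingDirectory)/ArtifactFolder'
      TargetWorkspaceName: '<target workspace name>'
      ResourceGroupName: '<resource group>'      # assumed input name
      azureSubscription: '<service connection>'  # assumed input name
      OverrideArmParameters: >
        -key1 value1
        -key2 value2
```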
-**Deploy** The inputs of the operation deploy include Synapse workspace template and parameter template, which can be created after publishing in the workspace publish branch or after the validation. It is same as the version 1.x.
+**Deploy** The inputs of the deploy operation include the Synapse workspace template and parameter template, which can be created after publishing in the workspace publish branch or after the validation. It's the same as version 1.x.
You can choose the operation type based on your use case. The following part is an example of the deploy operation.
You can choose the operation types based on the use case. Following part is an e
1. Next to **Template parameters**, select **…** to choose the parameters file.
-1. Select a connection, resource group, and name for the workspace.
+1. Select a connection, resource group, and name for the workspace.
1. Next to **Override template parameters**, select **…**. Enter the parameter values you want to use for the workspace, including connection strings and account keys that are used in your linked services. For more information, see [CI/CD in Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/data-architecture-blog/ci-cd-in-azure-synapse-analytics-part-4-the-release-pipeline/ba-p/2034434).

   :::image type="content" source="media/create-release-artifacts-deployment.png" lightbox="media/create-release-artifacts-deployment.png" alt-text="Screenshot that shows setting up the Synapse deployment task for the workspace.":::
-1. The deployment of managed private endpoint is only supported in version 2.x. please make sure you select the right version and check the **Deploy managed private endpoints in template**.
+1. The deployment of managed private endpoints is only supported in version 2.x. Make sure you select the right version and check **Deploy managed private endpoints in template**.
:::image type="content" source="media/deploy-private-endpoints.png" alt-text="Screenshot that shows selecting version 2.x to deploy private endpoints with synapse deployment task.":::
-1. To manage triggers, you can use trigger toggle to stop the triggers before deployment. And you can also add a task to restart the triggers after the deployment task.
+1. To manage triggers, you can use the trigger toggle to stop the triggers before deployment, and you can add a task to restart the triggers after the deployment task.
   :::image type="content" source="media/toggle-trigger.png" alt-text="Screenshot that shows managing triggers before and after deployment.":::

> [!IMPORTANT]
> In CI/CD scenarios, the integration runtime type in different environments must be the same. For example, if you have a self-hosted integration runtime in the development environment, the same integration runtime must be self-hosted in other environments, such as in test and production. Similarly, if you're sharing integration runtimes across multiple stages, the integration runtimes must be linked and self-hosted in all environments, such as in development, test, and production.
-### Create a release for deployment
+### Create a release for deployment
After you save all changes, you can select **Create release** to manually create a release. To learn how to automate release creation, see [Azure DevOps release triggers](/azure/devops/pipelines/release/triggers).

:::image type="content" source="media/release-creation-manually.png" lightbox="media/release-creation-manually.png" alt-text="Screenshot that shows the New release pipeline pane, with Create release highlighted.":::
-## Set up a release in GitHub Actions
+## Set up a release in GitHub Actions
In this section, you'll learn how to create GitHub workflows by using GitHub Actions for Azure Synapse workspace deployment.
The .yml file has two sections:
|Section |Tasks |
|---------|---------|
-|**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. |
+|**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. |
|**Deploy** | Deploy the workspace artifacts. |

### Configure GitHub Actions secrets
GitHub Actions secrets are environment variables that are encrypted. Anyone who
:::image type="content" source="media/create-secret-new.png" lightbox="media/create-secret-new.png" alt-text="Screenshot that shows the GitHub elements to select to create a new repository secret.":::
-1. Add a new secret for the client ID, and add a new client secret if you use the service principal for deployment. You can also choose to save the subscription ID and tenant ID as secrets.
+1. Add a new secret for the client ID, and add a new client secret if you use the service principal for deployment. You can also choose to save the subscription ID and tenant ID as secrets.
### Add your workflow
-In your GitHub repository, go to **Actions**.
+In your GitHub repository, go to **Actions**.
1. Select **Set up your workflow yourself**.
-1. In the workflow file, delete everything after the `on:` section. For example, your remaining workflow might look like this example:
+1. In the workflow file, delete everything after the `on:` section. For example, your remaining workflow might look like this example:
```yaml
name: CI
In your GitHub repository, go to **Actions**.
    branches: [ master ]
```
-1. Rename your workflow. On the **Marketplace** tab, search for the Synapse workspace deployment action, and then add the action.
+1. Rename your workflow. On the **Marketplace** tab, search for the Synapse workspace deployment action, and then add the action.
:::image type="content" source="media/search-action.png" lightbox="media/search-action.png" alt-text="Screenshot that shows searching for the Synapse workspace deployment task on the Marketplace tab.":::
In your GitHub repository, go to **Actions**.
          tenantId: 'tenantId'
          DeleteArtifactsNotInTemplate: 'true'
          managedIdentity: 'False'
- ```
+ ```
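For orientation, a complete workflow job appended to the `on:` section you kept earlier might look like the following sketch. The action reference, version tag, and several input names are assumptions based on the [Synapse workspace deployment action](https://github.com/marketplace/actions/synapse-workspace-deployment) linked earlier; the secret names map to the GitHub secrets you created:

```yaml
# Sketch: deploy workspace artifacts from a GitHub Actions workflow.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy Synapse workspace artifacts
        uses: Azure/Synapse-workspace-deployment@V1.8.0           # assumed version tag
        with:
          TargetWorkspaceName: '<target workspace name>'
          TemplateFile: './TemplateForWorkspace.json'             # assumed path
          ParametersFile: './TemplateParametersForWorkspace.json' # assumed path
          environment: 'Azure Public'
          resourceGroup: '<resource group>'
          clientId: ${{ secrets.CLIENTID }}
          clientSecret: ${{ secrets.CLIENTSECRET }}
          subscriptionId: ${{ secrets.SUBSCRIPTIONID }}
          tenantId: ${{ secrets.TENANTID }}
          DeleteArtifactsNotInTemplate: 'true'
          managedIdentity: 'False'
```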
1. You're ready to commit your changes. Select **Start commit**, enter the title, and then add a description (optional). Then, select **Commit new file**.
In your GitHub repository, go to **Actions**.
1. In your GitHub repository, go to **Actions**.

1. To see detailed logs of your workflow's run, open the first result:
- :::image type="content" source="media/review-deploy-status.png" lightbox="media/review-deploy-status.png" alt-text="Screenshot that shows selecting the workspace deployment log in the repository Actions in GitHub.":::
+   :::image type="content" source="media/review-deploy-status.png" lightbox="media/review-deploy-status.png" alt-text="Screenshot that shows selecting the workspace deployment log in the repository Actions in GitHub.":::
-## Create custom parameters in the workspace template
+## Create custom parameters in the workspace template
If you use automated CI/CD and want to change some properties during deployment, but the properties aren't parameterized by default, you can override the default parameter template.
To override the default parameter template, create a custom parameter template n
You can use the following guidelines to create a custom parameters file:
-* Enter the property path under the relevant entity type.
-* Setting a property name to `*` indicates that you want to parameterize all properties under the property (only down to the first level, not recursively). You can set exceptions to this configuration.
-* Setting the value of a property as a string indicates that you want to parameterize the property. Use the format `<action>:<name>:<stype>`.
- * `<action>` can be one of these characters:
- * `=` means keep the current value as the default value for the parameter.
- * `-` means don't keep the default value for the parameter.
- * `|` is a special case for secrets from Azure Key Vault for connection strings or keys.
- * `<name>` is the name of the parameter. If it's blank, it takes the name of the property. If the value starts with a `-` character, the name is shortened. For example, `AzureStorage1_properties_typeProperties_connectionString` would be shortened to `AzureStorage1_connectionString`.
- * `<stype>` is the type of parameter. If `<stype>` is blank, the default type is `string`. Supported values: `string`, `securestring`, `int`, `bool`, `object`, `secureobject` and `array`.
-* Specifying an array in the file indicates that the matching property in the template is an array. Azure Synapse iterates through all the objects in the array by using the definition that's specified. The second object, a string, becomes the name of the property, which is used as the name for the parameter for each iteration.
-* A definition can't be specific to a resource instance. Any definition applies to all resources of that type.
-* By default, all secure strings (such as Key Vault secrets) and secure strings (such as connection strings, keys, and tokens) are parameterized.
-
-### Parameter template definition example
+- Enter the property path under the relevant entity type.
+- Setting a property name to `*` indicates that you want to parameterize all properties under the property (only down to the first level, not recursively). You can set exceptions to this configuration.
+- Setting the value of a property as a string indicates that you want to parameterize the property. Use the format `<action>:<name>:<stype>`.
+ - `<action>` can be one of these characters:
+ - `=` means keep the current value as the default value for the parameter.
+ - `-` means don't keep the default value for the parameter.
+ - `|` is a special case for secrets from Azure Key Vault for connection strings or keys.
+ - `<name>` is the name of the parameter. If it's blank, it takes the name of the property. If the value starts with a `-` character, the name is shortened. For example, `AzureStorage1_properties_typeProperties_connectionString` would be shortened to `AzureStorage1_connectionString`.
+   - `<stype>` is the type of parameter. If `<stype>` is blank, the default type is `string`. Supported values: `string`, `securestring`, `int`, `bool`, `object`, `secureobject`, and `array`.
+- Specifying an array in the file indicates that the matching property in the template is an array. Azure Synapse iterates through all the objects in the array by using the definition that's specified. The second object, a string, becomes the name of the property, which is used as the name for the parameter for each iteration.
+- A definition can't be specific to a resource instance. Any definition applies to all resources of that type.
+- By default, all secure strings (such as Key Vault secrets) and secure strings (such as connection strings, keys, and tokens) are parameterized.
+
+### Parameter template definition example
Here's an example of what a parameter template definition looks like:
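As a sketch that follows the guidelines above (the resource type paths and the particular properties chosen here are illustrative, not the article's full template):

```json
{
    "Microsoft.Synapse/workspaces/notebooks": {
        "properties": {
            "bigDataPool": {
                "referenceName": "="
            }
        }
    },
    "Microsoft.Synapse/workspaces/sqlscripts": {
        "properties": {
            "content": {
                "currentConnection": {
                    "poolName": "-",
                    "databaseName": "-"
                }
            }
        }
    },
    "Microsoft.Synapse/workspaces/linkedServices": {
        "AzureDataLakeStore": {
            "properties": {
                "typeProperties": {
                    "dataLakeStoreUri": "="
                }
            }
        },
        "*": {
            "properties": {
                "typeProperties": {
                    "connectionString": "|:-connectionString:securestring",
                    "secretAccessKey": "|"
                }
            }
        }
    }
}
```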
Here's an explanation of how the preceding template is constructed, by resource
**`notebooks`**

-- Any property in the `properties/bigDataPool/referenceName` path is parameterized with its default value. You can parameterize an attached Spark pool for each notebook file.
+- Any property in the `properties/bigDataPool/referenceName` path is parameterized with its default value. You can parameterize an attached Spark pool for each notebook file.
**`sqlscripts`**

-- In the `properties/content/currentConnection` path, both the `poolName` and the `databaseName` properties are parameterized as strings without the default values in the template.
+- In the `properties/content/currentConnection` path, both the `poolName` and the `databaseName` properties are parameterized as strings without the default values in the template.
**`pipelines`**
Here's an explanation of how the preceding template is constructed, by resource
**`linkedServices`**

-- Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In the preceding example, for all linked services of the `AzureDataLakeStore` type, a specific template is applied. For all others (identified through the use of the `*` character), a different template is applied.
+- Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In the preceding example, for all linked services of the `AzureDataLakeStore` type, a specific template is applied. For all others (identified by using the `*` character), a different template is applied.
- The `connectionString` property is parameterized as a `securestring` value. It doesn't have a default value. The parameter name is shortened and suffixed with `connectionString`.
- The `secretAccessKey` property is parameterized as an `AzureKeyVaultSecret` value (for example, in an Amazon S3 linked service). The property is automatically parameterized as an Azure Key Vault secret and fetched from the configured key vault. You also can parameterize the key vault itself.
Here's an explanation of how the preceding template is constructed, by resource
If you're using Git integration with your Azure Synapse workspace and you have a CI/CD pipeline that moves your changes from development to test, and then to production, we recommend these best practices:

-- **Integrate only the development workspace with Git**. If you use Git integration, integrate only your *development* Azure Synapse workspace with Git. Changes to test and production workspaces are deployed via CI/CD and don't need Git integration.
-- **Prepare pools before you migrate artifacts**. If you have a SQL script or notebook attached to pools in the development workspace, use the same name for pools in different environments.
-- **Sync versioning in infrastructure as code scenarios**. To manage infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, use the same versioning that the DevOps team uses for source code.
-- **Review Azure Data Factory best practices**. If you use Data Factory, see the [best practices for Data Factory artifacts](../../data-factory/continuous-integration-deployment.md#best-practices-for-cicd).
+- **Integrate only the development workspace with Git**. If you use Git integration, integrate only your *development* Azure Synapse workspace with Git. Changes to test and production workspaces are deployed via CI/CD and don't need Git integration.
+- **Prepare pools before you migrate artifacts**. If you have a SQL script or notebook attached to pools in the development workspace, use the same name for pools in different environments.
+- **Sync versioning in infrastructure as code scenarios**. To manage infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, use the same versioning that the DevOps team uses for source code.
+- **Review Azure Data Factory best practices**. If you use Data Factory, see the [best practices for Data Factory artifacts](../../data-factory/continuous-integration-deployment.md#best-practices-for-cicd).
-## Troubleshoot artifacts deployment
+## Troubleshoot artifacts deployment
### Use the Synapse workspace deployment task to deploy Synapse artifacts
-In Azure Synapse, unlike in Data Factory, artifacts aren't Resource Manager resources. You can't use the ARM template deployment task to deploy Azure Synapse artifacts. Instead, use the Synapse workspace deployment task to deploy the artifacts, and use ARM deployment task for ARM resources (pools and workspace) deployment. Meanwhile this task only supports Synapse templates where resources have type Microsoft.Synapse. And with this task, users can deploy changes from any branches automatically without manual clicking the publish in Synapse studio. The following are some frequently raised issues.
+In Azure Synapse, unlike in Data Factory, artifacts aren't Resource Manager resources. You can't use the ARM template deployment task to deploy Azure Synapse artifacts. Instead, use the Synapse workspace deployment task to deploy the artifacts, and use the ARM deployment task to deploy ARM resources (pools and workspace). This task only supports Synapse templates where resources have the type Microsoft.Synapse. With this task, users can deploy changes from any branch automatically, without manually selecting publish in Synapse Studio. The following are some frequently raised issues.
-#### 1. Publish failed: workspace arm file is more than 20MB
+### 1. Publish failed: workspace ARM file is more than 20 MB
-There is a file size limitation in git provider, for example, in Azure DevOps the maximum file size is 20Mb. Once the workspace template file size exceeds 20Mb, this error happens when you publish changes in Synapse studio, in which the workspace template file is generated and synced to git. To solve the issue, you can use the Synapse deployment task with **validate** or **validate and deploy** operation to save the workspace template file directly into the pipeline agent and without manual publish in synapse studio.
+There's a file size limitation in the Git provider; for example, in Azure DevOps the maximum file size is 20 MB. This error happens when the workspace template file exceeds 20 MB and you publish changes in Synapse Studio, where the workspace template file is generated and synced to Git. To solve the issue, use the Synapse deployment task with the **validate** or **validate and deploy** operation to save the workspace template file directly to the pipeline agent, without manually publishing in Synapse Studio.
-#### 2. Unexpected token error in release
+### 2. Unexpected token error in release
If your parameter file has parameter values that aren't escaped, the release pipeline fails to parse the file and generates an `unexpected token` error. We suggest that you override parameters or use Key Vault to retrieve parameter values. You also can use double escape characters to resolve the issue.
-#### 3. Integration runtime deployment failed
+### 3. Integration runtime deployment failed
+
+This error happens if the workspace template was generated from a workspace that has a managed virtual network enabled and you try to deploy it to a regular workspace, or vice versa.
-If you have the workspace template generated from a managed Vnet enabled workspace and try to deploy to a regular workspace or vice versa, this error happens.
-
-#### 4. Unexpected character encountered while parsing value
+### 4. Unexpected character encountered while parsing value
-The template can not be parsed the template file. Try by escaping the back slashes, eg. \\\\Test01\\Test
+The template file can't be parsed. Try escaping the backslashes, for example, `\\\\Test01\\Test`.
-#### 5. Failed to fetch workspace info, Not found
+### 5. Failed to fetch workspace info, Not found
-The target workspace info is not correctly configured. Please make sure the service connection which you have created, is scoped to the resource group which has the workspace.
+The target workspace info isn't correctly configured. Make sure the service connection that you created is scoped to the resource group that contains the workspace.
-#### 6. Artifact deletion failed
+### 6. Artifact deletion failed
-The extension will compare the artifacts present in the publish branch with the template and based on the difference it will delete them. Please make sure you are not trying to delete any artifact which is present in publish branch and some other artifact has a reference or dependency on it.
+The extension compares the artifacts present in the publish branch with the template, and deletes artifacts based on the difference. Make sure you aren't trying to delete an artifact that is present in the publish branch while another artifact has a reference or dependency on it.
-#### 8. Deployment failed with error: json position 0
+### 7. Deployment failed with error: json position 0
-If you were trying to manually update the template, this error would happen. Please make sure that you have not manually edited the template.
+This error happens if you try to manually update the template. Make sure that you haven't manually edited the template.
-#### 9. The document creation or update failed because of invalid reference
+### 8. The document creation or update failed because of invalid reference
-The artifact in synapse can be referenced by another one. If you have parameterized an attribute which is a referenced in an artifact, please make sure to provide correct and non null value to it
+An artifact in Synapse can be referenced by another one. If you have parameterized an attribute that is referenced in an artifact, make sure to provide a correct and non-null value to it.
-#### 10. Failed to fetch the deployment status in notebook deployment
+### 9. Failed to fetch the deployment status in notebook deployment
-The notebook you are trying to deploy is attached to a spark pool in the workspace template file, while in the deployment the pool does not exist in the target workspace. If you don't parameterize the pool name, please make sure that having the same name for the pools between environments.
+The notebook you're trying to deploy is attached to a Spark pool in the workspace template file, but the pool doesn't exist in the target workspace. If you don't parameterize the pool name, make sure that the pools have the same name between environments.
synapse-analytics Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started.md
Title: 'Tutorial: Get started with Azure Synapse Analytics'
-description: In this tutorial, you'll learn the basic steps to set up and use Azure Synapse Analytics.
+ Title: Get started with Azure Synapse Analytics
+description: In these tutorials, you'll learn the basic steps to set up and use the features of Azure Synapse Analytics.
- Previously updated : 11/18/2022
+ Last updated : 11/08/2024

# Get Started with Azure Synapse Analytics
-This tutorial is a step-by-step guide through the major feature areas of Azure Synapse Analytics. The tutorial is the ideal starting point for someone who wants a guided tour through the key scenarios of Azure Synapse Analytics. After following the steps in the tutorial, you will have a Synapse workspace. This tutorial also includes steps to [enable a workspace for your dedicated SQL pool (formerly SQL DW)](./sql-data-warehouse/workspace-connected-create.md). Once your workspace is created, you can start analyzing data using dedicated SQL pool, serverless SQL pool, or serverless Apache Spark pool.
+This tutorial is a step-by-step guide through the major feature areas of Azure Synapse Analytics. The tutorial is the ideal starting point for someone who wants a guided tour through the key scenarios of Azure Synapse Analytics. After following the steps in the tutorial, you'll have a Synapse workspace. This tutorial also includes steps to [enable a workspace for your dedicated SQL pool (formerly SQL DW)](./sql-data-warehouse/workspace-connected-create.md). Once your workspace is created, you can start analyzing data using dedicated SQL pool, serverless SQL pool, or serverless Apache Spark pool.
Follow the steps *in order* as shown below and you'll take a tour through many of the capabilities of Azure Synapse Analytics and learn how to exercise its core features.
synapse-analytics Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-overview.md
Previously updated : 12/06/2022 Last updated : 11/08/2024
-# Apache Spark in Azure Synapse Analytics
+# What is Apache Spark in Azure Synapse Analytics?
Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big data analytic applications. Apache Spark in Azure Synapse Analytics is one of Microsoft's implementations of Apache Spark in the cloud. Azure Synapse makes it easy to create and configure a serverless Apache Spark pool in Azure. Spark pools in Azure Synapse are compatible with Azure Storage and Azure Data Lake Generation 2 Storage. So you can use Spark pools to process your data stored in Azure.
Apache Spark is a parallel processing framework that supports in-memory processi
## What is Apache Spark
-Apache Spark provides primitives for in-memory cluster computing. A Spark job can load and cache data into memory and query it repeatedly. In-memory computing is much faster than disk-based applications. Spark also integrates with multiple programming languages to let you manipulate distributed data sets like local collections. There's no need to structure everything as map and reduce operations. You can learn more from the [Apache Spark for Synapse video](https://www.youtube.com/watch?v=bTdu3PjXN3o).
+Apache Spark provides primitives for in-memory cluster computing. A Spark job can load and cache data into memory and query it repeatedly. In-memory computing is faster than disk-based applications. Spark also integrates with multiple programming languages to let you manipulate distributed data sets like local collections. There's no need to structure everything as map and reduce operations. You can learn more from the [Apache Spark for Synapse video](https://www.youtube.com/watch?v=bTdu3PjXN3o).
![Diagram shows Traditional MapReduce, with disk-based apps and Spark, with cache-based operations.](./media/apache-spark-overview/map-reduce-vs-spark.png)
Spark pools in Azure Synapse offer a fully managed Spark service. The benefits o
| Ease of creation |You can create a new Spark pool in Azure Synapse in minutes using the Azure portal, Azure PowerShell, or the Synapse Analytics .NET SDK. See [Get started with Spark pools in Azure Synapse Analytics](../quickstart-create-apache-spark-pool-studio.md). |
| Ease of use |Synapse Analytics includes a custom notebook derived from [nteract](https://nteract.io/). You can use these notebooks for interactive data processing and visualization.|
| REST APIs |Spark in Azure Synapse Analytics includes [Apache Livy](https://github.com/cloudera/hue/tree/master/apps/spark/java#welcome-to-livy-the-rest-spark-server), a REST API-based Spark job server to remotely submit and monitor jobs. |
-| Support for Azure Data Lake Storage Generation 2| Spark pools in Azure Synapse can use Azure Data Lake Storage Generation 2 and BLOB storage. For more information on Data Lake Storage, see [Overview of Azure Data Lake Storage](../../data-lake-store/data-lake-store-overview.md). |
+| Support for Azure Data Lake Storage Generation 2| Spark pools in Azure Synapse can use Azure Data Lake Storage Generation 2 and Blob storage. For more information on Data Lake Storage, see [Overview of Azure Data Lake Storage](../../storage/blobs/data-lake-storage-introduction.md). |
| Integration with third-party IDEs | Azure Synapse provides an IDE plugin for [JetBrains' IntelliJ IDEA](https://www.jetbrains.com/idea/) that is useful to create and submit applications to a Spark pool. |
| Preloaded Anaconda libraries |Spark pools in Azure Synapse come with Anaconda libraries preinstalled. [Anaconda](https://docs.continuum.io/anaconda/) provides close to 200 libraries for machine learning, data analysis, visualization, and other technologies. |
| Scalability | Apache Spark in Azure Synapse pools can have Auto-Scale enabled, so that pools scale by adding or removing nodes as needed. Also, Spark pools can be shut down with no loss of data since all the data is stored in Azure Storage or Data Lake Storage. |
Apache Spark includes many language features to support preparation and processi
- Machine Learning
-Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), a machine learning library built on top of Spark that you can use from a Spark pool in Azure Synapse Analytics. Spark pools in Azure Synapse Analytics also include Anaconda, a Python distribution with a variety of packages for data science including machine learning. When combined with built-in support for notebooks, you have an environment for creating machine learning applications.
+Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), a machine learning library built on top of Spark that you can use from a Spark pool in Azure Synapse Analytics. Spark pools in Azure Synapse Analytics also include Anaconda, a Python distribution with various packages for data science including machine learning. When combined with built-in support for notebooks, you have an environment for creating machine learning applications.
- Streaming Data
-Synapse Spark supports Spark structured streaming as long as you are running supported version of Azure Synapse Spark runtime release. All jobs are supported to live for seven days. This applies to both batch and streaming jobs, and generally, customers automate restart process using Azure Functions.
+Synapse Spark supports Spark structured streaming as long as you're running a supported version of the Azure Synapse Spark runtime. All jobs are supported to live for seven days. This applies to both batch and streaming jobs; generally, customers automate the restart process by using Azure Functions.
-
-## Where do I start
+## Related content
Use the following articles to learn more about Apache Spark in Azure Synapse Analytics:
Use the following articles to learn more about Apache Spark in Azure Synapse Ana
- [Tutorial: Machine learning using Apache Spark](./apache-spark-machine-learning-mllib-notebook.md)

> [!NOTE]
-> Some of the official Apache Spark documentation relies on using the Spark console, which is not available on Azure Synapse Spark. Use the notebook or IntelliJ experiences instead.
-
-## Next steps
-
-This overview provided a basic understanding of Apache Spark in Azure Synapse Analytics. Advance to the next article to learn how to create a Spark pool in Azure Synapse Analytics:
--- [Create a Spark pool in Azure Synapse](../quickstart-create-apache-spark-pool-portal.md)
+> Some of the official Apache Spark documentation relies on using the Spark console, which is not available on Azure Synapse Spark. Use the notebook or IntelliJ experiences instead.
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
Title: Best practices for serverless SQL pool
-description: Recommendations and best practices for working with serverless SQL pool.
+description: Recommendations and best practices for working with serverless SQL pools in Azure Synapse Analytics.
Previously updated : 02/15/2023 Last updated : 11/08/2024

# Best practices for serverless SQL pool in Azure Synapse Analytics
Some generic guidelines are:
- Make sure the storage and serverless SQL pool are in the same region. Storage examples include Azure Data Lake Storage and Azure Cosmos DB.
- Try to [optimize storage layout](#prepare-files-for-querying) by using partitioning and keeping your files in the range between 100 MB and 10 GB.
- If you're returning a large number of results, make sure you're using SQL Server Management Studio or Azure Data Studio and not Azure Synapse Studio. Azure Synapse Studio is a web tool that isn't designed for large result sets.
-- If you're filtering results by string column, try to use a `BIN2_UTF8` collation. For more information on changing collations, refer to [Collation types supported for Synapse SQL](reference-collation-types.md).
+- If you're filtering results by string column, try to use a `BIN2_UTF8` collation. For more information on changing collations, see [Collation types supported for Synapse SQL](reference-collation-types.md).
- Consider caching the results on the client side by using Power BI import mode or Azure Analysis Services, and periodically refresh them. Serverless SQL pools can't provide an interactive experience in Power BI Direct Query mode if you're using complex queries or processing a large amount of data.
-- Maximum concurrency is not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1,000 active sessions that are executing lightweight queries. The numbers will drop if the queries are more complex or scan a larger amount of data, so in that case consider decreasing concurrency and execute queries over a longer period of time if possible.
+- Maximum concurrency isn't limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1,000 active sessions that are executing lightweight queries. The numbers will drop if the queries are more complex or scan a larger amount of data, so in that case consider decreasing concurrency and execute queries over a longer period of time if possible.
## Client applications and network connections
You can use a performance-optimized parser when you query CSV files. For details
### Manually create statistics for CSV files
-Serverless SQL pool relies on statistics to generate optimal query execution plans. Statistics are automatically created for columns using sampling and in most cases sampling percentage will be less than 100%. This flow is the same for every file format. Have in mind that when reading CSV with parser version 1.0 sampling is not supported and automatic creation of statistics will not happen with sampling percentage less than 100%. For small tables with estimated low cardinality (number of rows) automatic statistics creation will be triggered with sampling percentage of 100%. That means that fullscan is triggered and automatic statistics are created even for CSV with parser version 1.0. In case statistics are not automatically created, create statistics manually for columns that you use in queries, particularly those used in DISTINCT, JOIN, WHERE, ORDER BY, and GROUP BY. Check [statistics in serverless SQL pool](develop-tables-statistics.md#statistics-in-serverless-sql-pool) for details.
+Serverless SQL pool relies on statistics to generate optimal query execution plans. Statistics are automatically created for columns using sampling, and in most cases the sampling percentage is less than 100%. This flow is the same for every file format. Keep in mind that when you read CSV files with parser version 1.0, sampling isn't supported, and automatic creation of statistics won't happen with a sampling percentage less than 100%. For small tables with estimated low cardinality (number of rows), automatic statistics creation is triggered with a sampling percentage of 100%. That means that a full scan is triggered, and automatic statistics are created even for CSV files with parser version 1.0. In case statistics aren't automatically created, create statistics manually for columns that you use in queries, particularly those used in DISTINCT, JOIN, WHERE, ORDER BY, and GROUP BY. Check [statistics in serverless SQL pool](develop-tables-statistics.md#statistics-in-serverless-sql-pool) for details.
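For example, statistics for a column read through `OPENROWSET` can be created manually with the `sys.sp_create_openrowset_statistics` procedure. A sketch, assuming a hypothetical CSV folder and a `year` value in the third column:

```sql
-- Sketch: manually create statistics on the column used for filtering.
EXEC sys.sp_create_openrowset_statistics N'
    SELECT year
    FROM OPENROWSET(
        BULK ''https://<storage account>.dfs.core.windows.net/<container>/population/*.csv'',
        FORMAT = ''CSV'',
        PARSER_VERSION = ''1.0'',
        FIRSTROW = 2
    ) WITH (year SMALLINT 3) AS [rows]
';
```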
## Data types
For more information, read about the [filename](query-data-storage.md#filename-f
> [!TIP]
> Always cast the results of the filepath and filename functions to appropriate data types. If you use character data types, be sure to use the appropriate length.
-Functions used for partition elimination, filepath and filename, aren't currently supported for external tables, other than those created automatically for each table created in Apache Spark for Azure Synapse Analytics.
+The filepath and filename functions, which are used for partition elimination, aren't currently supported for external tables, other than those created automatically for each table created in Apache Spark for Azure Synapse Analytics.
If your stored data isn't partitioned, consider partitioning it. That way you can use these functions to optimize queries that target those files. When you [query partitioned Apache Spark for Azure Synapse tables](develop-storage-files-spark-tables.md) from serverless SQL pool, the query automatically targets only the necessary files.
As CETAS generates Parquet files, statistics are automatically created when the
## Query Azure data
-Serverless SQL pools enable you to query data in Azure Storage or Azure Cosmos DB by using [external tables and the OPENROWSET function](develop-storage-files-overview.md). Make sure that you have proper [permission set up](develop-storage-files-overview.md#permissions) on your storage.
+Serverless SQL pools enable you to query data in Azure Storage or Azure Cosmos DB by using [external tables and the OPENROWSET function](develop-storage-files-overview.md). Make sure that you have proper [permission set up](develop-storage-files-overview.md#permissions) on your storage.
### Query CSV data
synapse-analytics Tutorial Data Analyst https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-data-analyst.md
Previously updated : 05/25/2022 Last updated : 11/08/2024
# Tutorial: Explore and analyze data lakes with serverless SQL pool
-In this tutorial, you learn how to perform exploratory data analysis. You combine different Azure Open Datasets using serverless SQL pool. You then visualize the results in Synapse Studio for Azure Synapse Analytics.
+In this tutorial, you learn how to perform exploratory data analysis using existing open datasets, with no storage setup required. You combine different Azure Open Datasets using serverless SQL pool. You then visualize the results in Synapse Studio for Azure Synapse Analytics.
-The `OPENROWSET(BULK...)` function allows you to access files in Azure Storage. `[OPENROWSET](develop-openrowset.md)` reads content of a remote data source, such as a file, and returns the content as a set of rows.
+In this tutorial, you:
-## Automatic schema inference
+> [!div class="checklist"]
+> * Access the built-in serverless SQL pool
+> * Access Azure Open Datasets to use tutorial data
+> * Perform basic data analysis using SQL
-Since data is stored in the Parquet file format, automatic schema inference is available. You can query the data without listing the data types of all columns in the files. You also can use the virtual column mechanism and the `filepath` function to filter out a certain subset of files.
+## Access the serverless SQL pool
-> [!NOTE]
-> The default collation is `SQL_Latin1_General_CP1_CI_ASIf`. For a non-default collation, take into account case sensitivity.
->
-> If you create a database with case sensitive collation when you specify columns, make sure to use correct name of the column.
->
-> A column name `tpepPickupDateTime` would be correct while `tpeppickupdatetime` wouldn't work in a non-default collation.
+Every workspace comes with a preconfigured serverless SQL pool for you to use called *Built-in*. To access it:
+
+1. Open your workspace and select the **Develop** hub.
+1. Select the **+** *Add new resource* button.
+1. Select **SQL script**.
+
+You can use this script to explore your data without having to reserve SQL capacity.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Access the tutorial data
+
+All the data we use in this tutorial is housed in the storage account *azureopendatastorage*, which holds Azure Open Datasets for open use in tutorials like this one. You can run all the scripts as-is directly from your workspace as long as your workspace can access a public network.
This tutorial uses a dataset about [New York City (NYC) Taxi](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/):
This tutorial uses a dataset about [New York City (NYC) Taxi](https://azure.micr
- Payment types
- Driver-reported passenger counts
+The `OPENROWSET(BULK...)` function allows you to access files in Azure Storage. [OPENROWSET](develop-openrowset.md) reads the content of a remote data source, such as a file, and returns the content as a set of rows.
+
To get familiar with the NYC Taxi data, run the following query:

```sql
SELECT TOP 100 * FROM
) AS [nyc]
```
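For reference, a complete query of this shape might look like the following sketch (the Azure Open Datasets path is an assumption):

```sql
-- Sketch: read the first 100 rows of the NYC Taxi open dataset.
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://azureopendatastorage.blob.core.windows.net/nyctlc/yellow/puYear=*/puMonth=*/*.parquet',
    FORMAT = 'parquet'
) AS [nyc];
```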
+### Other accessible datasets
+
Similarly, you can query the Public Holidays dataset by using the following query:

```sql
You can learn more about the meaning of the individual columns in the descriptio
- [Public Holidays](https://azure.microsoft.com/services/open-datasets/catalog/public-holidays/)
- [Weather Data](https://azure.microsoft.com/services/open-datasets/catalog/noaa-integrated-surface-data/)
+## Automatic schema inference
+
+Since the data is stored in the Parquet file format, automatic schema inference is available. You can query the data without listing the data types of all columns in the files. You also can use the virtual column mechanism and the `filepath` function to filter out a certain subset of files.
+
+> [!NOTE]
+> The default collation is `SQL_Latin1_General_CP1_CI_AS`. For a non-default collation, take into account case sensitivity.
+>
+> If you create a database with case sensitive collation when you specify columns, make sure to use correct name of the column.
+>
+> A column name `tpepPickupDateTime` would be correct while `tpeppickupdatetime` wouldn't work in a non-default collation.
## Time series, seasonality, and outlier analysis

You can summarize the yearly number of taxi rides by using the following query:
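A query of the following shape produces that yearly summary (a sketch; the dataset path and year range are assumptions):

```sql
-- Sketch: count yellow-taxi rides per year, using filepath() for partition elimination.
SELECT
    YEAR(tpepPickupDateTime) AS current_year,
    COUNT(*) AS rides_per_year
FROM OPENROWSET(
    BULK 'https://azureopendatastorage.blob.core.windows.net/nyctlc/yellow/puYear=*/puMonth=*/*.parquet',
    FORMAT = 'parquet'
) AS [nyc]
WHERE nyc.filepath(1) >= '2009' AND nyc.filepath(1) <= '2019'
GROUP BY YEAR(tpepPickupDateTime)
ORDER BY current_year;
```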
The results of the query indicate that the drop in the number of taxi rides occu
This tutorial has shown how a data analyst can quickly perform exploratory data analysis. You can combine different datasets by using serverless SQL pool and visualize the results by using Azure Synapse Studio.
-## Next steps
+## Related content
To learn how to connect serverless SQL pool to Power BI Desktop and create reports, see [Connect serverless SQL pool to Power BI Desktop and create reports](tutorial-connect-power-bi-desktop.md).
virtual-network Ip Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ip-services-overview.md
Title: What is Azure Virtual Network IP Services?
-description: Overview of Azure Virtual Network IP Services. Learn how IP services work and how to use IP resources in Azure.
+ Title: What is Azure virtual network IP Services?
+description: Overview of Azure virtual network IP Services. Learn how IP services work and how to use IP resources in Azure.
Previously updated : 08/24/2023 Last updated : 11/05/2024
-# What is Azure Virtual Network IP Services?
+# What is Azure virtual network IP Services?
-IP services are a collection of IP address related services that enable communication in an Azure Virtual Network. Public and private IP addresses are used in Azure for communication between resources. The communication with resources can occur in a private Azure Virtual Network and the public Internet.
+IP services are a collection of IP address related services that enable communication in an Azure virtual network. Public and private IP addresses are used in Azure for communication between resources. The communication with resources can occur in a private Azure virtual network and the public Internet.
IP services consist of:
Private IPs allow communication between resources in Azure. Azure assigns privat
Some of the resources that you can associate a private IP address with are:
-* Virtual machines
+* Network interfaces (for virtual machines, Virtual Machine Scale Sets, container pods, and so on)
+
+ * Network Interfaces can contain one primary and multiple secondary IP configurations.
+
+ * Each primary IP configuration must be a single IP address (a /32 IPv4 address or a /128 IPv6 address).
+
+ * Secondary IP configurations can be a single IP address OR a block of IP addresses (*in preview*). Only IPv4 addresses with a block size of /28 are available today for associating with a secondary IP configuration.
* Internal load balancers
virtual-network Private Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/private-ip-addresses.md
description: Learn about private IP addresses in Azure. Previously updated : 12/01/2023 Last updated : 11/05/2024
There are two methods in which a private IP address is given:
Azure assigns the next available unassigned or unreserved IP address in the subnet's address range. While this is normally the next sequentially available address, there's no guarantee that the address will be the next one in the range. For example, if addresses 10.0.0.4-10.0.0.9 are already assigned to other resources, the next IP address assigned is most likely 10.0.0.10. However, it could be any address between 10.0.0.10 and 10.0.0.254. If a specific Private IP address is required for a resource, you should use a static private IP address.
+A private IP address prefix allocation succeeds only when the entire requested block of IP addresses is unallocated in the subnet. Today, only a valid /28 IPv4 address block results in a successful prefix allocation.
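To make the requirement concrete, here's a small Python sketch (with illustrative, placeholder addresses) of the kind of check this implies: the /28 allocation succeeds only if no address in the requested block is already taken:

```python
import ipaddress

requested = ipaddress.ip_network("10.0.0.16/28")  # covers 10.0.0.16 - 10.0.0.31

# Addresses already handed out elsewhere in the subnet (placeholders).
already_allocated = {
    ipaddress.ip_address("10.0.0.4"),
    ipaddress.ip_address("10.0.0.5"),
}

# The prefix allocation succeeds only when every address in the block is free.
allocation_succeeds = not any(ip in already_allocated for ip in requested)
print(allocation_succeeds)  # True: nothing allocated falls inside 10.0.0.16/28
```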
+
Dynamic is the default allocation method. Once assigned, dynamic IP addresses are released if a network interface is:
* Deleted
To assign the network interface to a different subnet, you change the allocation
> [!NOTE] > When requesting a private IP address, the allocation is not deterministic or sequential. There are no guarantees the next allocated IP address will utilize the next sequential IP address or use previously deallocated addresses. If a specific Private IP address is required for a resource, you should consider using a static private IP address.
-## Virtual machines
+## Virtual machine network interfaces
+
+One or more private IP addresses are assigned to one or more **network interfaces** of a virtual machine. Network interfaces are assigned to a [Windows](/azure/virtual-machines/windows/overview?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](/azure/virtual-machines/linux/overview?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine and enable connectivity with other resources within and outside the virtual network.
-One or more private IP addresses are assigned to one or more **network interfaces**. The network interfaces are assigned to a [Windows](/azure/virtual-machines/windows/overview?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](/azure/virtual-machines/linux/overview?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. You can specify the allocation method as either dynamic or static for each private IP address.
+Network interfaces are configured with private IP addresses for communication within the Azure virtual network and with other Azure resources, and can optionally be configured with public IP addresses for communication outside of Azure (for example, with the internet or a customer's on-premises network).
+A network interface has one primary IP configuration associated with it and can have zero or more secondary private IP configurations attached. For the number of private IP configurations allowed on a network interface in your subscription, see [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits). The primary IP configuration must have a single IP address (a /32 IPv4 address or a /128 IPv6 address) attached, while secondary IP configurations can have either a single IP address or a block of IP addresses (*in preview*) attached. Today, the only allowed blocks are IPv4 blocks of size /28.
+
+You can specify the allocation method as either dynamic or static for each private IP address.
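As an illustration of this structure, the following Python sketch lists a NIC's primary and secondary IP configurations with the `azure-mgmt-network` SDK. The subscription, resource group, and NIC names are placeholders; this shows the general pattern only, not the preview prefix feature itself:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder identifiers: substitute your own values.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
nic = client.network_interfaces.get("myResourceGroup", "myNic")

for config in nic.ip_configurations:
    role = "primary" if config.primary else "secondary"
    print(f"{config.name}: {role}, {config.private_ip_allocation_method}, "
          f"{config.private_ip_address}")
```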
### Internal DNS hostname resolution (for virtual machines)
The limits on IP addressing are found in the full set of [limits for networking]
* Learn about [Public IP Addresses in Azure](public-ip-addresses.md)
-* [Deploy a VM with a static private IP address using the Azure portal](./virtual-networks-static-private-ip-arm-pportal.md)
+* [Deploy a VM with a static private IP address using the Azure portal](./virtual-networks-static-private-ip-arm-pportal.md)
+
+* [Deploy a VM that uses private IP address blocks at larger scale using the Azure portal](./virtual-network-private-ip-address-blocks-portal.md)
virtual-network Virtual Network Private Ip Address Blocks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-private-ip-address-blocks-portal.md
+
+ Title: Assign private IP address prefixes to VMs - Azure portal
+description: Learn how to assign private IP address prefixes to a virtual machine using the Azure portal.
+ Last updated : 11/07/2024
+# Assign private IP address prefixes to virtual machines using the Azure portal - Preview
+
+This article helps you add secondary IP configurations with a CIDR block of private IP addresses to a virtual machine NIC by using the Azure portal. An Azure virtual machine (VM) has one or more network interfaces (NICs) attached to it. Each NIC has one primary IP configuration and zero or more secondary IP configurations assigned to it. The primary IP configuration has a single private IP address assigned to it and can optionally have a public IP address assignment as well. Each secondary IP configuration can have one of the following:
+
+* A private IP address assignment and (optionally) a public IP address assignment, or
+* A CIDR block of private IP addresses (IP address prefix).
+
+All the IP addresses can be statically or dynamically assigned from the available IP address ranges. For more information, see [IP addresses in Azure](public-ip-addresses.md). All IP configurations on a single NIC must be associated with the same subnet. If you need IP addresses from different subnets, you can use multiple NICs on a VM. For more information, see [Create VM with Multiple NICs](/azure/virtual-machines/windows/multiple-nics).
+
+There's a limit to how many IP configurations can be assigned to a NIC. For more information, see the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) article.
+
+> [!IMPORTANT]
+> The capability to add private IP address prefixes to a NIC is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure virtual machine. For more information about creating a virtual machine, see [Create a Windows VM](/azure/virtual-machines/windows/quick-create-portal) or [Create a Linux VM](/azure/virtual-machines/linux/quick-create-portal).
+
+ - The example used in this article is named **myVM**. Replace this value with your virtual machine name.
+
+- To use this feature during Preview, you must first register. To register, complete the [Onboarding Form](https://forms.office.com/r/v1ys2F1xjT).
+
+
+## Add a dynamic private IP address prefix to a VM
+
+You can add a dynamic private IP address prefix to an Azure network interface by completing the following steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+3. In **Virtual machines**, select **myVM** or the name of your virtual machine.
+
+4. Select **Networking** in **Settings**.
+
+5. Select the name of the network interface of the virtual machine. In this example, it's named **myvm237_z1**.
+
+ :::image type="content" source="./media/virtual-network-private-ip-addresses-blocks-portal/select-network-interface.png" alt-text="Screenshot of myVM networking and network interface selection.":::
+
+6. In the network interface, select **IP configurations** in **Settings**.
+
+7. The existing IP configuration is displayed. This configuration is created when the virtual machine is created. To add a private IP address prefix to the virtual machine, select **+ Add**.
+
+8. In **Add IP configuration**, enter or select the following information.
+
+    | Setting | Value |
+    | ------- | ----- |
+    | Name | Enter **ipconfig2**. |
+    | **Private IP address settings** |  |
+    | Private IP Address Type | Select **IP address prefix**. |
+    | Allocation | Select **Dynamic**. |
+
+9. Select **OK**.
+
+ :::image type="content" source="./media/virtual-network-private-ip-addresses-blocks-portal/add-dynamic-ip-prefix-config.png" alt-text="Screenshot of Add IP configuration." lightbox="./media/virtual-network-private-ip-addresses-blocks-portal/add-dynamic-ip-prefix-config-expand.png":::
+
+ > [!NOTE]
+    > Public IP address association is not available for configuration when the IP address prefix option is selected.
+
+10. After you change the IP address configuration, you must restart the VM for the changes to take effect.
+
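If you'd rather script the equivalent change, the sketch below appends a dynamic secondary IP configuration with the `azure-mgmt-network` SDK. The names are placeholders, and because the IP address prefix option is in preview, this sketch doesn't assume the SDK exposes it; a comment marks where that setting would go:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NetworkInterfaceIPConfiguration

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
nic = client.network_interfaces.get("myResourceGroup", "myvm237_z1")

# Append a dynamic secondary IP configuration on the same subnet as the primary.
nic.ip_configurations.append(
    NetworkInterfaceIPConfiguration(
        name="ipconfig2",
        subnet=nic.ip_configurations[0].subnet,
        private_ip_allocation_method="Dynamic",
        # If/when the preview prefix setting is exposed by the SDK, it would be
        # set here; the property name is not confirmed, so none is shown.
    )
)
client.network_interfaces.begin_create_or_update(
    "myResourceGroup", "myvm237_z1", nic
).result()  # Wait for the update; then restart the VM, as noted in step 10.
```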
+## Add a static private IP address prefix to a VM
+
+You can add a static private IP address prefix to a virtual machine by completing the following steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+3. In **Virtual machines**, select **myVM** or the name of your virtual machine.
+
+4. Select **Networking** in **Settings**.
+
+5. Select the name of the network interface of the virtual machine. In this example, it's named **myvm237_z1**.
+
+ :::image type="content" source="./media/virtual-network-private-ip-addresses-blocks-portal/select-network-interface.png" alt-text="Screenshot of myVM networking and network interface selection.":::
+
+6. In the network interface, select **IP configurations** in **Settings**.
+
+7. The existing IP configuration is displayed. This configuration is created when the virtual machine is created. To add a private IP address prefix to the virtual machine, select **+ Add**.
+
+8. In **Add IP configuration**, enter or select the following information.
+
+    | Setting | Value |
+    | ------- | ----- |
+    | Name | Enter **ipconfig2**. |
+    | **Private IP address settings** |  |
+    | Private IP Address Type | Select **IP address prefix**. |
+    | Allocation | Select **Static**. |
+    | IP address | Enter an unused CIDR block of size /28 from the subnet for your virtual machine.<br> For example, in a 10.0.0.0/24 subnet, a valid value would be **10.0.0.16/28**. |
+
+9. Select **OK**.
+
+ :::image type="content" source="./media/virtual-network-private-ip-addresses-blocks-portal/add-static-ip-prefix-config.png" alt-text="Screenshot of Add static IP configuration for a private IP address block." lightbox="./media/virtual-network-private-ip-addresses-blocks-portal/add-static-ip-prefix-config-expand.png":::
+
+ > [!NOTE]
+ > When adding a static IP address, you must specify an unused, valid private IP address CIDR from the subnet the NIC is connected to.
+10. After you change the IP address configuration, you must restart the VM for the changes to take effect.
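To sanity-check the CIDR value from step 8 before entering it, a short Python check (using the illustrative 10.0.0.0/24 subnet above; both values are placeholders) confirms the block is a properly aligned /28 that lies entirely inside the NIC's subnet:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")      # the NIC's subnet (placeholder)
candidate = ipaddress.ip_network("10.0.0.16/28")  # value to enter in the portal

# Valid only if the block is exactly a /28 and sits entirely within the subnet.
# Note that ip_network() itself raises ValueError for a misaligned block such
# as "10.0.0.5/28" (host bits set).
is_valid = candidate.prefixlen == 28 and candidate.subnet_of(subnet)
print(is_valid)  # True
```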
+## Next steps
+
+- Learn more about [public IP addresses](public-ip-addresses.md) in Azure.
+- Learn more about [private IP addresses](private-ip-addresses.md) in Azure.
+- Learn how to [Configure IP addresses for an Azure network interface](virtual-network-network-interface-addresses.md).