Updates from: 01/16/2023 02:06:04
Service Microsoft Docs article Related commit history on GitHub Change details
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
description: Learn how to secure access to APIs by using client certificates. Yo
documentationcenter: '' - - Previously updated : 06/01/2021 Last updated : 01/12/2023 + # How to secure APIs using client certificate authentication in API Management
-API Management provides the capability to secure access to APIs (i.e., client to API Management) using client certificates. You can validate certificates presented by the connecting client and check certificate properties against desired values using policy expressions.
+API Management provides the capability to secure access to APIs (that is, client to API Management) using client certificates and mutual TLS authentication. You can validate certificates presented by the connecting client and check certificate properties against desired values using policy expressions.
-For information about securing access to the back-end service of an API using client certificates (i.e., API Management to backend), see [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md).
+For information about securing access to the backend service of an API using client certificates (that is, API Management to backend), see [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md).
For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
+## Certificate options
-> [!IMPORTANT]
-> To receive and verify client certificates over HTTP/2 in the Developer, Basic, Standard, or Premium tiers you must turn on the "Negotiate client certificate" setting on the "Custom domains" blade as shown below.
+For certificate validation, API Management can check against certificates managed in your API Management instance. If you choose to use API Management to manage client certificates, you have the following options:
+
+* Reference a certificate managed in [Azure Key Vault](../key-vault/general/overview.md)
+* Add a certificate file directly in API Management
+
+Using key vault certificates is recommended because it helps improve API Management security:
+
+* Certificates stored in key vaults can be reused across services
+* Granular [access policies](../key-vault/general/security-features.md#privileged-access) can be applied to certificates stored in key vaults
+* Certificates updated in the key vault are automatically rotated in API Management. After an update in the key vault, a certificate in API Management is updated within 4 hours. You can also manually refresh the certificate using the Azure portal or via the management REST API.
+
+## Prerequisites
+
+* If you have not created an API Management service instance yet, see [Create an API Management service instance](get-started-create-service-instance.md).
+* You need access to the certificate and the password for management in an Azure key vault or upload to the API Management service. The certificate must be in **PFX** format. Self-signed certificates are allowed.
+
+ If you use a self-signed certificate, also install trusted root and intermediate [CA certificates](api-management-howto-ca-certificates.md) in your API Management instance.
+
+ > [!NOTE]
+ > CA certificates for certificate validation are not supported in the Consumption tier.
++
+## Enable API Management instance to receive and verify client certificates
+
+### Developer, Basic, Standard, or Premium tier
+
To receive and verify client certificates over HTTP/2 in the Developer, Basic, Standard, or Premium tiers, you must enable the **Negotiate client certificate** setting on the **Custom domains** blade as shown below.
![Negotiate client certificate](./media/api-management-howto-mutual-certificates-for-clients/negotiate-client-certificate.png)
-> [!IMPORTANT]
-> To receive and verify client certificates in the Consumption tier you must turn on the "Request client certificate" setting on the "Custom domains" blade as shown below.
+### Consumption tier
+To receive and verify client certificates in the Consumption tier, you must enable the **Request client certificate** setting on the **Custom domains** blade as shown below.
![Request client certificate](./media/api-management-howto-mutual-certificates-for-clients/request-client-certificate.png)
Use the [validate-client-certificate](validate-client-certificate-policy.md) policy to validate one or more attributes of a client certificate, including the certificate issuer, subject, thumbprint, whether the certificate is validated against the online revocation list, and others.
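The following is a minimal sketch of how such a policy might look; the issuer subject and thumbprint shown are illustrative placeholders, not values from the source article:

```xml
<validate-client-certificate
    validate-revocation="true"
    validate-trust="true"
    validate-not-before="true"
    validate-not-after="true"
    ignore-error="false">
    <identities>
        <!-- Placeholder values for illustration only; replace with the issuer and thumbprint you expect -->
        <identity
            issuer-subject="CN=Contoso CA"
            thumbprint="0123456789ABCDEF0123456789ABCDEF01234567" />
    </identities>
</validate-client-certificate>
```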
-For more information, see [API Management access restriction policies](api-management-access-restriction-policies.md).
- ## Certificate validation with context variables You can also create policy expressions with the [`context` variable](api-management-policy-expressions.md#ContextVariables) to check client certificates. Examples in the following sections show expressions using the `context.Request.Certificate` property and other `context` properties.
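As a rough sketch of the pattern used in these sections (with `trusted-issuer` and `expected-subject-name` as illustrative placeholders), a policy expression check might look like this:

```xml
<choose>
    <!-- "trusted-issuer" and "expected-subject-name" are placeholders for illustration -->
    <when condition="@(context.Request.Certificate == null || !context.Request.Certificate.Verify() ||
        context.Request.Certificate.Issuer != "trusted-issuer" ||
        context.Request.Certificate.SubjectName.Name != "expected-subject-name")">
        <return-response>
            <set-status code="403" reason="Invalid client certificate" />
        </return-response>
    </when>
</choose>
```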
Below policies can be configured to check the issuer and subject of a client certificate:
``` > [!NOTE]
-> To disable checking certificate revocation list use `context.Request.Certificate.VerifyNoRevocation()` instead of `context.Request.Certificate.Verify()`.
+> To disable checking certificate revocation list, use `context.Request.Certificate.VerifyNoRevocation()` instead of `context.Request.Certificate.Verify()`.
> If client certificate is self-signed, root (or intermediate) CA certificate(s) must be [uploaded](api-management-howto-ca-certificates.md) to API Management for `context.Request.Certificate.Verify()` and `context.Request.Certificate.VerifyNoRevocation()` to work. ### Checking the thumbprint
Below policies can be configured to check the thumbprint of a client certificate
``` > [!NOTE]
-> To disable checking certificate revocation list use `context.Request.Certificate.VerifyNoRevocation()` instead of `context.Request.Certificate.Verify()`.
+> To disable checking certificate revocation list, use `context.Request.Certificate.VerifyNoRevocation()` instead of `context.Request.Certificate.Verify()`.
> If client certificate is self-signed, root (or intermediate) CA certificate(s) must be [uploaded](api-management-howto-ca-certificates.md) to API Management for `context.Request.Certificate.Verify()` and `context.Request.Certificate.VerifyNoRevocation()` to work. ### Checking a thumbprint against certificates uploaded to API Management
The following example shows how to check the thumbprint of a client certificate
``` > [!NOTE]
-> To disable checking certificate revocation list use `context.Request.Certificate.VerifyNoRevocation()` instead of `context.Request.Certificate.Verify()`.
+> To disable checking certificate revocation list, use `context.Request.Certificate.VerifyNoRevocation()` instead of `context.Request.Certificate.Verify()`.
> If client certificate is self-signed, root (or intermediate) CA certificate(s) must be [uploaded](api-management-howto-ca-certificates.md) to API Management for `context.Request.Certificate.Verify()` and `context.Request.Certificate.VerifyNoRevocation()` to work. > [!TIP]
The following example shows how to check the thumbprint of a client certificate
## Next steps -- [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md)-- [How to upload certificates](./api-management-howto-mutual-certificates.md)
+- [How to secure backend services using client certificate authentication](./api-management-howto-mutual-certificates.md)
+- [How to add a custom CA certificate in Azure API Management](./api-management-howto-ca-certificates.md)
+- Learn about [policies in API Management](api-management-howto-policies.md)
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md
Previously updated : 01/26/2021 Last updated : 01/12/2023 -+ # Secure backend services using client certificate authentication in Azure API Management
-API Management allows you to secure access to the backend service of an API using client certificates. This guide shows how to manage certificates in an Azure API Management service instance using the Azure portal. It also explains how to configure an API to use a certificate to access a backend service.
+API Management allows you to secure access to the backend service of an API using client certificates and mutual TLS authentication. This guide shows how to manage certificates in an Azure API Management service instance using the Azure portal. It also explains how to configure an API to use a certificate to access a backend service.
You can also manage API Management certificates using the [API Management REST API](/rest/api/apimanagement/current-ga/certificate).
Using key vault certificates is recommended because it helps improve API Management security:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-* If you have not created an API Management service instance yet, see [Create an API Management service instance][Create an API Management service instance].
+* If you have not created an API Management service instance yet, see [Create an API Management service instance](get-started-create-service-instance.md).
* You should have your backend service configured for client certificate authentication. To configure certificate authentication in the Azure App Service, refer to [this article][to configure certificate authentication in Azure WebSites refer to this article].
-* You need access to the certificate and the password for management in an Azure key vault or upload to the API Management service. The certificate must be in **PFX** format. Self-signed certificates are allowed.
+* You need access to the certificate and the password for management in an Azure key vault or upload to the API Management service. The certificate must be in **PFX** format. Self-signed certificates are allowed.
-### Prerequisites for key vault integration
-
-1. For steps to create a key vault, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
-1. Enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md) in the API Management instance.
-1. Assign a [key vault access policy](../key-vault/general/assign-access-policy-portal.md) to the managed identity with permissions to get and list secrets from the vault. To add the policy:
- 1. In the portal, navigate to your key vault.
- 1. Select **Settings > Access policies > + Add Access Policy**.
- 1. Select **Secret permissions**, then select **Get** and **List**.
- 1. In **Select principal**, select the resource name of your managed identity. If you're using a system-assigned identity, the principal is the name of your API Management instance.
-1. Create or import a certificate to the key vault. See [Quickstart: Set and retrieve a certificate from Azure Key Vault using the Azure portal](../key-vault/certificates/quick-create-portal.md).
-1. When adding a key vault certificate to your API Management instance, you must have permissions to list secrets from the key vault.
--
-## Add a key vault certificate
-
-See [Prerequisites for key vault integration](#prerequisites-for-key-vault-integration).
-
-> [!CAUTION]
-> When using a key vault certificate in API Management, be careful not to delete the certificate, key vault, or managed identity used to access the key vault.
-
-To add a key vault certificate to API Management:
-
-1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
-1. Under **Security**, select **Certificates**.
-1. Select **Certificates** > **+ Add**.
-1. In **Id**, enter a name of your choice.
-1. In **Certificate**, select **Key vault**.
-1. Enter the identifier of a key vault certificate, or choose **Select** to select a certificate from a key vault.
- > [!IMPORTANT]
- > If you enter a key vault certificate identifier yourself, ensure that it doesn't have version information. Otherwise, the certificate won't rotate automatically in API Management after an update in the key vault.
-1. In **Client identity**, select a system-assigned or an existing user-assigned managed identity. Learn how to [add or modify managed identities in your API Management service](api-management-howto-use-managed-service-identity.md).
- > [!NOTE]
- > The identity needs permissions to get and list certificate from the key vault. If you haven't already configured access to the key vault, API Management prompts you so it can automatically configure the identity with the necessary permissions.
-1. Select **Add**.
---
- :::image type="content" source="media/api-management-howto-mutual-certificates/apim-client-cert-kv.png" alt-text="Add key vault certificate":::
+ If you use a self-signed certificate:
+ * Install trusted root and intermediate [CA certificates](api-management-howto-ca-certificates.md) in your API Management instance.
-1. Select **Save**.
-
-## Upload a certificate
-
-To upload a client certificate to API Management:
+ > [!NOTE]
+ > CA certificates for certificate validation are not supported in the Consumption tier.
+ * [Disable certificate chain validation](#disable-certificate-chain-validation-for-self-signed-certificates)
-1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
-1. Under **Security**, select **Certificates**.
-1. Select **Certificates** > **+ Add**.
-1. In **Id**, enter a name of your choice.
-1. In **Certificate**, select **Custom**.
-1. Browse to select the certificate .pfx file, and enter its password.
-1. Select **Add**.
-
- :::image type="content" source="media/api-management-howto-mutual-certificates/apim-client-cert-add.png" alt-text="Upload client certificate":::
--
-1. Select **Save**.
After the certificate is uploaded, it shows in the **Certificates** window. If you have many certificates, make a note of the thumbprint of the desired certificate in order to configure an API to use a client certificate for [gateway authentication](#configure-an-api-to-use-client-certificate-for-gateway-authentication).
-> [!NOTE]
-> To turn off certificate chain validation when using, for example, a self-signed certificate, follow the steps described in [Self-signed certificates](#self-signed-certificates), later in this article.
## Configure an API to use client certificate for gateway authentication
After the certificate is uploaded, it shows in the **Certificates** window. If you have many certificates, make a note of the thumbprint of the desired certificate in order to configure an API to use a client certificate for [gateway authentication](#configure-an-api-to-use-client-certificate-for-gateway-authentication).
> [!TIP] > When a certificate is specified for gateway authentication for the backend service of an API, it becomes part of the policy for that API, and can be viewed in the policy editor.
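For illustration, the resulting gateway authentication policy might look roughly like the following sketch, which uses the `authentication-certificate` policy with a placeholder thumbprint (a certificate managed in API Management can also be referenced by `certificate-id` instead):

```xml
<policies>
    <inbound>
        <base />
        <!-- Placeholder thumbprint for illustration; use the thumbprint (or certificate-id)
             of the certificate added to your API Management instance -->
        <authentication-certificate thumbprint="0123456789ABCDEF0123456789ABCDEF01234567" />
    </inbound>
</policies>
```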
-## Self-signed certificates
+## Disable certificate chain validation for self-signed certificates
If you are using self-signed certificates, you will need to disable certificate chain validation for API Management to communicate with the backend system. Otherwise it will return a 500 error code. To configure this, you can use the [`New-AzApiManagementBackend`](/powershell/module/az.apimanagement/new-azapimanagementbackend) (for new backend) or [`Set-AzApiManagementBackend`](/powershell/module/az.apimanagement/set-azapimanagementbackend) (for existing backend) PowerShell cmdlets and set the `-SkipCertificateChainValidation` parameter to `True`.
To delete a certificate, select it and then select **Delete** from the context m
## Next steps * [How to secure APIs using client certificate authentication in API Management](api-management-howto-mutual-certificates-for-clients.md)
+* [How to add a custom CA certificate in Azure API Management](./api-management-howto-ca-certificates.md)
* Learn about [policies in API Management](api-management-howto-policies.md)
api-management Api Management Howto Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-properties.md
Previously updated : 02/09/2021 Last updated : 01/13/2023 + # Use named values in Azure API Management policies
Using key vault secrets is recommended because it helps improve API Management s
* Granular [access policies](../key-vault/general/security-features.md#privileged-access) can be applied to secrets * Secrets updated in the key vault are automatically rotated in API Management. After an update in the key vault, a named value in API Management is updated within 4 hours. You can also manually refresh the secret using the Azure portal or via the management REST API.
+## Prerequisites
+
+* If you have not created an API Management service instance yet, see [Create an API Management service instance](get-started-create-service-instance.md).
+ ### Prerequisites for key vault integration
-1. For steps to create a key vault, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
-1. Enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md) in the API Management instance.
-1. Assign a [key vault access policy](../key-vault/general/assign-access-policy-portal.md) to the managed identity with permissions to get and list secrets from the vault. To add the policy:
- 1. In the portal, navigate to your key vault.
- 1. Select **Settings > Access policies > +Add Access Policy**.
- 1. Select **Secret permissions**, then select **Get** and **List**.
- 1. In **Select principal**, select the resource name of your managed identity. If you're using a system-assigned identity, the principal is the name of your API Management instance.
-1. Create or import a secret to the key vault. See [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md).
-1. When adding a key vault secret to your API Management instance, you must have permissions to list secrets from the key vault.
+ - If you don't already have a key vault, create one. For steps to create a key vault, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
+
+ To create or import a secret to the key vault, see [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md).
+
+- Enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md) in the API Management instance.
++ [!INCLUDE [api-management-key-vault-network](../../includes/api-management-key-vault-network.md)] ## Add or edit a named value
-### Add a key vault secret
+### Add a key vault secret to API Management
See [Prerequisites for key vault integration](#prerequisites-for-key-vault-integration). +
+> [!IMPORTANT]
+> When adding a key vault secret to your API Management instance, you must have permissions to list secrets from the key vault.
+ > [!CAUTION] > When using a key vault secret in API Management, be careful not to delete the secret, key vault, or managed identity used to access the key vault.
See [Prerequisites for key vault integration](#prerequisites-for-key-vault-integ
:::image type="content" source="media/api-management-howto-properties/add-property.png" alt-text="Add key vault secret value":::
-### Add a plain or secret value
+### Add a plain or secret value to API Management
### [Portal](#tab/azure-portal)
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
The `tenantId` property identifies which Azure AD tenant the identity belongs to
## Configure Key Vault access using a managed identity
-Refer to the following configurations that are needed for API Management to access secrets and certificates from Key Vault.s
+The following configurations are needed for API Management to access secrets and certificates from an Azure key vault.
-### Configure Key Vault access policy
-
-To configure an access policy using the portal:
-
-1. In the Azure portal, navigate to your key vault.
-1. Select **Settings > Access policies > + Add Access Policy**.
-1. Select **Secret permissions**, then select **Get** and **List**.
-1. In **Select principal**, select the resource name of your managed identity. If you're using a system-assigned identity, the principal is the name of your API Management instance.
-1. Select **Add**.
[!INCLUDE [api-management-key-vault-network](../../includes/api-management-key-vault-network.md)]
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
Previously updated : 01/11/2022 Last updated : 01/13/2023 + # Configure a custom domain name for your Azure API Management instance
There are several API Management endpoints to which you can assign a custom doma
API Management supports custom TLS certificates or certificates imported from Azure Key Vault. You can also enable a free, managed certificate.
-> [!WARNING
+> [!WARNING]
> If you require certificate pinning, please use a custom domain name and either a custom or Key Vault certificate, not the default certificate or the free, managed certificate. We don't recommend taking a hard dependency on a certificate that you don't manage. # [Custom](#tab/custom)
If you use Azure Key Vault to manage a custom domain TLS certificate, make sure
To fetch a TLS/SSL certificate, API Management must have the list and get secrets permissions on the Azure Key Vault containing the certificate. * When you use the Azure portal to import the certificate, all the necessary configuration steps are completed automatically. * When you use command-line tools or management API, these permissions must be granted manually, in two steps:
- 1. On the **Managed identities** page of your API Management instance, enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md). Note the principal ID on that page.
- 1. Give the list and get secrets permissions to this principal ID on the Azure Key Vault containing the certificate.
+ 1. On the **Managed identities** page of your API Management instance, enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md). Note the principal ID on that page.
+ 1. Assign permissions to the managed identity to access the key vault. Use steps in the following section.
+
+ [!INCLUDE [api-management-key-vault-access](../../includes/api-management-key-vault-access.md)]
+ If the certificate is set to `autorenew` and your API Management tier has an SLA (that is, in all tiers except the Developer tier), API Management will pick up the latest version automatically, without downtime to the service.
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
For complete release version information, see [Version log](version-log.md#novem
New for this release: -- Azure Arc data controller
- - Support database as resource in Azure Arc data resource provider
- - Arc-enabled PostgreSQL server - Add support for automated backups
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
For stateful alerts, the alert is considered resolved when:
When an alert is considered resolved, the alert rule sends out a resolved notification using webhooks or email, and the monitor state in the Azure portal is set to resolved.
-## Manage your alerts programmatically
-
-You can query your alerts instances to create custom views outside of the Azure portal, or to analyze your alerts to identify patterns and trends.
-We recommended that you use [Azure Resource Graphs](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade) with the 'AlertsManagementResources' schema for managing alerts across multiple subscriptions. For a sample query, see [Azure Resource Graph sample queries for Azure Monitor](../resource-graph-samples.md).
-
-You can use Azure Resource Graphs:
-
-You can also use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) for lower scale querying or to update fired alerts.
- ## Pricing See the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/) for information about pricing.
See the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details
- [Create a new alert rule](alerts-log.md) - [Learn about action groups](../alerts/action-groups.md) - [Learn about alert processing rules](alerts-action-rules.md)
+- [Manage your alerts programmatically](alerts-manage-alert-instances.md#manage-your-alerts-programmatically)
azure-monitor Tutorial Outages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/tutorial-outages.md
ms.contributor: cawa Previously updated : 12/15/2022 Last updated : 01/11/2023
The change details pane also shows important information, including who made the
Now that you've discovered the web app in-guest change and understand next steps, you can proceed with troubleshooting the issue.
+## Virtual network changes
+
+Knowing what changed in your application's networking resources is critical due to their effect on connectivity, availability, and performance. Change Analysis supports all network resource changes and captures those changes immediately. Networking changes include:
+
+- Firewalls created or edited
+- Network critical changes (for example, blocking port 22 for TCP connections)
+- Load balancer changes
+- Virtual network changes
+
+The sample application includes a virtual network to make sure the application remains secure. Via the Azure portal, you can view and assess the network changes captured by Change Analysis.
+++ ## Next steps Learn more about [Change Analysis](./change-analysis.md).
azure-monitor Alert Management Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/alert-management-solution.md
Last updated 01/02/2022
![Alert Management icon](media/alert-management-solution/icon.png) > [!CAUTION]
-> This solution is no longer in active development and may not work as expected. We suggest you try using [Azure Resource Graph to query Azure Monitor alerts](../alerts/alerts-overview.md#manage-your-alerts-programmatically).
+> This solution is no longer in active development and may not work as expected. We suggest you try using [Azure Resource Graph to query Azure Monitor alerts](../alerts/alerts-manage-alert-instances.md#manage-your-alerts-programmatically).
The Alert Management solution helps you analyze all of the alerts in your Log Analytics repository. These alerts may have come from a variety of sources including those sources [created by Log Analytics](../alerts/alerts-types.md#log-alerts) or [imported from Nagios or Zabbix](../vm/monitor-virtual-machine.md). The solution also imports alerts from any [connected System Center Operations Manager management groups](../agents/om-agents.md).
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data that's sent to you
## Limitations -- Custom logs created via [HTTP Data Collector API](./data-collector-api.md), or 'dataSources' API won't be supported in export. Custom log created using [data collection rule](./logs-ingestion-api-overview.md) can be exported.
+- Custom logs created via the [HTTP Data Collector API](./data-collector-api.md) or the 'dataSources' API aren't supported in export. This includes text logs consumed by the MMA. Custom logs created using a [data collection rule](./logs-ingestion-api-overview.md) can be exported, including text-based logs.
- We are gradually adding support for more tables in data export, but export is currently limited to the tables specified in the [supported tables](#supported-tables) section. - You can define up to 10 enabled rules in your workspace; each can include multiple tables. You can create more rules in the workspace in a disabled state. - Destinations must be in the same region as the Log Analytics workspace.
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md
na Previously updated : 08/05/2022 Last updated : 01/16/2023
This naming convention allows for multiple configuration files, one per database
### Result files and syslog
-For the `-c backup` command, AzAcSnap writes to a *\*.result* file and to the system log, `/var/log/messages`, by using the `logger` command. The *\*.result* filename has the same base name as the log file, and goes into the same location. The *\*.result* file is a simple one line output file, such as the following example:
+For the `-c backup` command, AzAcSnap writes to a *\*.result* file. The purpose of the *\*.result* file is to provide high-level confirmation of success/failure. If the *\*.result* file is empty, then assume failure. Any output written to the *\*.result* file is also output to the system log (for example, `/var/log/messages`) by using the `logger` command. The *\*.result* filename has the same base name as the log file to allow for matching the result file with the configuration file and the backup log file. The *\*.result* file goes into the same location as the other log files and is a simple one line output file, such as the following example:
```output Database # 1 (PR1) : completed ok
azure-sql-edge Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-portal.md
description: Learn how to deploy Azure SQL Edge using the Azure portal
Previously updated : 09/16/2022 Last updated : 01/13/2023 keywords: deploy SQL Edge - # Deploy Azure SQL Edge
-Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] is a relational database engine optimized for IoT and Azure IoT Edge deployments. It provides capabilities to create a high-performance data storage and processing layer for IoT applications and solutions. This quickstart shows you how to get started with creating an Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] module through Azure IoT Edge using the Azure portal.
+Azure SQL Edge is a relational database engine optimized for IoT and Azure IoT Edge deployments. It provides capabilities to create a high-performance data storage and processing layer for IoT applications and solutions. This quickstart shows you how to get started with creating an Azure SQL Edge module through Azure IoT Edge using the Azure portal.
## Before you begin
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* Sign in to the [Azure portal](https://portal.azure.com/).
-* Create an [Azure IoT Hub](../iot-hub/iot-hub-create-through-portal.md).
-* Create an [Azure IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md).
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
+- Sign in to the [Azure portal](https://portal.azure.com/).
+- Create an [Azure IoT Hub](../iot-hub/iot-hub-create-through-portal.md).
+- Create an [Azure IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md).
> [!NOTE] > To deploy an Azure Linux VM as an IoT Edge device, see this [quickstart guide](../iot-edge/quickstart-linux.md). ## Deploy SQL Edge Module from Azure Marketplace
-Azure Marketplace is an online applications and services marketplace where you can browse through a wide range of enterprise applications and solutions that are certified and optimized to run on Azure, including [IoT Edge modules](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules). Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] can be deployed to an edge device through the marketplace.
+Azure Marketplace is an online applications and services marketplace where you can browse through a wide range of enterprise applications and solutions that are certified and optimized to run on Azure, including [IoT Edge modules](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules). Azure SQL Edge can be deployed to an edge device through the marketplace.
-1. Find the Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] module on the Azure Marketplace.
+1. Find the Azure SQL Edge module on the Azure Marketplace.
- :::image type="content" source="media/deploy-portal/find-offer-marketplace.png" alt-text="Screenshot of SQL Edge in Marketplace.":::
+ :::image type="content" source="media/deploy-portal/find-offer-marketplace.png" alt-text="Screenshot of SQL Edge in the Azure Marketplace.":::
1. Pick the software plan that best matches your requirements and select **Create**.
Azure Marketplace is an online applications and services marketplace where you c
1. On the Target Devices for IoT Edge Module page, specify the following details and then select **Create**. | Field | Description |
- |||
- | **Subscription** | The Azure subscription under which the IoT Hub was created |
- | **IoT Hub** | Name of the IoT Hub where the IoT Edge device is registered and then select "Deploy to a device" option |
- | **IoT Edge Device Name** | Name of the IoT Edge device where [!INCLUDE [sql-edge](../../includes/sql-edge.md)] would be deployed |
+ | | |
+ | **Subscription** | The Azure subscription under which the IoT Hub was created |
+ | **IoT Hub** | Name of the IoT Hub where the IoT Edge device is registered and then select "Deploy to a device" option |
+ | **IoT Edge Device Name** | Name of the IoT Edge device where SQL Edge would be deployed |
-1. On the **Set Modules on device:** page, select the Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] module under **IoT Edge Modules**. The default module name is set to *AzureSQLEdge*.
+1. On the **Set Modules on device:** page, select the Azure SQL Edge module under **IoT Edge Modules**. The default module name is set to *AzureSQLEdge*.
1. On the *Module Settings* section of the **Update IoT Edge Module** pane, specify the desired values for the *IoT Edge Module Name*, *Restart Policy* and *Desired Status*. > [!IMPORTANT] > Don't change or update the **Image URI** settings on the module.
-1. On the *Environment Variables* section of the **Update IoT Edge Module** pane, specify the desired values for the environment variables. For a complete list of Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] environment variables, see [Configure using environment variables](configure.md#configure-by-using-environment-variables). The following default environment variables are defined for the module.
+1. On the *Environment Variables* section of the **Update IoT Edge Module** pane, specify the desired values for the environment variables. For a complete list of Azure SQL Edge environment variables, see [Configure using environment variables](configure.md#configure-by-using-environment-variables). The following default environment variables are defined for the module.
- |**Parameter** |**Description**|
- |||
- | MSSQL_SA_PASSWORD | Change the default value to specify a strong password for the [!INCLUDE [sql-edge](../../includes/sql-edge.md)] admin account. |
- | MSSQL_LCID | Change the default value to set the desired language ID to use for [!INCLUDE [sql-edge](../../includes/sql-edge.md)]. For example, 1036 is French. |
- | MSSQL_COLLATION | Change the default value to set the default collation for [!INCLUDE [sql-edge](../../includes/sql-edge.md)]. This setting overrides the default mapping of language ID (LCID) to collation. |
+ | **Parameter** | **Description** |
+ | | |
+ | MSSQL_SA_PASSWORD | Change the default value to specify a strong password for the SQL Edge admin account. |
+ | MSSQL_LCID | Change the default value to set the desired language ID to use for SQL Edge. For example, 1036 is French. |
+ | MSSQL_COLLATION | Change the default value to set the default collation for SQL Edge. This setting overrides the default mapping of language ID (LCID) to collation. |
> [!IMPORTANT] > Don't change or update the `ACCEPT_EULA` environment variable for the module.
Azure Marketplace is an online applications and services marketplace where you c
- **Binds** and **Mounts**
- If you need to deploy more than one [!INCLUDE [sql-edge](../../includes/sql-edge.md)] module, ensure that you update the mounts option to create a new source and target pair for the persistent volume. For more information on mounts and volume, refer [Use volumes](https://docs.docker.com/storage/volumes/) on Docker documentation.
+ If you need to deploy more than one SQL Edge module, ensure that you update the mounts option to create a new source and target pair for the persistent volume. For more information on mounts and volumes, see [Use volumes](https://docs.docker.com/storage/volumes/) in the Docker documentation.
```json {
- "HostConfig": {
- "CapAdd": [
- "SYS_PTRACE"
- ],
- "Binds": [
- "sqlvolume:/sqlvolume"
- ],
- "PortBindings": {
- "1433/tcp": [
- {
- "HostPort": "1433"
- }
- ]
- },
- "Mounts": [
- {
- "Type": "volume",
- "Source": "sqlvolume",
- "Target": "/var/opt/mssql"
- }
- ]
- },
- "Env": [
- "MSSQL_AGENT_ENABLED=TRUE",
- "ClientTransportType=AMQP_TCP_Only",
- "PlanId=asde-developer-on-iot-edge"
- ]
+ "HostConfig": {
+ "CapAdd": [
+ "SYS_PTRACE"
+ ],
+ "Binds": [
+ "sqlvolume:/sqlvolume"
+ ],
+ "PortBindings": {
+ "1433/tcp": [
+ {
+ "HostPort": "1433"
+ }
+ ]
+ },
+ "Mounts": [
+ {
+ "Type": "volume",
+ "Source": "sqlvolume",
+ "Target": "/var/opt/mssql"
+ }
+ ]
+ },
+ "Env": [
+ "MSSQL_AGENT_ENABLED=TRUE",
+ "ClientTransportType=AMQP_TCP_Only",
+ "PlanId=asde-developer-on-iot-edge"
+ ]
} ``` > [!IMPORTANT]
- > Do not change the `PlanId` environment variable defined in the create config setting. If this value is changed, the Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] container will fail to start.
+ > Do not change the `PlanId` environment variable defined in the create config setting. If this value is changed, the Azure SQL Edge container will fail to start.
> [!WARNING] > If you reinstall the module, remember to remove any existing bindings first, otherwise your environment variables will not be updated.
Azure Marketplace is an online applications and services marketplace where you c
## Connect to Azure SQL Edge
-The following steps use the Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] command-line tool, **sqlcmd**, inside the container to connect to Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)].
+The following steps use the Azure SQL Edge command-line tool, **sqlcmd**, inside the container to connect to Azure SQL Edge.
> [!NOTE]
-> SQL Server command line tools, including **sqlcmd**, are not available inside the ARM64 version of Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] containers.
+> SQL Server command line tools, including **sqlcmd**, are not available inside the ARM64 version of Azure SQL Edge containers.
-1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example `AzureSQLEdge` is name specified by the `Name` parameter of your IoT Edge Module.
+1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example, `AzureSQLEdge` is the name specified by the `Name` parameter of your IoT Edge Module.
```bash sudo docker exec -it AzureSQLEdge "bash"
Now, run a query to return data from the `Inventory` table.
## Connect from outside the container
-You can connect and run SQL queries against your Azure [!INCLUDE [sql-edge](../../includes/sql-edge.md)] instance from any external Linux, Windows, or macOS tool that supports SQL connections. For more information on connecting to a [!INCLUDE [sql-edge](../../includes/sql-edge.md)] container from outside, refer [Connect and Query Azure SQL Edge](./connect.md).
+You can connect and run SQL queries against your Azure SQL Edge instance from any external Linux, Windows, or macOS tool that supports SQL connections. For more information on connecting to a SQL Edge container from outside, see [Connect and Query Azure SQL Edge](./connect.md).
-In this quickstart, you deployed a [!INCLUDE [sql-edge](../../includes/sql-edge.md)] Module on an IoT Edge device.
+In this quickstart, you deployed a SQL Edge Module on an IoT Edge device.
## Next steps
azure-sql-edge Disconnected Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/disconnected-deployment.md
Title: Deploy Azure SQL Edge with Docker - Azure SQL Edge
description: Learn about deploying Azure SQL Edge with Docker - Previously updated : 09/22/2020 Last updated : 01/13/2023
keywords:
- SQL Edge - container - docker- - # Deploy Azure SQL Edge with Docker In this quickstart, you use Docker to pull and run the Azure SQL Edge container image. Then connect with **sqlcmd** to create your first database and run queries.
-This image consists of Azure SQL Edge based on Ubuntu 18.04. It can be used with the Docker Engine 1.8+ on Linux or on Docker for Mac/Windows.
+This image consists of SQL Edge based on Ubuntu 18.04. It can be used with the Docker Engine 1.8+ on Linux.
+
+Azure SQL Edge containers aren't supported on the following platforms for production workloads:
+
+- Windows
+- macOS
+- Azure IoT Edge for Linux on Windows (EFLOW)
## Prerequisites -- Docker Engine 1.8+ on any supported Linux distribution or Docker for Mac/Windows. For more information, see [Install Docker](https://docs.docker.com/engine/installation/). Since the Azure SQL Edge images are based on Ubuntu 18.04, its recommended that you use a Ubuntu 18.04 docker host.-- Docker **overlay2** storage driver. This is the default for most users. If you find that you are not using this storage provider and need to change, please see the instructions and warnings in the [docker documentation for configuring overlay2](https://docs.docker.com/storage/storagedriver/overlayfs-driver/#configure-docker-with-the-overlay-or-overlay2-storage-driver).
+- Docker Engine 1.8+ on any supported Linux distribution. For more information, see [Install Docker](https://docs.docker.com/engine/installation/). Since the SQL Edge images are based on Ubuntu 18.04, we recommend that you use an Ubuntu 18.04 Docker host.
+- Docker **overlay2** storage driver. This is the default for most users. If you find that you aren't using this storage provider and need to change, see the instructions and warnings in the [Docker documentation for configuring overlay2](https://docs.docker.com/storage/storagedriver/overlayfs-driver/#configure-docker-with-the-overlay-or-overlay2-storage-driver).
- Minimum of 10 GB of disk space. - Minimum of 1 GB of RAM. - [Hardware requirements for Azure SQL Edge](./features.md#hardware-support).
+> [!NOTE]
+> For the bash commands in this article `sudo` is used. If you don't want to use `sudo` to run Docker, you can configure a Docker group and add users to that group. For more information, see [Post-installation steps for Linux](https://docs.docker.com/engine/install/linux-postinstall/).
## Pull and run the container image
-Before starting the following steps, make sure that you have selected your preferred shell (bash, PowerShell, or cmd) at the top of this article.
- 1. Pull the Azure SQL Edge container image from Microsoft Container Registry.
- - Pull the Azure SQL Edge container Image
- ```bash
- sudo docker pull mcr.microsoft.com/azure-sql-edge:latest
- ```
+ ```bash
+ sudo docker pull mcr.microsoft.com/azure-sql-edge:latest
+ ```
-> [!NOTE]
-> For the bash commands in this article `sudo` is used. On macOS and Windows, `sudo` might not be required. On Linux, if you do not want to use `sudo` to run Docker, you can configure a Docker group and add users to that group. For more information, see [Post-installation steps for Linux](https://docs.docker.com/engine/install/linux-postinstall/).
+ The previous command pulls the latest SQL Edge container image. To see all available images, see the [azure-sql-edge Docker hub page](https://hub.docker.com/_/microsoft-azure-sql-edge).
-The previous command pulls the latest Azure SQL Edge container images. To see all available images, see [the azure-sql-egde Docker hub page](https://hub.docker.com/_/microsoft-azure-sql-edge).
+1. To run the container image with Docker, use the following command from a bash shell:
-2. To run the container image with Docker, you can use the following command from a bash shell (Linux/macOS) or elevated PowerShell command prompt.
-
- - Start a Azure SQL Edge instance running as the Developer edition
- ```bash
- sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge
- ```
+ - Start an Azure SQL Edge instance running as the Developer edition:
- - Start a Azure SQL Edge instance running as the Premium edition
- ```bash
- sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password' -e 'MSSQL_PID=Premium' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge
- ```
- > [!NOTE]
- > If you are using PowerShell on Windows to run these commands use double quotes instead of single quotes.
+ ```bash
+ sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge
+ ```
+ - Start an Azure SQL Edge instance running as the Premium edition:
- > [!NOTE]
- > The password should follow the Microsoft SQL Database Engine default password policy, otherwise the container can not setup SQL engine and will stop working. By default, the password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols. You can examine the error log by executing the [docker logs](https://docs.docker.com/engine/reference/commandline/logs/) command.
-
- The following table provides a description of the parameters in the previous `docker run` example:
+ ```bash
+ sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password' -e 'MSSQL_PID=Premium' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge
+ ```
- | Parameter | Description |
- |--|--|
- | **-e "ACCEPT_EULA=Y"** | Set the **ACCEPT_EULA** variable to any value to confirm your acceptance of the [End-User Licensing Agreement](https://go.microsoft.com/fwlink/?linkid=2139274). Required setting for the Azure SQL Edge image. |
- | **-e "MSSQL_SA_PASSWORD=yourStrong(!)Password"** | Specify your own strong password that is at least 8 characters and meets the [Azure SQL Edge password requirements](/sql/relational-databases/security/password-policy). Required setting for the Azure SQL Edge image. |
- | **-p 1433:1433** | Map a TCP port on the host environment (first value) with a TCP port in the container (second value). In this example, Azure SQL Edge is listening on TCP 1433 in the container and this is exposed to the port, 1433, on the host. |
- | **--name azuresqledge** | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you cannot reuse this same name. |
- | **-d** | Run the container in the background (daemon) |
+ > [!IMPORTANT]
+ > The password should follow the Microsoft SQL Database Engine default password policy, otherwise the container can't set up the SQL Database Engine and will stop working. By default, the password must be at least 8 characters long and contain characters from three of the following four sets: uppercase letters, lowercase letters, base-10 digits, and symbols. You can examine the error log by executing the [docker logs](https://docs.docker.com/engine/reference/commandline/logs/) command.
- For a complete list of all Azure SQL Edge environment variable, see [Configure Azure SQL Edge with Environment Variables](configure.md#configure-by-using-environment-variables).You can also use a [mssql.conf file](configure.md#configure-by-using-an-mssqlconf-file) to configure Azure SQL Edge Containers.
+ The following table provides a description of the parameters in the previous `docker run` examples:
+
+ | Parameter | Description |
+ | | |
+ | **-e "ACCEPT_EULA=Y"** | Set the **ACCEPT_EULA** variable to any value to confirm your acceptance of the [End-User Licensing Agreement](https://go.microsoft.com/fwlink/?linkid=2139274). Required setting for the SQL Edge image. |
+ | **-e "MSSQL_SA_PASSWORD=yourStrong(!)Password"** | Specify your own strong password that is at least 8 characters and meets the [Azure SQL Edge password requirements](/sql/relational-databases/security/password-policy). Required setting for the SQL Edge image. |
+ | **-p 1433:1433** | Map a TCP port on the host environment (first value) with a TCP port in the container (second value). In this example, SQL Edge is listening on TCP 1433 in the container and this is exposed to the port, 1433, on the host. |
+ | **--name azuresqledge** | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you can't reuse this same name. |
+ | **-d** | Run the container in the background (daemon) |
+
+ For a complete list of all Azure SQL Edge environment variables, see [Configure Azure SQL Edge with Environment Variables](configure.md#configure-by-using-environment-variables). You can also use a [mssql.conf file](configure.md#configure-by-using-an-mssqlconf-file) to configure SQL Edge containers.
+
+1. To view your Docker containers, use the `docker ps` command.
-3. To view your Docker containers, use the `docker ps` command.
-
```bash sudo docker ps -a ```
-4. If the **STATUS** column shows a status of **Up**, then Azure SQL Edge is running in the container and listening on the port specified in the **PORTS** column. If the **STATUS** column for your Azure SQL Edge container shows **Exited**, see the Troubleshooting section of Azure SQL Edge Documentation.
+1. If the **STATUS** column shows a status of **Up**, then SQL Edge is running in the container and listening on the port specified in the **PORTS** column. If the **STATUS** column for your SQL Edge container shows **Exited**, see the Troubleshooting section of Azure SQL Edge documentation.
- The `-h` (host name) parameter is also useful, but it is not used in this tutorial for simplicity. This changes the internal name of the container to a custom value. This is the name you'll see returned in the following Transact-SQL query:
+ The `-h` (host name) parameter is also useful, but it isn't used in this tutorial for simplicity. This changes the internal name of the container to a custom value. This is the name you'll see returned in the following Transact-SQL query:
- ```sql
- SELECT @@SERVERNAME,
- SERVERPROPERTY('ComputerNamePhysicalNetBIOS'),
- SERVERPROPERTY('MachineName'),
- SERVERPROPERTY('ServerName')
- ```
+ ```sql
+ SELECT @@SERVERNAME,
+ SERVERPROPERTY('ComputerNamePhysicalNetBIOS'),
+ SERVERPROPERTY('MachineName'),
+ SERVERPROPERTY('ServerName')
+ ```
- Setting `-h` and `--name` to the same value is a good way to easily identify the target container.
+ Setting `-h` and `--name` to the same value is a good way to easily identify the target container.
-5. As a final step, change your SA password because the `SA_PASSWORD` is visible in `ps -eax` output and stored in the environment variable of the same name. See steps below.
+1. As a final step, change your SA password because the `MSSQL_SA_PASSWORD` is visible in `ps -eax` output and stored in the environment variable of the same name. See the following steps.
## Change the SA password
-The **SA** account is a system administrator on the Azure SQL Edge instance that gets created during setup. After creating your Azure SQL Edge container, the `MSSQL_SA_PASSWORD` environment variable you specified is discoverable by running `echo $SA_PASSWORD` in the container. For security purposes, change your SA password.
+The **SA** account is a system administrator on the Azure SQL Edge instance that gets created during setup. After creating your SQL Edge container, the `MSSQL_SA_PASSWORD` environment variable you specified is discoverable by running `echo $MSSQL_SA_PASSWORD` in the container. For security purposes, change your SA password.
1. Choose a strong password to use for the SA user.
-2. Use `docker exec` to run **sqlcmd** to change the password using Transact-SQL. In the following example, replace the old password, `<YourStrong!Passw0rd>`, and the new password, `<YourNewStrong!Passw0rd>`, with your own password values.
+1. Use `docker exec` to run **sqlcmd** to change the password using Transact-SQL. In the following example, replace the old password, `<YourStrong!Passw0rd>`, and the new password, `<YourNewStrong!Passw0rd>`, with your own password values.
```bash sudo docker exec -it azuresqledge /opt/mssql-tools/bin/sqlcmd \
The **SA** account is a system administrator on the Azure SQL Edge instance that
## Connect to Azure SQL Edge
-The following steps use the Azure SQL Edge command-line tool, **sqlcmd**, inside the container to connect to Azure SQL Edge.
+The following steps use the Azure SQL Edge command-line tool, **sqlcmd**, inside the container to connect to SQL Edge.
-> [!NOTE]
-> sqlcmd tool is not available inside the ARM64 version of SQL Edge containers.
+> [!NOTE]
+> **sqlcmd** is not available inside the ARM64 version of SQL Edge containers.
-1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example `azuresqledge` is name specified by the `--name` parameter when you created the container.
+1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example, `azuresqledge` is the name specified by the `--name` parameter when you created the container.
```bash sudo docker exec -it azuresqledge "bash" ```
-2. Once inside the container, connect locally with sqlcmd. Sqlcmd is not in the path by default, so you have to specify the full path.
+1. Once inside the container, connect locally with sqlcmd. Sqlcmd isn't in the path by default, so you have to specify the full path.
```bash /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "<YourNewStrong@Passw0rd>" ```
- > [!TIP]
+ > [!TIP]
> You can omit the password on the command-line to be prompted to enter it.
-3. If successful, you should get to a **sqlcmd** command prompt: `1>`.
+1. If successful, you should get to a **sqlcmd** command prompt: `1>`.
## Create and query data
-The following sections walk you through using **sqlcmd** and Transact-SQL to create a new database, add data, and run a simple query.
+The following sections walk you through using **sqlcmd** and Transact-SQL to create a new database, add data, and run a query.
### Create a new database
The following steps create a new database named `TestDB`.
Go ```
-2. On the next line, write a query to return the name of all of the databases on your server:
+1. On the next line, write a query to return the name of all of the databases on your server:
```sql SELECT Name from sys.Databases
Next create a new table, `Inventory`, and insert two new rows.
USE TestDB ```
-2. Create new table named `Inventory`:
+1. Create new table named `Inventory`:
```sql
- CREATE TABLE Inventory (id INT, name NVARCHAR(50), quantity INT)
+ CREATE TABLE Inventory (id INT, name NVARCHAR(50), quantity INT);
```
-3. Insert data into the new table:
+1. Insert data into the new table:
```sql INSERT INTO Inventory VALUES (1, 'banana', 150); INSERT INTO Inventory VALUES (2, 'orange', 154); ```
-4. Type `GO` to execute the previous commands:
+1. Type `GO` to execute the previous commands:
```sql GO
Now, run a query to return data from the `Inventory` table.
SELECT * FROM Inventory WHERE quantity > 152; ```
-2. Execute the command:
+1. Execute the command:
```sql GO
Now, run a query to return data from the `Inventory` table.
QUIT ```
-2. To exit the interactive command-prompt in your container, type `exit`. Your container continues to run after you exit the interactive bash shell.
+1. To exit the interactive command-prompt in your container, type `exit`. Your container continues to run after you exit the interactive bash shell.
## Connect from outside the container
-You can also connect to the Azure SQL Edge instance on your Docker machine from any external Linux, Windows, or macOS tool that supports SQL connections. For more information on connecting to a SQL Edge container from outside, refer [Connect and Query Azure SQL Edge](connect.md).
+You can also connect to the SQL Edge instance on your Docker machine from any external Linux, Windows, or macOS tool that supports SQL connections. For more information on connecting to a SQL Edge container from outside, see [Connect and Query Azure SQL Edge](connect.md).
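For example, if the **mssql-tools** package is installed on the host, a minimal sketch of connecting from outside the container might look like the following. The server name, port mapping, and password here are assumptions based on the earlier steps in this quickstart.

```bash
# Run from the Docker host, not inside the container. Assumes container port 1433
# is published to host port 1433 and that sqlcmd (mssql-tools) is installed locally.
sqlcmd -S localhost,1433 -U SA -P "<YourNewStrong@Passw0rd>"
```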
## Remove your container
-If you want to remove the Azure SQL Edge container used in this tutorial, run the following commands:
+If you want to remove the SQL Edge container used in this tutorial, run the following commands:
```bash
sudo docker stop azuresqledge
sudo docker rm azuresqledge
```
-> [!WARNING]
-> Stopping and removing a container permanently deletes any Azure SQL Edge data in the container. If you need to preserve your data, [create and copy a backup file out of the container](backup-restore.md) or use a [container data persistence technique](configure.md#persist-your-data).
+> [!WARNING]
+> Stopping and removing a container permanently deletes any SQL Edge data in the container. If you need to preserve your data, [create and copy a backup file out of the container](backup-restore.md) or use a [container data persistence technique](configure.md#persist-your-data).
-## Next Steps
+## Next steps
- [Machine Learning and Artificial Intelligence with ONNX in SQL Edge](onnx-overview.md). - [Building an end to end IoT Solution with SQL Edge using IoT Edge](tutorial-deploy-azure-resources.md).
azure-sql-edge Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/features.md
Title: Supported features of Azure SQL Edge
description: Learn about details of features supported by Azure SQL Edge. - Previously updated : 09/03/2020 Last updated : 01/13/2023 keywords: - introduction to SQL Edge - what is SQL Edge - SQL Edge overview- -
-# Supported features of Azure SQL Edge
+# Supported features of Azure SQL Edge
Azure SQL Edge is built on the latest version of the SQL Database Engine. It supports a subset of the features supported in SQL Server 2019 on Linux, in addition to some features that are currently not supported or available in SQL Server 2019 on Linux (or in SQL Server on Windows).
For a complete list of the features supported in SQL Server on Linux, see [Editi
Azure SQL Edge is available with two different editions or software plans. These editions have identical feature sets, and only differ in terms of their usage rights and the amount of memory and cores they can access on the host system.
- |**Plan** |**Description** |
- |||
- |Azure SQL Edge Developer | For development only. Each Azure SQL Edge Developer container is limited to up to 4 cores and 32 GB memory. |
- |Azure SQL Edge | For production. Each Azure SQL Edge container is limited to up to 8 cores and 64 GB memory. |
+ | **Plan** | **Description** |
+ | | |
+ | Azure SQL Edge Developer | For development only. Each Azure SQL Edge Developer container is limited to a maximum of 4 cores and 32 GB of RAM. |
+ | Azure SQL Edge | For production. Each Azure SQL Edge container is limited to a maximum of 8 cores and 64 GB of RAM. |
## Operating system
-Azure SQL Edge containers are based on Ubuntu 18.04, and as such are only supported to run on Docker hosts running either Ubuntu 18.04 LTS (recommended) or Ubuntu 20.04 LTS. It's possible to run Azure SQL Edge containers on other operating system hosts, for example, it can run on other distributions of Linux or on Windows (using Docker CE or Docker EE), however Microsoft does not recommend that you do this, as this configuration may not be extensively tested.
+Azure SQL Edge containers are based on Ubuntu 18.04, and as such are only supported to run on Docker hosts running either Ubuntu 18.04 LTS (recommended) or Ubuntu 20.04 LTS. It's possible to run Azure SQL Edge containers on other operating system hosts, for example, on other distributions of Linux or on Windows (using Docker CE or Docker EE). However, Microsoft doesn't recommend this, because the configuration may not be extensively tested.
The recommended configuration for running Azure SQL Edge on Windows is to configure an Ubuntu VM on the Windows host, and then run Azure SQL Edge inside the Linux VM.
The recommended and supported file system for Azure SQL Edge is EXT4 and XFS. If
## Hardware support
-Azure SQL Edge requires a 64-bit processor (either x64 or ARM64), with a minimum of one processor and one GB RAM on the host. While the startup memory footprint of Azure SQL Edge is close to 450MB, the additional memory is needed for other IoT Edge modules or processes running on the edge device. The actual memory and CPU requirements for Azure SQL Edge will vary based on the complexity of the workload and volume of data being processed. When choosing a hardware for your solution, Microsoft recommends that you run extensive performance tests to ensure that the required performance characteristics for your solution are met.
+Azure SQL Edge requires a 64-bit processor (either x64 or ARM64), with a minimum of 1 CPU and 1 GB of RAM on the host. While the startup memory footprint of Azure SQL Edge is close to 450 MB, the additional memory is needed for other IoT Edge modules or processes running on the edge device. The actual memory and CPU requirements for Azure SQL Edge will vary based on the complexity of the workload and volume of data being processed. When choosing hardware for your solution, Microsoft recommends that you run extensive performance tests to ensure that the required performance characteristics for your solution are met.
## Azure SQL Edge components
Azure SQL Edge only supports the database engine. It doesn't include support for
## Supported features
-In addition to supporting a subset of features of SQL Server on Linux, Azure SQL Edge includes support for the following new features:
+In addition to supporting a subset of features of SQL Server on Linux, Azure SQL Edge includes support for the following new features:
-- SQL streaming, which is based on the same engine that powers Azure Stream Analytics, provides real-time data streaming capabilities in Azure SQL Edge.
+- SQL streaming, which is based on the same engine that powers Azure Stream Analytics, provides real-time data streaming capabilities in Azure SQL Edge.
- The T-SQL function call `Date_Bucket` for Time-Series data analytics. - Machine learning capabilities through the ONNX runtime, included with the SQL engine.
In addition to supporting a subset of features of SQL Server on Linux, Azure SQL
The following list includes the SQL Server 2019 on Linux features that aren't currently supported in Azure SQL Edge. | Area | Unsupported feature or service |
-|--|--|
+| | |
| **Database Design** | In-memory OLTP, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
-| &nbsp; | `HierarchyID` data type, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
-| &nbsp; | `Spatial` data type, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
-| &nbsp; | Stretch DB, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
-| &nbsp; | Full-text indexes and search, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views.|
-| &nbsp; | `FileTable`, `FILESTREAM`, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views.|
-| **Database Engine** | Replication. Note that you can configure Azure SQL Edge as a push subscriber of a replication topology. |
-| &nbsp; | Polybase. Note that you can configure Azure SQL Edge as a target for external tables in Polybase. |
-| &nbsp; | Language extensibility through Java and Spark. |
-| &nbsp; | Active Directory integration. |
-| &nbsp; | Database Auto Shrink. The Auto shrink property for a database can be set using the `ALTER DATABASE <database_name> SET AUTO_SHRINK ON` command, however that change has no effect. The automatic shrink task will not run against the database. Users can still shrink the database files using the 'DBCC' commands. |
-| &nbsp; | Database snapshots. |
-| &nbsp; | Support for persistent memory. |
-| &nbsp; | Microsoft Distributed Transaction Coordinator. |
-| &nbsp; | Resource governor and IO resource governance. |
-| &nbsp; | Buffer pool extension. |
-| &nbsp; | Distributed query with third-party connections. |
-| &nbsp; | Linked servers. |
-| &nbsp; | System extended stored procedures (such as `XP_CMDSHELL`). |
-| &nbsp; | CLR assemblies, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
-| &nbsp; | CLR-dependent T-SQL functions, such as `ASSEMBLYPROPERTY`, `FORMAT`, `PARSE`, and `TRY_PARSE`. |
-| &nbsp; | CLR-dependent date and time catalog views, functions, and query clauses. |
-| &nbsp; | Buffer pool extension. |
-| &nbsp; | Database mail. |
-| &nbsp; | Service Broker |
-| &nbsp; | Policy Based Management |
-| &nbsp; | Management Data Warehouse |
-| &nbsp; | Contained Databases |
-| **SQL Server Agent** | Subsystems: CmdExec, PowerShell, Queue Reader, SSIS, SSAS, and SSRS. |
-| &nbsp; | Alerts. |
-| &nbsp; | Managed backup. |
-| **High Availability** | Always On availability groups. |
-| &nbsp; | Basic availability groups. |
-| &nbsp; | Always On failover cluster instance. |
-| &nbsp; | Database mirroring. |
-| &nbsp; | Hot add memory and CPU. |
+| | `HierarchyID` data type, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
+| | `Spatial` data type, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
+| | Stretch DB, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
+| | Full-text indexes and search, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
+| | `FileTable`, `FILESTREAM`, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
+| **Database Engine** | Replication. You can configure Azure SQL Edge as a push subscriber of a replication topology. |
+| | PolyBase. You can configure Azure SQL Edge as a target for external tables in PolyBase. |
+| | Language extensibility through Java and Spark. |
+| | Active Directory integration. |
+| | Database Auto Shrink. The Auto shrink property for a database can be set using the `ALTER DATABASE <database_name> SET AUTO_SHRINK ON` command; however, that change has no effect. The automatic shrink task won't run against the database. Users can still shrink the database files using the `DBCC` commands. |
+| | Database snapshots. |
+| | Support for persistent memory. |
+| | Microsoft Distributed Transaction Coordinator. |
+| | Resource governor and IO resource governance. |
+| | Buffer pool extension. |
+| | Distributed query with third-party connections. |
+| | Linked servers. |
+| | System extended stored procedures (such as `XP_CMDSHELL`). |
+| | CLR assemblies, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views. |
+| | CLR-dependent T-SQL functions, such as `ASSEMBLYPROPERTY`, `FORMAT`, `PARSE`, and `TRY_PARSE`. |
+| | CLR-dependent date and time catalog views, functions, and query clauses. |
+| | Buffer pool extension. |
+| | Database mail. |
+| | Service Broker |
+| | Policy Based Management |
+| | Management Data Warehouse |
+| | Contained Databases |
+| **SQL Server Agent** | Subsystems: CmdExec, PowerShell, Queue Reader, SSIS, SSAS, and SSRS. |
+| | Alerts. |
+| | Managed backup. |
+| **High Availability** | Always On availability groups. |
+| | Basic availability groups. |
+| | Always On failover cluster instance. |
+| | Database mirroring. |
+| | Hot add memory and CPU. |
| **Security** | Extensible key management. |
-| &nbsp; | Active Directory integration.|
-| &nbsp; | Support for secure enclaves.|
+| | Active Directory integration. |
+| | Support for secure enclaves. |
| **Services** | SQL Server Browser. |
-| &nbsp; | Machine Learning through R and Python. |
-| &nbsp; | StreamInsight. |
-| &nbsp; | Analysis Services. |
-| &nbsp; | Reporting Services. |
-| &nbsp; | Data Quality Services. |
-| &nbsp; | Master Data Services. |
-| &nbsp; | Distributed Replay. |
+| | Machine Learning through R and Python. |
+| | StreamInsight. |
+| | Analysis Services. |
+| | Reporting Services. |
+| | Data Quality Services. |
+| | Master Data Services. |
+| | Distributed Replay. |
| **Manageability** | SQL Server Utility Control Point. | ## Next steps
The following list includes the SQL Server 2019 on Linux features that aren't cu
- [Deploy Azure SQL Edge](deploy-portal.md) - [Configure Azure SQL Edge](configure.md) - [Connect to an instance of Azure SQL Edge using SQL Server client tools](connect.md)-- [Backup and restore databases in Azure SQL Edge](backup-restore.md)
+- [Backup and restore databases in Azure SQL Edge](backup-restore.md)
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
Release notes for `3.0.015490002-onprem-amd64`:
## Translator
-The [Translator][tr-containers] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64-preview`.
+The [Translator][tr-containers] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation`.
This container image has the following tags available. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/v2/azure-cognitive-services/translator/text-translation/tags/list). | Image Tags | Notes | |-|:|
-| `1.0.019410001-amd64-preview` | |
+| `latest` | Docker Hub URL: https://hub.docker.com/_/microsoft-azure-cognitive-services-translator-text-translation |
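As a quick, hedged example, you can pull the image locally with Docker by using the `latest` tag listed above:

```bash
# Pull the Translator text-translation container image using the 'latest' tag.
docker pull mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
```

Running the container requires additional parameters (such as billing, API key, and EULA settings); see the linked Translator container how-to for the full run command.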
[ad-containers]: ../anomaly-Detector/anomaly-detector-container-howto.md
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment. Previously updated : 11/21/2022 Last updated : 01/15/2023 # Identify and remediate attack paths
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
description: Learn about deploying Microsoft Defender for Endpoint from Microsof
Previously updated : 12/14/2022 Last updated : 01/15/2023 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
You can also enable the MDE unified solution at scale through the supplied REST
Here's an example request body for the PUT request to enable the MDE unified solution:
-URI: `https://management.azure.com/subscriptions/<subscriptionId>providers/Microsoft.Security/settings/WDATP_UNIFIED_SOLUTION?api-version=2022-05-01`
+URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings/WDATP_UNIFIED_SOLUTION?api-version=2022-05-01`
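One way to issue this PUT request is with the Azure CLI's generic `az rest` command. The following is a sketch only; it assumes you've saved the example request body shown below to a local file (here named `body.json`, a hypothetical file name) and that you replace `<subscriptionId>` with your own value.

```bash
# Sketch: enable the MDE unified solution setting by sending the PUT request with az rest.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings/WDATP_UNIFIED_SOLUTION?api-version=2022-05-01" \
  --body @body.json
```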
```json {
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Security posture for Microsoft Defender for Cloud description: Description of Microsoft Defender for Cloud's secure score and its security controls -- Previously updated : 07/18/2022 Last updated : 01/15/2023 # Security posture for Microsoft Defender for Cloud
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
Previously updated : 03/22/2022 Last updated : 01/16/2023 zone_pivot_groups: front-door-tiers # Caching with Azure Front Door
+Azure Front Door is a modern content delivery network (CDN), with dynamic site acceleration and load balancing capabilities. When caching is configured on your route, the edge site that receives each request checks its cache for a valid response. Caching helps to reduce the amount of traffic sent to your origin server. If no cached response is available, the request is forwarded to the origin.
-In this article, you'll learn how Azure Front Door Standard and Premium tier routes and Rule set behaves when you have caching enabled. Azure Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing.
+Each Front Door edge site manages its own cache, and requests might be served by different edge sites. As a result, you might still see some traffic reach your origin, even if you served cached responses.
## Request methods
-Only the GET request method can generate cached content in Azure Front Door. All other request methods are always proxied through the network.
---
-The following document specifies behaviors for Azure Front Door (classic) with routing rules that have enabled caching. Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing, it also supports caching behaviors just like any other CDN.
-
+Only requests that use the `GET` request method are cacheable. All other request methods are always proxied through the network.
## Delivery of large files
-Azure Front Door delivers large files without a cap on file size. Front Door uses a technique called object chunking. When a large file is requested, Front Door retrieves smaller pieces of the file from the backend. After receiving a full or byte-range file request, the Front Door environment requests the file from the backend in chunks of 8 MB.
+Azure Front Door delivers large files without a cap on file size. If caching is enabled, Front Door uses a technique called *object chunking*. When a large file is requested, Front Door retrieves smaller pieces of the file from the origin. After receiving a full or byte-range file request, the Front Door environment requests the file from the origin in chunks of 8 MB.
-After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.
+After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection. For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
-For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
-Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the backend. This optimization relies on the backend's ability to support byte-range requests. If the backend doesn't support byte-range requests, this optimization isn't effective.
+Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the origin. This optimization relies on the origin's ability to support byte-range requests. If the origin doesn't support byte-range requests, this optimization isn't effective.
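For illustration, a byte-range request asks the server for only part of a file. The following hypothetical `curl` call requests the first 8 MB of an asset; origins must honor the `Range` header for this optimization to be effective:

```bash
# Hypothetical example: request only bytes 0 through 8388607 (the first 8 MB) of a file.
curl -s -o first-chunk.bin -H "Range: bytes=0-8388607" "https://www.contoso.com/videos/large-file.mp4"
```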
## File compression
These profiles support the following compression encodings:
- [Gzip (GNU zip)](https://en.wikipedia.org/wiki/Gzip) - [Brotli](https://en.wikipedia.org/wiki/Brotli)
-If a request supports gzip and Brotli compression, Brotli compression takes precedence.</br>
-When a request for an asset specifies compression and the request results in a cache miss, Azure Front Door (classic) does compression of the asset directly on the POP server. Afterward, the compressed file is served from the cache. The resulting item is returned with a transfer-encoding: chunked.
-If the origin uses Chunked Transfer Encoding (CTE) to send compressed data to the Azure Front Door POP, then response sizes greater than 8 MB aren't supported.
+If a request supports gzip and Brotli compression, Brotli compression takes precedence.
+When a request for an asset specifies compression and the request results in a cache miss, Azure Front Door (classic) does compression of the asset directly on the POP server. Afterward, the compressed file is served from the cache. The resulting item is returned with a `Transfer-Encoding: chunked` response header.
+
+If the origin uses Chunked Transfer Encoding (CTE) to send compressed data to the Azure Front Door PoP, then response sizes greater than 8 MB aren't supported.
> [!NOTE]
-> Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any GET HTTP request. If clients send byte range requests with the `accept-encoding` header that leads to the Origin responding with different content lengths, then Azure Front Door will return a 503 error. You can either disable compression on the Origin or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests.
+> Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any GET HTTP request. If clients send byte range requests with the `Accept-Encoding` header that leads to the Origin responding with different content lengths, then Azure Front Door will return a 503 error. You can either disable compression on the origin, or create a Rules Engine rule to remove the `Accept-Encoding` header from the request for byte range requests.
+ ## Query string behavior
-With Azure Front Door, you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, `http://www.contoso.com/content.mov?field1=value1&field2=value2`. If there's more than one key-value pair in a query string of a request then their order doesn't matter.
+With Azure Front Door, you can control how files are cached for a web request that contains a query string.
+
+In a web request with a query string, the query string is that portion of the request that occurs after a question mark (`?`). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (`=`). Each key-value pair is separated by an ampersand (&).
+
+For example, the URL `http://www.contoso.com/content.mov?field1=value1&field2=value2` contains a query string with two key-value pairs:
+- `field1`, with a value of `value1`.
+- `field2`, with a value of `value2`.
+
+If a query string contains more than one key-value pair, their order doesn't matter.
-* **Ignore query strings**: In this mode, Azure Front Door passes the query strings from the requestor to the backend on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
+When you configure caching, you specify how the cache should handle query strings. The following behaviors are supported:
-* **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the backend for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing caches with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
+* **Ignore query strings**: In this mode, Azure Front Door passes the query strings from the client to the origin on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
+
+* **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the origin for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for subsequent requests with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
::: zone pivot="front-door-standard-premium"
-* **Specify cache key query string** behavior, to include, or exclude specified parameters when cache key gets generated. For example, the default cache key is: /foo/image/asset.html, and the sample request is `https://contoso.com//foo/image/asset.html?language=EN&userid=100&sessionid=200`. There's a rule set rule to exclude query string 'userid'. Then the query string cache-key would be `/foo/image/asset.html?language=EN&sessionid=200`.
+* **Specify cache key query string** behavior to include or exclude specified parameters when the cache key is generated.
+
+ For example, suppose that the default cache key is `/foo/image/asset.html`, and a request is made to the URL `https://contoso.com//foo/image/asset.html?language=EN&userid=100&sessionid=200`. If there's a rules engine rule to exclude the `userid` query string parameter, then the query string cache key would be `/foo/image/asset.html?language=EN&sessionid=200`.
+
+Configure the query string behavior on the Front Door route.
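If you manage routes with the Azure CLI, the following sketch shows one way to set the query string behavior on a Standard/Premium route. The resource names are placeholders, and the exact parameter names may vary by CLI version:

```bash
# Sketch: exclude the 'userid' query string parameter from the cache key on a route.
az afd route update \
  --resource-group MyResourceGroup \
  --profile-name MyFrontDoorProfile \
  --endpoint-name MyEndpoint \
  --route-name MyRoute \
  --query-string-caching-behavior IgnoreSpecifiedQueryStrings \
  --query-parameters userid
```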
::: zone-end
Cache purges on the Front Door are case-insensitive. Additionally, they're query
## Cache expiration
-The following order of headers is used to determine how long an item will be stored in our cache:</br>
-1. Cache-Control: s-maxage=\<seconds>
-2. Cache-Control: max-age=\<seconds>
-3. Expires: \<http-date>
+The following order of headers is used to determine how long an item will be stored in our cache:
+
+1. `Cache-Control: s-maxage=<seconds>`
+1. `Cache-Control: max-age=<seconds>`
+1. `Expires: <http-date>`
-Cache-Control response headers that indicate that the response won't be cached such as Cache-Control: private, Cache-Control: no-cache, and Cache-Control: no-store are honored. If no Cache-Control is present, the default behavior is that Front Door will cache the resource for X amount of time where X gets randomly picked between 1 to 3 days.
+Some `Cache-Control` response header values indicate that the response isn't cacheable. These values include `private`, `no-cache`, and `no-store`. Front Door honors these header values and won't cache the responses, even if you override the caching behavior by using the Rules Engine.
+
+If the `Cache-Control` header isn't present on the response from the origin, by default Front Door will randomly determine a cache duration between one and three days.
> [!NOTE] > Cache expiration can't be greater than **366 days**.
Cache-Control response headers that indicate that the response won't be cached s
## Request headers
-The following request headers won't be forwarded to a backend when using caching.
-- Content-Length-- Transfer-Encoding-- Accept-- Accept-Charset-- Accept-Language
+The following request headers won't be forwarded to the origin when caching is enabled:
+
+- `Content-Length`
+- `Transfer-Encoding`
+- `Accept`
+- `Accept-Charset`
+- `Accept-Language`
## Response headers
-The following response headers will be stripped if the origin response is cacheable. For example, Cache control response header with max-age value indicates response is cacheable.
+If the origin response is cacheable, then the `Set-Cookie` header is removed before the response is sent to the client. If an origin response isn't cacheable, Front Door doesn't strip the header. For example, if the origin response includes a `Cache-Control` header with a `max-age` value, this indicates to Front Door that the response is cacheable, and the `Set-Cookie` header is stripped.
+
+In addition, Front Door attaches the `X-Cache` header to all responses. The `X-Cache` response header includes one of the following values:
-- Set-Cookie
+- `TCP_HIT` or `TCP_REMOTE_HIT`: The first 8 MB chunk of the response is a cache hit, and the content is served from the Front Door cache.
+- `TCP_MISS`: The first 8 MB chunk of the response is a cache miss, and the content is fetched from the origin.
+- `PRIVATE_NOSTORE`: Request can't be cached because the `Cache-Control` response header is set to either `private` or `no-store`.
+- `CONFIG_NOCACHE`: Request is configured to not cache in the Front Door profile.
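To see which value a particular request produced, you can inspect the response headers yourself. The following hypothetical `curl` example checks the `X-Cache` header on a Front Door endpoint URL:

```bash
# Hypothetical endpoint and path; the X-Cache header indicates a cache hit or miss.
curl -sI "https://contoso-endpoint.z01.azurefd.net/images/banner.png" | grep -i '^x-cache'
```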
++
+## Logs and reports
+
+The [Front Door Access Log](standard-premium/how-to-logs.md#access-log) includes the cache status for each request. Also, [reports](standard-premium/how-to-reports.md#caching) include information about how Front Door's cache is used in your application.
+ ## Cache behavior and duration ::: zone pivot="front-door-standard-premium"
-Cache behavior and duration can be configured in Rules Engine. Rules Engine caching configuration will always override the route configuration.
+Cache behavior and duration can be configured in Rules Engine. Rules Engine caching configuration always overrides the route configuration.
+
+* **When caching is disabled**, Azure Front Door doesn't cache the response contents, irrespective of the origin response directives.
-* When *caching* is **disabled**, Azure Front Door doesn't cache the response contents, irrespective of origin response directives.
+* **When caching is enabled**, the cache behavior is different based on the cache behavior value applied by the Rules Engine:
-* When *caching* is **enabled**, the cache behavior is different based on the cache behavior value selected.
- * **Honor origin**: Azure Front Door will always honor origin response header directive. If the origin directive is missing, Azure Front Door will cache contents anywhere from 1 to 3 days.
+ * **Honor origin**: Azure Front Door will always honor origin response header directive. If the origin directive is missing, Azure Front Door will cache contents anywhere from one to three days.
   * **Override always**: Azure Front Door will always override with the cache duration, meaning that it will cache the contents for the cache duration ignoring the values from origin response directives. This behavior will only be applied if the response is cacheable. * **Override if origin missing**: If the origin doesn't return caching TTL values, Azure Front Door will use the specified cache duration. This behavior will only be applied if the response is cacheable. > [!NOTE]
-> * Azure Front Door makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Front Door might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your backends are offline.
+> * Azure Front Door makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Front Door might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your origins are offline.
> * Origins may specify not to cache specific responses using the Cache-Control header with a value of no-cache, private, or no-store. When used in an HTTP response from the origin server to the Azure Front Door POPs, Azure Front Door supports Cache-control directives and honors caching behaviors for Cache-Control directives in [RFC 7234 - Hypertext Transfer Protocol (HTTP/1.1): Caching (ietf.org)](https://www.rfc-editor.org/rfc/rfc7234#section-5.2.2.8). ::: zone-end
Cache behavior and duration can be configured in Rules Engine. Rules Engine cach
Cache behavior and duration can be configured in both the Front Door designer routing rule and in Rules Engine. Rules Engine caching configuration will always override the Front Door designer routing rule configuration.
-* When *caching* is **disabled**, Azure Front Door (classic) doesn't cache the response contents, irrespective of origin response directives.
+* **When caching is disabled**, Azure Front Door (classic) doesn't cache the response contents, irrespective of origin response directives.
-* When *caching* is **enabled**, the cache behavior is different for different values of *Use cache default duration*.
- * When *Use cache default duration* is set to **Yes**, Azure Front Door (classic) will always honor origin response header directive. If the origin directive is missing, Front Door will cache contents anywhere from 1 to 3 days.
+* **When caching is enabled**, the cache behavior is different for different values of *Use cache default duration*.
+ * When *Use cache default duration* is set to **Yes**, Azure Front Door (classic) will always honor origin response header directive. If the origin directive is missing, Front Door will cache contents anywhere from one to three days.
* When *Use cache default duration* is set to **No**, Azure Front Door (classic) will always override with the *cache duration* (required fields), meaning that it will cache the contents for the cache duration ignoring the values from origin response directives. > [!NOTE]
-> * Azure Front Door (classic) makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Azure Front Door (classic) might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your backends are offline.
-> * The *cache duration* set in the Front Door designer routing rule is the **minimum cache duration**. This override won't work if the cache control header from the backend has a greater TTL than the override value.
+> * Azure Front Door (classic) makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Azure Front Door (classic) might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your origins are offline.
+> * The *cache duration* set in the Front Door designer routing rule is the **minimum cache duration**. This override won't work if the cache control header from the origin has a greater TTL than the override value.
> ::: zone-end
frontdoor Front Door Http Headers Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-http-headers-protocol.md
na Previously updated : 10/31/2022 Last updated : 01/16/2023
This article outlines the protocol that Front Door supports with parts of the call path (see image). In the following sections, you'll find information about HTTP headers supported by Front Door. > [!IMPORTANT] > Front Door doesn't certify any HTTP headers that aren't documented here.
Azure Front Door includes headers for an incoming request unless they're removed
## From the Front Door to the client
-Any headers sent to Azure Front Door from the backend are also passed through to the client. The following are headers sent from the Front Door to clients.
+Any headers sent to Azure Front Door from the backend are also passed through to the client. Front Door also attaches the following headers to all responses to the client:
| Header | Example and description | | - | - | | X-Azure-Ref | *X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz* </br> This is a unique reference string that identifies a request served by Front Door, which is critical for troubleshooting as it's used to search access logs.|
-| X-Cache | *X-Cache:* This header describes the caching status of the request <br/> - *X-Cache: TCP_HIT*: The first byte of the request is a cache hit in the Front Door edge. <br/> - *X-Cache: TCP_REMOTE_HIT*: The first byte of the request is a cache hit in the regional cache (origin shield layer) but a miss in the edge cache. <br/> - *X-Cache: TCP_MISS*: The first byte of the request is a cache miss, and the content is served from the origin. <br/> - *X-Cache: PRIVATE_NOSTORE*: Request can't be cached as Cache-Control response header is set to either private or no-store. <br/> - *X-Cache: CONFIG_NOCACHE*: Request is configured to not cache in the Front Door profile. |
+| X-Cache | *X-Cache:* This header describes the caching status of the request. For more information, see [Caching with Azure Front Door](front-door-caching.md#response-headers). |
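When troubleshooting, capturing the `X-Azure-Ref` value for a failing request makes it easy to find that request in the access logs. This is a hypothetical example; substitute your own endpoint and path:

```bash
# Capture the X-Azure-Ref response header so you can search the access logs for this request.
curl -sI "https://contoso-endpoint.z01.azurefd.net/api/health" | grep -i '^x-azure-ref'
```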
### Optional debug response headers
frontdoor How To Configure Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-configure-caching.md
+
+ Title: 'Configure caching - Azure Front Door'
+description: This article shows you how to configure caching on Azure Front Door.
++++ Last updated : 01/16/2023+++
+# Configure caching on Azure Front Door
+
+This article shows you how to configure caching on Azure Front Door. To learn more about caching, see [Caching with Azure Front Door](front-door-caching.md).
+
+## Prerequisites
+
+Before you can create an Azure Front Door endpoint with Front Door manager, you must have an Azure Front Door profile created. The profile must have at least one endpoint. To organize your Azure Front Door endpoints by internet domains, web applications, or other criteria, you can use multiple profiles.
+
+To create an Azure Front Door profile and endpoint, see [Create an Azure Front Door profile](create-front-door-portal.md).
+
+## Configure caching by using the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true) and navigate to your Azure Front Door profile.
+
+1. Select **Front Door manager** and then select your route.
+
+ :::image type="content" source="./media/how-to-configure-caching/select-route.png" alt-text="Screenshot of endpoint landing page.":::
+
+1. Select **Enable caching**.
+
+1. Specify the query string caching behavior. For more information, see [Caching with Azure Front Door](front-door-caching.md#query-string-behavior).
+
+1. Optionally, select **Enable compression** for Front Door to compress responses to the client.
+
+1. Select **Update**.
+
+ :::image type="content" source="./media/how-to-configure-caching/update-route.png" alt-text="Screenshot of route with caching configured.":::
+
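If you prefer the Azure CLI over the portal, the following sketch shows one way to enable caching on an existing route. The resource names are placeholders, and parameter names may vary by CLI version:

```bash
# Sketch: enable caching and compression on an existing Standard/Premium route.
az afd route update \
  --resource-group MyResourceGroup \
  --profile-name MyFrontDoorProfile \
  --endpoint-name MyEndpoint \
  --route-name MyRoute \
  --enable-caching true \
  --query-string-caching-behavior IgnoreQueryString \
  --enable-compression true
```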
+## Next steps
+
+* Learn about the use of [origins and origin groups](origin.md) in an Azure Front Door configuration.
+* Learn about [rules match conditions](rules-match-conditions.md) in an Azure Front Door rule set.
+* Learn more about [policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md) for WAF with Azure Front Door.
+* Learn how to create [custom rules](../web-application-firewall/afds/waf-front-door-custom-rules.md) to protect your Azure Front Door profile.
frontdoor How To Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-compression.md
Previously updated : 03/20/2022 Last updated : 01/16/2023
There are two ways to enable file compression:
- Enabling compression directly on the Azure Front Door POP servers (*compression on the fly*). In this case, Azure Front Door compresses the files and sends them to the end users. > [!NOTE]
-> Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any GET HTTP request. If clients send byte range requests with the `accept-encoding` header that leads to the Origin responding with different content lengths, then Azure Front Door will return a 503 error. You can either disable compression on Origin/Azure Front Door or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests.
+> Range requests may be compressed into different sizes. Azure Front Door requires the `Content-Length` response header values to be the same for any GET HTTP request. If clients send byte range requests with the `Accept-Encoding` header that leads to the origin responding with different content lengths, then Azure Front Door returns a 503 error. You can either disable compression on the origin/Azure Front Door, or create a Rules Engine rule to remove the `Accept-Encoding` header from byte range requests.
> [!IMPORTANT] > Azure Front Door configuration changes take up to 10 minutes to propagate throughout the network. If you're setting up compression for the first time for your CDN endpoint, consider waiting 1-2 hours before you troubleshoot to ensure the compression settings have propagated to all the POPs.
There are two ways to enable file compression:
You can enable compression in the following ways: * During quick create - When you enable caching, you can enable compression. * During custom create - Enable caching and compression when you're adding a route.
-* In Endpoint Manager route.
+* In Front Door manager.
* On the Optimization page.
-### Enable compression in Endpoint manager
+### Enable compression in Front Door manager
-1. From the Azure Front Door Standard/Premium profile page, go to **Endpoint Manager** and select the endpoint you want to enable compression.
+1. From the Azure Front Door Standard/Premium profile page, go to **Front Door manager** and select the endpoint you want to enable compression.
-1. Select **Edit Endpoint**, then select the **route** you want to enable compression.
+1. Within the endpoint, select the **route** you want to enable compression on.
- :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-1.png" alt-text="Screenshot of Endpoint Manager landing page." lightbox="../media/how-to-compression/front-door-compression-endpoint-manager-1-expanded.png":::
+ :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-1.png" alt-text="Screenshot of the Front Door manager landing page." lightbox="../media/how-to-compression/front-door-compression-endpoint-manager-1-expanded.png":::
-1. Ensure **Enable Caching** is checked, then select the checkbox for **Enable compression**.
+1. Ensure **Enable caching** is checked, then select the checkbox for **Enable compression**.
- :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Enable compression in endpoint manager.":::
+ :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Screenshot of Front Door Manager showing the 'Enable compression' radio button.":::
1. Select **Update** to save the configuration.
-### Enable compression in Optimization
+### Enable compression in Optimizations
1. From the Azure Front Door Standard/Premium profile page, go to **Optimizations** under Settings. Expand the endpoint to see the list of routes. 1. Select the three dots next to the **route** that has compression *Disabled*. Then select **Configure route**.
- :::image type="content" source="../media/how-to-compression/front-door-compression-optimization-1.png" alt-text="Screen of enable compression on the optimization page." lightbox="../media/how-to-compression/front-door-compression-optimization-1-expanded.png":::
+ :::image type="content" source="../media/how-to-compression/front-door-compression-optimization-1.png" alt-text="Screenshot of the Optimizations page." lightbox="../media/how-to-compression/front-door-compression-optimization-1-expanded.png":::
-1. Ensure **Enable Caching** is checked, then select the checkbox for **Enable compression**.
+1. Ensure **Enable caching** is checked, then select the checkbox for **Enable compression**.
- :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Screen shot of enabling compression in endpoint manager.":::
+ :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Screenshot of the Optimizations page showing the 'Enable compression' radio button.":::
1. Click **Update**.
You can modify the default list of MIME types on Optimizations page.
:::image type="content" source="../media/how-to-compression/front-door-compression-edit-content-type-2.png" alt-text="Screenshot of customize file compression page.":::
-1. Select **Save**, to update compression configure .
+1. Select **Save** to update the compression configuration.
## Disabling compression You can disable compression in the following ways:
-* Disable compression in Endpoint manager route.
-* Disable compression in Optimization page.
+* Disable compression in Front Door manager route.
+* Disable compression in Optimizations page.
-### Disable compression in Endpoint manager
+### Disable compression in Front Door manager
-1. From the Azure Front Door Standard/Premium profile page, go to **Endpoint manager** under Settings. Select the endpoint you want to disable compression.
+1. From the Azure Front Door Standard/Premium profile page, go to **Front Door manager** under Settings.
-1. Select **Edit Endpoint** and then select the **route** you want to disable compression. Uncheck the **Enable compression** box.
+1. Select the **route** you want to disable compression on. Uncheck the **Enable compression** box.
1. Select **Update** to save the configuration.
frontdoor How To Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-logs.md
Previously updated : 03/20/2022 Last updated : 01/16/2023
Azure Front Door provides different logging to help you track, monitor, and debu
* Access logs have detailed information about every request that AFD receives and help you analyze and monitor access patterns, and debug issues. * Activity logs provide visibility into the operations done on Azure resources.
-* Health Probe logs provides the logs for every failed probe to your origin.
-* Web Application Firewall (WAF) logs provide detailed information of requests that gets logged through either detection or prevention mode of an Azure Front Door endpoint. A custom domain that gets configured with WAF can also be viewed through these logs.
+* Health probe logs provide the logs for every failed probe to your origin.
+* Web Application Firewall (WAF) logs provide detailed information about requests that get logged through either detection or prevention mode of an Azure Front Door endpoint. Requests to a custom domain configured with WAF can also be viewed through these logs. For more information on WAF logs, see [Azure Web Application Firewall monitoring and logging](../../web-application-firewall/afds/waf-front-door-monitor.md#waf-logs).
-
-Access Logs, health probe logs and WAF logs aren't enabled by default. Use the steps below to enable logging. Activity log entries are collected by default, and you can view them in the Azure portal. Logs can have delays up to a few minutes.
+Access logs, health probe logs and WAF logs aren't enabled by default. Use the steps below to enable logging. Activity log entries are collected by default, and you can view them in the Azure portal. Logs can have delays up to a few minutes.
You have three options for storing your logs:
You have three options for storing your logs:
* **Event hubs:** Event hubs are a great option for integrating with other security information and event management (SIEM) tools or external data stores. For example: Splunk/DataDog/Sumo. * **Azure Log Analytics:** Azure Log Analytics in Azure Monitor is best used for general real-time monitoring and analysis of Azure Front Door performance.
-## Configure Logs
+## Configure logs
1. Sign in to the [Azure portal](https://portal.azure.com).
You have three options for storing your logs:
1. Click on **Save**.
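As an alternative to the portal steps, the following Azure CLI sketch enables the access log on a Front Door profile and sends it to a Log Analytics workspace. The resource IDs, names, and log category used here are assumptions; verify them against the categories listed for your profile:

```bash
# Sketch: send the Front Door access log to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name frontdoor-diagnostics \
  --resource "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Cdn/profiles/<profileName>" \
  --workspace "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>" \
  --logs '[{"category": "FrontDoorAccessLog", "enabled": true}]'
```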
-## Access Log
+## Access log
Azure Front Door currently logs individual API requests. Each entry has the following schema and is logged in JSON format, as shown below.
Azure Front Door currently provides individual API requests with each entry havi
| SecurityProtocol | The TLS/SSL protocol version used by the request or null if no encryption. Possible values include: SSLv3, TLSv1, TLSv1.1, TLSv1.2 | | SecurityCipher | When the value for Request Protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and AFD for encryption. | | Endpoint | The domain name of AFD endpoint, for example, contoso.z01.azurefd.net |
-| HttpStatusCode | The HTTP status code returned from Azure Front Door. If a request to the the origin timeout, the value for HttpStatusCode is set to **0**.|
+| HttpStatusCode | The HTTP status code returned from Azure Front Door. If a request to the origin times out, the value for HttpStatusCode is set to **0**.|
| Pop | The edge pop, which responded to the user request. |
-| Cache Status | Provides the status code of how the request gets handled by the CDN service when it comes to caching. Possible values are HIT: The HTTP request was served from AFD edge POP cache. <br> **MISS**: The HTTP request was served from origin. <br/> **PARTIAL_HIT**: Some of the bytes from a request got served from AFD edge POP cache while some of the bytes got served from origin for object chunking scenarios. <br> **CACHE_NOCONFIG**: Forwarding requests without caching settings, including bypass scenario. <br/> **PRIVATE_NOSTORE**: No cache configured in caching settings by customers. <br> **REMOTE_HIT**: The request was served by parent node cache. <br/> **N/A**:** Request that was denied by Signed URL and Rules Set. |
+| Cache Status | Provides the status code of how the request gets handled by the CDN service when it comes to caching. Possible values are:<ul><li>`HIT` and `REMOTE_HIT`: The HTTP request was served from the Front Door cache.</li><li>`MISS`: The HTTP request was served from the origin.</li><li> `PARTIAL_HIT`: Some of the bytes from a request were served from the Front Door cache, and some of the bytes were served from origin. This status occurs in [object chunking](../front-door-caching.md#delivery-of-large-files) scenarios.</li><li>`CACHE_NOCONFIG`: Request was forwarded without caching settings, including bypass scenario.</li><li>`PRIVATE_NOSTORE`: No cache configured in caching settings by customers.</li><li>`N/A`: The request was denied by a signed URL or the rules engine.</li></ul> |
| MatchedRulesSetName | The names of the rules that were processed. | | RouteName | The name of the route that the request matched. | | ClientPort | The IP port of the client that made the request. | | Referrer | The URL of the site that originated the request. |
-| TimetoFirstByte | The length of time in milliseconds from AFD receives the request to the time the first byte gets sent to client, as measured on Azure Front Door. This property doesn't measure the client data. |
-| ErrorInfo | This field provides detailed info of the error token for each response. <br> **NoError**: Indicates no error was found. <br> **CertificateError**: Generic SSL certificate error. <br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. <br> **ClientDisconnected**: Request failure because of client network connection. <br> **ClientGeoBlocked**: The client was blocked due geographical location of the IP. <br> **UnspecifiedClientError**: Generic client error. <br> **InvalidRequest**: Invalid request. It might occur because of malformed header, body, and URL. <br> **DNSFailure**: DNS Failure. <br> **DNSTimeout**: The DNS query to resolve the backend timed out. <br> **DNSNameNotResolved**: The server name or address couldn't be resolved. <br> **OriginConnectionAborted**: The connection with the origin was disconnected abnormally. <br> **OriginConnectionError**: Generic origin connection error. <br> **OriginConnectionRefused**: The connection with the origin wasn't established. <br> **OriginError**: Generic origin error. <br> **OriginInvalidRequest**: An invalid request was sent to the origin. <br> **ResponseHeaderTooBig**: The origin returned a too large of a response header. <br> **OriginInvalidResponse**:** Origin returned an invalid or unrecognized response. <br> **OriginTimeout**: The timeout period for origin request expired. <br> **ResponseHeaderTooBig**: The origin returned a too large of a response header. <br> **RestrictedIP**: The request was blocked because of restricted IP. <br> **SSLHandshakeError**: Unable to establish connection with origin because of SSL hand shake failure. <br> **SSLInvalidRootCA**: The RootCA was invalid. <br> **SSLInvalidCipher**: Cipher was invalid for which the HTTPS connection was established. <br> **OriginConnectionAborted**: The connection with the origin was disconnected abnormally. <br> **OriginConnectionRefused**: The connection with the origin wasn't established. <br> **UnspecifiedError**: An error occurred that didnΓÇÖt fit in any of the errors in the table. |
-| OriginURL | The full URL of the origin where requests are being sent. Composed of the scheme, host header, port, path, and query string. <br> **URL rewrite**: If there's a URL rewrite rule in Rule Set, path refers to rewritten path. <br> **Cache on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see Object Chunking for more details. |
-| OriginIP | The origin IP that served the request. <br> **Cache hit on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see Object Chunking for more details. |
-| OriginName| The full DNS name (hostname in origin URL) to the origin. <br> **Cache hit on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see Object Chunking for more details. |
+| TimeToFirstByte | The length of time in milliseconds from when AFD receives the request to the time the first byte is sent to the client, as measured on Azure Front Door. This property doesn't measure the client data. |
+| ErrorInfo | This field provides detailed information about the error token for each response. Possible values are:<ul><li>`NoError`: Indicates no error was found.</li><li>`CertificateError`: Generic SSL certificate error.</li><li>`CertificateNameCheckFailed`: The host name in the SSL certificate is invalid or doesn't match.</li><li>`ClientDisconnected`: Request failure because of client network connection.</li><li>`ClientGeoBlocked`: The client was blocked due to the geographical location of the IP.</li><li>`UnspecifiedClientError`: Generic client error.</li><li>`InvalidRequest`: Invalid request. It might occur because of a malformed header, body, or URL.</li><li>`DNSFailure`: DNS failure.</li><li>`DNSTimeout`: The DNS query to resolve the backend timed out.</li><li>`DNSNameNotResolved`: The server name or address couldn't be resolved.</li><li>`OriginConnectionAborted`: The connection with the origin was disconnected abnormally.</li><li>`OriginConnectionError`: Generic origin connection error.</li><li>`OriginConnectionRefused`: The connection with the origin wasn't established.</li><li>`OriginError`: Generic origin error.</li><li>`OriginInvalidRequest`: An invalid request was sent to the origin.</li><li>`ResponseHeaderTooBig`: The origin returned a response header that was too large.</li><li>`OriginInvalidResponse`: Origin returned an invalid or unrecognized response.</li><li>`OriginTimeout`: The timeout period for the origin request expired.</li><li>`RestrictedIP`: The request was blocked because of a restricted IP.</li><li>`SSLHandshakeError`: Unable to establish a connection with the origin because of an SSL handshake failure.</li><li>`SSLInvalidRootCA`: The root CA was invalid.</li><li>`SSLInvalidCipher`: The cipher with which the HTTPS connection was established was invalid.</li><li>`UnspecifiedError`: An error occurred that didn't fit in any of the errors in the table.</li></ul> |
+| OriginURL | The full URL of the origin where requests are being sent. Composed of the scheme, host header, port, path, and query string. <br> **URL rewrite**: If there's a URL rewrite rule in the Rule Set, the path refers to the rewritten path. <br> **Cache hit on edge POP**: If it's a cache hit on the edge POP, the origin is N/A. <br> **Large request**: If the requested content is large with multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files). |
+| OriginIP | The origin IP that served the request. <br> **Cache hit on edge POP**: If it's a cache hit on the edge POP, the origin is N/A. <br> **Large request**: If the requested content is large with multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files). |
+| OriginName| The full DNS name (hostname in the origin URL) of the origin. <br> **Cache hit on edge POP**: If it's a cache hit on the edge POP, the origin is N/A. <br> **Large request**: If the requested content is large with multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files). |
## Health Probe Log
Health probe logs provide logging for every failed probe to help you d
* You noticed the origin health % is lower than expected and want to know which origin failed and the reason of the failure.
-### Health Probe Log Properties
+### Health probe log properties
Each health probe log has the following schema.
| ConnectionLatency | Duration of time spent setting up the TCP connection to send the HTTP probe request to the origin. |
| DNSResolutionLatency | Duration of time spent on DNS resolution if the origin is configured as an FQDN instead of an IP address. N/A if the origin is configured as an IP address. |
-### Health Probe Log Sample in JSON
+The following example shows a health probe log entry, in JSON format.
-`{ "records": [ { "time": "2021-02-02T07:15:37.3640748Z",
+```json
+{
+ "records": [
+ {
+ "time": "2021-02-02T07:15:37.3640748Z",
"resourceId": "/SUBSCRIPTIONS/27CAFCA8-B9A4-4264-B399-45D0C9CCA1AB/RESOURCEGROUPS/AFDXPRIVATEPREVIEW/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDXPRIVATEPREVIEW-JESSIE", "category": "FrontDoorHealthProbeLog", "operationName": "Microsoft.Cdn/Profiles/FrontDoorHealthProbeLog/Write",
- "properties": { "healthProbeId": "9642AEA07BA64675A0A7AD214ACF746E",
+ "properties": {
+ "healthProbeId": "9642AEA07BA64675A0A7AD214ACF746E",
"POP": "MAA", "httpVerb": "HEAD", "result": "OriginError",
"originIP": "52.239.224.228:80", "totalLatencyMilliseconds": "141", "connectionLatencyMilliseconds": "68",
- "DNSLatencyMicroseconds": "1814" } } ]
-} `
-
-## Activity Logs
+ "DNSLatencyMicroseconds": "1814"
+ }
+ }
+ ]
+}
+```
+
+## Activity logs
Activity logs provide information about the operations done on Azure Front Door Standard/Premium. The logs include details about what write operation was performed on Azure Front Door, who performed it, and when.
frontdoor How To Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-reports.md
Cache Hits/Misses describe the request number cache hits and cache misses for cl
This report takes caching scenarios into consideration; only requests that meet the following requirements are included in the calculation.
-* The requested content was cached on the POP closest to the requester or origin shield.
+* The requested content was cached on a Front Door PoP.
* Partially cached content for object chunking.
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md
Follow the steps listed under section [Manage Users: Assign Users to Role](https
## SMART on FHIR proxy
+<details>
+ <summary> Click to expand! </summary>
+
+> [!NOTE]
+> This is an alternative to the "SMART on FHIR using AHDS Samples OSS" option mentioned above. The SMART on FHIR proxy option enables only the EHR launch sequence.
### Step 1: Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
The SMART on FHIR proxy uses this information to populate fields in the token re
These fields are meant to provide guidance to the app, but they don't convey any security information. A SMART on FHIR application can ignore them. Notice that the SMART on FHIR app launcher updates the **Launch URL** information at the bottom of the page. Select **Launch** to start the sample app.
+</details>
+
+## Next steps
+
+Now that you've learned about enabling SMART on FHIR functionality, see the search samples page for details about how to search using search parameters, modifiers, and other FHIR search methods.
+>[!div class="nextstepaction"]
+>[FHIR search examples](search-samples.md)
+
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Notice that the SMART on FHIR app launcher updates the **Launch URL** informatio
Inspect the token response to see how the launch context fields are passed on to the app. </details>
+
+## Next steps
+
+Now that you've learned about enabling SMART on FHIR functionality, see the search samples page for details about how to search using search parameters, modifiers, and other FHIR search methods.
+
+>[!div class="nextstepaction"]
+>[FHIR search examples](search-samples.md)
+
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
Last updated 12/20/2022
keywords: java, jakartaee, microprofile, EAP, JBoss EAP, ARO, OpenShift, JBoss Enterprise Application Platform-+ # Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift (ARO) 4 cluster
peering-service About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/about.md
Previously updated : 06/30/2022 Last updated : 01/15/2023 + # Azure Peering Service overview
-Azure Peering Service is a networking service that enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. Microsoft has partnered with internet service providers (ISPs), internet exchange partners (IXPs), and software-defined cloud interconnect (SDCI) providers worldwide to provide reliable and high-performing public connectivity with optimal routing from the customer to the Microsoft network.
+Azure Peering Service is a networking service that enhances connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. Microsoft has partnered with internet service providers (ISPs), internet exchange partners (IXPs), and software-defined cloud interconnect (SDCI) providers worldwide to provide reliable and high-performing public connectivity with optimal routing from the customer to the Microsoft network.
With Peering Service, customers can select a well-connected partner service provider in a given region. Public connectivity is optimized for high reliability and minimal latency from cloud services to the end-user location.
-![Distributed connectivity to Microsoft cloud](./media/peering-service-about/peering-service-what.png)
Customers can also opt for Peering Service telemetry such as user latency measures to the Microsoft network, BGP route monitoring, and alerts against leaks and hijacks by registering the Peering Service connection in the Azure portal. To use Peering Service, customers aren't required to register with Microsoft. The only requirement is to contact a [Peering Service partner](location-partners.md) to get the service. To opt in for Peering Service telemetry, customers must register for it in the Azure portal.
-For instructions on how to register Peering Service, see [Register Peering Service by using the Azure portal](azure-portal.md).
+For instructions on how to register a Peering Service, see [Create, change, or delete a Peering Service connection using the Azure portal](azure-portal.md).
> [!NOTE] > This article is intended for network architects in charge of enterprise connectivity to the cloud and to the internet.
Peering Service is:
- An IP service that uses the public internet. - A collaboration platform with service providers and a value-added service that's intended to offer optimal and reliable routing via service provider partners to the Microsoft cloud over the public network.
-Peering Service is not a private connectivity product like Azure ExpressRoute or a VPN product.
- > [!NOTE]
-> For more information about ExpressRoute, see [ExpressRoute documentation](../expressroute/expressroute-introduction.md).
+> Peering Service isn't a private connectivity product like Azure ExpressRoute or Azure VPN. For more information, see:
+> - [What is Azure ExpressRoute?](../expressroute/expressroute-introduction.md)
+> - [What is Azure VPN Gateway?](../vpn-gateway/vpn-gateway-about-vpngateways.md)
## Background
Microsoft 365, Dynamics 365, and any other Microsoft SaaS services are hosted in
Microsoft and partner service providers ensure that the traffic for the prefixes registered with a Peering Service connection enters and exits the nearest Microsoft Edge PoP locations on the Microsoft global network. Microsoft ensures that the networking traffic egressing from the prefixes registered with Peering Service connections takes the nearest Microsoft Edge PoP locations on the Microsoft global network.
-![Microsoft network and public connectivity](./media/peering-service-about/peering-service-background-final.png)
> [!NOTE] > For more information about the Microsoft global network, see [Microsoft global network](../networking/microsoft-global-network.md).
Peering Service uses two types of redundancy:
This type of redundancy uses the shortest routing path by always choosing the nearest Microsoft Edge PoP to the end user and ensures that the customer is one network hop (AS hops) away from Microsoft.
- ![Geo-redundancy](./media/peering-service-about/peering-service-geo-shortest.png)
+ :::image type="content" source="./media/peering-service-about/peering-service-geo-shortest.png" alt-text="Diagram showing geo-redundancy.":::
### Optimal routing
The following routing technique is preferred:
Routing that doesn't use the cold-potato technique is referred to as hot-potato routing. With hot-potato routing, traffic that originates from the Microsoft cloud then goes over the internet.
- ![Cold-potato routing](./media/peering-service-about/peering-service-cold-potato.png)
+ :::image type="content" source="./media/peering-service-about/peering-service-cold-potato.png" alt-text="Diagram showing cold-potato routing.":::
### Monitoring platform
The following routing technique is preferred:
Routing performance is measured by validating the round-trip time taken from the client to reach the Microsoft Edge PoP. Customers can view the latency reports for different geographic locations.
- Monitoring captures the events in case of any service degradation.
+ Monitoring captures the events if there's any service degradation.
- ![Monitoring platform for Peering Service](media/peering-service-about/peering-service-latency-report.png)
+ :::image type="content" source="./media/peering-service-about/peering-service-latency-report.png" alt-text="Diagram showing monitoring platform for Peering Service.":::
### Traffic protection
Any BGP route anomalies are reported in the Azure portal.
- To learn about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md). - To find a service provider partner, see [Peering Service partners and locations](location-partners.md). - To onboard a Peering Service connection, see [Onboarding Peering Service model](onboarding-model.md).-- To register a connection by using the Azure portal, see [Register a Peering Service connection by using the Azure portal](azure-portal.md).
+- To register Peering Service, see [Create, change, or delete a Peering Service connection using the Azure portal](azure-portal.md).
- To measure telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
Title: Authorize search app requests using Azure AD
+ Title: Configure search apps for Azure AD
-description: Acquire a token from Azure AD to authorize search requests to an app built on Azure Cognitive Search.
+description: Acquire a token from Azure Active Directory to authorize search requests to an app built on Azure Cognitive Search.
Previously updated : 1/05/2022 Last updated : 01/13/2023
It's a best practice to grant minimum permissions. If your application only need
1. Select **+ Add** > **Add role assignment**.
- ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot of Access control (IAM) page with Add role assignment menu open." border="true":::
1. Select an applicable role:
The following instructions reference an existing C# sample to demonstrate the co
SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, new DefaultAzureCredential()); ```
-> [!NOTE]
-> User-assigned managed identities work only in Azure environments. If you run this code locally, `DefaultAzureCredential` will fall back to authenticating with your credentials. Make sure you've also given yourself the required access to the search service if you plan to run the code locally.
+### Local testing
+
+User-assigned managed identities work only in Azure environments. If you run this code locally, `DefaultAzureCredential` will fall back to authenticating with your credentials. Make sure you've also given yourself the required access to the search service if you plan to run the code locally.
+
+1. Verify your account has role assignments to run all of the operations in the quickstart sample. To both create and query an index, you'll need "Search Index Data Reader" and "Search Index Data Contributor".
-The Azure.Identity documentation has more details about `DefaultAzureCredential` and using [Azure AD authentication with the Azure SDK for .NET](/dotnet/api/overview/azure/identity-readme). `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control or whose scenario isn't served by the default settings should use other credential types.
+1. Go to **Tools** > **Options** > **Azure Service Authentication** to choose your Azure sign-on account.
+
+You should now be able to run the project from Visual Studio on your local system, using role-based access control for authorization.
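If you prefer to script the role assignments from step 1, the following sketch uses the Az PowerShell module; the subscription, resource group, service, and account names are placeholders that aren't from this article, and the portal steps shown earlier work just as well:

```azurepowershell
# Sketch only: replace the placeholder scope and sign-in name with your own values.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>"

# Grant the two roles the quickstart sample needs to your own account.
New-AzRoleAssignment -SignInName "<you@yourdomain.com>" -RoleDefinitionName "Search Index Data Reader" -Scope $scope
New-AzRoleAssignment -SignInName "<you@yourdomain.com>" -RoleDefinitionName "Search Index Data Contributor" -Scope $scope
```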
+
+> [!NOTE]
+> The Azure.Identity documentation has more details about `DefaultAzureCredential` and using [Azure AD authentication with the Azure SDK for .NET](/dotnet/api/overview/azure/identity-readme). `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control or whose scenario isn't served by the default settings should use other credential types.
### [**REST API**](#tab/aad-rest)
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
Previously updated : 01/10/2023 Last updated : 01/14/2023 # Connect to Cognitive Search using key authentication
Best practices for using hard-coded keys in source files include:
### [**Portal**](#tab/portal-use)
-In Cognitive Search, most tasks can be performed in Azure portal, including object creation, indexing through the Import data wizard, and queries through Search explorer.
+Key authentication is built in so no action is required. By default, the portal uses API keys to authenticate the request automatically. However, if you [disable API keys](search-security-rbac.md#disable-api-key-authentication) and set up role assignments, the portal uses role assignments instead.
-Authentication is built in so no action is required. By default, the portal uses API keys to authenticate the request automatically. However, if you [disable API keys](search-security-rbac.md#disable-api-key-authentication) and set up role assignments, the portal uses role assignments instead.
+In Cognitive Search, most tasks can be performed in the Azure portal, including object creation, indexing through the Import data wizard, and queries through Search explorer.
### [**PowerShell**](#tab/azure-ps-use)
+Set API keys in the request header using the following syntax:
+
+```azurepowershell
+$headers = @{
+    'api-key' = '<YOUR-ADMIN-OR-QUERY-API-KEY>'
+    'Content-Type' = 'application/json'
+    'Accept' = 'application/json'
+}
+```
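As a quick illustration (a sketch only: the service name is a placeholder, a hotels index is assumed to exist, and the 2020-06-30 API version shown elsewhere in this article is used), the headers can then be passed to `Invoke-RestMethod` to run a query:

```azurepowershell
# Placeholder service name; the hotels index is assumed to exist.
$url = 'https://<your-search-service>.search.windows.net/indexes/hotels/docs/search?api-version=2020-06-30'
$body = @{ search = '*' } | ConvertTo-Json

# POST the query with the api-key header defined above.
Invoke-RestMethod -Uri $url -Headers $headers -Method Post -Body $body
```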
+ A script example showing API key usage for various operations can be found at [Quickstart: Create an Azure Cognitive Search index in PowerShell using REST APIs](search-get-started-powershell.md).

### [**REST API**](#tab/rest-use)
-+ Admin keys are only specified in HTTP request headers. You can't place an admin API key in a URL. See [Connect to Azure Cognitive Search using REST APIs](search-get-started-rest.md#connect-to-azure-cognitive-search) for an example that specifies an admin API key on a REST call.
+Set an admin key in the request header, using `api-key` as the header name and your key as the value. Admin keys are used for most operations, including create, delete, and update. Admin keys are also used on requests issued to the search service itself, such as listing objects or requesting service statistics. See [Connect to Azure Cognitive Search using REST APIs](search-get-started-rest.md#connect-to-azure-cognitive-search) for a more detailed example.
-+ Query keys are also specified in an HTTP request header for search, suggestion, or lookup operation that use POST.
- Alternatively, you can pass a query key as a parameter on a URL if you're using GET: `GET /indexes/hotels/docs?search=*&$orderby=lastRenovationDate desc&api-version=2020-06-30&api-key=[query key]`
+Query keys are used for search, suggestion, or lookup operations that target the `index/docs` collection. For POST, set `api-key` in the request header. Or, put the key on the URI for a GET: `GET /indexes/hotels/docs?search=*&$orderby=lastRenovationDate desc&api-version=2020-06-30&api-key=[query key]`
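For illustration only, here's a sketch of the same GET request issued from PowerShell, with the query key passed on the URI instead of in a header (the service name and query key are placeholders, and a hotels index is assumed):

```azurepowershell
# Single quotes keep $orderby from being treated as a PowerShell variable;
# the space in the sort expression is URL-encoded as %20.
$uri = 'https://<your-search-service>.search.windows.net/indexes/hotels/docs?search=*&$orderby=lastRenovationDate%20desc&api-version=2020-06-30&api-key=<query-key>'
Invoke-RestMethod -Uri $uri -Method Get
```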
### [**C#**](#tab/dotnet-use)
You can view and manage API keys in the [Azure portal](https://portal.azure.com)
### [**PowerShell**](#tab/azure-ps-find)
-1. Install the Az.Search module:
+1. Install the `Az.Search` module:
```azurepowershell Install-Module Az.Search
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Title: Use Azure role-based access control
+ Title: Connect using Azure roles
description: Use Azure role-based access control for granular permissions on service administration and content tasks.
Last updated 01/12/2023
-# Use Azure role-based access controls (Azure RBAC) in Azure Cognitive Search
+# Connect to Azure Cognitive Search using Azure role-based access control (Azure RBAC)
Azure provides a global [role-based access control authorization system](../role-based-access-control/role-assignments-portal.md) for all services running on the platform. In Cognitive Search, you can:
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
Previously updated : 12/30/2021 Last updated : 01/13/2023 # Cloud feature availability for commercial and US Government customers
The following table displays the current Microsoft Defender for IoT feature avai
## Azure Attestation
-Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service receives evidence from the platform, validates it with security standards, evaluates it against configurable policies, and produces an attestation token for claims-based applications (e.g., relying parties, auditing authorities).
+Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service receives evidence from the platform, validates it with security standards, evaluates it against configurable policies, and produces an attestation token for claims-based applications (e.g., relying parties, auditing authorities).
Azure Attestation is currently available in multiple regions across Azure public and Government clouds. In Azure Government, the service is available in preview status across US Gov Virginia and US Gov Arizona.
-For more information, see Azure Attestation [public documentation](../../attestation/overview.md).
+For more information, see Azure Attestation [public documentation](../../attestation/overview.md).
| Feature | Azure | Azure Government | |--|--|--|
security Physical Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/physical-security.md
description: The article describes what Microsoft does to secure the Azure datac
documentationcenter: na -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na Previously updated : 07/10/2020 Last updated : 01/13/2023
This article describes what Microsoft does to secure the Azure infrastructure.
## Datacenter infrastructure Azure is composed of a [globally distributed datacenter infrastructure](https://azure.microsoft.com/global-infrastructure/), supporting thousands of online services and spanning more than 100 highly secure facilities worldwide.
-The infrastructure is designed to bring applications closer to users around the world, preserving data residency, and offering comprehensive compliance and resiliency options for customers. Azure has 58 regions worldwide, and is available in 140 countries/regions.
+The infrastructure is designed to bring applications closer to users around the world, preserving data residency, and offering comprehensive compliance and resiliency options for customers. Azure has over 60 regions worldwide, and is available in 140 countries/regions.
A region is a set of datacenters that is interconnected via a massive and resilient network. The network includes content distribution, load balancing, redundancy, and [data-link layer encryption by default](encryption-overview.md#encryption-of-data-in-transit) for all Azure traffic within a region or travelling between regions. With more global regions than any other cloud provider, Azure gives you the flexibility to deploy applications where you need them.
Microsoft takes a layered approach to physical security, to reduce the risk of u
- **Access request and approval.** You must request access prior to arriving at the datacenter. You're required to provide a valid business justification for your visit, such as compliance or auditing purposes. All requests are approved on a need-to-access basis by Microsoft employees. A need-to-access basis helps keep the number of individuals needed to complete a task in the datacenters to the bare minimum. After Microsoft grants permission, an individual only has access to the discrete area of the datacenter required, based on the approved business justification. Permissions are limited to a certain period of time, and then expire. -- **Facility's perimeter.** When you arrive at a datacenter, you're required to go through a well-defined access point. Typically, tall fences made of steel and concrete encompass every inch of the perimeter. There are cameras around the datacenters, with a security team monitoring their videos at all times.
+- **Visitor access.** Temporary access badges are stored within the access-controlled SOC and inventoried at the beginning and end of each shift. All visitors who have approved access to the datacenter are designated as *Escort Only* on their badges and are required to remain with their escorts at all times. Escorted visitors do not have any access levels granted to them and can only move through the facility by using their escort's access. The escort is responsible for reviewing the actions and access of their visitor during their visit to the datacenter. Microsoft requires visitors to surrender badges upon departure from any Microsoft facility. All visitor badges have their access levels removed before they are reused for future visits.
+
+- **Facility's perimeter.** When you arrive at a datacenter, you're required to go through a well-defined access point. Typically, tall fences made of steel and concrete encompass every inch of the perimeter. There are cameras around the datacenters, with a security team monitoring their videos at all times. Security guard patrols ensure entry and exit are restricted to designated areas. Bollards and other measures protect the datacenter exterior from potential threats, including unauthorized access.
- **Building entrance.** The datacenter entrance is staffed with professional security officers who have undergone rigorous training and background checks. These security officers also routinely patrol the datacenter, and monitor the videos of cameras inside the datacenter at all times.
Microsoft takes a layered approach to physical security, to reduce the risk of u
- **Datacenter floor.** You are only allowed onto the floor that you're approved to enter. You are required to pass a full body metal detection screening. To reduce the risk of unauthorized data entering or leaving the datacenter without our knowledge, only approved devices can make their way into the datacenter floor. Additionally, video cameras monitor the front and back of every server rack. When you exit the datacenter floor, you again must pass through full body metal detection screening. To leave the datacenter, you're required to pass through an additional security scan.
-Microsoft requires visitors to surrender badges upon departure from any Microsoft facility.
+ ## Physical security reviews Periodically, we conduct physical security reviews of the facilities, to ensure the datacenters properly address Azure security requirements. The datacenter hosting provider personnel do not provide Azure service management. Personnel can't sign in to Azure systems and don't have physical access to the Azure collocation room and cages.
Upon a system's end-of-life, Microsoft operational personnel follow rigorous dat
## Compliance We design and manage the Azure infrastructure to meet a broad set of international and industry-specific compliance standards, such as ISO 27001, HIPAA, FedRAMP, SOC 1, and SOC 2. We also meet country- or region-specific standards, including Australia IRAP, UK G-Cloud, and Singapore MTCS. Rigorous third-party audits, such as those done by the British Standards Institute, verify adherence to the strict security controls these standards mandate.
-For a full list of compliance standards that Azure adheres to, see the [Compliance offerings](https://www.microsoft.com/trustcenter/compliance/complianceofferings).
+For a full list of compliance standards that Azure adheres to, see the [Compliance offerings](/azure/compliance/).
## Next steps To learn more about what Microsoft does to help secure the Azure infrastructure, see:
To learn more about what Microsoft does to help secure the Azure infrastructure,
- [Azure infrastructure monitoring](infrastructure-monitoring.md) - [Azure infrastructure integrity](infrastructure-integrity.md) - [Azure customer data protection](protection-customer-data.md)--
storage Storage Quickstart Blobs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-portal.md
Previously updated : 05/05/2022 Last updated : 01/13/2023
To upload a block blob to your new container in the Azure portal, follow these s
1. In the Azure portal, navigate to the container you created in the previous section. 1. Select the container to show a list of blobs it contains. This container is new, so it won't yet contain any blobs.
-1. Select the **Upload** button to open the upload blade and browse your local file system to find a file to upload as a block blob. You can optionally expand the **Advanced** section to configure other settings for the upload operation.
+1. Select the **Upload** button to open the upload blade and browse your local file system to find a file to upload as a block blob. You can optionally expand the **Advanced** section to configure other settings for the upload operation. For example, you can upload a blob into a new or existing virtual folder by supplying a value in the **Upload to folder** field.
:::image type="content" source="media/storage-quickstart-blobs-portal/upload-blob.png" alt-text="Screenshot showing how to upload a blob from your local drive via the Azure portal":::
virtual-machines Lsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv3-series.md
The Lsv3-series VMs are available in sizes from 8 to 80 vCPUs. There are 8 GiB o
1. **Temp disk**: Lsv3-series VMs have a standard SCSI-based temp resource disk for use by the OS paging or swap file (`D:` on Windows, `/dev/sdb` on Linux). This disk provides 80 GiB of storage, 4,000 IOPS, and 80 MBps transfer rate for every 8 vCPUs. For example, Standard_L80s_v3 provides 800 GiB at 40000 IOPS and 800 MBPS. This configuration ensures the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data is lost on stop or deallocation. 2. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM. 3. **NVMe Disk encryption** Lsv3 VMs created or allocated on or after 1/1/2023 have their local NVME drives encrypted by default using hardware-based encryption with a Platform-managed key, except for the regions listed below. + > [!NOTE] > Central US, East US 2, and Qatar Central do not support Local NVME disk encryption, but will be added in the future.
-4. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lsv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
-5. **Max burst uncached data disk throughput**: Lsv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
+
+5. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lsv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
+6. **Max burst uncached data disk throughput**: Lsv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
> [!NOTE] > Lsv3-series VMs don't provide host cache for data disk as it doesn't benefit the Lsv3 workloads.