Updates from: 08/23/2021 03:03:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reply-url.md
description: A description of the restrictions and limitations on redirect URI (
Previously updated : 06/23/2021 Last updated : 08/06/2021
A redirect URI, or reply URL, is the location where the authorization server sends the user once the app has been successfully authorized and granted an authorization code or access token. The authorization server sends the code or token to the redirect URI, so it's important you register the correct location as part of the app registration process.
- The following restrictions apply to redirect URIs:
+The Azure Active Directory (Azure AD) application model specifies these restrictions to redirect URIs:
-* The redirect URI must begin with the scheme `https`. There are some [exceptions for localhost](#localhost-exceptions) redirect URIs.
+* Redirect URIs must begin with the scheme `https`. There are some [exceptions for localhost](#localhost-exceptions) redirect URIs.
-* The redirect URI is case-sensitive. Its case must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, do not specify `.../ABC/response-oidc` in the redirect URI. Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` may be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL.
+* Redirect URIs are case-sensitive and must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, do not specify `.../ABC/response-oidc` in the redirect URI. Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` may be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL.
-* A Redirect Uri without a path segment appends a trailing slash to the URI in the response. For e.g. URIs such as https://contoso.com and http://localhost:7071 will return as https://contoso.com/ and http://localhost:7071/ respectively. This is applicable only when the response mode is either query or fragment.
+* Redirect URIs *not* configured with a path segment are returned with a trailing slash ('`/`') in the response. This applies only when the response mode is `query` or `fragment`.
-* Redirect Uris containing path segment do not append a trailing slash. (Eg. https://contoso.com/abc, https://contoso.com/abc/response-oidc will be used as it is in the response)
+ Examples:
+
+ * `https://contoso.com` is returned as `https://contoso.com/`
+ * `http://localhost:7071` is returned as `http://localhost:7071/`
+
+* Redirect URIs that contain a path segment are *not* appended with a trailing slash in the response.
+
+ Examples:
+
+ * `https://contoso.com/abc` is returned as `https://contoso.com/abc`
+ * `https://contoso.com/abc/response-oidc` is returned as `https://contoso.com/abc/response-oidc`
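The exact-match requirement shows up at sign-in time. The following is a minimal sketch (not part of this article) using the MSAL Python library; the client ID and redirect URI are placeholders, and the `redirect_uri` passed here must match a registered redirect URI exactly, including case and any path segment:

```python
# A hedged sketch using the MSAL Python library (pip install msal).
# The client ID and redirect URI below are placeholders, not real values.
import msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",  # placeholder app (client) ID
    authority="https://login.microsoftonline.com/common",
)

# The redirect_uri must exactly match a URI registered on the app,
# including case and any path segment such as /abc/response-oidc.
auth_url = app.get_authorization_request_url(
    scopes=["User.Read"],
    redirect_uri="http://localhost:7071/abc/response-oidc",
)
print(auth_url)
```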
## Maximum number of redirect URIs
You can use a maximum of 256 characters for each redirect URI you add to an app
## Supported schemes
-The Azure Active Directory (Azure AD) application model currently supports both HTTP and HTTPS schemes for apps that sign in work or school accounts in any organization's Azure AD tenant. These account types are specified by the `AzureADMyOrg` and `AzureADMultipleOrgs` values in the `signInAudience` field of the application manifest. For apps that sign in personal Microsoft accounts (MSA) *and* work and school accounts (that is, the `signInAudience` is set to `AzureADandPersonalMicrosoftAccount`), only the HTTPS scheme is allowed.
+**HTTPS**: The HTTPS scheme (`https://`) is supported for all HTTP-based redirect URIs.
+
+**HTTP**: The HTTP scheme (`http://`) is supported *only* for *localhost* URIs and should be used only during active local application development and testing.
-To add redirect URIs with an HTTP scheme to app registrations that sign in work or school accounts, use the application manifest editor in [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) in the Azure portal. However, though it's possible to set an HTTP-based redirect URI by using the manifest editor, we *strongly* recommend that you use the HTTPS scheme for your redirect URIs.
+| Example redirect URI | Validity |
+|--|-|
+| `https://contoso.com` | Valid |
+| `https://contoso.com/abc/response-oidc` | Valid |
+| `https://localhost` | Valid |
+| `http://contoso.com/abc/response-oidc` | Invalid |
+| `http://localhost` | Valid |
+| `http://localhost/abc` | Valid |
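As an illustration of the scheme rules in this table, the following sketch applies the same checks; it is illustrative only, not the identity platform's actual validation logic:

```python
from urllib.parse import urlparse

def is_allowed_scheme(redirect_uri: str) -> bool:
    """Mirror the table above: https is always allowed; http only for localhost.

    Illustrative only -- not the identity platform's validation code.
    """
    parsed = urlparse(redirect_uri)
    if parsed.scheme == "https":
        return True
    return parsed.scheme == "http" and parsed.hostname == "localhost"

print(is_allowed_scheme("https://contoso.com"))                   # True
print(is_allowed_scheme("http://contoso.com/abc/response-oidc"))  # False
print(is_allowed_scheme("http://localhost/abc"))                  # True
```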
-## Localhost exceptions
+### Localhost exceptions
Per [RFC 8252 sections 8.3](https://tools.ietf.org/html/rfc8252#section-8.3) and [7.3](https://tools.ietf.org/html/rfc8252#section-7.3), "loopback" or "localhost" redirect URIs come with two special considerations:
active-directory Security Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/security-planning.md
Azure AD Identity Protection is an algorithm-based monitoring and reporting tool
#### Obtain your Microsoft 365 Secure Score (if using Microsoft 365)
-Secure Score looks at your settings and activities for the Microsoft 365 services you're using and compares them to a baseline established by Microsoft. You'll get a score based on how aligned you are with security practices. Anyone who has the administrator permissions for a Microsoft 365 Business Standard or Enterprise subscription can access the Secure Score at [https://securescore.office.com](https://securescore.office.com/).
+Secure Score looks at your settings and activities for the Microsoft 365 services you're using and compares them to a baseline established by Microsoft. You'll get a score based on how aligned you are with security practices. Anyone who has the administrator permissions for a Microsoft 365 Business Standard or Enterprise subscription can access the Secure Score at `https://securescore.office.com`.
#### Review the Microsoft 365 security and compliance guidance (if using Microsoft 365)
For more information about how Microsoft Office 365 handles security incidents,
* [Microsoft Intune Security](https://www.microsoft.com/trustcenter/security/intune-security) – Intune provides mobile device management, mobile application management, and PC management capabilities from the cloud.
-* [Microsoft Dynamics 365 security](https://www.microsoft.com/trustcenter/security/dynamics365-security) – Dynamics 365 is the Microsoft cloud-based solution that unifies customer relationship management (CRM) and enterprise resource planning (ERP) capabilities.
+* [Microsoft Dynamics 365 security](https://www.microsoft.com/trustcenter/security/dynamics365-security) – Dynamics 365 is the Microsoft cloud-based solution that unifies customer relationship management (CRM) and enterprise resource planning (ERP) capabilities.
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-app-insights.md
Before you can use Application Insights, you first need to create an instance of
:::image type="content" source="media/api-management-howto-app-insights/apim-app-insights-logger-2.png" alt-text="Screenshot that shows where to view the newly created Application Insights logger with instrumentation key"::: > [!NOTE]
-> Behind the scenes, a [Logger](/rest/api/apimanagement/2019-12-01/logger/createorupdate) entity is created in your API Management instance, containing the instrumentation key of the Application Insights instance.
+> Behind the scenes, a [Logger](/rest/api/apimanagement/2020-12-01/logger/create-or-update) entity is created in your API Management instance, containing the instrumentation key of the Application Insights instance.
## Enable Application Insights logging for your API
Before you can use Application Insights, you first need to create an instance of
> Overriding the default value **0** in the **Number of payload bytes to log** setting may significantly decrease the performance of your APIs.

> [!NOTE]
-> Behind the scenes, a [Diagnostic](/rest/api/apimanagement/2019-12-01/diagnostic/createorupdate) entity named 'applicationinsights' is created at the API level.
+> Behind the scenes, a [Diagnostic](/rest/api/apimanagement/2020-12-01/diagnostic/create-or-update) entity named 'applicationinsights' is created at the API level.
| Setting name | Value type | Description |
|-|--|--|
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
App settings in a function app contain global configuration options that affect
[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]

There are other global configuration options in the [host.json](functions-host-json.md) file and in the [local.settings.json](functions-develop-local.md#local-settings-file) file.
+Example connection string values are truncated for readability.
> [!NOTE]
> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
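At runtime, app settings surface to your function code as environment variables. A minimal sketch, assuming the Python worker; the setting names are ones documented later in this article:

```python
import os

# App settings are exposed as environment variables inside the Functions host.
worker_runtime = os.environ.get("FUNCTIONS_WORKER_RUNTIME")        # for example, "python"
extension_version = os.environ.get("FUNCTIONS_EXTENSION_VERSION")  # for example, "~3"
print(worker_runtime, extension_version)
```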
The instrumentation key for Application Insights. Only use one of `APPINSIGHTS_I
|Key|Sample value|
|--|--|
-|APPINSIGHTS_INSTRUMENTATIONKEY|55555555-af77-484b-9032-64f83bb83bb|
+|APPINSIGHTS_INSTRUMENTATIONKEY|`55555555-af77-484b-9032-64f83bb83bb`|
## APPLICATIONINSIGHTS_CONNECTION_STRING
For more information, see [Connection strings](../azure-monitor/app/sdk-connecti
|Key|Sample value|
|--|--|
-|APPLICATIONINSIGHTS_CONNECTION_STRING|InstrumentationKey=[key];IngestionEndpoint=[url];LiveEndpoint=[url];ProfilerEndpoint=[url];SnapshotEndpoint=[url];|
+|APPLICATIONINSIGHTS_CONNECTION_STRING|`InstrumentationKey=...`|
## AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL
By default, [Functions proxies](functions-proxies.md) use a shortcut to send API
|Key|Value|Description|
|-|-|-|
-|AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL|true|Calls with a backend URL pointing to a function in the local function app won't be sent directly to the function. Instead, the requests are directed back to the HTTP frontend for the function app.|
-|AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL|false|Calls with a backend URL pointing to a function in the local function app are forwarded directly to the function. This is the default value. |
+|AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL|`true`|Calls with a backend URL pointing to a function in the local function app won't be sent directly to the function. Instead, the requests are directed back to the HTTP frontend for the function app.|
+|AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL|`false`|Calls with a backend URL pointing to a function in the local function app are forwarded directly to the function. This is the default value. |
## AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES
This setting controls whether the characters `%2F` are decoded as slashes in rou
|Key|Value|Description|
|-|-|-|
-|AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES|true|Route parameters with encoded slashes are decoded. |
-|AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES|false|All route parameters are passed along unchanged, which is the default behavior. |
+|AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES|`true`|Route parameters with encoded slashes are decoded. |
+|AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES|`false`|All route parameters are passed along unchanged, which is the default behavior. |
For example, consider the proxies.json file for a function app at the `myfunction.com` domain.
Optional storage account connection string for storing logs and displaying them
|Key|Sample value|
|--|--|
-|AzureWebJobsDashboard|DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>|
+|AzureWebJobsDashboard|`DefaultEndpointsProtocol=https;AccountName=...`|
> [!NOTE]
> For better performance and experience, runtime version 2.x and later versions use APPINSIGHTS_INSTRUMENTATIONKEY and App Insights for monitoring instead of `AzureWebJobsDashboard`.
Optional storage account connection string for storing logs and displaying them
|Key|Sample value|
|--|--|
-|AzureWebJobsDisableHomepage|true|
+|AzureWebJobsDisableHomepage|`true`|
When this app setting is omitted or set to `false`, a page similar to the following example is displayed in response to the URL `<functionappname>.azurewebsites.net`.
When this app setting is omitted or set to `false`, a page similar to the follow
|Key|Sample value|
|--|--|
-|AzureWebJobsDotNetReleaseCompilation|true|
+|AzureWebJobsDotNetReleaseCompilation|`true`|
## AzureWebJobsFeatureFlags
A comma-delimited list of beta features to enable. Beta features enabled by thes
|Key|Sample value|
|--|--|
-|AzureWebJobsFeatureFlags|feature1,feature2|
+|AzureWebJobsFeatureFlags|`feature1,feature2`|
## AzureWebJobsSecretStorageType
The Azure Functions runtime uses this storage account connection string for norm
|Key|Sample value|
|--|--|
-|AzureWebJobsStorage|DefaultEndpointsProtocol=https;AccountName=[name];AccountKey=[key]|
+|AzureWebJobsStorage|`DefaultEndpointsProtocol=https;AccountName=...`|
## AzureWebJobs_TypeScriptPath
Path to the compiler used for TypeScript. Allows you to override the default if
|Key|Sample value|
|--|--|
-|AzureWebJobs_TypeScriptPath|%HOME%\typescript|
+|AzureWebJobs_TypeScriptPath|`%HOME%\typescript`|
## FUNCTION\_APP\_EDIT\_MODE
Dictates whether editing in the Azure portal is enabled. Valid values are "readw
|Key|Sample value|
|--|--|
-|FUNCTION\_APP\_EDIT\_MODE|readonly|
+|FUNCTION\_APP\_EDIT\_MODE|`readonly`|
## FUNCTIONS\_EXTENSION\_VERSION
The version of the Functions runtime that hosts your function app. A tilde (`~`)
|Key|Sample value|
|--|--|
-|FUNCTIONS\_EXTENSION\_VERSION|~3|
+|FUNCTIONS\_EXTENSION\_VERSION|`~3`|
## FUNCTIONS\_V2\_COMPATIBILITY\_MODE
Requires that [FUNCTIONS\_EXTENSION\_VERSION](functions-app-settings.md#function
|Key|Sample value|
|--|--|
-|FUNCTIONS\_V2\_COMPATIBILITY\_MODE|true|
+|FUNCTIONS\_V2\_COMPATIBILITY\_MODE|`true`|
## FUNCTIONS\_WORKER\_PROCESS\_COUNT
Specifies the maximum number of language worker processes, with a default value
|Key|Sample value|
|--|--|
-|FUNCTIONS\_WORKER\_PROCESS\_COUNT|2|
+|FUNCTIONS\_WORKER\_PROCESS\_COUNT|`2`|
## FUNCTIONS\_WORKER\_RUNTIME
The language worker runtime to load in the function app. This corresponds to th
|Key|Sample value|
|--|--|
-|FUNCTIONS\_WORKER\_RUNTIME|node|
+|FUNCTIONS\_WORKER\_RUNTIME|`node`|
Valid values:
Each PowerShell worker process initiates checking for module upgrades on the Pow
|Key|Sample value|
|--|--|
-|MDMaxBackgroundUpgradePeriod|7.00:00:00|
+|MDMaxBackgroundUpgradePeriod|`7.00:00:00`|
To learn more, see [Dependency management](functions-reference-powershell.md#dependency-management).
Within every `MDNewSnapshotCheckPeriod`, the PowerShell worker checks whether or
|Key|Sample value|
|--|--|
-|MDNewSnapshotCheckPeriod|01:00:00|
+|MDNewSnapshotCheckPeriod|`01:00:00`|
To learn more, see [Dependency management](functions-reference-powershell.md#dependency-management).
To avoid excessive module upgrades on frequent Worker restarts, checking for mod
|Key|Sample value|
|--|--|
-|MDMinBackgroundUpgradePeriod|1.00:00:00|
+|MDMinBackgroundUpgradePeriod|`1.00:00:00`|
To learn more, see [Dependency management](functions-reference-powershell.md#dependency-management).
The value for this setting indicates a custom package index URL for Python apps.
|Key|Sample value|
|--|--|
-|PIP\_EXTRA\_INDEX\_URL|http://my.custom.package.repo/simple |
+|PIP\_EXTRA\_INDEX\_URL|`http://my.custom.package.repo/simple` |
To learn more, see [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
The configuration is specific to Python function apps. It defines the prioritiza
|Key|Value|Description|
|--|--|--|
-|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|0| Prioritize loading the Python libraries from internal Python worker's dependencies. Third-party libraries defined in requirements.txt may be shadowed. |
-|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|1| Prioritize loading the Python libraries from application's package defined in requirements.txt. This prevents your libraries from colliding with internal Python worker's libraries. |
+|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`0`| Prioritize loading the Python libraries from internal Python worker's dependencies. Third-party libraries defined in requirements.txt may be shadowed. |
+|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`1`| Prioritize loading the Python libraries from application's package defined in requirements.txt. This prevents your libraries from colliding with internal Python worker's libraries. |
## PYTHON\_ENABLE\_WORKER\_EXTENSIONS
The configuration is specific to Python function apps. Setting this to `1` allow
|Key|Value|Description|
|--|--|--|
-|PYTHON\_ENABLE\_WORKER\_EXTENSIONS|0| Disable any Python worker extension. |
-|PYTHON\_ENABLE\_WORKER\_EXTENSIONS|1| Allow Python worker to load extensions from requirements.txt. |
+|PYTHON\_ENABLE\_WORKER\_EXTENSIONS|`0`| Disable any Python worker extension. |
+|PYTHON\_ENABLE\_WORKER\_EXTENSIONS|`1`| Allow Python worker to load extensions from requirements.txt. |
## PYTHON\_THREADPOOL\_THREAD\_COUNT
This setting controls logging from the Azure Functions scale controller. For mor
|Key|Sample value|
|-|-|
-|SCALE_CONTROLLER_LOGGING_ENABLED|AppInsights:Verbose|
+|SCALE_CONTROLLER_LOGGING_ENABLED|`AppInsights:Verbose`|
The value for this key is supplied in the format `<DESTINATION>:<VERBOSITY>`, which is defined as follows:
Controls the timeout, in seconds, when connected to streaming logs. The default
|Key|Sample value|
|-|-|
-|SCM_LOGSTREAM_TIMEOUT|1800|
+|SCM_LOGSTREAM_TIMEOUT|`1800`|
The above sample value of `1800` sets a timeout of 30 minutes. To learn more, see [Enable streaming logs](functions-run-local.md#enable-streaming-logs).
Connection string for storage account where the function app code and configurat
|Key|Sample value|
|--|--|
-|WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|DefaultEndpointsProtocol=https;AccountName=[name];AccountKey=[key]|
+|WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
Only used when deploying to a Premium plan or to a Consumption plan running on Windows. Not supported for Consumption plans running Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
A value of `1` enables your function app to scale when you have your storage acc
|Key|Sample value|
|--|--|
-|WEBSITE_CONTENTOVERVNET|1|
+|WEBSITE_CONTENTOVERVNET|`1`|
-Supported on [Premium](functions-premium-plan.md) and [Dedicated (App Service) plans](dedicated-plan.md) (Standard and higher) running Windows. Not currently supported for Consumption and Premium plans running Linux.
+Supported on [Premium](functions-premium-plan.md) and [Dedicated (App Service) plans](dedicated-plan.md) (Standard and higher). Not supported when running on a [Consumption plan](consumption-plan.md).
## WEBSITE\_CONTENTSHARE
The file path to the function app code and configuration in an event-driven scal
|Key|Sample value|
|--|--|
-|WEBSITE_CONTENTSHARE|functionapp091999e2|
+|WEBSITE_CONTENTSHARE|`functionapp091999e2`|
Only used when deploying to a Premium plan or to a Consumption plan running on Windows. Not supported for Consumption plans running Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
Sets the DNS server used by an app when resolving IP addresses. This setting is
|Key|Sample value|
|--|--|
-|WEBSITE\_DNS\_SERVER|168.63.129.16|
+|WEBSITE\_DNS\_SERVER|`168.63.129.16`|
## WEBSITE\_ENABLE\_BROTLI\_ENCODING
The maximum number of instances that the app can scale out to. Default is no lim
|Key|Sample value|
|--|--|
-|WEBSITE\_MAX\_DYNAMIC\_APPLICATION\_SCALE\_OUT|5|
+|WEBSITE\_MAX\_DYNAMIC\_APPLICATION\_SCALE\_OUT|`5`|
## WEBSITE\_NODE\_DEFAULT_VERSION
Sets the version of Node.js to use when running your function app on Windows. Yo
|Key|Sample value|
|--|--|
-|WEBSITE\_NODE\_DEFAULT_VERSION|~10|
+|WEBSITE\_NODE\_DEFAULT_VERSION|`~10`|
## WEBSITE\_RUN\_FROM\_PACKAGE
Enables your function app to run from a mounted package file.
|Key|Sample value|
|--|--|
-|WEBSITE\_RUN\_FROM\_PACKAGE|1|
+|WEBSITE\_RUN\_FROM\_PACKAGE|`1`|
Valid values are either a URL that resolves to the location of a deployment package file, or `1`. When set to `1`, the package must be in the `d:\home\data\SitePackages` folder. When using zip deployment with this setting, the package is automatically uploaded to this location. In preview, this setting was named `WEBSITE_RUN_FROM_ZIP`. For more information, see [Run your functions from a package file](run-functions-from-deployment-package.md).
Allows you to set the timezone for your function app.
|Key|OS|Sample value|
|--|--|--|
-|WEBSITE\_TIME\_ZONE|Windows|Eastern Standard Time|
-|WEBSITE\_TIME\_ZONE|Linux|America/New_York|
+|WEBSITE\_TIME\_ZONE|Windows|`Eastern Standard Time`|
+|WEBSITE\_TIME\_ZONE|Linux|`America/New_York`|
[!INCLUDE [functions-timezone](../../includes/functions-timezone.md)]
Indicates whether all outbound traffic from the app is routed through the virtua
|Key|Sample value|
|--|--|
-|WEBSITE\_VNET\_ROUTE\_ALL|1|
+|WEBSITE\_VNET\_ROUTE\_ALL|`1`|
## Next steps
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-premium-plan.md
# Azure Functions Premium plan
-The Azure Functions Premium plan (sometimes referred to as Elastic Premium plan) is a hosting option for function apps. For other hosting plan options, see the [hosting plan article](functions-scale.md).
+The Azure Functions Elastic Premium plan is a dynamic scale hosting option for function apps. For other hosting plan options, see the [hosting plan article](functions-scale.md).
+
+> [!IMPORTANT]
+> Azure Functions runs on the Azure App Service platform. In the App Service platform, plans that host Premium plan function apps are referred to as *Elastic* Premium plans, with SKU names like `EP1`. If you choose to run your function app on a Premium plan, make sure to create a plan with an SKU name that starts with "E", such as `EP1`. App Service plan SKU names that start with "P", such as `P1V2` (Premium V2 Small plan), are actually [Dedicated hosting plans](dedicated-plan.md). Because they are Dedicated and not Elastic Premium, plans with SKU names starting with "P" won't scale dynamically and may increase your costs.
Premium plan hosting provides the following benefits to your functions:
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/azure-secure-isolation-guidance.md
Previously updated : 07/22/2021 Last updated : 08/20/2021

# Azure guidance for secure isolation
-Microsoft Azure is a hyperscale public multi-tenant cloud services platform that provides customers with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help customers increase efficiency and unlock insights into their operations and performance.
+Microsoft Azure is a hyperscale public multi-tenant cloud services platform that provides you with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help you increase efficiency and unlock insights into your operations and performance.
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles: (1) user access controls with authentication and identity separation, (2) compute isolation for processing, (3) networking isolation including data encryption in transit, (4) storage isolation with data encryption at rest, and (5) security assurance processes embedded in service design to correctly develop logically isolated services.
-This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.
-
-## Executive summary
-Microsoft Azure is a hyperscale public multi-tenant cloud services platform that provides customers with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help customers increase efficiency and unlock insights into their operations and performance.
-
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
-
-Multi-tenancy in the public cloud improves efficiency by multiplexing resources among disparate customers at low costs; however, this approach introduces the perceived risk associated with resource sharing. Azure addresses this risk by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a multi-layered approach depicted in Figure 1.
+Multi-tenancy in the public cloud improves efficiency by multiplexing resources among disparate customers at low costs; however, this approach introduces the perceived risk associated with resource sharing. Azure addresses this risk by providing a trustworthy foundation for isolated cloud services using a multi-layered approach depicted in Figure 1.
:::image type="content" source="./media/secure-isolation-fig1.png" alt-text="Azure isolation approaches" border="false"::: **Figure 1.** Azure isolation approaches A brief summary of isolation approaches is provided below. -- **User access controls with authentication and identity separation** ΓÇô All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure Active Directory (Azure AD) that customer organization receives and owns when they sign up for a Microsoft cloud service. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users.-- **Compute isolation** ΓÇô Azure provides customers with both logical and physical compute isolation for processing. Logical isolation is implemented via:
+- **User access controls with authentication and identity separation** ΓÇô All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure Active Directory (Azure AD) that your organization receives and owns when you sign up for a Microsoft cloud service. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users.
+- **Compute isolation** ΓÇô Azure provides you with both logical and physical compute isolation for processing. Logical isolation is implemented via:
- *Hypervisor isolation* for services that provide cryptographically certain isolation by using separate virtual machines and using Azure Hypervisor isolation. - *Drawbridge isolation* inside a virtual machine (VM) for services that provide cryptographically certain isolation for workloads running on the same virtual machine by using isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code. - *User context-based isolation* for services that are composed solely of Microsoft-controlled code and customer code is not allowed to run. </br>
-In addition to robust logical compute isolation available by design to all Azure tenants, customers who desire physical compute isolation can utilize Azure Dedicated Host or Isolated Virtual Machines, which are deployed on server hardware dedicated to a single customer.
-- **Networking isolation** – Azure Virtual Network (VNet) helps ensure that each customer's private network traffic is logically isolated from traffic belonging to other customers. Services can communicate using public IPs or private (VNet) IPs. Communication between customer VMs remains private within a VNet. Customers can connect their VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on their connectivity options, including bandwidth, latency, and encryption requirements. Customers can use [network security groups](../virtual-network/network-security-groups-overview.md) (NSGs) to achieve network isolation and protect their Azure resources from the Internet while accessing Azure services that have public endpoints. Customers can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules. Moreover, customers can use [Azure Private Link](../private-link/private-link-overview.md) to access Azure PaaS services over a private endpoint in their VNet, ensuring that traffic between their VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet. Finally, Azure provides customers with options to encrypt data in transit, including [Transport Layer Security (TLS) end-to-end encryption](../application-gateway/ssl-overview.md) of network traffic with [TLS termination using Azure Key Vault certificates](../application-gateway/key-vault-certs.md), [VPN encryption](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) using IPsec, and ExpressRoute encryption using [MACsec with customer-managed keys (CMK) support](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq).
-- **Storage isolation** – To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure AD to ensure secure key access and centralized key management. Azure Storage service encryption ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is [encrypted through FIPS 140 validated 256-bit AES encryption](../storage/common/storage-service-encryption.md#about-azure-storage-encryption) and customers have the option to use Azure Key Vault for customer-managed keys (CMK). Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, Azure Disk encryption may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of customer data stored in Azure. This encryption includes managed disks.
+In addition to robust logical compute isolation available by design to all Azure tenants, if you desire physical compute isolation, you can use Azure Dedicated Host or Isolated Virtual Machines, which are deployed on server hardware dedicated to a single customer.
+- **Networking isolation** – Azure Virtual Network (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Services can communicate using public IPs or private (VNet) IPs. Communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements. You can use [network security groups](../virtual-network/network-security-groups-overview.md) (NSGs) to achieve network isolation and protect your Azure resources from the Internet while accessing Azure services that have public endpoints. You can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules. Moreover, you can use [Private Link](../private-link/private-link-overview.md) to access Azure PaaS services over a private endpoint in your VNet, ensuring that traffic between your VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet. Finally, Azure provides you with options to encrypt data in transit, including [Transport Layer Security (TLS) end-to-end encryption](../application-gateway/ssl-overview.md) of network traffic with [TLS termination using Key Vault certificates](../application-gateway/key-vault-certs.md), [VPN encryption](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) using IPsec, and Azure ExpressRoute encryption using [MACsec with customer-managed keys (CMK) support](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq).
+- **Storage isolation** – To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure AD to ensure secure key access and centralized key management. Azure Storage service encryption ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is [encrypted through FIPS 140 validated 256-bit AES encryption](../storage/common/storage-service-encryption.md#about-azure-storage-encryption) and you can use Key Vault for customer-managed keys (CMK). Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, Azure Disk encryption may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes managed disks.
- **Security assurance processes and practices** – Azure isolation assurance is further enforced by Microsoft's internal use of the [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provides high confidence in the Azure isolation guarantee.
-In line with the [shared responsibility](../security/fundamentals/shared-responsibility.md) model in cloud computing, as customer workloads get migrated from an on-premises datacenter to the cloud, the delineation of responsibility between the customer and cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, Microsoft's responsibility ends at the Hypervisor layer, and customers are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. Customers can deploy Azure isolation technologies to achieve the desired level of isolation for their applications and data deployed in the cloud.
+In line with the [shared responsibility](../security/fundamentals/shared-responsibility.md) model in cloud computing, as you migrate workloads from your on-premises datacenter to the cloud, the delineation of responsibility between you and the cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, Microsoft's responsibility ends at the Hypervisor layer, and you are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. You can use Azure isolation technologies to achieve the desired level of isolation for your applications and data deployed in the cloud.
-Throughout this article, call-out boxes outline important considerations or actions considered to be part of customer's responsibility. For example, customers can use Azure Key Vault to store their secrets, including encryption keys that remain under customer control.
+Throughout this article, call-out boxes outline important considerations or actions considered to be part of your responsibility. For example, you can use Azure Key Vault to store your secrets, including encryption keys that remain under your control.
> [!NOTE]
-> Use of Azure Key Vault for Customer Managed Keys (CMK) is optional and represents customer's responsibility.
+> Use of Azure Key Vault for Customer Managed Keys (CMK) is optional and represents your responsibility.
>
> *Additional resources:*
> - How to **[get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)**
-This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.
+This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help you achieve your secure isolation objectives.
> [!TIP]
-> For recommendations on how to improve the security of applications and data deployed in Azure, customers should review the **[Azure Security Benchmark](../security/benchmarks/index.yml)**.
+> For recommendations on how to improve the security of applications and data deployed in Azure, you should review the **[Azure Security Benchmark](../security/benchmarks/index.yml)**.
## Identity-based isolation
-[Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is an identity repository and cloud service that provides authentication, authorization, and access control for an organization's users, groups, and objects. Azure AD can be used as a standalone cloud directory or as an integrated solution with existing on-premises Active Directory to enable key enterprise features such as directory synchronization and single sign-on.
+[Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is an identity repository and cloud service that provides authentication, authorization, and access control for your users, groups, and objects. Azure AD can be used as a standalone cloud directory or as an integrated solution with existing on-premises Active Directory to enable key enterprise features such as directory synchronization and single sign-on.
-Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscriptions/) is associated with an Azure AD tenant. Using [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md), users, groups, and applications from that directory can be granted access to resources in the Azure subscription. For example, a storage account can be placed in a resource group to control access to that specific storage account using Azure AD. Azure Storage defines a set of Azure built-in roles that encompass common permissions used to access blob or queue data. A request to Azure Storage can be authorized using either customer's Azure AD account or the Storage Account Key. In this manner, only specific users can be given the ability to access data in Azure Storage.
+Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscriptions/) is associated with an Azure AD tenant. Using [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md), users, groups, and applications from that directory can be granted access to resources in the Azure subscription. For example, a storage account can be placed in a resource group to control access to that specific storage account using Azure AD. Azure Storage defines a set of Azure built-in roles that encompass common permissions used to access blob or queue data. A request to Azure Storage can be authorized using either your Azure AD account or the Storage Account Key. In this manner, only specific users can be given the ability to access data in Azure Storage.
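As a hedged illustration of Azure AD-based authorization to Azure Storage (a sketch, not a sample from this article), the following uses the `azure-identity` and `azure-storage-blob` packages; the account URL is a placeholder, and the call succeeds only if the signed-in identity holds an appropriate data role:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# DefaultAzureCredential resolves an Azure AD identity (developer sign-in,
# managed identity, environment variables, and so on).
credential = DefaultAzureCredential()

client = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",  # placeholder
    credential=credential,
)

# Succeeds only if the identity was granted an Azure RBAC data role,
# for example Storage Blob Data Reader, on the account or container.
for container in client.list_containers():
    print(container.name)
```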
### Zero Trust architecture
-All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure AD that customer organization receives and owns when they sign up for a Microsoft cloud service. Authentication to the Azure portal is performed through Azure AD using an identity created either in Azure AD or federated with an on-premises Active Directory. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users. This access restriction is an overarching goal of the [Zero Trust model](https://aka.ms/Zero-Trust), which assumes that the network is compromised and requires a fundamental shift from the perimeter security model. When evaluating access requests, all requesting users, devices, and applications should be considered untrusted until their integrity can be validated in line with the Zero Trust [design principles](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/). Azure AD provides the strong, adaptive, standards-based identity verification required in a Zero Trust framework.
+All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure AD that your organization receives and owns when you sign up for a Microsoft cloud service. Authentication to the Azure portal is performed through Azure AD using an identity created either in Azure AD or federated with an on-premises Active Directory. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users. This access restriction is an overarching goal of the [Zero Trust model](https://aka.ms/Zero-Trust), which assumes that the network is compromised and requires a fundamental shift from the perimeter security model. When evaluating access requests, all requesting users, devices, and applications should be considered untrusted until their integrity can be validated in line with the Zero Trust [design principles](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/). Azure AD provides the strong, adaptive, standards-based identity verification required in a Zero Trust framework.
> [!NOTE]
> Additional resources:
All data in Azure irrespective of the type or storage location is associated wit
> - For definitions and general deployment models, see **[NIST SP 800-207](https://csrc.nist.gov/publications/detail/sp/800-207/final)** *Zero Trust Architecture*.

### Azure Active Directory
-The separation of the accounts used to administer cloud applications is critical to achieving logical isolation. Account isolation in Azure is achieved using [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) and its capabilities to support granular [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC). Each Azure account is associated with one Azure AD tenant. Users, groups, and applications from that directory can manage resources in Azure. Customers can assign appropriate access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. Each Azure AD tenant is distinct and separate from other Azure ADs. An Azure AD instance is logically isolated using security boundaries to prevent customer data and identity information from co-mingling, thereby ensuring that users and administrators of one Azure AD cannot access or compromise data in another Azure AD instance, either maliciously or accidentally. Azure AD runs physically isolated on dedicated servers that are logically isolated to a dedicated network segment and where host-level packet filtering and Windows Firewall services provide extra protections from untrusted traffic.
+The separation of the accounts used to administer cloud applications is critical to achieving logical isolation. Account isolation in Azure is achieved using [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) and its capabilities to support granular [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC). Each Azure account is associated with one Azure AD tenant. Users, groups, and applications from that directory can manage resources in Azure. You can assign appropriate access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. Each Azure AD tenant is distinct and separate from other Azure ADs. An Azure AD instance is logically isolated using security boundaries to prevent customer data and identity information from comingling, thereby ensuring that users and administrators of one Azure AD cannot access or compromise data in another Azure AD instance, either maliciously or accidentally. Azure AD runs physically isolated on dedicated servers that are logically isolated to a dedicated network segment and where host-level packet filtering and Windows Firewall services provide extra protections from untrusted traffic.
Azure AD implements extensive **data protection features**, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available from a whitepaper [Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
As shown in Figure 2, access via Azure AD requires user authentication through a
- Azure AD instances are discrete containers and there is no relationship between them.
- Azure AD data is stored in partitions and each partition has a pre-determined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Azure AD services to support identity separation and logical isolation.
- Access is not permitted across Azure AD instances unless the Azure AD instance administrator grants it through federation or provisioning of user accounts from other Azure AD instances.
-- Physical access to servers that comprise the Azure AD service and direct access to Azure AD's back-end systems is restricted to properly authorized Microsoft operational roles using Just-In-Time (JIT) privileged access management system.
+- Physical access to servers that comprise the Azure AD service and direct access to Azure AD's back-end systems is [restricted to properly authorized Microsoft operational roles](./documentation-government-plan-security.md#restrictions-on-insider-access) using Just-In-Time (JIT) privileged access management system.
- Azure AD users have no access to physical assets or locations, and therefore it is not possible for them to bypass the logical Azure RBAC policy checks.

:::image type="content" source="./media/secure-isolation-fig2.png" alt-text="Azure Active Directory logical tenant isolation":::
As shown in Figure 2, access via Azure AD requires user authentication through a
In summary, Azure's approach to logical tenant isolation uses identity, managed through Azure Active Directory, as the first logical control boundary for providing tenant-level access to resources and authorization through Azure RBAC.

## Data encryption key management
-Azure has extensive support to safeguard customer data using [data encryption](../security/fundamentals/encryption-overview.md), including various encryption models:
+Azure has extensive support to safeguard your data using [data encryption](../security/fundamentals/encryption-overview.md), including various encryption models:
- Server-side encryption that uses service-managed keys, customer-managed keys in Azure, or customer-managed keys on customer-controlled hardware.
-- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location.
+- Client-side encryption that enables you to manage and store keys on premises or in another secure location.
Data encryption provides isolation assurances that are tied directly to encryption (cryptographic) key access. Since Azure uses strong ciphers for data encryption, only entities with access to cryptographic keys can have access to data. Deleting or revoking cryptographic keys renders the corresponding data inaccessible. More information about **data encryption in transit** is provided in the *[Networking isolation](#networking-isolation)* section, whereas **data encryption at rest** is covered in the *[Storage isolation](#storage-isolation)* section.

### Azure Key Vault
Proper protection and management of cryptographic keys is essential for data security. **[Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets.** The Key Vault service supports two resource types that are described in the rest of this section:

- **Vault** supports software-protected and hardware security module (HSM)-protected secrets, keys, and certificates.
- **Managed HSM** supports only HSM-protected cryptographic keys.
-**Customers who require extra security for their most sensitive customer data stored in Azure services can encrypt it using their own encryption keys they control in Azure Key Vault.**
+**If you require extra security for your most sensitive customer data stored in Azure services, you can encrypt it using your own encryption keys you control in Key Vault.**
-The Azure Key Vault service provides an abstraction over the underlying HSMs. It provides a REST API to enable service use from cloud applications and authentication through [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) to allow an organization to centralize and customize authentication, disaster recovery, high availability, and elasticity. Azure Key Vault supports [cryptographic keys](../key-vault/keys/about-keys.md) of various types, sizes, and curves, including RSA and Elliptic Curve keys. With managed HSMs, support is also available for AES symmetric keys.
+The Key Vault service provides an abstraction over the underlying HSMs. It provides a REST API to enable service use from cloud applications and authentication through [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) to allow you to centralize and customize authentication, disaster recovery, high availability, and elasticity. Key Vault supports [cryptographic keys](../key-vault/keys/about-keys.md) of various types, sizes, and curves, including RSA and Elliptic Curve keys. With managed HSMs, support is also available for AES symmetric keys.
-With Azure Key Vault, customers can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios, as shown in Figure 3. **Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM. BYOK functionality is available with both [key vaults](../key-vault/keys/hsm-protected-keys.md) and [managed HSMs](../key-vault/managed-hsm/hsm-protected-keys-byok.md). Methods for transferring HSM-protected keys to Azure Key Vault vary depending on the underlying HSM, as explained in online documentation.
+With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios, as shown in Figure 3. **Keys generated inside the Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM. BYOK functionality is available with both [key vaults](../key-vault/keys/hsm-protected-keys.md) and [managed HSMs](../key-vault/managed-hsm/hsm-protected-keys-byok.md). Methods for transferring HSM-protected keys to Key Vault vary depending on the underlying HSM, as explained in online documentation.
:::image type="content" source="./media/secure-isolation-fig3.png" alt-text="Azure Key Vault support for bring your own key (BYOK)"::: **Figure 3.** Azure Key Vault support for bring your own key (BYOK)
-**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer cryptographic keys.**
+**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your cryptographic keys.**
-Azure Key Vault provides a robust solution for encryption key lifecycle management. Upon creation, every key vault or managed HSM is automatically associated with the Azure AD tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault or managed HSM must be properly authenticated and authorized:
+Key Vault provides a robust solution for encryption key lifecycle management. Upon creation, every key vault or managed HSM is automatically associated with the Azure AD tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault or managed HSM must be properly authenticated and authorized:
- Authentication establishes the identity of the caller (user or application).
- Authorization determines which operations the caller can perform, based on a combination of [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC) and key vault access policy or managed HSM local RBAC.

Azure AD enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, as described previously in the *[Azure Active Directory](#azure-active-directory)* section. Access to a key vault or managed HSM is controlled through two interfaces or planes - management plane and data plane - with both planes using Azure AD for authentication.

-- **Management plane** enables customers to manage the key vault or managed HSM itself, for example, create and delete key vaults or managed HSMs, retrieve key vault or managed HSM properties, and update access policies. For authorization, the management plane uses Azure RBAC with both key vaults and managed HSMs.
-- **Data plane** enables customers to work with the data stored in their key vaults and managed HSMs, including adding, deleting, and modifying their data. For vaults, stored data can include keys, secrets, and certificates. For managed HSMs, stored data is limited to cryptographic keys only. For authorization, the data plane uses [Key Vault access policy](../key-vault/general/assign-access-policy-portal.md) and [Azure RBAC for data plane operations](../key-vault/general/rbac-guide.md) with key vaults, or [managed HSM local RBAC](../key-vault/managed-hsm/access-control.md) with managed HSMs.
+- **Management plane** enables you to manage the key vault or managed HSM itself, for example, create and delete key vaults or managed HSMs, retrieve key vault or managed HSM properties, and update access policies. For authorization, the management plane uses Azure RBAC with both key vaults and managed HSMs.
+- **Data plane** enables you to work with the data stored in your key vaults and managed HSMs, including adding, deleting, and modifying your data. For vaults, stored data can include keys, secrets, and certificates. For managed HSMs, stored data is limited to cryptographic keys only. For authorization, the data plane uses [Key Vault access policy](../key-vault/general/assign-access-policy-portal.md) and [Azure RBAC for data plane operations](../key-vault/general/rbac-guide.md) with key vaults, or [managed HSM local RBAC](../key-vault/managed-hsm/access-control.md) with managed HSMs.
When you create a key vault or managed HSM in an Azure subscription, it's automatically associated with the Azure AD tenant of the subscription. All callers in both planes must register in this tenant and authenticate to access the [key vault](../key-vault/general/security-features.md) or [managed HSM](../key-vault/managed-hsm/access-control.md).
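To see how these planes fit together in practice, here is a minimal Python sketch of a data plane call, assuming the `azure-identity` and `azure-keyvault-keys` packages and a placeholder vault URL; Azure AD issues the token and Key Vault authorizes the operation before it runs:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

# Azure AD authenticates the caller; Key Vault then authorizes the
# request via Azure RBAC or the vault access policy before serving it.
credential = DefaultAzureCredential()
client = KeyClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)

# A data plane operation: create a 2048-bit RSA key inside the vault.
rsa_key = client.create_rsa_key("demo-key", size=2048)
print(rsa_key.name, rsa_key.key_type)
```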
-Azure customers control access permissions and can extract detailed activity logs from the Azure Key Vault service. Azure Key Vault logs the following information:
+You control access permissions and can extract detailed activity logs from the Azure Key Vault service. Azure Key Vault logs the following information:
- All authenticated REST API requests, including failed requests
- Operations on the key vault such as creation, deletion, setting access policies, etc.
- Unauthenticated requests such as requests that do not have a bearer token, are malformed or expired, or have an invalid token.

> [!NOTE]
-> With Azure Key Vault, customers can monitor how and when their key vaults and managed HSMs are accessed and by whom.
+> With Azure Key Vault, you can monitor how and when your key vaults and managed HSMs are accessed and by whom.
>
> *Additional resources:*
> - **[Configure monitoring and alerting for Azure Key Vault](../key-vault/general/alert.md)**
> - **[Enable logging for Azure Key Vault](../key-vault/general/logging.md)**
> - **[How to secure storage account for Azure Key Vault logs](../storage/blobs/security-recommendations.md)**
-Customers can also use the [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) to review Azure Key Vault logs. To use this solution, customers need to enable logging of Azure Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it is not necessary to write logs to Azure Blob storage.
+You can also use the [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) to review Key Vault logs. To use this solution, you need to enable logging of Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it is not necessary to write logs to Azure Blob storage.
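As an illustration, the following sketch queries those routed logs with the `azure-monitor-query` Python package; the workspace ID is a placeholder, and the `AzureDiagnostics` table and `ResourceProvider` column reflect the common resource log schema rather than anything specific to this article:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Summarize Key Vault operations recorded in the Log Analytics
# workspace that receives the vault's diagnostic settings.
query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| summarize count() by OperationName
"""
response = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```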
> [!NOTE]
> For a comprehensive list of Azure Key Vault security recommendations, see the **[Security baseline for Azure Key Vault](../key-vault/general/security-baseline.md)**.

#### Vault
+**[Vaults](../key-vault/general/overview.md)** provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](../key-vault/general/about-keys-secrets-certificates.md). They can be either software-protected (standard tier) or HSM-protected (premium tier). To see a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. If you require extra assurances, you can choose to safeguard your secrets, keys, and certificates in vaults protected by multi-tenant HSMs. The corresponding HSMs are validated according to the [FIPS 140 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating, which includes requirements for physical tamper evidence and role-based authentication. These HSMs meet Security Level 3 rating for several areas, including physical security, electromagnetic interference / electromagnetic compatibility (EMI/EMC), design assurance, and roles, services, and authentication.
-**[Vaults](../key-vault/general/overview.md)** provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](../key-vault/general/about-keys-secrets-certificates.md). They can be either software-protected (standard tier) or HSM-protected (premium tier). To see a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. Customers who require extra assurances can choose to safeguard their secrets, keys, and certificates in vaults protected by multi-tenant HSMs. The corresponding HSMs are validated according to the [FIPS 140 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating, which includes requirements for physical tamper evidence and role-based authentication. These HSMs meet Security Level 3 rating for several areas, including physical security, electromagnetic interference / electromagnetic compatibility (EMI/EMC), design assurance, and roles, services, and authentication.
-
-Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where customers can control their own keys in HSMs and use them to encrypt data at rest for a wide range of Azure services. As mentioned previously, customers can [import or generate encryption keys](../key-vault/keys/hsm-protected-keys.md) in HSMs ensuring that keys never leave the HSM boundary to support bring your own key (BYOK) scenarios.
+Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs and use them to encrypt data at rest for a wide range of Azure services. As mentioned previously, you can [import or generate encryption keys](../key-vault/keys/hsm-protected-keys.md) in HSMs ensuring that keys never leave the HSM boundary to support bring your own key (BYOK) scenarios.
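One way to see the non-exportability guarantee in action is to perform cryptographic operations by reference to the key instead of fetching key material. A minimal sketch with the `azure-keyvault-keys` package follows; the vault URL and key name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, EncryptionAlgorithm

credential = DefaultAzureCredential()
key_client = KeyClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)
key = key_client.get_key("demo-key")

# Encrypt by key reference: the public key may be used locally, but the
# matching decrypt can only run inside Key Vault, where the private key stays.
crypto = CryptographyClient(key, credential)
result = crypto.encrypt(EncryptionAlgorithm.rsa_oaep, b"sensitive data")
print(result.ciphertext.hex())
```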
-Azure Key Vault can handle requesting and renewing certificates in vaults, including Transport Layer Security (TLS) certificates, enabling customers to enroll and automatically renew certificates from supported public Certificate Authorities. Azure Key Vault certificates support provides for the management of customer's X.509 certificates, which are built on top of keys and provide an automated renewal feature. Certificate owner can [create a certificate](../key-vault/certificates/create-certificate.md) through Azure Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
+Key Vault can handle requesting and renewing certificates in vaults, including Transport Layer Security (TLS) certificates, enabling you to enroll and automatically renew certificates from supported public Certificate Authorities. Key Vault certificate support provides management of your X.509 certificates, which are built on top of keys and provide an automated renewal feature. A certificate owner can [create a certificate](../key-vault/certificates/create-certificate.md) through Azure Key Vault or import an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
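For example, a self-signed certificate can be requested with the default policy using the `azure-keyvault-certificates` package; this is a minimal sketch with placeholder names, not a production policy:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient, CertificatePolicy

credential = DefaultAzureCredential()
client = CertificateClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)

# Key Vault generates and keeps the private key; the caller manages the
# certificate lifecycle without ever touching that key directly.
poller = client.begin_create_certificate("demo-cert", policy=CertificatePolicy.get_default())
certificate = poller.result()
print(certificate.name, certificate.properties.expires_on)
```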
-When customers create a key vault in a resource group, they can [manage access](../key-vault/general/security-features.md) by using Azure AD, which enables customers to grant access at a specific scope level by assigning the appropriate Azure roles. For example, to grant access to a user to manage key vaults, customers can assign a predefined key vault Contributor role to the user at a specific scope, including subscription, resource group, or specific resource.
+When you create a key vault in a resource group, you can [manage access](../key-vault/general/security-features.md) by using Azure AD, which enables you to grant access at a specific scope level by assigning the appropriate Azure roles. For example, to grant a user access to manage key vaults, you can assign the predefined Key Vault Contributor role to the user at a specific scope, including subscription, resource group, or specific resource.
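The scope model can be pictured as a simple prefix check: an assignment made at a broader scope is inherited by everything beneath it. The following stand-alone Python sketch is an illustrative model only, not the Azure RBAC engine; the user names and scope strings are hypothetical:

```python
# Illustrative model of scope inheritance, not the Azure RBAC implementation.
ASSIGNMENTS = {
    ("alice", "Key Vault Contributor"): "/subscriptions/sub1",
}

def is_authorized(user: str, role: str, resource_scope: str) -> bool:
    """A role applies when it is assigned at the resource's scope or any parent scope."""
    assigned_scope = ASSIGNMENTS.get((user, role))
    if assigned_scope is None:
        return False
    return resource_scope == assigned_scope or resource_scope.startswith(assigned_scope + "/")

vault = "/subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.KeyVault/vaults/kv1"
print(is_authorized("alice", "Key Vault Contributor", vault))  # True: inherited from subscription
print(is_authorized("bob", "Key Vault Contributor", vault))    # False: no assignment
```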
> [!IMPORTANT]
-> Customers should control tightly who has Contributor role access to their key vaults. If a user has Contributor permissions to a key vault management plane, the user can gain access to the data plane by setting a key vault access policy.
+> You should tightly control who has Contributor role access to your key vaults. If a user has Contributor permissions on a key vault management plane, the user can gain access to the data plane by setting a key vault access policy.
>
> *Additional resources:*
> - How to **[secure access to a key vault](../key-vault/general/security-features.md)**

#### Managed HSM
-
-**[Managed HSM](../key-vault/managed-hsm/overview.md)** provides a single-tenant, fully managed, highly available, zone-resilient (where available) HSM as a service to store and manage your cryptographic keys. It is most suitable for applications and usage scenarios that handle high value keys. It also helps customers meet the most stringent security, compliance, and regulatory requirements. Managed HSM uses [FIPS 140 Level 3 validated HSMs](/azure/compliance/offerings/offering-fips-140-2) to protect your cryptographic keys. Each managed HSM pool is an isolated single-tenant instance with its own [security domain](../key-vault/managed-hsm/security-domain.md) controlled by the customer and isolated cryptographically from instances belonging to other customers. Cryptographic isolation relies on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that provides encrypted code and data to help ensure customer control.
+**[Managed HSM](../key-vault/managed-hsm/overview.md)** provides a single-tenant, fully managed, highly available, zone-resilient (where available) HSM as a service to store and manage your cryptographic keys. It is most suitable for applications and usage scenarios that handle high value keys. It also helps you meet the most stringent security, compliance, and regulatory requirements. Managed HSM uses [FIPS 140 Level 3 validated HSMs](/azure/compliance/offerings/offering-fips-140-2) to protect your cryptographic keys. Each managed HSM pool is an isolated single-tenant instance with its own [security domain](../key-vault/managed-hsm/security-domain.md) controlled by you and isolated cryptographically from instances belonging to other customers. Cryptographic isolation relies on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that provides encrypted code and data to help ensure your control.
When a managed HSM is created, the requestor also provides a list of data plane administrators. Only these administrators are able to [access the managed HSM data plane](../key-vault/managed-hsm/access-control.md) to perform key operations and manage data plane role assignments (managed HSM local RBAC). The permission model for both the management and data planes uses the same syntax, but permissions are enforced at different levels, and role assignments use different scopes. Management plane Azure RBAC is enforced by Azure Resource Manager, while data plane managed HSM local RBAC is enforced by the managed HSM itself.

> [!IMPORTANT]
-> Unlike with key vaults, granting users management plane access to managed HSMs does not grant them any data plane access to keys or data plane role assignments managed HSM local RBAC. This isolation is by design to prevent inadvertent expansion of privileges affecting access to keys stored in managed HSMs.
+> Unlike with key vaults, granting your users management plane access to a managed HSM does not grant them any data plane access to keys or to data plane role assignments (managed HSM local RBAC). This isolation is by design to prevent inadvertent expansion of privileges affecting access to keys stored in managed HSMs.
-As mentioned previously, managed HSM supports [importing keys generated](../key-vault/managed-hsm/hsm-protected-keys-byok.md) in customer's on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as bring your own key (BYOK) scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](../azure-sql/database/transparent-data-encryption-byok-overview.md), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others.
+As mentioned previously, managed HSM supports [importing keys generated](../key-vault/managed-hsm/hsm-protected-keys-byok.md) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, a scenario also known as bring your own key (BYOK). Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](../azure-sql/database/transparent-data-encryption-byok-overview.md), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others.
-Managed HSM enables customers to use the established Azure Key Vault API and management interfaces. Customers can use the same application development and deployment patterns for all their applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM.
+Managed HSM enables you to use the established Azure Key Vault API and management interfaces. You can use the same application development and deployment patterns for all your applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM.
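To illustrate the shared API surface, the sketch below points the same `azure-keyvault-keys` client at a vault endpoint and at a managed HSM endpoint; both URLs are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()

# Only the endpoint differs between the multi-tenant vault and the
# single-tenant managed HSM; the client and calls are identical.
vault_client = KeyClient("https://<vault-name>.vault.azure.net", credential)
hsm_client = KeyClient("https://<hsm-name>.managedhsm.azure.net", credential)

for client in (vault_client, hsm_client):
    key = client.create_rsa_key("demo-key", size=2048)
    print(key.id)
```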
## Compute isolation
-Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that customer code - whether it's deployed in a PaaS worker role or an IaaS virtual machine - executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there is a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as shown in Figure 4. Each node has one special Host VM, also known as Root VM, which runs the Host OS - a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
+Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that your code - whether it's deployed in a PaaS worker role or an IaaS virtual machine - executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there is a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as shown in Figure 4. Each node has one special Host VM, also known as Root VM, which runs the Host OS - a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
:::image type="content" source="./media/secure-isolation-fig4.png" alt-text="Isolation of Hypervisor, Root VM, and Guest VMs"::: **Figure 4.** Isolation of Hypervisor, Root VM, and Guest VMs
-Physical servers hosting VMs are grouped into clusters and they are independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads and it manages unidirectional communication from the Host to virtual machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
+Physical servers hosting VMs are grouped into clusters, and they are independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads and it manages unidirectional communication from the Host to virtual machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
-The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines for customers and Azure cloud services. The Hypervisor/Host OS pairing applies decades of Microsoft's experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigation, and strong security assurance processes.
+The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines used by you and Azure cloud services. The Hypervisor/Host OS pairing applies decades of Microsoft's experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigation, and strong security assurance processes.
### Management network isolation
There are three Virtual Local Area Networks (VLANs) in each compute hardware cluster, as shown in Figure 5:
Communication is permitted from the FC VLAN to the main VLAN but cannot be initi
- Communication from an FC to a Fabric Agent (FA) is unidirectional and requires mutual authentication via certificates. The FA implements a TLS-protected service that only responds to requests from the FC. It cannot initiate connections to the FC or other privileged internal nodes.
- The FC treats responses from the agent service as if they were untrusted. Communication with the agent is further restricted to a set of authorized IP addresses using firewall rules on each physical node, and routing rules at the border gateways.
-- Throttling is used to ensure that customer VMs cannot saturate the network and management commands form being routed.
+- Throttling is used to ensure that customer VMs cannot saturate the network and prevent management commands from being routed.
Communication is also blocked from the main VLAN to the device VLAN. This way, even if a node running customer code is compromised, it cannot attack nodes on either the FC or device VLANs.
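The FC-to-FA pattern (a mutually authenticated service that only answers requests and never dials out) can be sketched with the Python standard library. This is a conceptual model under assumed certificate file names, not Azure's implementation:

```python
import socket
import ssl

# Conceptual model of the agent side: a TLS 1.2+ server that demands a
# client certificate signed by a specific CA, answers requests, and never
# initiates outbound connections. File names are hypothetical.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.verify_mode = ssl.CERT_REQUIRED                  # mutual authentication
context.load_cert_chain("agent-cert.pem", "agent-key.pem")
context.load_verify_locations("controller-ca.pem")       # trust only the controller's CA

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()               # respond only; never connect out
        request = conn.recv(4096)                        # treat the payload as untrusted input
        conn.sendall(b"ack")
        conn.close()
```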
The Hypervisor and the Host OS provide network packet filters so untrusted VMs c
The Azure Management Console and Management Plane follow strict security architecture principles of least privilege to secure and isolate tenant processing:

- **Management Console (MC)** - The MC in Azure Cloud is composed of the Azure portal GUI and the Azure Resource Manager API layers. They both utilize user credentials to authenticate and authorize all operations.
-- **Management Plane (MP)** - This layer performs the actual management actions and is composed of the Compute Resource Provider (CRP), Fabric Controller (FC), Fabric Agent (FA), and the underlying Hypervisor (which has its own Hypervisor Agent to service communication). These layers all utilize system contexts that are granted the least permissions needed to perform their operations.
+- **Management Plane (MP)** - This layer performs the actual management actions and is composed of the Compute Resource Provider (CRP), Fabric Controller (FC), Fabric Agent (FA), and the underlying Hypervisor, which has its own Hypervisor Agent to service communication. These layers all use system contexts that are granted the least permissions needed to perform their operations.
-The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs comprise a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes (separate FCs exist to manage compute and storage clusters). If a customer updates their application's configuration file while running in the MC, the MC communicates through CRP with the FC and the FC communicates with the FA.
+The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs constitute a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes - separate FCs exist to manage compute and storage clusters. If you update your application's configuration file while running in the MC, the MC communicates through CRP with the FC and the FC communicates with the FA.
-CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling customers to create and manage virtual machine resources and extensions via simple templates.
+CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling you to create and manage virtual machine resources and extensions via simple templates.
Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permission sets. This design follows common least-privilege models to ensure that a compromise of any single layer cannot lead to further actions. Separate communications channels ensure that communications cannot bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user's [OAuth 2.0 authentication to Azure Active Directory](../active-directory/azuread-dev/v1-protocols-oauth-code.md).

:::image type="content" source="./media/secure-isolation-fig6.png" alt-text="Management Console and Management Plane interaction for secure management flow" border="false"::: **Figure 6.** Management Console and Management Plane interaction for secure management flow
-All management commands are authenticated via RSA signed certificate or JSON Web Token (JWT). Authentication and command channels are encrypted via Transport Layer Security (TLS) 1.2 as described in *[Data encryption in transit](#data-encryption-in-transit)* section. Server certificates are used to provide TLS connectivity to the authentication providers where a separate authorization mechanism is used, for example, Azure Active Directory or datacenter Security Token Service (dSTS). dSTS is a token provider like Azure Active Directory that is isolated to the Microsoft datacenter and utilized for service level communications.
+All management commands are authenticated via RSA signed certificate or JSON Web Token (JWT). Authentication and command channels are encrypted via Transport Layer Security (TLS) 1.2 as described in the *[Data encryption in transit](#data-encryption-in-transit)* section. Server certificates are used to provide TLS connectivity to the authentication providers where a separate authorization mechanism is used, for example, Azure Active Directory or datacenter Security Token Service (dSTS). dSTS is a token provider like Azure Active Directory that is isolated to the Microsoft datacenter and used for service level communications.
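A hedged sketch of the token side of that flow, using the PyJWT package: reject any command whose signature, audience, or expiry fails validation. The audience value and key handling here are placeholders, not the actual Azure management endpoints:

```python
import jwt  # PyJWT

def authenticate_command(token: str, signing_key: str) -> dict:
    """Validate an RSA-signed JWT before acting on a management command.

    Conceptual illustration only; the audience value is a placeholder.
    """
    # Raises jwt.InvalidTokenError if the signature, expiry, or audience is wrong.
    return jwt.decode(
        token,
        signing_key,
        algorithms=["RS256"],
        audience="https://management.example.invalid",
    )
```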
-Figure 6 illustrates the management flow corresponding to a user command to stop a virtual machine. The steps enumerated in Table 1 apply to other management commands in the same way and utilize the same encryption and authentication flow.
+Figure 6 illustrates the management flow corresponding to a user command to stop a virtual machine. The steps enumerated in Table 1 apply to other management commands in the same way and use the same encryption and authentication flow.
**Table 1.** Management flow involving various MC and MP components
|**9.**|The FA again validates the command is allowed and comes from a trusted source. Once validated, the FA will establish a secure connection using mutual certificate authentication and issue the command to the Hypervisor Agent that is only accessible by the FA.|Mutual Certificate|TLS 1.2|
|**10.**|Hypervisor Agent on the host executes an internal call to stop the VM.|System Context|N.A.|
-Commands generated through all steps of the process identified in this section and sent to the FC and FA on each node, are written to a local audit log and distributed to multiple analytics systems for stream processing in order to monitor system health and track security events and patterns. Tracking includes events that were processed successfully and events that were invalid. Invalid requests are processed by the intrusion detection systems to detect anomalies.
+Commands generated through all steps of the process identified in this section and sent to the FC and FA on each node are written to a local audit log and distributed to multiple analytics systems for stream processing to monitor system health and track security events and patterns. Tracking includes events that were processed successfully and events that were invalid. Invalid requests are processed by the intrusion detection systems to detect anomalies.
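A conceptual sketch of that fan-out in Python (standard library only; the forwarding targets are stubbed, since the actual Azure pipeline is internal):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="audit.log", level=logging.INFO)
audit_log = logging.getLogger("local-audit")

def forward_to_analytics(event: dict) -> None:
    ...  # stand-in for stream processing of system health and security events

def forward_to_intrusion_detection(event: dict) -> None:
    ...  # stand-in for anomaly detection over invalid requests

def record_command(command: dict, valid: bool) -> None:
    """Write every command, valid or not, to the local audit log and fan it out."""
    event = {
        "time": datetime.now(timezone.utc).isoformat(),
        "command": command,
        "valid": valid,
    }
    audit_log.info(json.dumps(event))
    forward_to_analytics(event)
    if not valid:
        forward_to_intrusion_detection(event)
```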
### Logical isolation implementation options
Azure provides isolation of compute processing through a multi-layered approach, including:

- **Hypervisor isolation** for services that provide cryptographically certain isolation by using separate virtual machines and Azure Hypervisor isolation. Examples: *App Service, Azure Container Instances, Azure Databricks, Azure Functions, Azure Kubernetes Service, Azure Machine Learning, Cloud Services, Data Factory, Service Fabric, Virtual Machines, Virtual Machine Scale Sets.*
-- **Drawbridge isolation** inside a VM for services that provide cryptographically certain isolation to workloads running on the same virtual machine by using isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (library OS) inside a *pico-process*. A pico-process is a secured process with no direct access to services or resources of the Host system. Examples: *Automation, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL Database, Azure Stream Analytics.*
+- **Drawbridge isolation** inside a VM for services that provide cryptographically certain isolation to workloads running on the same virtual machine by using isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (library OS) inside a *pico-process*. A pico-process is a secured process with no direct access to services or resources of the Host system. Examples: *Automation, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL Database, Azure Stream Analytics.*
- **User context-based isolation** for services that are composed solely of Microsoft-controlled code and where customer code is not allowed to run. Examples: *API Management, Application Gateway, Azure Active Directory, Azure Backup, Azure Cache for Redis, Azure DNS, Azure Information Protection, Azure IoT Hub, Azure Key Vault, Azure portal, Azure Monitor (including Log Analytics), Azure Security Center, Azure Site Recovery, Container Registry, Content Delivery Network, Event Grid, Event Hubs, Load Balancer, Service Bus, Storage, Virtual Network, VPN Gateway, Traffic Manager.*

These logical isolation options are discussed in the rest of this section.

#### Hypervisor isolation
-Hypervisor isolation in Azure is based on [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) technology, which enables Azure Hypervisor-based isolation to benefit from decades of Microsoft experience in operating system security and investments in Hyper-V technology for virtual machine isolation. Customers can review independent third-party assessment reports about Hyper-V security functions, including the [National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme (CCEVS) reports](https://www.niap-ccevs.org/Product/PCL.cfm?par303=Microsoft%20Corporation) such as the [report published in Aug-2019](https://www.commoncriteriaportal.org/files/epfiles/2019-22-INF-2839.pdf) that is discussed herein.
-
-The Target of Evaluation (TOE) was composed of Windows 10 and Windows Server Standard and Datacenter Editions (version 1903, May 2019 update), including Windows Server 2016 and 2019 Hyper-V evaluation platforms ("Windows"). TOE enforces the following security policies as described in the report:
-
-- **Security Audit** - Windows has the ability to collect audit data, review audit logs, protect audit logs from overflow, and restrict access to audit logs. Audit information generated by the system includes the date and time of the event, the user identity that caused the event to be generated, and other event-specific data. Authorized administrators can review, search, and sort audit records. Authorized administrators can also configure the audit system to include or exclude potentially auditable events to be audited based on a wide range of characteristics. In the context of this evaluation, the protection profile requirements cover generating audit events, selecting which events should be audited, and providing secure storage for audit event entries.
-- **Cryptographic Support** - Windows provides FIPS 140 Cryptographic Algorithm Validation Program (CAVP) validated cryptographic functions that support encryption/decryption, cryptographic signatures, cryptographic hashing, cryptographic key agreement (which is not studied in this evaluation), and random number generation. The TOE additionally provides support for public keys, credential management, and certificate validation functions and provides support for the National Security Agency's Suite B cryptographic algorithms. Windows also provides extensive auditing support of cryptographic operations, the ability to replace cryptographic functions and random number generators with alternative implementations, and a key isolation service designed to limit the potential exposure of secret and private keys. In addition to using cryptography for its own security functions, Windows offers access to the cryptographic support functions for user-mode and kernel-mode programs. Public key certificates generated and used by Windows authenticate users and machines, and protect both user and system data in transit.
-- **User Data Protection** - In the context of this evaluation Windows protects user data and provides virtual private networking capabilities.
-- **Identification and Authentication** - Each Windows user must be identified and authenticated based on administrator-defined policy prior to performing any TSF-mediated functions. Windows maintains databases of accounts including their identities, authentication information, group associations, and privilege and logon rights associations. Windows account policy functions include the ability to define the minimum password length, the number of failed logon attempts, the duration of lockout, and password age.
-- **Protection of the TOE Security Functions (TSF)** - Windows provides several features to ensure the protection of TOE security functions. Specifically, Windows:
- - Protects against unauthorized data disclosure and modification by using a suite of Internet standard protocols including IPsec, IKE, and ISAKMP.
- - Ensures process isolation security for all processes through private virtual address spaces, execution context, and security context.
- - Uses protected kernel-mode memory to store data structures defining process address space, execution context, memory protection, and security context.
- - Includes self-testing features that ensure the integrity of executable program images and its cryptographic functions.
- - Provides a trusted update mechanism to update its own Windows binaries.
-- **Session Locking** - In the context of this evaluation, Windows allows an authorized administrator to configure the system to display a logon banner before the logon dialog.
-- **TOE Access** - Windows allows an authorized administrator to configure the system to display a logon banner before the logon dialog.
-- **Trusted Path for Communications** - Windows uses TLS, HTTPS, DTLS, and EAP-TLS to provide a trusted path for communications.
-- **Security Management** - Windows includes several functions to manage security policies. Policy management is controlled through a combination of access control, membership in administrator groups, and privileges.
-
-More information is available from the [third-party certification report](https://www.commoncriteriaportal.org/files/epfiles/2019-22-INF-2839.pdf).
+Hypervisor isolation in Azure is based on [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) technology, which enables Azure Hypervisor-based isolation to benefit from decades of Microsoft experience in operating system security and investments in Hyper-V technology for virtual machine isolation. You can review independent third-party assessment reports about Hyper-V security functions, including the [National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme (CCEVS) reports](https://www.niap-ccevs.org/Product/PCL.cfm?par303=Microsoft%20Corporation) such as the [report published in Feb-2021](https://www.niap-ccevs.org/Product/Compliant.cfm?PID=11087) that is discussed herein.
+
+The Target of Evaluation (TOE) was composed of Microsoft Windows Server, Microsoft Windows 10 version 1909 (November 2019 Update), and Microsoft Windows Server 2019 (version 1809) Hyper-V ("Windows"). TOE enforces the following security policies as described in the report:
+
+- **Security Audit** - Windows has the ability to collect audit data, review audit logs, protect audit logs from overflow, and restrict access to audit logs. Audit information generated by the system includes the date and time of the event, the user identity that caused the event to be generated, and other event-specific data. Authorized administrators can review, search, and sort audit records. Authorized administrators can also configure the audit system to include or exclude potentially auditable events to be audited based on a wide range of characteristics. In the context of this evaluation, the protection profile requirements cover generating audit events, authorized review of stored audit records, and providing secure storage for audit event entries.
+- **Cryptographic Support** - Windows provides validated cryptographic functions that support encryption/decryption, cryptographic signatures, cryptographic hashing, and random number generation. Windows implements these functions in support of IPsec, TLS, and HTTPS protocol implementation. Windows also ensures that its Guest VMs have access to entropy data so that virtualized operating systems can ensure the implementation of strong cryptography.
+- **User Data Protection** - Windows makes certain computing services available to Guest VMs but implements measures to ensure that access to these services is granted on an appropriate basis and that these interfaces do not result in unauthorized data leakage between Guest VMs and Windows or between multiple Guest VMs.
+- **Identification and Authentication** - Windows offers several methods of user authentication, including X.509 certificates needed for trusted protocols. Windows implements password strength mechanisms and ensures that excessive failed authentication attempts using methods subject to brute force guessing (password, PIN) result in lockout behavior.
+- **Security Management** - Windows includes several functions to manage security policies. Access to administrative functions is enforced through administrative roles. Windows also has the ability to support the separation of management and operational networks and to prohibit data sharing between Guest VMs.
+- **Protection of the TOE Security Functions (TSF)** - Windows implements various self-protection mechanisms to ensure that it cannot be used as a platform to gain unauthorized access to data stored on a Guest VM, that the integrity of both the TSF and its Guest VMs is maintained, and that Guest VMs are accessed solely through well-documented interfaces.
+- **TOE Access** - In the context of this evaluation, Windows allows an authorized administrator to configure the system to display a logon banner before the logon dialog.
+- **Trusted Path/Channels** - Windows implements IPsec, TLS, and HTTPS trusted channels and paths for the purpose of remote administration, transfer of audit data to the operational environment, and separation of management and operational networks.
+
+More information is available from the [third-party certification report](https://www.niap-ccevs.org/MMO/Product/st_vid11087-vr.pdf).
The critical Hypervisor isolation is provided through:

- Strongly defined security boundaries enforced by the Hypervisor
These technologies are described in the rest of this section. **They enable Azure Hypervisor to offer strong security assurances for tenant separation in a multi-tenant cloud.**

##### *Strongly defined security boundaries*
-Customer code executes in a Hypervisor VM and benefits from Hypervisor enforced security boundaries, as shown in Figure 7. Azure Hypervisor is based on [Microsoft Hyper-V](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture) technology. It divides an Azure node into a variable number of Guest VMs that have separate address spaces where they can load an operating system (OS) and applications operating in parallel to the Host OS that executes in the Root partition of the node.
+Your code executes in a Hypervisor VM and benefits from Hypervisor enforced security boundaries, as shown in Figure 7. Azure Hypervisor is based on [Microsoft Hyper-V](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture) technology. It divides an Azure node into a variable number of Guest VMs that have separate address spaces where they can load an operating system (OS) and applications operating in parallel to the Host OS that executes in the Root partition of the node.
:::image type="content" source="./media/secure-isolation-fig7.png" alt-text="Compute isolation with Azure Hypervisor"::: **Figure 7.** Compute isolation with Azure Hypervisor (see online [glossary of terms](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture#glossary))
-The Azure Hypervisor acts like a micro-kernel, passing all hardware access requests from Guest VMs using a Virtualization Service Client (VSC) to the Host OS for processing by using a shared-memory interface called VMBus. The Host OS proxies the hardware requests using a Virtualization Service Provider (VSP) that prevents users from obtaining raw read/write/execute access to the system and mitigates the risk of sharing system resources. The privileged Root partition (also known as Host OS) has direct access to the physical devices/peripherals on the system (for example, storage controllers, GPUs, networking adapters, etc.). The Host OS allows Guest partitions to share the use of these physical devices by exposing virtual devices to each Guest partition. So, an operating system executing in a Guest partition has access to virtualized peripheral devices that are provided by VSPs executing in the Root partition. These virtual device representations can take one of three forms:
+The Azure Hypervisor acts like a micro-kernel, passing all hardware access requests from Guest VMs using a Virtualization Service Client (VSC) to the Host OS for processing by using a shared-memory interface called VMBus. The Host OS proxies the hardware requests using a Virtualization Service Provider (VSP) that prevents users from obtaining raw read/write/execute access to the system and mitigates the risk of sharing system resources. The privileged Root partition, also known as Host OS, has direct access to the physical devices/peripherals on the system, for example, storage controllers, GPUs, networking adapters, and so on. The Host OS allows Guest partitions to share the use of these physical devices by exposing virtual devices to each Guest partition. So, an operating system executing in a Guest partition has access to virtualized peripheral devices that are provided by VSPs executing in the Root partition. These virtual device representations can take one of three forms:
- **Emulated devices** - The Host OS may expose a virtual device with an interface identical to what would be provided by a corresponding physical device. In this case, an operating system in a Guest partition would use the same device drivers as it does when running on a physical system. The Host OS would emulate the behavior of a physical device to the Guest partition.
-- **Para-virtualized devices** - The Host OS may expose virtual devices with a virtualization-specific interface using the VMBus shared memory interface between the Host OS and the Guest. In this model, the Guest partition uses device drivers specifically designed to implement a virtualized interface. These para-virtualized devices are sometimes referred to as "synthetic" devices.
Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce
The Azure Hypervisor makes extensive use of these processor facilities to provide isolation between partitions. The emergence of speculative side channel attacks has identified potential weaknesses in some of these processor isolation capabilities. In a multi-tenant architecture, any cross-VM attack across different tenants involves two steps: placing an adversary-controlled VM on the same Host as one of the victim VMs, and then breaching the logical isolation boundary to perform a side-channel attack. Azure provides protection from both threat vectors by using an advanced VM placement algorithm enforcing memory and process separation for logical isolation, and secure network traffic routing with cryptographic certainty at the Hypervisor. As discussed in the section titled *[Exploitation of vulnerabilities in virtualization technologies](#exploitation-of-vulnerabilities-in-virtualization-technologies)* later in the article, the Azure Hypervisor has been architected to provide robust isolation within the hypervisor itself that helps mitigate a wide range of sophisticated side channel attacks.
-The Azure Hypervisor defined security boundaries provide the base level isolation primitives for strong segmentation of code, data, and resource between potentially hostile multi-tenants on shared hardware. These isolation primitives are used to create multi-tenant resource isolation scenarios including:
+The Azure Hypervisor defined security boundaries provide the base level isolation primitives for strong segmentation of code, data, and resources between potentially hostile multi-tenants on shared hardware. These isolation primitives are used to create multi-tenant resource isolation scenarios including:
- **Isolation of network traffic between potentially hostile guests** - Virtual Network (VNet) provides isolation of network traffic between tenants as part of its fundamental design, as described later in the *[Separation of tenant network traffic](#separation-of-tenant-network-traffic)* section. VNet forms an isolation boundary where the VMs within a VNet can only communicate with each other. Any traffic destined to a VM from within the VNet or external senders without the proper policy configured will be dropped by the Host and not delivered to the VM.
-- **Isolation for encryption keys and cryptographic material** - Customers can further augment the isolation capabilities with the use of [hardware security managers or specialized key storage](../security/fundamentals/encryption-overview.md), for example, storing encryption keys in FIPS 140 validated hardware security modules via [Azure Key Vault](../key-vault/general/overview.md).
+- **Isolation for encryption keys and cryptographic material** - You can further augment the isolation capabilities with the use of [hardware security managers or specialized key storage](../security/fundamentals/encryption-overview.md), for example, storing encryption keys in FIPS 140 validated hardware security modules (HSMs) via [Azure Key Vault](../key-vault/general/overview.md).
- **Scheduling of system resources** - Azure design includes guaranteed availability and segmentation of compute, memory, storage, and both direct and para-virtualized device access.

The Azure Hypervisor meets the security objectives shown in Table 2.
Listed below are some key design principles adopted by Microsoft to secure Hyper
Microsoft investments in Hyper-V security benefit Azure Hypervisor directly. The goal of defense-in-depth mitigations is to make weaponized exploitation of a vulnerability as expensive as possible for an attacker, limiting their impact and maximizing the window for detection. All exploit mitigations are evaluated for effectiveness by a thorough security review of the Azure Hypervisor attack surface using methods that adversaries may employ. Table 3 outlines some of the mitigations intended to protect the Hypervisor isolation boundaries and hardware host integrity.
-**Table 3.** Azure Hypervisor defense-in-depth
+**Table 3.** Azure Hypervisor defense-in-depth
|Mitigation|Security Impact|Mitigation Details|
|-|-|-|
-|**Control flow Integrity**|Increases cost to perform control flow integrity attacks (for example, return-oriented programming exploits)|[Control Flow Guard](https://www.blackhat.com/docs/us-16/materials/us-16-Weston-Windows-10-Mitigation-Improvements.pdf) (CFG) ensures indirect control flow transfers are instrumented at compile time and enforced by the kernel (user-mode) or secure kernel (kernel-mode), mitigating stack return vulnerabilities.|
+|**Control flow integrity**|Increases cost to perform control flow integrity attacks (for example, return-oriented programming exploits)|[Control Flow Guard](https://www.blackhat.com/docs/us-16/materials/us-16-Weston-Windows-10-Mitigation-Improvements.pdf) (CFG) ensures indirect control flow transfers are instrumented at compile time and enforced by the kernel (user-mode) or secure kernel (kernel-mode), mitigating stack return vulnerabilities.|
|**User-mode code integrity**|Protects against malicious and unwanted binary execution in user mode|Address Space Layout Randomization (ASLR) forced on all binaries in host partition, all code compiled with SDL security checks (for example, `strict_gs`), [arbitrary code generation restrictions](https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) in place on host processes prevent injection of runtime-generated code.|
-|**Hypervisor enforced user and kernel mode code integrity**|No code loaded into code pages marked for execution until authenticity of code is verified|[Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) uses memory isolation to create a secure world to enforce policy and store sensitive code and secrets. With Hypervisor enforced Code Integrity (HVCI), the secure world is used to prevent unsigned code from being injected into the normal world kernel.|
+|**Hypervisor enforced user and kernel mode code integrity**|No code loaded into code pages marked for execution until authenticity of code is verified|[Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) uses memory isolation to create a secure world to enforce policy and store sensitive code and secrets. With Hypervisor enforced Code Integrity (HVCI), the secure world is used to prevent unsigned code from being injected into the normal world kernel.|
|**Hardware root-of-trust with platform secure boot**|Ensures host only boots exact firmware and OS image required|Windows [secure boot](/windows-hardware/design/device-experiences/oem-secure-boot) validates that Azure Hypervisor infrastructure is only bootable in a known good configuration, aligned to Azure firmware, hardware, and kernel production versions.|
-|**Reduced attack surface VMM**|Protects against escalation of privileges in VMM user functions|The Azure Hypervisor Virtual Machine Manager (VMM) contains both user and kernel mode components. User mode components are isolated to prevent break-out into kernel mode functions in addition to numerous layered mitigations.|
+|**Reduced attack surface VMM**|Protects against escalation of privileges in VMM user functions|The Azure Hypervisor Virtual Machine Manager (VMM) contains both user and kernel mode components. User mode components are isolated to prevent break-out into kernel mode functions in addition to numerous layered mitigations.|
Moreover, Azure has adopted an assume-breach security strategy implemented via [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf). This approach relies on a dedicated team of security researchers and engineers who conduct continuous ongoing testing of Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the Azure infrastructure and platform engineering or operations teams. This approach tests security detection and response capabilities and helps identify production vulnerabilities in Azure Hypervisor and other systems, including configuration errors, invalid assumptions, or other security issues in a controlled manner. Microsoft invests heavily in these innovative security measures for continuous Azure threat mitigation.

##### *Strong security assurance processes*
-The attack surface in Hyper-V is [well understood](https://msrc-blog.microsoft.com/2018/12/10/first-steps-in-hyper-v-research/). It has been the subject of [ongoing research](https://msrc-blog.microsoft.com/2019/09/11/attacking-the-vm-worker-process/) and thorough security reviews. Microsoft has been transparent about the Hyper-V attack surface and underlying security architecture as demonstrated during a public [presentation at a Black Hat conference](https://github.com/Microsoft/MSRC-Security-Research/blob/master/presentations/2018_08_BlackHatUSA/A%20Dive%20in%20to%20Hyper-V%20Architecture%20and%20Vulnerabilities.pdf) in 2018. Microsoft stands behind the robustness and quality of Hyper-V isolation with a [$250,000 bug bounty program](https://www.microsoft.com/msrc/bounty-hyper-v) for critical Remote Code Execution (RCE), information disclosure, and Denial of Service (DOS) vulnerabilities reported in Hyper-V. By using the same Hyper-V technology in Windows Server and Azure cloud platform, the publicly available documentation and bug bounty program ensure that security improvements will accrue to all users of Microsoft products and services. Table 4 summarizes the key attack surface points from the Black Hat presentation.
+The attack surface in Hyper-V is [well understood](https://msrc-blog.microsoft.com/2018/12/10/first-steps-in-hyper-v-research/). It has been the subject of [ongoing research](https://msrc-blog.microsoft.com/2019/09/11/attacking-the-vm-worker-process/) and thorough security reviews. Microsoft has been transparent about the Hyper-V attack surface and underlying security architecture as demonstrated during a public [presentation at a Black Hat conference](https://github.com/Microsoft/MSRC-Security-Research/blob/master/presentations/2018_08_BlackHatUSA/A%20Dive%20in%20to%20Hyper-V%20Architecture%20and%20Vulnerabilities.pdf) in 2018. Microsoft stands behind the robustness and quality of Hyper-V isolation with a [$250,000 bug bounty program](https://www.microsoft.com/msrc/bounty-hyper-v) for critical Remote Code Execution (RCE), information disclosure, and Denial of Service (DOS) vulnerabilities reported in Hyper-V. By using the same Hyper-V technology in Windows Server and Azure cloud platform, the publicly available documentation and bug bounty program ensure that security improvements will accrue to all users of Microsoft products and services. Table 4 summarizes the key attack surface points from the Black Hat presentation.
**Table 4.** Hyper-V attack surface details
|**Host partition kernel-mode components**|System in kernel mode: full system compromise with the ability to compromise other Guests|- Virtual Infrastructure Driver (VID) intercept handling </br>- Kernel-mode client library </br>- Virtual Machine Bus (VMBus) channel messages </br>- Storage Virtualization Service Provider (VSP) </br>- Network VSP </br>- Virtual Hard Disk (VHD) parser </br>- Azure Networking Virtual Filtering Platform (VFP) and Virtual Network (VNet)|
|**Host partition user-mode components**|Worker process in user mode: limited compromise with ability to attack Host and elevate privileges|- Virtual devices (VDEVs)|
To protect these attack surfaces, Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee. As described in the *[Security assurance processes and practices](#security-assurance-processes-and-practices)* section later in this article, the approach includes purpose-built fuzzing, penetration testing, security development lifecycle, mandatory security training, security reviews, security intrusion detection based on Guest – Host threat indicators, and automated build alerting of changes to the attack surface area. This mature, multi-dimensional assurance process helps augment the isolation guarantees provided by the Azure Hypervisor by mitigating the risk of security vulnerabilities.
> [!NOTE]
> Azure has adopted an industry-leading approach to ensure Hypervisor-based tenant separation that has been strengthened and improved over two decades of Microsoft investments in Hyper-V technology for virtual machine isolation. The outcome of this approach is a robust Hypervisor that helps ensure tenant separation via 1) strongly defined security boundaries, 2) defense-in-depth exploit mitigations, and 3) strong security assurance processes.
The ABI is implemented within two components: a kernel-mode driver and a user-mode process called the Security Monitor.
Pico-processes are grouped into isolation units called *sandboxes*. The sandbox defines the applications, file system, and external resources available to the pico-processes. When a process running inside a pico-process creates a new child process, it is run with its own Library OS in a separate pico-process inside the same sandbox. Each sandbox communicates with the Security Monitor and cannot communicate with other sandboxes except via allowed I/O channels (sockets, named pipes, and so on), which must be explicitly allowed by the sandbox configuration; the default approach is opt-in, depending on service needs. The outcome is that code running inside a pico-process can only access its own resources and cannot directly attack the Host system or any colocated sandboxes; it can only affect objects inside its own sandbox.
When the pico-process needs system resources, it must call into the Drawbridge host to request them. The normal path for a virtual user process would be to call the Library OS to request resources, and the Library OS would then call into the ABI. Unless the policy for resource allocation is set up in the driver itself, the Security Monitor handles the ABI request by checking policy to see if the request is allowed and then servicing the request. This mechanism is used for all system primitives, thereby ensuring that code running in the pico-process cannot abuse the resources of the Host machine.
In addition to being isolated inside sandboxes, pico-processes are also substantially isolated from each other. Each pico-process resides in its own virtual memory address space and runs its own copy of the Library OS with its own user-mode kernel. Each time a user process is launched in a Drawbridge sandbox, a fresh Library OS instance is booted. While this task is more time-consuming than launching a non-isolated process on Windows, it is substantially faster than booting a VM while still accomplishing logical isolation.
Like a virtual machine, the pico-process is much easier to secure than a traditional process.
In cases where an Azure service is composed of Microsoft-controlled code and customer code is not allowed to run, the isolation is provided by a user context. These services accept only user configuration inputs and data for processing; arbitrary code is not allowed. For these services, a user context is provided to establish the data that can be accessed and what Azure role-based access control (Azure RBAC) operations are allowed. This context is established by Azure Active Directory (Azure AD), as described earlier in the *[Identity-based isolation](#identity-based-isolation)* section. Once the user has been identified and authorized, the Azure service creates an application user context that is attached to the request as it moves through execution, providing assurance that user operations are separated and properly isolated.

### Physical isolation
In addition to robust logical compute isolation available by design to all Azure tenants, if you desire physical compute isolation, you can use Azure Dedicated Host or Isolated Virtual Machines, which are both dedicated to a single customer.
#### Azure Dedicated Host
[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place [Windows](../virtual-machines/windows/overview.md), [Linux](../virtual-machines/linux/overview.md), and [SQL Server on Azure](https://azure.microsoft.com/services/virtual-machines/sql-server/) VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organization's workloads to meet corporate compliance requirements.
> [!NOTE]
> You can deploy a dedicated host using the **[Azure portal](../virtual-machines/dedicated-hosts-portal.md)**, Azure **[PowerShell](../virtual-machines/windows/dedicated-hosts-powershell.md)**, and the Azure **[Command-Line Interface](../virtual-machines/linux/dedicated-hosts-cli.md)** (CLI).
You can deploy both Windows and Linux virtual machines into dedicated hosts by selecting the server and CPU type, number of cores, and extra features. Dedicated Host enables control over platform maintenance events by allowing you to opt in to a maintenance window to reduce potential impact to your provisioned services. Most maintenance events have little to no impact on your VMs; however, if you are in a highly regulated industry or have a sensitive workload, you may want control over any potential maintenance impact.
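The end-to-end flow can be scripted. The following Azure CLI sketch creates a host group, provisions a dedicated host in it, and places a VM on that host; all resource names, the location, and the `DSv3-Type1` SKU are illustrative placeholders, so substitute values for your environment and verify current SKU availability in your region.

```azurecli
# Create a host group with two fault domains (names and location are placeholders)
az vm host group create \
  --resource-group myResourceGroup \
  --name myHostGroup \
  --location eastus \
  --platform-fault-domain-count 2

# Provision a dedicated physical host inside the group
az vm host create \
  --resource-group myResourceGroup \
  --host-group myHostGroup \
  --name myHost \
  --sku DSv3-Type1 \
  --platform-fault-domain 0

# Look up the host's resource ID, then place a VM directly on that host
hostId=$(az vm host show \
  --resource-group myResourceGroup \
  --host-group myHostGroup \
  --name myHost \
  --query id --output tsv)

az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Win2019Datacenter \
  --size Standard_D4s_v3 \
  --host "$hostId"
```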
> [!NOTE]
> Microsoft provides detailed customer guidance on **[Windows](../virtual-machines/windows/quick-create-portal.md)** and **[Linux](../virtual-machines/linux/quick-create-portal.md)** Azure Virtual Machine provisioning using the Azure portal, Azure PowerShell, and Azure CLI.
Table 5 summarizes available security guidance for customer virtual machines provisioned in Azure.
|**VM type**|**Security guidance**|
|---|---|
|**Linux**|[Secure policies](../virtual-machines/security-policy.md) <br/> [Azure Disk Encryption](../virtual-machines/linux/disk-encryption-overview.md) <br/> [Built-in security controls](../virtual-machines/linux/security-baseline.md) <br/> [Security recommendations](../virtual-machines/security-recommendations.md)|

#### Isolated Virtual Machines
Azure Compute offers virtual machine sizes that are [isolated to a specific hardware type](../virtual-machines/isolation.md) and dedicated to a single customer. These VM instances allow your workloads to be deployed on dedicated physical servers. Using Isolated VMs essentially guarantees that your VM will be the only one running on that specific server node. You can also choose to further subdivide the resources on these Isolated VMs by using [Azure support for nested Virtual Machines](https://azure.microsoft.com/blog/nested-virtualization-in-azure/).
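No special deployment path is needed: you select an isolated size at VM creation time. A minimal Azure CLI sketch follows, assuming `Standard_E64is_v3` is still offered as an isolated size in your region (the isolated size list changes over time, so verify it first); resource names are illustrative.

```azurecli
# Create a VM on an isolated size; the instance occupies the entire physical host
az vm create \
  --resource-group myResourceGroup \
  --name myIsolatedVM \
  --image UbuntuLTS \
  --size Standard_E64is_v3 \
  --generate-ssh-keys
```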
## Networking isolation
The logical isolation of tenant infrastructure in a public multi-tenant cloud is [fundamental to maintaining security](https://azure.microsoft.com/resources/azure-network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet cannot communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements.
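Because VNets are isolated by default, any cross-VNet communication must be configured deliberately. The following Azure CLI sketch peers two VNets in the same subscription so their traffic stays on the Microsoft backbone; the VNet and resource names are illustrative, and a matching peering must also be created in the reverse direction before traffic can flow.

```azurecli
# Peer vnetA to vnetB; repeat with the names swapped for the reverse direction
az network vnet peering create \
  --resource-group myResourceGroup \
  --name vnetA-to-vnetB \
  --vnet-name vnetA \
  --remote-vnet vnetB \
  --allow-vnet-access
```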
This section describes how Azure provides isolation of network traffic among tenants and enforces that isolation with cryptographic certainty.

### Separation of tenant network traffic
Virtual networks (VNets) provide isolation of network traffic between tenants as part of their fundamental design. Your Azure subscription can contain multiple logically isolated private networks, and can include firewall, load balancing, and network address translation capabilities. Each VNet is isolated from other VNets by default. Multiple deployments inside your subscription can be placed on the same VNet, and then communicate with each other through private IP addresses.
Network access to VMs is limited by packet filtering at the network edge, at load balancers, and at the Host OS level. You can additionally configure your host firewalls to further limit connectivity, specifying for each listening port whether connections are accepted from the Internet or only from role instances within the same cloud service or VNet.
Azure provides network isolation for each deployment and enforces the following rules:

- Traffic between VMs always traverses through trusted packet filters.
- Protocols such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and other OSI Layer-2 traffic from a VM are controlled using rate-limiting and anti-spoofing protection.
- VMs cannot capture any traffic on the network that is not intended for them.
- Your VMs cannot send traffic to Azure private interfaces and infrastructure services, or to VMs belonging to other customers. Your VMs can only communicate with other VMs owned or controlled by you and with Azure infrastructure service endpoints meant for public communications.
- When you put a VM on a VNet, that VM gets its own address space that is invisible, and hence, not reachable from VMs outside of a deployment or VNet (unless configured to be visible via public IP addresses). Your environment is open only through the ports that you specify for public access; if the VM is defined to have a public IP address, then all ports are open for public access.
#### Packet flow and network path protection
Azure's hyperscale network is designed to provide uniform high capacity between servers, performance isolation between services (including customers), and Ethernet Layer-2 semantics. Azure uses several networking implementations to achieve these goals: a) flat addressing to allow service instances to be placed anywhere in the network; b) load balancing to spread traffic uniformly across network paths; and c) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane.
These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch – a Virtual Layer 2 (VL2) – and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that it is not possible for the traffic of one service to be affected by the traffic of any other service, as if each service were connected by a separate physical switch.
This section explains how packets flow through the Azure network, and how the topology, routing design, and directory system combine to virtualize the underlying network fabric, creating the illusion that servers are connected to a large, non-interfering datacenter-wide Layer-2 switch. The Azure network uses [two different IP-address families](/windows-server/networking/sdn/technologies/hyper-v-network-virtualization/hyperv-network-virtualization-technical-details-windows-server#packet-encapsulation):

- **Customer address (CA)** is the customer defined/chosen VNet IP address, also referred to as Virtual IP (VIP). The network infrastructure operates using CAs, which are externally routable. All switches and interfaces are assigned CAs, and switches run an IP-based (Layer-3) link-state routing protocol that disseminates only these CAs. This design allows switches to obtain the complete switch-level topology, and forward packets encapsulated with CAs along shortest paths.
- **Provider address (PA)** is the Azure assigned internal fabric address that is not visible to users and is also referred to as Dynamic IP (DIP).

No traffic goes directly from the Internet to a server; all traffic from the Internet must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports. Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses [RFC 1918](https://datatracker.ietf.org/doc/rfc1918/) address space or private address space – the provider addresses (PAs) – that is not externally routable. The translation is performed at the SLBs. Customer addresses (CAs) that are externally routable are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their servers' locations change due to virtual-machine migration or reprovisioning. Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs, and this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory system's resolution service to learn the actual location of the destination and then tunnels the original packet there.
Figure 9 depicts a sample packet flow where sender S sends packets to destination D.
:::image type="content" source="./media/secure-isolation-fig9.png" alt-text="Sample packet flow":::

**Figure 9.** Sample packet flow
A server cannot send packets to a PA if the directory service refuses to provide it with a CA through which it can route its packets, which means that the directory service enforces access control policies. Further, since the directory system knows which server is making the request when handling a lookup, it can **enforce fine-grained isolation policies**. For example, it can enforce a policy that only servers belonging to the same service can communicate with each other.
#### Traffic flow patterns
To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent on each server captures packets from the host, and encapsulates them with the CA address of the ToR switch of the destination. Once the packet arrives at the CA (that is, the destination ToR switch), the destination ToR switch decapsulates the packet and delivers it to the destination PA carried in the inner header. The packet is first delivered to one of the Intermediate switches, decapsulated by the switch, delivered to the ToR's CA, decapsulated again, and finally sent to the destination. This approach is depicted in Figure 10 using two possible traffic patterns: 1) external traffic (orange line) traversing over Azure ExpressRoute or the Internet to a VNet, and 2) internal traffic (blue line) between two VNets. Both traffic flows follow a similar pattern to isolate and protect network traffic.
:::image type="content" source="./media/secure-isolation-fig10.png" alt-text="Separation of tenant network traffic using VNets":::

**Figure 10.** Separation of tenant network traffic using VNets
**External traffic (orange line)** – For external traffic, Azure provides multiple layers of assurance to enforce isolation depending on traffic patterns. When you place a public IP on your VNet gateway, traffic from the public Internet or your on-premises network that is destined for that IP address will be routed to an Internet Edge Router. Alternatively, when you establish private peering over an ExpressRoute connection, it is connected with an Azure VNet via VNet Gateway. This set-up aligns connectivity from the physical circuit and makes the private IP address space from the on-premises location addressable. Azure then uses Border Gateway Protocol (BGP) to share routing details with the on-premises network to establish end-to-end connectivity. When communication begins with a resource within the VNet, the network traffic traverses as normal until it reaches a Microsoft ExpressRoute Edge (MSEE) Router. In both cases, VNets provide the means for Azure VMs to act as part of your on-premises network. A cryptographically protected [IPsec/IKE tunnel](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) is established between Azure and your internal network (for example, via [Azure VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md) or [Azure ExpressRoute Private Peering](../virtual-wan/vpn-over-expressroute.md)), enabling the VM to connect securely to your on-premises resources as though it was directly on that network.
At the Internet Edge Router or the MSEE Router, the packet is encapsulated using Generic Routing Encapsulation (GRE). This encapsulation uses a unique identifier specific to the VNet destination and the destination address, which is used to appropriately route the traffic to the identified VNet. Upon reaching the VNet Gateway, which is a special VNet used only to accept traffic from outside of an Azure VNet, the encapsulation is verified by the Azure network fabric to ensure: a) the endpoint receiving the packet is a match to the unique VNet ID used to route the data, and b) the destination address requested exists in this VNet. Once verified, the packet is routed as internal traffic from the VNet Gateway to the final requested destination address within the VNet. This approach ensures that traffic from external networks travels only to the Azure VNet for which it is destined, enforcing isolation.
Azure VNets implement several mechanisms to ensure secure traffic between tenants. These mechanisms align to existing industry standards and security practices, and prevent well-known attack vectors, including:

- **Prevent IP address spoofing** – Whenever encapsulated traffic is transmitted by a VNet, the service reverifies the information on the receiving end of the transmission. The traffic is looked up and encapsulated independently at the start of the transmission, and reverified at the receiving endpoint to ensure the transmission was performed appropriately. This verification is done with an internal VNet feature called SpoofGuard, which verifies that the source and destination are valid and allowed to communicate, thereby preventing mismatches in expected encapsulation patterns that might otherwise permit spoofing. The GRE encapsulation processes prevent spoofing because any GRE encapsulation and encryption not done by the Azure network fabric is treated as dropped traffic.
- **Provide network segmentation across customers with overlapping network spaces** – Azure VNet's implementation relies on established tunneling standards such as GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that you are always operating within your unique address space, even when your address space overlaps with address spaces of other tenants or the Azure network fabric. Anything that has not been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described above, any encapsulated traffic not performed by the Azure network fabric is discarded.
- **Prevent traffic from crossing between VNets** – Preventing traffic from crossing between VNets is done through the same mechanisms that handle address overlap and prevent spoofing. Traffic crossing between VNets is rendered infeasible by using unique VNet IDs established per tenant in combination with verification of all traffic at the source and destination. Users do not have access to the underlying transmission mechanisms that rely on these IDs to perform the encapsulation. Therefore, any attempt to encapsulate and simulate these mechanisms would lead to dropped traffic.
In addition to these key protections, all unexpected traffic originating from the Internet is dropped by default. Any packet entering the Azure network will first encounter an Edge router. Edge routers intentionally allow all inbound traffic into the Azure network except spoofed traffic. This basic traffic filtering protects the Azure network from known malicious traffic. Azure also implements DDoS protection at the network layer, collecting logs to throttle or block traffic based on real-time and historical data analysis, and mitigates attacks on demand.
Moreover, the Azure network fabric blocks traffic from any IPs originating in the Azure network fabric space that are spoofed. The Azure network fabric uses GRE and Virtual Extensible LAN (VXLAN) to validate that all allowed traffic is Azure-controlled traffic and all non-Azure GRE traffic is blocked. By using GRE tunnels and VXLAN to segment traffic using customer unique keys, Azure meets [RFC 3809](https://datatracker.ietf.org/doc/rfc3809/) and [RFC 4110](https://datatracker.ietf.org/doc/rfc4110/). When using Azure VPN Gateway in combination with ExpressRoute, Azure meets [RFC 4111](https://datatracker.ietf.org/doc/rfc4111/) and [RFC 4364](https://datatracker.ietf.org/doc/rfc4364/). With a comprehensive approach for isolation encompassing external and internal network traffic, Azure VNets provide you with assurance that Azure successfully routes traffic between VNets, allows proper network segmentation for tenants with overlapping address spaces, and prevents IP address spoofing.
You can also use Azure services to further isolate and protect your resources. Using [network security groups](../virtual-network/manage-network-security-group.md) (NSGs), a feature of Azure Virtual Network, you can filter traffic by source and destination IP address, port, and protocol via multiple inbound and outbound security rules – essentially acting as a distributed virtual firewall and IP-based network access control list (ACL). You can apply an NSG to each NIC in a virtual machine, to the subnet that a NIC or another Azure resource is connected to, and directly to a virtual machine scale set, allowing finer control over your infrastructure.
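As a concrete illustration, the following Azure CLI sketch creates an NSG, adds a rule that admits only HTTPS from the Internet (all other inbound traffic is denied by the NSG's default rules), and associates the NSG with a subnet; every name here is an illustrative placeholder.

```azurecli
# Create the network security group
az network nsg create --resource-group myResourceGroup --name myNsg

# Allow only HTTPS inbound from the Internet
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowHttpsInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 443

# Apply the NSG to a subnet so the rules cover every NIC in it
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name mySubnet \
  --network-security-group myNsg
```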
At the infrastructure layer, Azure implements a Hypervisor firewall to protect all tenants running within virtual machines on top of the Hypervisor from unauthorized access. This Hypervisor firewall is distributed as part of the NSG rules deployed to the Host, implemented in the Hypervisor, and configured by the Fabric Controller agent, as shown in Figure 4. The Host OS instances use the built-in Windows Firewall to implement fine-grained ACLs at a greater granularity than router ACLs; because they are maintained by the same software that provisions tenants, they are never out of date. The fine-grained ACLs are applied to Windows Firewall using the Machine Configuration File (MCF).
At the top of the operating system stack is the Guest OS, which you use as your operating system. By default, this layer does not allow any inbound communication to a cloud service or virtual network, essentially making it part of a private network. For PaaS Web and Worker roles, remote access is not permitted by default. You can enable Remote Desktop Protocol (RDP) access as an explicit option. For IaaS VMs created using the Azure portal, RDP and remote PowerShell ports are opened by default; however, port numbers are assigned randomly. For IaaS VMs created via PowerShell, RDP and remote PowerShell ports must be opened explicitly. If the administrator chooses to keep the RDP and remote PowerShell ports open to the Internet, the account allowed to create RDP and PowerShell connections should be secured with a strong password. Even if ports are open, you can define ACLs on the public IPs for extra protection if desired.
### Service tags
-Customers can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect their Azure resources from the Internet while accessing Azure services that have public endpoints. With service tags, customers can define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules.
+You can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure resources from the Internet while accessing Azure services that have public endpoints. With service tags, you can define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules.
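For example, the following Azure CLI sketch uses service tags in outbound NSG rules to permit traffic to Azure Storage in one region while denying all other outbound Internet traffic; the NSG name, region, and priorities are illustrative.

```azurecli
# Allow outbound traffic to Azure Storage in East US using its service tag
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowStorageOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol '*' \
  --destination-address-prefixes Storage.EastUS \
  --destination-port-ranges '*'

# Deny all other outbound traffic to the Internet
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name DenyInternetOutbound \
  --priority 200 \
  --direction Outbound \
  --access Deny \
  --protocol '*' \
  --destination-address-prefixes Internet \
  --destination-port-ranges '*'
```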
> [!NOTE]
> You can create inbound/outbound network security group rules to deny traffic to/from the Internet and allow traffic to/from Azure. Service tags are available for a wide range of Azure services for use in network security group rules.
>
> *Additional resources:*
>
> - **[Available service tags for specific Azure services](../virtual-network/service-tags-overview.md#available-service-tags)**

### Azure Private Link
You can use [Private Link](../private-link/private-link-overview.md) to access Azure PaaS services and Azure-hosted customer/partner services over a [private endpoint](../private-link/private-endpoint-overview.md) in your VNet, ensuring that traffic between your VNet and the service travels across the Microsoft global backbone network. This approach eliminates the need to expose the service to the public Internet. You can also create your own [private link service](../private-link/private-link-service-overview.md) in your own VNet and deliver it to your customers.
Azure private endpoint is a network interface that connects you privately and securely to a service powered by Private Link. Private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet.
From the networking isolation standpoint, key benefits of Private Link include:
- You can connect your VNet to services in Azure without a public IP address at the source or destination. Private Link handles the connectivity between the service and its consumers over the Microsoft global backbone network.
- You can access services running in Azure from on-premises over Azure ExpressRoute private peering, VPN tunnels, and peered virtual networks using private endpoints. Private Link eliminates the need to set up public peering or traverse the Internet to reach the service.
- You can connect privately to services running in other Azure regions.
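To make this concrete, the Azure CLI sketch below creates a private endpoint for an existing storage account's blob service; the resource names are illustrative, and the `--group-id` value depends on the target service (older CLI versions expose this parameter as `--group-ids`).

```azurecli
# Look up the resource ID of the target PaaS resource
storageId=$(az storage account show \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --query id --output tsv)

# Create a private endpoint for the storage account's blob sub-resource
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myPrivateEndpoint \
  --vnet-name myVnet \
  --subnet mySubnet \
  --private-connection-resource-id "$storageId" \
  --group-id blob \
  --connection-name myConnection
```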
> [!NOTE]
> You can use the Azure portal to manage private endpoint connections on Azure PaaS resources. For customer/partner owned Private Link services, Azure PowerShell and the Azure CLI are the preferred methods for managing private endpoint connections.
>
> *Additional resources:*
>
> - **[How to manage private endpoint connections on Azure PaaS resources](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-azure-paas-resources)**
> - **[How to manage private endpoint connections on customer/partner owned Private Link service](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-a-customerpartner-owned-private-link-service)**

### Data encryption in transit
Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). **Data encryption in transit isolates your network traffic from other traffic and helps protect data from interception**. Data in transit applies to scenarios involving data traveling between:
- Your end users and Azure service
- Your on-premises datacenter and Azure region
- Microsoft datacenters as part of expected Azure service operation
#### End user's connection to Azure service

**Transport Layer Security (TLS):** Azure uses the TLS protocol to help protect data when it is traveling between your end users and Azure services. Most of your end users will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft does not control or limit the regions from which you or your end users may access or move customer data.
> [!IMPORTANT]
> You can increase security by enabling encryption in transit. For example, you can use **[Application Gateway](../application-gateway/ssl-overview.md)** to configure **[end-to-end encryption](../application-gateway/application-gateway-end-to-end-ssl-powershell.md)** of network traffic and rely on **[Key Vault integration](../application-gateway/key-vault-certs.md)** for TLS termination.
Across Azure services, traffic to and from the service is [protected by TLS 1.2](https://azure.microsoft.com/updates/azuretls12/) using RSA-2048 for key exchange and AES-256 for data encryption. The corresponding crypto modules are FIPS 140 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server).
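You can observe the negotiated protocol and cipher yourself. The following probe is a minimal sketch using the common `openssl s_client` tool against an illustrative storage endpoint; it prints the TLS version and cipher suite the service negotiates (the `-brief` flag requires OpenSSL 1.1.0 or later).

```bash
# Inspect the TLS version and cipher negotiated with an Azure endpoint
openssl s_client -connect mystorageaccount.blob.core.windows.net:443 -brief </dev/null
```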
TLS provides strong authentication, message privacy, and integrity. [Perfect Forward Secrecy](https://en.wikipedia.org/wiki/Forward_secrecy) (PFS) protects connections between your client systems and Microsoft cloud services by generating a unique session key for every session you initiate. PFS protects past sessions against potential future key compromises. This combination makes it more difficult to intercept and access data in transit.
**In-transit encryption for VMs:** Remote sessions to Windows and Linux VMs deployed in Azure can be conducted over protocols that ensure data encryption in transit. For example, the [Remote Desktop Protocol](/windows/win32/termserv/remote-desktop-protocol) (RDP) initiated from your client computer to Windows and Linux VMs enables TLS protection for data in transit. You can also use [Secure Shell](../virtual-machines/linux/ssh-from-windows.md) (SSH) to connect to Linux VMs running in Azure. SSH is an encrypted connection protocol available by default for remote management of Linux VMs hosted in Azure.
> [!IMPORTANT]
> You should review best practices for network security, including guidance for **[disabling RDP/SSH access to Virtual Machines](../security/fundamentals/network-best-practices.md#disable-rdpssh-access-to-virtual-machines)** from the Internet to mitigate brute force attacks to gain access to Azure Virtual Machines. Accessing VMs for remote management can then be accomplished via **[point-to-site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)**, **[site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md)**, or **[Azure ExpressRoute](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)**.
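If you must keep management ports reachable while you transition to VPN or ExpressRoute access, one mitigation is an explicit NSG deny rule for RDP and SSH from the Internet, as in this illustrative Azure CLI sketch (names and priority are placeholders).

```azurecli
# Explicitly deny SSH (22) and RDP (3389) from the Internet
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name DenyMgmtFromInternet \
  --priority 110 \
  --direction Inbound \
  --access Deny \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 22 3389
```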
**Azure Storage transactions:** When interacting with Azure Storage through the Azure portal, all transactions take place over HTTPS. Moreover, you can configure your storage accounts to accept requests only from secure connections by setting the “[secure transfer required](../storage/common/storage-require-secure-transfer.md)” property for the storage account. The “secure transfer required” option is enabled by default when creating a storage account in the Azure portal.
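For accounts created outside the portal, or to confirm the setting, you can enforce secure transfer with the Azure CLI, as in this sketch (account and resource group names are illustrative).

```azurecli
# Require HTTPS (and encrypted SMB) for all requests to the storage account
az storage account update \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --https-only true
```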
[Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry-standard [Server Message Block](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) (SMB) protocol. By default, all Azure storage accounts [have encryption in transit enabled](../storage/files/storage-files-planning.md#encryption-in-transit). Therefore, when mounting a share over SMB or accessing it through the Azure portal (or PowerShell, CLI, and Azure SDKs), Azure Files will only allow the connection if it is made with SMB 3.0+ with encryption or over HTTPS.
#### Datacenter connection to Azure region

**VPN encryption:** [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure Virtual Machines (VMs) to act as part of your internal (on-premises) network. With VNet, you choose the address ranges of non-globally-routable IP addresses to be assigned to the VMs so that they will not collide with addresses you are using elsewhere. You have options to securely connect to a VNet from your on-premises infrastructure or remote locations.
+- **Site-to-Site** (IPsec/IKE VPN tunnel) – A cryptographically protected "tunnel" is established between Azure and your internal network, allowing an Azure VM to connect to your back-end resources as though it was directly on that network. This type of connection requires a [VPN device](../vpn-gateway/vpn-gateway-vpn-faq.md#s2s) located on-premises that has an externally facing public IP address assigned to it. You can use Azure [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) to send encrypted traffic between your VNet and your on-premises infrastructure across the public Internet, for example, a [site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md) relies on IPsec for transport encryption. VPN Gateway supports a wide range of encryption algorithms that are FIPS 140 validated. Moreover, you can configure VPN Gateway to use [custom IPsec/IKE policy](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) with specific cryptographic algorithms and key strengths instead of relying on the default Azure policies. IPsec encrypts data at the IP level (Network Layer 3).
+- **Point-to-Site** (VPN over SSTP, OpenVPN, and IPsec) – A secure connection is established from your individual client computer to your VNet using Secure Socket Tunneling Protocol (SSTP), OpenVPN, or IPsec. As part of the [Point-to-Site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md) configuration, you need to install a certificate and a VPN client configuration package, which allow the client computer to connect to any VM within the VNet. [Point-to-Site VPN](../vpn-gateway/point-to-site-about.md) connections do not require a VPN device or a public facing IP address.
-In addition to controlling the type of algorithm that is supported for VPN connections, Azure provides customers with the ability to enforce that all traffic leaving a VNet may only be routed through a VNet Gateway (for example, Azure VPN Gateway). This enforcement allows customers to ensure that traffic may not leave a VNet without being encrypted. A VPN Gateway can be used for [VNet-to-VNet](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses [Pre-Shared Key (PSK) authentication](../vpn-gateway/vpn-gateway-vpn-faq.md#how-does-my-vpn-tunnel-get-authenticated) whereby Microsoft generates a PSK when the VPN tunnel is created. Customers can change the autogenerated PSK to their own.
+In addition to controlling the type of algorithm that is supported for VPN connections, Azure provides you with the ability to enforce that all traffic leaving a VNet may only be routed through a VNet Gateway (for example, Azure VPN Gateway). This enforcement allows you to ensure that traffic may not leave a VNet without being encrypted. A VPN Gateway can be used for [VNet-to-VNet](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses [Pre-Shared Key (PSK) authentication](../vpn-gateway/vpn-gateway-vpn-faq.md#how-does-my-vpn-tunnel-get-authenticated) whereby Microsoft generates the PSK when the VPN tunnel is created. You can change the autogenerated PSK to your own.
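As a concrete illustration, the sketch below replaces the autogenerated PSK on an existing site-to-site connection using the Python SDK (`azure-mgmt-network`); the same operations group can also be used to apply a custom IPsec/IKE policy. Subscription, resource group, connection name, and key value are all placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Replace the autogenerated pre-shared key on an existing
# site-to-site connection with a key you control.
client.virtual_network_gateway_connections.begin_set_shared_key(
    "my-resource-group",
    "my-s2s-connection",
    {"value": "<new-pre-shared-key>"},
).result()
```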
-**ExpressRoute encryption:** [ExpressRoute](../expressroute/expressroute-introduction.md) allows customers to create private connections between Microsoft datacenters and their on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet. Customers can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/) that enables customers to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, that is, data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). Customers can use MACsec to encrypt the physical links between their network devices and Microsoft network devices when they connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). ExpressRoute Direct allows for direct fiber connections from customer's edge to the Microsoft Enterprise edge routers at the peering locations.
+**Azure ExpressRoute encryption:** [Azure ExpressRoute](../expressroute/expressroute-introduction.md) allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet. You can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/), which enables you to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, that is, data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). You can use MACsec to encrypt the physical links between your network devices and Microsoft network devices when you connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering locations.
-Customers can enable IPsec in addition to MACsec on their ExpressRoute Direct ports, as shown in Figure 11. Using Azure VPN Gateway, customers can set up an [IPsec tunnel over Microsoft Peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md) of customer's ExpressRoute circuit between customer's on-premises network and customer's Azure VNet. MACsec secures the physical connection between customer's on-premises network and Microsoft. IPsec secures the end-to-end connection between customer's on-premises network and their VNets in Azure. MACsec and IPsec can be enabled independently.
+You can enable IPsec in addition to MACsec on your ExpressRoute Direct ports, as shown in Figure 11. Using VPN Gateway, you can set up an [IPsec tunnel over Microsoft Peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md) of your ExpressRoute circuit between your on-premises network and your Azure VNet. MACsec secures the physical connection between your on-premises network and Microsoft. IPsec secures the end-to-end connection between your on-premises network and your VNets in Azure. MACsec and IPsec can be enabled independently.
:::image type="content" source="./media/secure-isolation-fig11.png" alt-text="VPN and ExpressRoute encryption for data in transit" border="false"::: **Figure 11.** VPN and ExpressRoute encryption for data in transit #### Traffic across Microsoft global network backbone
-Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability especially for disaster recovery scenarios. Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md) (GRS) and paired regions are also recommended when configuring active [geo-replication](../azure-sql/database/active-geo-replication-overview.md) for Azure SQL Database. Paired regions are located within the same Geo; however, network traffic is not guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around failures for optimal reliability.
+Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability especially for disaster recovery scenarios. Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md) (GRS) and paired regions are also recommended when configuring active [geo-replication](../azure-sql/database/active-geo-replication-overview.md) for Azure SQL Database. Paired regions are located within the same geography; however, network traffic is not guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around failures for optimal reliability.
-Moreover, all Azure traffic traveling within a region or between regions is [encrypted by Microsoft using MACsec](../security/fundamentals/encryption-overview.md#data-link-layer-encryption-in-azure), which relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 160,000 km of lit fiber optic and undersea cable systems.
+Moreover, all Azure traffic traveling within a region or between regions is [encrypted by Microsoft using MACsec](../security/fundamentals/encryption-overview.md#data-link-layer-encryption-in-azure), which relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 250,000 km of lit fiber optic and undersea cable systems.
> [!IMPORTANT]
-> Customers should review Azure **[best practices](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit)** for the protection of data in transit to help ensure that all data in transit is encrypted. For key Azure PaaS storage services (for example, Azure SQL Database, SQL Managed Instance, and Azure Synapse Analytics), data encryption in transit is **[enforced by default](../azure-sql/database/security-overview.md#information-protection-and-encryption)**.
+> You should review Azure **[best practices](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit)** for the protection of data in transit to help ensure that all data in transit is encrypted. For key Azure PaaS storage services (for example, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics), data encryption in transit is **[enforced by default](../azure-sql/database/security-overview.md#information-protection-and-encryption)**.
### Third-party network virtual appliances
-Azure provides customers with many features to help them achieve their security and isolation goals, including [Azure Security Center](../security-center/security-center-introduction.md), [Azure Monitor](../azure-monitor/overview.md), [Azure Firewall](../firewall/overview.md), [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md), [Network Security Groups](../virtual-network/network-security-groups-overview.md), [Azure Application Gateway](../application-gateway/overview.md), [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md), [Network Watcher](../network-watcher/network-watcher-monitoring-overview.md), [Azure Sentinel](../sentinel/overview.md), and [Azure Policy](../governance/policy/overview.md). In addition to the built-in capabilities that Azure provides, customers can use third-party [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) to accommodate their specific network isolation requirements while at the same time applying existing in-house skills. Azure supports a wide range of appliances, including offerings from F5, Palo Alto Networks, Cisco, Check Point, Barracuda, Citrix, Fortinet, and many others. Network appliances support network functionality and services in the form of VMs in customer virtual networks and deployments.
+Azure provides you with many features to help you achieve your security and isolation goals, including [Azure Security Center](../security-center/security-center-introduction.md), [Azure Monitor](../azure-monitor/overview.md), [Azure Firewall](../firewall/overview.md), [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md), [network security groups](../virtual-network/network-security-groups-overview.md), [Application Gateway](../application-gateway/overview.md), [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md), [Network Watcher](../network-watcher/network-watcher-monitoring-overview.md), [Azure Sentinel](../sentinel/overview.md), and [Azure Policy](../governance/policy/overview.md). In addition to the built-in capabilities that Azure provides, you can use third-party [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) to accommodate your specific network isolation requirements while at the same time applying existing in-house skills. Azure supports a wide range of appliances, including offerings from F5, Palo Alto Networks, Cisco, Check Point, Barracuda, Citrix, Fortinet, and many others. Network appliances support network functionality and services in the form of VMs in your virtual networks and deployments.
-The cumulative effect of network isolation restrictions is that each cloud service acts as though it were on an isolated network where VMs within the cloud service can communicate with one another, identifying one another by their source IP addresses with confidence that no other parties can impersonate their peer VMs. They can also be configured to accept incoming connections from the Internet over specific ports and protocols and to ensure that all network traffic leaving customer Virtual Networks is always encrypted.
+The cumulative effect of network isolation restrictions is that each cloud service acts as though it were on an isolated network where VMs within the cloud service can communicate with one another, identifying one another by their source IP addresses with confidence that no other parties can impersonate their peer VMs. They can also be configured to accept incoming connections from the Internet over specific ports and protocols and to ensure that all network traffic leaving your virtual networks is always encrypted.
> [!TIP]
-> Customers should review published Azure networking documentation for guidance on how to use native security features to help protect their data.
+> You should review published Azure networking documentation for guidance on how to use native security features to help protect your data.
>
> *Additional resources:*
> - **[Azure network security overview](../security/fundamentals/network-overview.md)**
> - **[Azure network security white paper](https://azure.microsoft.com/resources/azure-network-security/)**

## Storage isolation
-Microsoft Azure separates customer VM-based computation resources from storage as part of its [fundamental design](../security/fundamentals/isolation-choices.md#storage-isolation). The separation allows computation and storage to scale independently, making it easier to provide multi-tenancy and isolation. So, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically.
+Microsoft Azure separates your VM-based computation resources from storage as part of its [fundamental design](../security/fundamentals/isolation-choices.md#storage-isolation). The separation allows computation and storage to scale independently, making it easier to provide multi-tenancy and isolation. So, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically.
-Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscriptions/) can have one or more storage accounts. Azure storage supports various [authentication options](/rest/api/storageservices/authorize-requests-to-azure-storage), including:
+Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscriptions/) can have one or more storage accounts. Azure storage supports various [authentication options](/rest/api/storageservices/authorize-requests-to-azure-storage), including:
-- **Shared symmetric keys:** Upon storage account creation, Azure generates two 512-bit storage account keys that control access to the storage account. These keys can be rotated and regenerated by customers at any point thereafter without coordination with their applications.
+- **Shared symmetric keys:** Upon storage account creation, Azure generates two 512-bit storage account keys that control access to the storage account. You can rotate and regenerate these keys at any point thereafter without coordination with your applications.
- **Azure AD-based authentication:** Access to Azure Storage can be controlled by Azure Active Directory (Azure AD), which enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, including Microsoft insiders. More information about Azure AD tenant isolation is available from a white paper [Azure Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
-- **Shared access signatures (SAS):** Shared access signatures or "pre-signed URLs" can be created from the shared symmetric keys. These URLs can be signification limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
+- **Shared access signatures (SAS):** Shared access signatures or "pre-signed URLs" can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
- **User delegation SAS:** Delegated authentication is similar to SAS but is [based on Azure AD tokens](/rest/api/storageservices/create-user-delegation-sas) rather than the shared symmetric keys. This approach allows a service that authenticates with Azure AD to create a pre-signed URL with limited scope and grant temporary access to another user, service, or device (see the sketch after this list).
-- **Anonymous public read access:** Customers can allow a small portion of their storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level for customers who desire more stringent control.
+- **Anonymous public read access:** You can allow a small portion of your storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level if you desire more stringent control.
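To make the SAS options concrete, here is a minimal sketch that issues a user delegation SAS with the Python data-plane SDK (`azure-storage-blob`); the account, container, and blob names are placeholders. The SAS is signed with an Azure AD-backed delegation key rather than the shared account keys, and its scope is limited to read access on a single blob.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

service = BlobServiceClient(
    "https://mystorageaccount.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Obtain a delegation key backed by the Azure AD token, valid for one hour.
start = datetime.now(timezone.utc)
delegation_key = service.get_user_delegation_key(start, start + timedelta(hours=1))

# Sign a narrowly scoped, read-only SAS for a single blob.
sas_token = generate_blob_sas(
    account_name="mystorageaccount",
    container_name="mycontainer",
    blob_name="report.pdf",
    user_delegation_key=delegation_key,
    permission=BlobSasPermissions(read=True),
    expiry=start + timedelta(hours=1),
)
url = f"https://mystorageaccount.blob.core.windows.net/mycontainer/report.pdf?{sas_token}"
```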
Azure Storage provides storage for a wide variety of workloads, including:
While Azure Storage supports a wide range of different externally facing customer storage scenarios, internally, the physical storage for the above services is managed by a common set of APIs. To provide durability and availability, Azure Storage relies on data replication and data partitioning across storage resources that are shared among tenants. To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers as described in this section.

### Data replication
-Customer data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies customer data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. Customers can typically choose to replicate their data within the same data center, across [availability zones within the same region](../availability-zones/az-overview.md), or across geographically separated regions. Specifically, when creating a storage account, customers can select one of the following [redundancy options](../storage/common/storage-redundancy.md#summary-of-redundancy-options):
+Your data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies your data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. You can typically choose to replicate your data within the same data center, across [availability zones within the same region](../availability-zones/az-overview.md), or across geographically separated regions. Specifically, when creating a storage account, you can select one of the following [redundancy options](../storage/common/storage-redundancy.md#summary-of-redundancy-options):
-- **Locally redundant storage (LRS)** replicates three copies (or the erasure coded equivalent, as described later) of customer data within a single data center. A write request to an LRS storage account returns successfully only after the data is written to all three replicas. Each replica resides in separate fault and upgrade domains within a scale unit (set of storage racks within a data center).
-- **Zone-redundant storage (ZRS)** replicates customer data synchronously across three storage clusters in a single [region](../availability-zones/az-overview.md#regions). Each storage cluster is physically separated from the others and is in its own [Availability Zone](../availability-zones/az-overview.md#availability-zones) (AZ). A write request to a ZRS storage account returns successfully only after the data is written to all replicas across the three clusters.
-- **Geo-redundant storage (GRS)** replicates customer data to a [secondary (paired) region](../best-practices-availability-paired-regions.md) region that is hundreds of kilometers away from the primary region. GRS storage accounts are durable even during a complete regional outage or a disaster in which the primary region isn't recoverable. For a storage account with GRS or RA-GRS enabled, all data is first replicated with LRS. An update is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region using GRS. When data is written to the secondary location, it's also replicated within that location using LRS.
-- **Read-access geo-redundant storage (RA-GRS)** is based on GRS. It provides read-only access to the data in the secondary location, in addition to geo-replication across two regions. With RA-GRS, customers can read from the secondary region regardless of whether Microsoft initiates a failover from the primary to secondary region.
-- **Geo-zone-redundant storage (GZRS)** combines the high availability of ZRS with protection from regional outages as provided by GRS. Data in a GZRS storage account is replicated across three AZs in the primary region and also replicated to a secondary geographic region for protection from regional disasters. Each Azure region is paired with another region within the same geography, together making a [regional pair](../best-practices-availability-paired-regions.md).
-- **Read-access geo-zone-redundant storage (RA-GZRS)** is based on GZRS. Customers can optionally enable read access to data in the secondary region with RA-GZRS if their applications need to be able to read data following a disaster in the primary region.
+- **Locally redundant storage (LRS)** replicates three copies (or the erasure coded equivalent, as described later) of your data within a single data center. A write request to an LRS storage account returns successfully only after the data is written to all three replicas. Each replica resides in separate fault and upgrade domains within a scale unit (set of storage racks within a data center).
+- **Zone-redundant storage (ZRS)** replicates your data synchronously across three storage clusters in a single [region](../availability-zones/az-overview.md#regions). Each storage cluster is physically separated from the others and is in its own [Availability Zone](../availability-zones/az-overview.md#availability-zones) (AZ). A write request to a ZRS storage account returns successfully only after the data is written to all replicas across the three clusters.
+- **Geo-redundant storage (GRS)** replicates your data to a [secondary (paired) region](../best-practices-availability-paired-regions.md) that is hundreds of kilometers away from the primary region. GRS storage accounts are durable even during a complete regional outage or a disaster in which the primary region isn't recoverable. For a storage account with GRS or RA-GRS enabled, all data is first replicated with LRS. An update is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region using GRS. When data is written to the secondary location, it's also replicated within that location using LRS.
+- **Read-access geo-redundant storage (RA-GRS)** is based on GRS. It provides read-only access to the data in the secondary location, in addition to geo-replication across two regions. With RA-GRS, you can read from the secondary region regardless of whether Microsoft initiates a failover from the primary to secondary region.
+- **Geo-zone-redundant storage (GZRS)** combines the high availability of ZRS with protection from regional outages as provided by GRS. Data in a GZRS storage account is replicated across three AZs in the primary region and also replicated to a secondary geographic region for protection from regional disasters. Each Azure region is paired with another region within the same geography, together making a [regional pair](../best-practices-availability-paired-regions.md).
+- **Read-access geo-zone-redundant storage (RA-GZRS)** is based on GZRS. You can optionally enable read access to data in the secondary region with RA-GZRS if your applications need to be able to read data following a disaster in the primary region.
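The redundancy option is chosen when the account is created. As an example, the following sketch creates a GZRS account with the Python management SDK (`azure-mgmt-storage`); the subscription ID, resource group, account name, and region are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# GZRS: three zone-redundant copies in the primary region plus
# asynchronous replication to the paired region.
poller = client.storage_accounts.begin_create(
    "my-resource-group",
    "mystorageaccount",
    {
        "location": "eastus2",
        "kind": "StorageV2",
        "sku": {"name": "Standard_GZRS"},
    },
)
account = poller.result()
```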
### High-level Azure Storage architecture
-Azure Storage production systems consist of storage stamps and the location service (LS), as shown in Figure 12. A storage stamp is a cluster of racks of storage nodes, where each rack is built as a separate fault domain with redundant networking and power. The LS manages all the storage stamps and the account namespace across all stamps. It allocates accounts to storage stamps and manages them across the storage stamps for load balancing and disaster recovery. The LS itself is distributed across two geographic locations for its own disaster recovery ([Calder, et al., 2011](https://sigops.org/s/conferences/sosp/2011/current/2011-Cascais/printable/11-calder.pdf)).
+Azure Storage production systems consist of storage stamps and the location service (LS), as shown in Figure 12. A storage stamp is a cluster of racks of storage nodes, where each rack is built as a separate fault domain with redundant networking and power. The LS manages all the storage stamps and the account namespace across all stamps. It allocates accounts to storage stamps and manages them across the storage stamps for load balancing and disaster recovery. The LS itself is distributed across two geographic locations for its own disaster recovery ([Calder, et al., 2011](https://sigops.org/s/conferences/sosp/2011/current/2011-Cascais/printable/11-calder.pdf)).
:::image type="content" source="./media/secure-isolation-fig12.png" alt-text="High-level Azure Storage architecture"::: **Figure 12.** High-level Azure Storage architecture (Source: [Calder, et al., 2011](https://sigops.org/s/conferences/sosp/2011/current/2011-Cascais/printable/11-calder.pdf))
There are three layers within a storage stamp: front-end, partition, and stream.
#### Front-end layer
The front-end (FE) layer consists of a set of stateless servers that take the incoming requests, authenticate and authorize the requests, and then route them to a partition server in the Partition Layer. The FE layer knows what partition server to forward each request to, since each front-end server caches a partition map. The partition map keeps track of the partitions for the service being accessed and what partition server is controlling (serving) access to each partition in the system. The FE servers also stream large objects directly from the stream layer.
-Transferring large volumes of data across the Internet is inherently unreliable. Using Azure block blobs service, users can upload and store large files efficiently by breaking up large files into smaller blocks of data. In this manner, block blobs allow partitioning of data into individual blocks for reliability of large uploads, as shown in Figure 13. Each block can be up to 100 MB in size with up to 50,000 blocks in the block blob. If a block fails to transmit correctly, only that particular block needs to be resent versus having to resend the entire file again. In addition, with a block blob, multiple blocks can be sent in parallel to decrease upload time.
+Transferring large volumes of data across the Internet is inherently unreliable. Using the Azure block blob service, you can upload and store large files efficiently by breaking up large files into smaller blocks of data. In this manner, block blobs allow partitioning of data into individual blocks for reliability of large uploads, as shown in Figure 13. Each block can be up to 100 MB in size with up to 50,000 blocks in the block blob. If a block fails to transmit correctly, only that particular block needs to be resent versus having to resend the entire file again. In addition, with a block blob, multiple blocks can be sent in parallel to decrease upload time.
:::image type="content" source="./media/secure-isolation-fig13.png" alt-text="Block blob partitioning of data into individual blocks"::: **Figure 13.** Block blob partitioning of data into individual blocks
-Customers can upload blocks in any order and determine their sequence in the final blocklist commitment step. Customers can also upload a new block to replace an existing uncommitted block of the same block ID.
+You can upload blocks in any order and determine their sequence in the final blocklist commitment step. You can also upload a new block to replace an existing uncommitted block of the same block ID.
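The upload flow described above maps directly onto the stage/commit API of the blob data-plane SDK. The sketch below (Python, `azure-storage-blob`, with placeholder account and blob names) stages 4 MiB blocks and commits the block list once every block has transferred.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobBlock, BlobClient

blob = BlobClient(
    "https://mystorageaccount.blob.core.windows.net",
    container_name="mycontainer",
    blob_name="large-file.bin",
    credential=DefaultAzureCredential(),
)

block_ids = []
with open("large-file.bin", "rb") as f:
    while chunk := f.read(4 * 1024 * 1024):  # 4 MiB per block (limit is 100 MB)
        block_id = str(uuid.uuid4())
        # Blocks may be staged in any order and retried individually on failure.
        blob.stage_block(block_id=block_id, data=chunk)
        block_ids.append(block_id)

# The blob becomes visible only when the block list is committed;
# the committed order defines the final byte sequence.
blob.commit_block_list([BlobBlock(block_id=bid) for bid in block_ids])
```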
#### Partition layer
The partition layer is responsible for a) managing higher-level data abstractions (Blob, Table, Queue), b) providing a scalable object namespace, c) providing transaction ordering and strong consistency for objects, d) storing object data on top of the stream layer, and e) caching object data to reduce disk I/O. This layer also provides asynchronous geo-replication of data and is focused on replicating data across stamps. Inter-stamp replication is done in the background to keep a copy of the data in two locations for disaster recovery purposes.
The stream layer provides synchronous replication (intra-stamp) across different
All data blocks stored in stream extent nodes have a 64-bit cyclic redundancy check (CRC) and a header protected by a hash signature to provide extent node (EN) data integrity. The CRC and signature are checked before every disk write, disk read, and network receive. In addition, scrubber processes read all data at regular intervals verifying the CRC and looking for "bit rot". If a bad extent is found, a new copy of that extent is created to replace the bad extent.
-Customer data in Azure Storage relies on data encryption at rest to provide cryptographic certainty for logical data isolation. Customers can choose between platform-managed encryption keys or customer-managed encryption keys. The handling of data encryption and decryption is transparent to customers, as discussed in the next section.
+Your data in Azure Storage relies on data encryption at rest to provide cryptographic certainty for logical data isolation. You can choose between Microsoft-managed encryption keys (also known as platform-managed encryption keys) or customer-managed encryption keys (CMK). The handling of data encryption and decryption is transparent to customers, as discussed in the next section.
### Data encryption at rest
-Azure provides extensive options for [data encryption at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. For more information, see [data encryption models](../security/fundamentals/encryption-models.md). This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management.
+Azure provides extensive options for [data encryption at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. For more information, see [data encryption models](../security/fundamentals/encryption-models.md). This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management.
> [!NOTE]
-> Customers who require extra security and isolation assurances for their most sensitive customer data stored in Azure services can encrypt it using their own encryption keys they control in Azure Key Vault.
+> If you require extra security and isolation assurances for your most sensitive data stored in Azure services, you can encrypt it using your own encryption keys you control in Azure Key Vault.
In general, controlling key access and ensuring efficient bulk encryption and decryption of data is accomplished via the following types of encryption keys (as shown in Figure 16), although other encryption keys can be used as described in *[Storage service encryption](#storage-service-encryption)* section.

-- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is utilized for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. DEK is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
-- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by the customer. This key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. As mentioned previously in *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault can use FIPS 140 validated hardware security modules (HSMs) to safeguard encryption keys. These keys are not exportable and there can be no clear-text version of the KEK outside the HSMs – the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Azure Active Directory. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
+- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is used for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. DEK is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
+- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by you. This key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. As mentioned previously in the *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault can use FIPS 140 validated hardware security modules (HSMs) to safeguard encryption keys. These keys are not exportable and there can be no clear-text version of the KEK outside the HSMs – the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Azure Active Directory. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
-**Figure 16.** Data Encryption Keys are encrypted using customer's key stored in Azure Key Vault
+**Figure 16.** Data Encryption Keys are encrypted using your key stored in Azure Key Vault
-Therefore, key hierarchy involves both DEK and KEK. DEK is encrypted with KEK and stored separately for efficient access by resource providers in bulk encryption and decryption operations. However, only an entity with access to the KEK can decrypt the DEK. The entity that has access to the KEK may be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEK, the KEK is effectively a single point by which DEK can be deleted via deletion of the KEK.
+Therefore, the encryption key hierarchy involves both DEK and KEK. DEK is encrypted with KEK and stored separately for efficient access by resource providers in bulk encryption and decryption operations. However, only an entity with access to the KEK can decrypt the DEK. The entity that has access to the KEK may be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEK, the KEK is effectively a single point by which DEK can be deleted via deletion of the KEK.
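The hierarchy is easiest to see as envelope encryption. The following toy sketch (Python, using the `cryptography` package) mirrors the flow: a symmetric DEK encrypts the data, an asymmetric KEK wraps the DEK, and only a holder of the KEK can recover the DEK. It is illustrative only; in Azure the KEK never leaves Key Vault.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# KEK: an asymmetric RSA key (held in Azure Key Vault in the real design).
kek = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# DEK: a symmetric AES-256 key used for bulk encryption of one block of data.
dek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(nonce, b"customer data block", None)

# The DEK is stored only in wrapped (KEK-encrypted) form.
wrapped_dek = kek.public_key().encrypt(dek, oaep)

# Only an entity with access to the KEK can unwrap the DEK and read the data;
# deleting the KEK therefore renders the wrapped DEK (and the data) unusable.
recovered_dek = kek.decrypt(wrapped_dek, oaep)
assert AESGCM(recovered_dek).decrypt(nonce, ciphertext, None) == b"customer data block"
```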
Detailed information about various [data encryption models](../security/fundamentals/encryption-models.md) and specifics on key management for a wide range of Azure platform services is available in online documentation. Moreover, some Azure services provide other [encryption models](../security/fundamentals/encryption-overview.md#azure-encryption-models), including client-side encryption, to further encrypt their data using more granular controls. The rest of this section covers encryption implementation for key Azure storage scenarios such as Storage service encryption and Azure Disk encryption for IaaS Virtual Machines, including server-side encryption for managed disks.

> [!TIP]
-> Customers should review published Azure data encryption documentation for guidance on how to protect their data.
+> You should review published Azure data encryption documentation for guidance on how to protect your data.
>
> *Additional resources:*
> - **[Encryption at rest overview](../security/fundamentals/encryption-atrest.md)**
> - **[Data encryption best practices](../security/fundamentals/data-encryption-best-practices.md)**

#### Storage service encryption
-Azure [Storage service encryption](../storage/common/storage-service-encryption.md) for data at rest ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is encrypted through FIPS 140 validated 256-bit AES encryption, and the handling of encryption, decryption, and key management in Storage service encryption is transparent to customers. By default, Microsoft controls the encryption keys and is responsible for key rotation, usage, and access. Keys are stored securely and protected inside a Microsoft key store. This option provides the most convenience for customers given that all Azure Storage services are supported.
+Azure [Storage service encryption](../storage/common/storage-service-encryption.md) for data at rest ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is encrypted through FIPS 140 validated 256-bit AES encryption, and the handling of encryption, decryption, and key management in Storage service encryption is transparent to customers. By default, Microsoft controls the encryption keys and is responsible for key rotation, usage, and access. Keys are stored securely and protected inside a Microsoft key store. This option provides you with the most convenience given that all Azure Storage services are supported.
-However, customers can also choose to manage encryption with their own keys by specifying:
+However, you can also choose to manage encryption with your own keys by specifying:
-- [Customer-managed key](../storage/common/customer-managed-keys-overview.md) for managing Azure Storage encryption whereby the key is stored in Azure Key Vault. This option provides much flexibility for customers to create, rotate, disable, and revoke access to customer-managed keys. Customers must use Azure Key Vault to store customer-managed keys. Both key vaults and managed HSMs are supported, as described previously in *[Azure Key Vault](#azure-key-vault)* section.
-- [Customer-provided key](../storage/blobs/encryption-customer-provided-keys.md) for encrypting and decrypting Blob storage only whereby the key can be stored in Azure Key Vault or in another key store on customer premises to meet regulatory compliance requirements. Customer-provided keys enable customers to pass an encryption key to Storage service using Blob APIs as part of read or write operations.
+- [Customer-managed key](../storage/common/customer-managed-keys-overview.md) for managing Azure Storage encryption whereby the key is stored in Azure Key Vault. This option provides much flexibility for you to create, rotate, disable, and revoke access to customer-managed keys. You must use Azure Key Vault to store customer-managed keys. Both key vaults and managed HSMs are supported, as described previously in the *[Azure Key Vault](#azure-key-vault)* section.
+- [Customer-provided key](../storage/blobs/encryption-customer-provided-keys.md) for encrypting and decrypting Blob storage only whereby the key can be stored in Azure Key Vault or in another key store on your premises to meet regulatory compliance requirements. Customer-provided keys enable you to pass an encryption key to Storage service using Blob APIs as part of read or write operations.
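For Blob storage, a customer-provided key travels with the request itself. The sketch below (Python, `azure-storage-blob`, placeholder names) generates a throwaway AES-256 key and passes it on upload and download; in practice the key would come from your own key store.

```python
import base64
import hashlib
import os

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient, CustomerProvidedEncryptionKey

# Throwaway key for illustration; a real key would come from your key store.
raw_key = os.urandom(32)
cpk = CustomerProvidedEncryptionKey(
    key_value=base64.b64encode(raw_key).decode(),
    key_hash=base64.b64encode(hashlib.sha256(raw_key).digest()).decode(),
)

blob = BlobClient(
    "https://mystorageaccount.blob.core.windows.net",
    container_name="mycontainer",
    blob_name="secret.txt",
    credential=DefaultAzureCredential(),
)

# The key is sent with each request and used server-side; it is not persisted.
blob.upload_blob(b"sensitive payload", cpk=cpk, overwrite=True)
data = blob.download_blob(cpk=cpk).readall()
```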
> [!NOTE]
-> Customers can configure customer-managed keys (CMK) with Azure Key Vault using the **[Azure portal](../storage/common/customer-managed-keys-configure-key-vault.md)**, **[PowerShell](../storage/common/customer-managed-keys-configure-key-vault.md)**, or **[Azure CLI](../storage/common/customer-managed-keys-configure-key-vault.md)** command-line tool. Customers can **[use .NET to specify a customer-provided key](../storage/blobs/storage-blob-customer-provided-key.md)** on a request to Blob storage.
+> You can configure customer-managed keys (CMK) with Azure Key Vault using the **[Azure portal](../storage/common/customer-managed-keys-configure-key-vault.md)**, **[PowerShell](../storage/common/customer-managed-keys-configure-key-vault.md)**, or **[Azure CLI](../storage/common/customer-managed-keys-configure-key-vault.md)** command-line tool. You can **[use .NET to specify a customer-provided key](../storage/blobs/storage-blob-customer-provided-key.md)** on a request to Blob storage.
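Beyond the portal, PowerShell, and Azure CLI, the same configuration can be expressed with the Python management SDK. A minimal sketch, assuming the key already exists in Key Vault and the storage account's managed identity has been granted access to it (all names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Switch Storage service encryption from Microsoft-managed keys to a
# customer-managed key held in Azure Key Vault.
client.storage_accounts.update(
    "my-resource-group",
    "mystorageaccount",
    {
        "encryption": {
            "key_source": "Microsoft.Keyvault",
            "key_vault_properties": {
                "key_name": "storage-cmk",
                "key_vault_uri": "https://my-key-vault.vault.azure.net",
            },
        }
    },
)
```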
Storage service encryption is enabled by default for all new and existing storage accounts and it [cannot be disabled](../storage/common/storage-service-encryption.md#about-azure-storage-encryption). As shown in Figure 17, the encryption process uses the following keys to help ensure cryptographic certainty of data isolation at rest:

- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption and it is unique per storage account in Azure Storage. It is generated by the Azure Storage service as part of the storage account creation. This key is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
-- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key that is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. It is never exposed directly to the Azure Storage service or other services. Customers must use Azure Key Vault to store their customer-managed keys for Storage service encryption.
-- *Stamp Key (SK)* is a symmetric AES-256 key that provides a third layer of encryption key security and is unique to each Azure Storage stamp, that is, cluster of storage hardware. This key is used to perform a final wrap of the DEK that results in the following key chain hierarchy: SK(KEK(DEK)).
+- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key that is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. It is never exposed directly to the Azure Storage service or other services. You must use Azure Key Vault to store your customer-managed keys for Storage service encryption.
+- *Stamp Key (SK)* is a symmetric AES-256 key that provides a third layer of encryption key security and is unique to each Azure Storage stamp, that is, cluster of storage hardware. This key is used to perform the final wrap of the DEK that results in the following key chain hierarchy: SK(KEK(DEK)).
These three keys are combined to protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure Storage service encryption is enabled by default and it cannot be disabled.
Storage accounts are encrypted regardless of their performance tier (standard or premium) or deployment model (Azure Resource Manager or classic). All Azure Storage [redundancy options](../storage/common/storage-redundancy.md) support encryption and all copies of a storage account are encrypted. All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. All object metadata is also encrypted.
-Because data encryption is performed by the Storage service, server-side encryption with CMK enables customers to use any operating system types and images for their VMs. For Windows and Linux customer IaaS VMs, Azure also provides Azure Disk encryption that enables customers to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining Azure Storage service encryption and Disk encryption effectively enables double encryption of data at rest.
+Because data encryption is performed by the Storage service, server-side encryption with CMK enables you to use any operating system types and images for your VMs. For your Windows and Linux IaaS VMs, Azure also provides Azure Disk encryption that enables you to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining Azure Storage service encryption and Disk encryption effectively enables [double encryption of data at rest](../virtual-machines/disks-enable-double-encryption-at-rest-portal.md).
#### Azure Disk encryption
-Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) may optionally be used to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of customer data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure disk encryption uses the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
+Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, you may optionally use [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure disk encryption uses the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
-Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys – it is commonly pre-installed on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to computer boot process.
+Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys – it is commonly pre-installed on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to computer boot process.
-For managed disks, Azure Disk encryption allows customers to encrypt the OS and Data disks used by an IaaS virtual machine; however, Data cannot be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help customers control and manage the disk encryption keys in key vaults. Customers can supply their own encryption keys, which are safeguarded in Azure Key Vault to support *bring your own key* (BYOK) scenarios, as described previously in *[Data encryption key management](#data-encryption-key-management)* section.
+For managed disks, Azure Disk encryption allows you to encrypt the OS and Data disks used by an IaaS virtual machine; however, Data cannot be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help you control and manage the disk encryption keys in key vaults. You can supply your own encryption keys, which are safeguarded in Azure Key Vault to support *bring your own key* (BYOK) scenarios, as described previously in the *[Data encryption key management](#data-encryption-key-management)* section.
Azure Disk encryption is not supported by Managed HSM or an on-premises key management service. Only key vaults managed by the Azure Key Vault service can be used to safeguard customer-managed encryption keys for Azure Disk encryption.
Azure Disk encryption relies on two encryption keys for implementation, as described previously:

- *Data Encryption Key (DEK)* is a symmetric AES-256 key used to encrypt OS and Data volumes through BitLocker or DM-Crypt. DEK itself is encrypted and stored in an internal location close to the data.
-- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key used to encrypt the Data Encryption Keys. KEK is kept in Azure Key Vault under customer control including granting access permissions through Azure Active Directory.
+- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key used to encrypt the Data Encryption Keys. KEK is kept in Azure Key Vault under your control including granting access permissions through Azure Active Directory.
-The DEK, encrypted with the KEK, is stored separately and only an entity with access to the KEK can decrypt the DEK. Access to the KEK is guarded by Azure Key Vault where customers can choose to store their keys in [FIPS 140 validated hardware security modules](../key-vault/keys/hsm-protected-keys-byok.md).
+The DEK, encrypted with the KEK, is stored separately and only an entity with access to the KEK can decrypt the DEK. Access to the KEK is guarded by Azure Key Vault where you can choose to store your keys in [FIPS 140 validated hardware security modules](../key-vault/keys/hsm-protected-keys-byok.md).
For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.yml), Azure Disk encryption selects the encryption method in BitLocker based on the version of Windows, for example, XTS-AES 256 bit for Windows Server 2012 or greater. These crypto modules are FIPS 140 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). For [Linux VMs](../virtual-machines/linux/disk-encryption-faq.yml), Azure Disk encryption uses the decrypt default of aes-xts-plain64 with a 256-bit volume master key that is FIPS 140 validated as part of DM-Crypt validation obtained by suppliers of Linux IaaS VM images in Microsoft Azure Marketplace.

##### *Server-side encryption for managed disks*
-[Azure managed disks](../virtual-machines/managed-disks-overview.md) are block-level storage volumes that are managed by Azure and used with Azure Windows and Linux virtual machines. They simplify disk management for Azure IaaS VMs by handling storage account management transparently for customers. Azure managed disks automatically encrypt customer data by default using [256-bit AES encryption](../virtual-machines/disk-encryption.md) that is FIPS 140 validated. For encryption key management, customers have the following choices:
+[Azure managed disks](../virtual-machines/managed-disks-overview.md) are block-level storage volumes that are managed by Azure and used with Azure Windows and Linux virtual machines. They simplify disk management for Azure IaaS VMs by handling storage account management transparently for you. Azure managed disks automatically encrypt your data by default using [256-bit AES encryption](../virtual-machines/disk-encryption.md) that is FIPS 140 validated. For encryption key management, you have the following choices:
- [Platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys) is the default choice that provides transparent data encryption at rest for managed disks whereby keys are managed by Microsoft.
-- [Customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) enables customers to have control over their own keys that can be imported into Azure Key Vault or generated inside Azure Key Vault. This approach relies on two sets of keys as described previously: DEK and KEK. DEK encrypts the data using an AES-256 based encryption and is in turn encrypted by an RSA-2048 KEK that is stored in Azure Key Vault. Only key vaults can be used to safeguard customer-managed keys; managed HSMs do not support Azure Disk encryption.
+- [Customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) enables you to have control over your own keys that can be imported into Azure Key Vault or generated inside Azure Key Vault. This approach relies on two sets of keys as described previously: DEK and KEK. DEK encrypts the data using an AES-256 based encryption and is in turn encrypted by an RSA-2048 KEK that is stored in Azure Key Vault. Only key vaults can be used to safeguard customer-managed keys; managed HSMs do not support Azure Disk encryption.
+
+Customer-managed keys (CMK) enable you to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over your encryption keys. You can grant access to managed disks in your Azure Key Vault so that your keys can be used for encrypting and decrypting the DEK. You can also disable your keys or revoke access to managed disks at any time. Finally, you have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing your encryption keys.
-Customer-managed keys (CMK) enable customers to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over their encryption keys. Customers can grant access to managed disks in their Azure Key Vault so that their keys can be used for encrypting and decrypting the DEK. Customers can also disable their keys or revoke access to managed disks at any time. Finally, customers have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing their encryption keys.
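A minimal Azure PowerShell sketch of this control, assuming hypothetical resource names: create a disk encryption set that wraps managed disk DEKs with your KEK, grant its identity access to the vault, and revoke access later by disabling the key:

```powershell
# Reference the vault and KEK that will protect managed disk DEKs.
$kv  = Get-AzKeyVault -VaultName 'contoso-ade-kv' -ResourceGroupName 'contoso-rg'
$key = Get-AzKeyVaultKey -VaultName 'contoso-ade-kv' -Name 'contoso-kek'

# Create a disk encryption set with a system-assigned managed identity.
$desConfig = New-AzDiskEncryptionSetConfig -Location 'eastus' `
    -SourceVaultId $kv.ResourceId -KeyUrl $key.Key.Kid -IdentityType SystemAssigned
New-AzDiskEncryptionSet -ResourceGroupName 'contoso-rg' -Name 'contoso-des' `
    -DiskEncryptionSet $desConfig

# Allow the disk encryption set's identity to wrap and unwrap DEKs with your KEK.
$des = Get-AzDiskEncryptionSet -ResourceGroupName 'contoso-rg' -Name 'contoso-des'
Set-AzKeyVaultAccessPolicy -VaultName 'contoso-ade-kv' `
    -ObjectId $des.Identity.PrincipalId -PermissionsToKeys wrapkey,unwrapkey,get

# Revoke access at any time: a disabled KEK can no longer unwrap DEKs.
Update-AzKeyVaultKey -VaultName 'contoso-ade-kv' -Name 'contoso-kek' -Enable $false
```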
+##### *Encryption at host*
+Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled are not encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
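As a sketch, encryption at host can be enabled on an existing VM with Azure PowerShell, assuming the `EncryptionAtHost` feature is registered for your subscription and a recent Az.Compute module version (resource names hypothetical):

```powershell
# The VM must be deallocated before the host encryption setting can change.
Stop-AzVM -ResourceGroupName 'contoso-rg' -Name 'contoso-vm' -Force

$vm = Get-AzVM -ResourceGroupName 'contoso-rg' -Name 'contoso-vm'
Update-AzVM -ResourceGroupName 'contoso-rg' -VM $vm -EncryptionAtHost $true

Start-AzVM -ResourceGroupName 'contoso-rg' -Name 'contoso-vm'
```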
-Customers are [always in control of their customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. They can access, extract, and delete their customer data stored in Azure at will. When a customer terminates their Azure subscription, Microsoft takes the necessary steps to ensure that the customer continues to own their customer data. A common customer concern upon data deletion or subscription termination is whether another customer or Azure administrator can access their deleted data. The following sections explain how data deletion, retention, and destruction works in Azure.
+You are [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own your customer data. A common concern upon data deletion or subscription termination is whether another customer or Azure administrator can access your deleted data. The following sections explain how data deletion, retention, and destruction work in Azure.
### Data deletion
-Storage is allocated sparsely, which means that when a virtual disk is created, disk space is not allocated for its entire capacity. Instead, a table is created that [maps addresses on the virtual disk to areas on the physical disk](/archive/blogs/walterm/microsoft-azure-data-security-data-cleansing-and-leakage) and that table is initially empty. The first time a customer writes data on the virtual disk, space on the physical disk is allocated and a pointer to it is placed in the table.
+Storage is allocated sparsely, which means that when a virtual disk is created, disk space is not allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. The first time you write data on the virtual disk, space on the physical disk is allocated and a pointer to it is placed in the table.
-When the customer deletes a blob or table entity, it will immediately get deleted from the index used to locate and access the data on the primary location, and then the deletion is done asynchronously at the geo-replicated copy of the data (for customers who provisioned [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage)). At the primary location, the customer can immediately try to access the blob or entity, and they wonΓÇÖt find it in their index, since Azure provides strong consistency for the delete. So, the customer can verify directly that the data has been deleted.
+When you delete a blob or table entity, it will immediately get deleted from the index used to locate and access the data on the primary location, and then the deletion is done asynchronously at the geo-replicated copy of the data, if you provisioned [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage). At the primary location, you can immediately try to access the blob or entity, and you won't find it in your index, since Azure provides strong consistency for the delete. So, you can verify directly that the data has been deleted.
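You can observe this strong consistency directly; a minimal Azure PowerShell sketch, assuming hypothetical account, container, and blob names:

```powershell
$ctx = New-AzStorageContext -StorageAccountName 'contosostorage' -UseConnectedAccount

# Delete the blob; the index entry is removed synchronously.
Remove-AzStorageBlob -Container 'docs' -Blob 'report.txt' -Context $ctx

# An immediate read finds nothing - the delete is strongly consistent.
Get-AzStorageBlob -Container 'docs' -Blob 'report.txt' -Context $ctx -ErrorAction SilentlyContinue
```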
-In Azure Storage, all disk writes are sequential. This approach minimizes the number of disk &#8220;seeks&#8221; but requires updating the pointers to objects every time they are written (new versions of pointers are also written sequentially). A side effect of this design is that it is not possible to ensure that a secret on disk is gone by overwriting it with other data. The original data will remain on the disk and the new value will be written sequentially. Pointers will be updated such that there is no way to find the deleted value anymore. Once the disk is full, however, the system has to write new logs onto disk space that has been freed up by the deletion of old data. Instead of allocating log files directly from disk sectors, log files are created in a file system running NTFS. A background thread running on Azure Storage nodes frees up space by going through the oldest log file, copying blocks that are still referenced from that oldest log file to the current log file (and updating all pointers as it goes). It then deletes the oldest log file. Consequently, there are two categories of free disk space on the disk: (1) space that NTFS knows is free, where it allocates new log files from this pool; and (2) space within those log files that Azure Storage knows is free since there are no current pointers to it.
+In Azure Storage, all disk writes are sequential. This approach minimizes the number of disk "seeks" but requires updating the pointers to objects every time they are written - new versions of pointers are also written sequentially. A side effect of this design is that it is not possible to ensure that a secret on disk is gone by overwriting it with other data. The original data will remain on the disk and the new value will be written sequentially. Pointers will be updated such that there is no way to find the deleted value anymore. Once the disk is full, however, the system has to write new logs onto disk space that has been freed up by the deletion of old data. Instead of allocating log files directly from disk sectors, log files are created in a file system running NTFS. A background thread running on Azure Storage nodes frees up space by going through the oldest log file, copying blocks that are still referenced from that oldest log file to the current log file, and updating all pointers as it goes. It then deletes the oldest log file. Therefore, there are two categories of free disk space on the disk: (1) space that NTFS knows is free, where it allocates new log files from this pool; and (2) space within those log files that Azure Storage knows is free since there are no current pointers to it.
-The sectors on the physical disk associated with the deleted data become immediately available for reuse and are overwritten when the corresponding storage block is reused for storing other data. The time to overwrite varies depending on disk utilization and activity. This process is consistent with the operation of a log-structured file system where all writes are written sequentially to disk. This process is not deterministic and there is no guarantee when particular data will be gone from physical storage. **However, when exactly deleted data gets overwritten or the corresponding physical storage allocated to another customer is irrelevant for the key isolation assurance that no data can be recovered after deletion:**
+The sectors on the physical disk associated with the deleted data become immediately available for reuse and are overwritten when the corresponding storage block is reused for storing other data. The time to overwrite varies depending on disk utilization and activity. This process is consistent with the operation of a log-structured file system where all writes are written sequentially to disk. This process is not deterministic and there is no guarantee when particular data will be gone from physical storage. **However, when exactly deleted data gets overwritten or the corresponding physical storage allocated to another customer is irrelevant for the key isolation assurance that no data can be recovered after deletion:**
-- A customer cannot read deleted data of another customer.
+- A customer can't read deleted data of another customer.
- If anyone tries to read a region on a virtual disk that they have not yet written to, physical space will not have been allocated for that region and therefore only zeroes would be returned.
-Customers are not provided with direct access to the underlying physical storage. Since customer software only addresses virtual disks, there is no way to express a request to read from or write to a physical address that is allocated to a different customer or a physical address that is free. For more information, see the blog post on [data cleansing and leakage](/archive/blogs/walterm/microsoft-azure-data-security-data-cleansing-and-leakage).
+Customers are not provided with direct access to the underlying physical storage. Since customer software only addresses virtual disks, there is no way for another customer to express a request to read from or write to a physical address that is allocated to you or a physical address that is free.
Conceptually, this rationale applies regardless of the software that keeps track of reads and writes. For [Azure SQL Database](../security/fundamentals/isolation-choices.md#sql-database-isolation), it is the SQL Database software that does this enforcement. For Azure Storage, it is the Azure Storage software. For non-durable drives of a VM, it is the VHD handling code of the Host OS. The mapping from virtual to physical address takes place outside of the customer VM.
-Finally, as described in *[Data encryption at rest](#data-encryption-at-rest)* section and depicted in Figure 16, the encryption key hierarchy relies on the Key Encryption Key (KEK) which can be kept in Azure Key Vault under customer control (that is, customer-managed key ΓÇô CMK) and used to encrypt the Data Encryption Key (DEK), which in turns encrypts data at rest using AES-256 symmetric encryption. Data in Azure Storage is encrypted at rest by default and customers can choose to have encryption keys under their own control. In this manner, customers can also prevent access to their data stored in Azure. Moreover, since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEK can be deleted via deletion of the KEK.
+Finally, as described in the *[Data encryption at rest](#data-encryption-at-rest)* section and depicted in Figure 16, the encryption key hierarchy relies on the Key Encryption Key (KEK), which can be kept in Azure Key Vault under your control (that is, customer-managed key, or CMK) and used to encrypt the Data Encryption Key (DEK), which in turn encrypts data at rest using AES-256 symmetric encryption. Data in Azure Storage is encrypted at rest by default and you can choose to have encryption keys under your own control. In this manner, you can also prevent access to your data stored in Azure. Moreover, since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be deleted via deletion of the KEK.
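For example, deleting the KEK in Azure Key Vault is sufficient to render the DEKs, and therefore the data, undecryptable; a minimal Azure PowerShell sketch with hypothetical vault and key names:

```powershell
# Delete the KEK; without it, the wrapped DEKs can no longer be decrypted.
Remove-AzKeyVaultKey -VaultName 'contoso-ade-kv' -Name 'contoso-kek' -Force

# If soft delete is enabled on the vault, purge to make the deletion permanent.
Remove-AzKeyVaultKey -VaultName 'contoso-ade-kv' -Name 'contoso-kek' -InRemovedState -Force
```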
### Data retention
-Always during the term of customerΓÇÖs Azure subscription, customer has the ability to access, extract, and delete customer data stored in Azure.
+At all times during the term of your Azure subscription, you can access, extract, and delete your data stored in Azure.
-If a subscription expires or is terminated, Microsoft will preserve customer data for a 90-day retention period to permit customers to extract data or renew their subscriptions. After this retention period, Microsoft will delete all customer data within an another 90 days, that is, customer data will be permanently deleted 180 days after expiration or termination. Given the data retention procedure, customers can control how long their data is stored by timing when they end the service with Microsoft. It is recommended that customers do not terminate their service until they have extracted all data so that the initial 90-day retention period can act as a safety buffer should customers later realize they missed something.
+If your subscription expires or is terminated, Microsoft will preserve your customer data for a 90-day retention period to permit you to extract your data or renew your subscription. After this retention period, Microsoft will delete all your customer data within another 90 days; that is, your customer data will be permanently deleted 180 days after expiration or termination. Given the data retention procedure, you can control how long your data is stored by timing when you end the service with Microsoft. It is recommended that you do not terminate your service until you have extracted all data so that the initial 90-day retention period can act as a safety buffer should you later realize you missed something.
-If the customer deleted an entire storage account by mistake, they should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. Customers can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it is permanently deleted. However, when a storage object (for example, blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless the customer made a backup, deleted storage objects cannot be recovered. For Blob storage, customers can implement extra protection against accidental or erroneous modifications or deletions by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they are deleted, within a retention period specified by the customer. To avoid retention of data after storage account or subscription deletion, customers can delete storage objects individually before deleting the storage account or subscription.
+If you deleted an entire storage account by mistake, you should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it is permanently deleted. However, when a storage object (for example, blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless you made a backup, deleted storage objects can't be recovered. For Blob storage, you can implement extra protection against accidental or erroneous modifications or deletions by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they are deleted, within a retention period that you specified. To avoid retention of data after storage account or subscription deletion, you can delete storage objects individually before deleting the storage account or subscription.
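For instance, a minimal Azure PowerShell sketch for enabling blob soft delete (resource names hypothetical):

```powershell
# Keep deleted blobs, versions, and snapshots recoverable for 14 days.
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName 'contoso-rg' `
    -StorageAccountName 'contosostorage' -RetentionDays 14
```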
-For accidental deletion involving Azure SQL Database, customers should check backups that the service makes automatically (for example, full database backup is done weekly, and differential database backups are done hourly) and use point-in-time restore. Also, individual services (such as Azure DevOps) can have their own policies for [accidental data deletion](/azure/devops/organizations/security/data-protection#mistakes-happen).
+For accidental deletion involving Azure SQL Database, you should check backups that the service makes automatically and use point-in-time restore. For example, full database backup is done weekly, and differential database backups are done hourly. Also, individual services (such as Azure DevOps) can have their own policies for [accidental data deletion](/azure/devops/organizations/security/data-protection#mistakes-happen).
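A hedged Azure PowerShell sketch of such a point-in-time restore into a new database (server, database, and timestamp are hypothetical):

```powershell
# Restore the state of 'appdb' as it was before the accidental change.
$db = Get-AzSqlDatabase -ResourceGroupName 'contoso-rg' -ServerName 'contoso-sql' -DatabaseName 'appdb'
Restore-AzSqlDatabase -FromPointInTimeBackup -PointInTime '2021-08-20T13:00:00Z' `
    -ResourceGroupName 'contoso-rg' -ServerName 'contoso-sql' `
    -TargetDatabaseName 'appdb-restored' -ResourceId $db.ResourceId
```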
### Data destruction

If a disk drive used for storage suffers a hardware failure, it is securely [erased or destroyed](https://www.microsoft.com/trustcenter/privacy/data-management) before decommissioning. The data on the drive is erased to ensure that the data cannot be recovered by any means. When such devices are decommissioned, Microsoft follows the [NIST SP 800-88 R1](https://csrc.nist.gov/publications/detail/sp/800-88/rev-1/final) disposal process with data classification aligned to FIPS 199 Moderate. Magnetic, electronic, or optical media are purged or destroyed in accordance with the requirements established in NIST SP 800-88 R1 where the terms are defined as follows:
- **Purge:** "a media sanitization process that protects the confidentiality of information against a laboratory attack", which involves "resources and knowledge to use nonstandard systems to conduct data recovery attempts on media outside their normal operating environment" using "signal processing equipment and specially trained personnel." Note: For hard disk drives (including ATA, SCSI, SATA, SAS, etc.) a firmware-level secure-erase command (single-pass) is acceptable, or a software-level three-pass overwrite and verification (ones, zeros, random) of the entire physical media including recovery areas, if any. For solid state disks (SSD), a firmware-level secure-erase command is necessary.
- **Destroy:** "a variety of methods, including disintegration, incineration, pulverizing, shredding, and melting" after which the media "cannot be reused as originally intended."
-Purge and Destroy operations must be performed using tools and processes approved by the Microsoft Cloud + AI Security Group. Records must be kept of the erasure and destruction of assets. Devices that fail to complete the Purge successfully must be degaussed (for magnetic media only) or Destroyed.
+Purge and Destroy operations must be performed using tools and processes approved by the Microsoft Cloud + AI Security Group. Records must be kept of the erasure and destruction of assets. Devices that fail to complete the Purge successfully must be degaussed (for magnetic media only) or destroyed.
In addition to technical implementation details that enable Azure compute, networking, and storage isolation, Microsoft has invested heavily in security assurance processes and practices to correctly develop logically isolated services and systems, as described in the next section.
Azure isolation assurance is further enforced by Microsoft's internal use of t
- **Security Development Lifecycle (SDL)** – The Microsoft SDL introduces security and privacy considerations throughout all phases of the development process, helping developers build highly secure software, address security compliance requirements, and reduce development costs. The guidance, best practices, [tools](https://www.microsoft.com/securityengineering/sdl/resources), and processes in the Microsoft SDL are [practices](https://www.microsoft.com/securityengineering/sdl/practices) used internally to build all Azure services and create more secure products and services. This process is also publicly documented to share Microsoft's learnings with the broader industry and incorporate industry feedback to create a stronger security development process.
- **Tooling and processes** – All Azure code is subject to an extensive set of both static and dynamic analysis tools that identify potential vulnerabilities, ineffective security patterns, memory corruption, user privilege issues, and other critical security problems.
  - *Purpose built fuzzing* – A testing technique used to find security vulnerabilities in software products and services. It consists of repeatedly feeding modified, or fuzzed, data to software inputs to trigger hangs, exceptions, and crashes, which are fault conditions that could be used by an attacker to disrupt or take control of applications and services. The Microsoft SDL recommends [fuzzing](https://www.microsoft.com/research/blog/a-brief-introduction-to-fuzzing-and-why-its-an-important-tool-for-developers/) all attack surfaces of a software product, especially those surfaces that expose a data parser to untrusted data.
- - *Live-site penetration testing* ΓÇô Microsoft conducts [ongoing live-site penetration testing](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf) to improve cloud security controls and processes, as part of the Red Teaming program described later in this section. Penetration testing is a security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. The tests are conducted against Azure infrastructure and platforms and MicrosoftΓÇÖs own tenants, applications, and data. Customer tenants, applications, and data hosted in Azure are never targeted; however, customers can conduct [their own penetration testing](../security/fundamentals/pen-testing.md) of their applications deployed in Azure.
+ - *Live-site penetration testing* – Microsoft conducts [ongoing live-site penetration testing](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf) to improve cloud security controls and processes, as part of the Red Teaming program described later in this section. Penetration testing is a security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. The tests are conducted against Azure infrastructure and platforms and Microsoft's own tenants, applications, and data. Your tenants, applications, and data hosted in Azure are never targeted; however, you can conduct [your own penetration testing](../security/fundamentals/pen-testing.md) of your applications deployed in Azure.
- *Threat modeling* – A core element of the Microsoft SDL. It's an engineering technique used to help identify threats, attacks, vulnerabilities, and countermeasures that could affect applications and services. [Threat modeling](../security/develop/threat-modeling-tool-getting-started.md) is part of the Azure routine development lifecycle.
- - *Automated build alerting of changes to attack surface area* ΓÇô [Attack Surface Analyzer](https://github.com/microsoft/attacksurfaceanalyzer) is a Microsoft-developed open-source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. The core feature of Attack Surface Analyzer is the ability to &#8220;diff&#8221; an operating system's security configuration, before and after a software component is installed. This feature is important because most installation processes require elevated privileges, and once granted, can lead to unintended system configuration changes.
-- **Mandatory security training** ΓÇô The Microsoft Azure security training and awareness program requires all personnel responsible for Azure development and operations to take essential training and any extra training based on individual job requirements. These procedures provide a standard approach, tools, and techniques used to implement and sustain the awareness program. Microsoft has implemented a security awareness program called STRIKE that provides monthly e-mail communication to all Azure engineering personnel about security awareness and allows employees to register for in-person or online security awareness training. STRIKE offers a series of security training events throughout the year plus STRIKE Central, which is a centralized online resource for security awareness, training, documentation, and community engagement.-- **Bug Bounty Program** ΓÇô Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for customers and their data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The [Microsoft Bug Bounty Program](https://www.microsoft.com/msrc/bounty) is designed to supplement and encourage research in relevant technologies (for example, encryption, spoofing, hypervisor isolation, elevation of privileges, etc.) to better protect AzureΓÇÖs infrastructure and customer data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 ΓÇô a significant amount to incentivize participation and vulnerability disclosure. The bounty range for [vulnerability reports on Azure services](https://www.microsoft.com/msrc/bounty-microsoft-azure) is up to $300,000.-- **Red Team activities** ΓÇô Microsoft utilizes [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), a form of live site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve the security of Azure. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams. This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
+ - *Automated build alerting of changes to attack surface area* – [Attack Surface Analyzer](https://github.com/microsoft/attacksurfaceanalyzer) is a Microsoft-developed open-source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. The core feature of Attack Surface Analyzer is the ability to "diff" an operating system's security configuration, before and after a software component is installed. This feature is important because most installation processes require elevated privileges, and once granted, they can lead to unintended system configuration changes.
+- **Mandatory security training** – The Microsoft Azure security training and awareness program requires all personnel responsible for Azure development and operations to take essential training and any extra training based on individual job requirements. These procedures provide a standard approach, tools, and techniques used to implement and sustain the awareness program. Microsoft has implemented a security awareness program called STRIKE that provides monthly e-mail communication to all Azure engineering personnel about security awareness and allows employees to register for in-person or online security awareness training. STRIKE offers a series of security training events throughout the year plus STRIKE Central, which is a centralized online resource for security awareness, training, documentation, and community engagement.
+- **Bug Bounty Program** – Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for you and your data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The [Microsoft Bug Bounty Program](https://www.microsoft.com/msrc/bounty) is designed to supplement and encourage research in relevant technologies (for example, encryption, spoofing, hypervisor isolation, elevation of privileges, and so on) to better protect Azure's infrastructure and your data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 – a significant amount to incentivize participation and vulnerability disclosure. The bounty range for [vulnerability reports on Azure services](https://www.microsoft.com/msrc/bounty-microsoft-azure) is up to $300,000.
+- **Red Team activities** – Microsoft uses [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), a form of live site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve the security of Azure. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams. This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
-When migrating to the cloud, customers accustomed to traditional on-premises data center deployment will usually conduct a risk assessment to gauge their threat exposure and formulate mitigating measures. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help customers with this comparison.
+If you are accustomed to traditional on-premises data center deployment, you would typically conduct a risk assessment to gauge your threat exposure and formulate mitigating measures when migrating to the cloud. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help you with this comparison.
## Logical isolation considerations
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep customers from accessing one another's data or applications. This section addresses concerns common to customers who are migrating from traditional on-premises physically isolated infrastructure to the cloud.
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep other customers from accessing your data or applications. If you are migrating from traditional on-premises physically isolated infrastructure to the cloud, this section addresses concerns that may be of interest to you.
### Physical versus logical security considerations

Table 6 provides a summary of key security considerations for physically isolated on-premises deployments (bare metal) versus logically isolated cloud-based deployments (Azure). It's useful to review these considerations prior to examining risks identified to be specific to shared cloud environments.
Listed below are key risks that are unique to shared cloud environments that may need to be addressed when accommodating sensitive data and workloads.

### Exploitation of vulnerabilities in virtualization technologies
-Compared to traditional on-premises hosted systems, Azure provides a greatly **reduced attack surface** by using a locked-down Windows Server core for the Host OS layered over the Hypervisor. Moreover, by default, guest PaaS VMs do not have any user accounts to accept incoming remote connections and the default Windows administrator account is disabled. Customer software in PaaS VMs is restricted by default to running under a low-privilege account, which helps protect customerΓÇÖs service from attacks by its own end users. These permissions can be modified by customers, and they can also choose to configure their VMs to allow remote administrative access.
+Compared to traditional on-premises hosted systems, Azure provides a greatly **reduced attack surface** by using a locked-down Windows Server core for the Host OS layered over the Hypervisor. Moreover, by default, guest PaaS VMs do not have any user accounts to accept incoming remote connections and the default Windows administrator account is disabled. Your software in PaaS VMs is restricted by default to running under a low-privilege account, which helps protect your service from attacks by its own end users. You can modify these permissions, and you can also choose to configure your VMs to allow remote administrative access.
-PaaS VMs offer more advanced **protection against persistent malware** infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it is a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it much more difficult for a compromise to persist.
+PaaS VMs offer more advanced **protection against persistent malware** infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it is a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it more difficult for a compromise to persist.
When VMs belonging to different customers are running on the same physical server, it is the Hypervisor's job to ensure that they cannot learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. Known as **side-channel attacks**, these exploits have received plenty of attention in the academic press where researchers have been seeking to learn much more specific information about what is going on in a peer VM. Of particular interest are efforts to learn the cryptographic keys of a peer VM by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. There are several mitigations in Azure that reduce the risk of such an attack:
- The standard Azure cryptographic libraries have been designed to resist such attacks by not having cache access patterns depend on the cryptographic keys being used.
- Azure uses an advanced VM host placement algorithm that is highly sophisticated and nearly impossible to predict, which helps reduce the chances of adversary-controlled VM being placed on the same host as the target VM.
- All Azure servers have at least eight physical cores and some have many more. Increasing the number of cores that share the load placed by various VMs adds noise to an already weak signal.
-- Customers can provision VMs on hardware dedicated to a single customer by using [Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) or [Isolated VMs](../virtual-machines/isolation.md), as described in *[Physical isolation](#physical-isolation)* section.
+- You can provision VMs on hardware dedicated to a single customer by using [Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) or [Isolated VMs](../virtual-machines/isolation.md), as described in *[Physical isolation](#physical-isolation)* section.
-Overall, PaaS (or any workload that autocreates VMs) contributes to churn in VM placement that leads to randomized VM allocation. Random placement of customer VMs makes it much harder for attackers to get on the same host. In addition, host access is hardened with greatly reduced attack surface that makes these types of exploits difficult to sustain.
+Overall, PaaS (or any workload that autocreates VMs) contributes to churn in VM placement that leads to randomized VM allocation. Random placement of your VMs makes it much harder for attackers to get on the same host. In addition, host access is hardened with greatly reduced attack surface that makes these types of exploits difficult to sustain.
## Summary
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles:
- User access controls with authentication and identity separation that uses Azure Active Directory and Azure role-based access control (Azure RBAC).
- Compute isolation for processing, including both logical and physical compute isolation.
- Networking isolation including separation of network traffic and data encryption in transit.
-- Storage isolation with data encryption at rest using advanced algorithms with multiple ciphers and encryption keys and provisions for customer-managed keys (CMK) under customer control in Azure Key Vault.
+- Storage isolation with data encryption at rest using advanced algorithms with multiple ciphers and encryption keys and provisions for customer-managed keys (CMK) under your control in Azure Key Vault.
- Security assurance processes embedded in service design to correctly develop logically isolated services, including Security Development Lifecycle (SDL) and other strong security assurance processes to protect attack surfaces and mitigate risks.
-In line with the shared responsibility model in cloud computing, this article provides customer guidance for activities that are part of the customer responsibility. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.
+In line with the shared responsibility model in cloud computing, this article provides you with guidance for activities that are part of your responsibility. It also explores design principles and technologies available in Azure to help you achieve your secure isolation objectives.
## Next steps
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
description: This article tracks FedRAMP, DoD, and ICD 503 compliance scope for
Previously updated : 08/12/2021 Last updated : 08/20/2021

# Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power BI](https://powerbi.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Power Data Integrator](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Power Query Online](/powerquery.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Power Query Online](https://powerquery.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | | | | |
| [Private Link](https://azure.microsoft.com/services/private-link/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Service Bus](https://azure.microsoft.com/services/service-bus/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
The Azure Monitor Agent is implemented as an [Azure VM extension](../../virtual-
| Type | AzureMonitorWindowsAgent | AzureMonitorLinuxAgent |
| TypeHandlerVersion | 1.0 | 1.5 |
+## Extension versions
+It is strongly recommended to update to the generally available (GA) versions and later instead of using preview versions; a sample deployment command follows the table below.
+
+| Release Date | Release notes | Windows | Linux |
+|:---|:---|:---|:---|
+| June 2021 | General availability announced. <ul><li>All features except metrics destination now generally available</li><li>Production quality, security and compliance</li><li>Availability in all public regions</li><li>Performance and scale improvements for higher EPS</li></ul> [Learn more](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-now-generally-available/) | 1.0.12.0 | 1.9.1.0 |
+| July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 |
+| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0 (do not use 1.10.7.0) |
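As one way to pin a GA version when deploying to an Azure Windows VM, a hedged Azure PowerShell sketch (resource names hypothetical; `-TypeHandlerVersion` takes only the major.minor portion of the versions above):

```powershell
# Install the Azure Monitor agent extension pinned to a GA version line.
Set-AzVMExtension -ResourceGroupName 'contoso-rg' -VMName 'contoso-vm' `
    -Name 'AzureMonitorWindowsAgent' -Publisher 'Microsoft.Azure.Monitor' `
    -ExtensionType 'AzureMonitorWindowsAgent' -TypeHandlerVersion '1.1' `
    -Location 'eastus'
```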
+
## Install with Azure portal

To install the Azure Monitor agent using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal) in the Azure portal. This allows you to associate the data collection rule with one or more Azure virtual machines or Azure Arc enabled servers. The agent will be installed on any of these virtual machines that don't already have it.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
When compared with the existing agents, this new agent doesn't yet have full par
- **Comparison with Log Analytics agents (MMA/OMS):**
  - Not all Log Analytics solutions are supported today. See [what's supported](#supported-services-and-features).
  - No support for Azure Private Links.
- - No support for collecting custom logs or IIS logs.
+ - No support for collecting file-based logs or IIS logs.
- **Comparison with Azure Diagnostics extensions (WAD/LAD):**
  - No support for Event Hubs and Storage accounts as destinations.
+ - No support for collecting file-based logs, IIS logs, ETW events, .NET events, and crash dumps.
### Changes in data collection

The methods for defining data collection for the existing agents are distinctly different from each other. Each method has challenges that are addressed with the Azure Monitor agent.
The Azure Monitor agent replaces the [legacy agents for Azure Monitor](agents-ov
Azure virtual machines, virtual machine scale sets, and Azure Arc-enabled servers are currently supported. Azure Kubernetes Service and other compute resource types aren't currently supported.

## Supported regions
-The Azure Monitor agent is available in all public regions that support Log Analytics. Government regions and clouds aren't currently supported.
+Azure Monitor agent is available in all public regions that support Log Analytics, as well as the Azure Government and China clouds. Air-gapped clouds are not yet supported.
+
+## Supported operating systems
+For a list of the Windows and Linux operating system versions that are currently supported by the Azure Monitor agent, see [Supported operating systems](agents-overview.md#supported-operating-systems).
+
## Supported services and features

The following table shows the current support for the Azure Monitor agent with other Azure services.
The Azure Monitor agent sends data to Azure Monitor Metrics or a Log Analytics w
<sup>1</sup> There's a limitation today on the Azure Monitor agent for Linux. Using Azure Monitor Metrics as the *only* destination isn't supported. Using it along with Azure Monitor Logs works. This limitation will be addressed in the next extension update.
-## Supported operating systems
-For a list of the Windows and Linux operating system versions that are currently supported by the Azure Monitor agent, see [Supported operating systems](agents-overview.md#supported-operating-systems).
-
## Security

The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before you deploy the agent.
The Azure Monitor agent extensions for Windows and Linux can communicate either
![Flowchart to determine the values of setting and protectedSetting parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
-1. After the values for the *setting* and *protectedSetting* parameters are determined, provide these additional parameters when you deploy the Azure Monitor agent by using PowerShell commands. The following examples are for Azure virtual machines.
+2. After the values for the *setting* and *protectedSetting* parameters are determined, provide these additional parameters when you deploy the Azure Monitor agent by using PowerShell commands. The following examples are for Azure virtual machines.
| Parameter | Value |
|:---|:---|
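A hedged sketch of such a deployment for an Azure Windows VM (the proxy address, credentials, and resource names are hypothetical; the JSON shape follows the flowchart above):

```powershell
# JSON bodies for the setting and protectedSetting parameters (hypothetical proxy).
$settings = '{"proxy":{"mode":"application","address":"http://proxy.contoso.com:8080","auth":"true"}}'
$protectedSettings = '{"proxy":{"username":"proxyuser","password":"proxypassword"}}'

Set-AzVMExtension -ResourceGroupName 'contoso-rg' -VMName 'contoso-vm' `
    -Name 'AzureMonitorWindowsAgent' -Publisher 'Microsoft.Azure.Monitor' `
    -ExtensionType 'AzureMonitorWindowsAgent' -TypeHandlerVersion '1.1' `
    -Location 'eastus' -SettingString $settings -ProtectedSettingString $protectedSettings
```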
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitat
> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
> ```
>
> - **In the cmdlet above, the value for the '-LogName' parameter is the initial part of the XPath query, up to the '!'; only the rest of the XPath query goes into the $XPath variable.**
> - If events are returned, the query is valid.
> - If you receive the message *No events were found that match the specified selection criteria.*, the query may be valid, but there are no matching events on the local machine.
> - If you receive the message *The specified query is invalid*, the query syntax is invalid.
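> For example, for a hypothetical data collection rule query `Application!*[System[(Level=1 or Level=2)]]`, the split looks like this:
>
> ```powershell
> # 'Application' (the part before '!') goes to -LogName;
> # the remainder of the query is the XPath filter.
> $XPath = '*[System[(Level=1 or Level=2)]]'
> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
> ```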
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-overview.md
Each data source has a data source type. Each type defines a unique set of prope
## Limits

For limits that apply to each data collection rule, see [Azure Monitor service limits](../service-limits.md#data-collection-rules).
-## Data residency
+## Data resiliency and high availability
Data Collection Rules as a service is deployed regionally. A rule gets created and stored in the region you specify, and is backed up to the [paired-region](../../best-practices-availability-paired-regions.md#azure-regional-pairs) within the same Geo.
+Additionally, the service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region, making it a **zone-redundant service**, which further adds to high availability.
+
**Single region data residency**: The preview feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. Single region residency is enabled by default in these regions.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Supported tables are currently limited to those specified below. All data from t
| DatabricksSQLPermissions | |
| DatabricksSSH | |
| DatabricksWorkspace | |
-| DeviceFileEvents | |
-| DeviceNetworkEvents | |
| DeviceNetworkInfo | |
-| DeviceProcessEvents | |
-| DeviceRegistryEvents | |
| DnsEvents | |
| DnsInventory | |
| DummyHydrationFact | |
Supported tables are currently limited to those specified below. All data from t
## Next steps

-- [Query the exported data from Azure Data Explorer](../logs/azure-data-explorer-query-storage.md).
+- [Query the exported data from Azure Data Explorer](../logs/azure-data-explorer-query-storage.md).
azure-monitor Move Workspace Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/move-workspace-region.md
If you wish to discard the source workspace, delete the exported resources or re
## Clean up
-The original workspace and data ingested to it before the migration, remain in original region and data is subjected to the retention policy in the workspace. It's recommended to remain the original workspace for the duration your older data is needed, to allow you to [query across](./cross-workspace-query.md#performing-a-query-across-multiple-resources) target and original workspaces. If you no longer need access to older data in original workspace or other resources in original region, select the original resource group in Azure portal, select any resources that you want to remove and click **Delete** in toolbar.
+While new data is being ingested to your new workspace, older data in the original workspace remains available for query and is subject to the retention policy defined in the workspace. It's recommended to retain the original workspace for as long as you need the older data, to allow you to [query across](./cross-workspace-query.md#performing-a-query-across-multiple-resources) workspaces. If you no longer need access to older data in the original workspace, select the original resource group in the Azure portal, then select any resources that you want to remove and click **Delete** in the toolbar.
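As a sketch, such a cross-workspace query can also be run from Azure PowerShell (the workspace ID and name are hypothetical):

```powershell
# Query the new workspace while reaching back into the original one.
Invoke-AzOperationalInsightsQuery -WorkspaceId '00000000-0000-0000-0000-000000000000' `
    -Query 'union Heartbeat, workspace("original-workspace").Heartbeat | summarize count() by bin(TimeGenerated, 1h)'
```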
## Next steps

In this tutorial, you moved a Log Analytics workspace and associated resources from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:

- [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
-- [Move Azure VMs to another region](../../site-recovery/azure-to-azure-tutorial-migrate.md)
+- [Move Azure VMs to another region](../../site-recovery/azure-to-azure-tutorial-migrate.md)
azure-monitor Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/queries.md
Each query has multiple properties that help you group and find them. These prop
- **Category** – A type of information such as *Security* or *Audit*. Categories are identical to the categories defined in the Tables side pane. See the [Azure Monitor Table Reference](/azure/azure-monitor/reference/tables/tables-category) for a full list of categories.
- **Solution** – An Azure Monitor solution associated with the queries
- **Topic** – The topic of the example query such as *Activity Logs* or *App logs*. The topic property is unique to example queries and may differ according to the specific resource type.
+- **Labels** - Custom labels that you can define and assign when you [save your own query](save-query.md).
- **Tags** - Custom properties that can be defined when you [create a query pack](query-packs.md). Tags allow your organization to create their own taxonomies for organizing queries.
The query interface is populated with the following types of queries:
**Query packs:** A [query pack](query-packs.md) holds a collection of log queries, including queries that you save yourself. This includes the [default query pack](query-packs.md#default-query-pack) and any other query packs that your organization may have created in the subscription. **Legacy queries:** Log queries previously saved in the query explorer experience and queries Azure solutions that are installed in the workspace. These are listed in the query dialog box under **Legacy queries**.
+>[!TIP]
+> Legacy queries are only available in a Log Analytics workspace.
## Effect of query scope The queries that are available when you open Log Analytics are determined by the current [query scope](scope.md).
cosmos-db Database Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/database-security.md
description: Learn how Azure Cosmos DB provides database protection and data sec
Previously updated : 10/21/2020 Last updated : 08/20/2021
The following screenshot shows how you can use audit logging and activity logs t
<a id="primary-keys"></a>
-## Primary keys
+## Primary/secondary keys
-Primary keys provide access to all the administrative resources for the database account. Primary keys:
+Primary/secondary keys provide access to all the administrative resources for the database account. Primary/secondary keys:
- Provide access to accounts, databases, users, and permissions.
- Cannot be used to provide granular access to containers and documents.
- Are created during the creation of an account.
- Can be regenerated at any time.
-Each account consists of two primary keys: a primary key and secondary key. The purpose of dual keys is so that you can regenerate, or roll keys, providing continuous access to your account and data.
+Each account consists of two keys: a primary key and a secondary key. The purpose of dual keys is to let you regenerate, or roll, keys, providing continuous access to your account and data.
-In addition to the two primary keys for the Cosmos DB account, there are two read-only keys. These read-only keys only allow read operations on the account. Read-only keys do not provide access to read permissions resources.
+Primary/secondary keys come in two versions: read-write and read-only. The read-only keys only allow read operations on the account, but do not provide access to read permissions resources.
-Primary, secondary, read only, and read-write primary keys can be retrieved and regenerated using the Azure portal. For instructions, see [View, copy, and regenerate access keys](manage-with-cli.md#regenerate-account-key).
+Primary/secondary keys can be retrieved and regenerated using the Azure portal. For instructions, see [View, copy, and regenerate access keys](manage-with-cli.md#regenerate-account-key).
:::image type="content" source="./media/secure-access-to-data/nosql-database-security-master-key-portal.png" alt-text="Access control (IAM) in the Azure portal - demonstrating NoSQL database security":::
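To make the read-write versus read-only distinction concrete, here's a minimal .NET sketch; the endpoint and key values are placeholders you'd copy from the portal's **Keys** blade.

```csharp
using Microsoft.Azure.Cosmos;

// Placeholder values; the real ones are on the "Keys" blade in the Azure portal.
string endpoint = "https://<your-account>.documents.azure.com:443/";
string readWriteKey = "<primary-or-secondary-key>";
string readOnlyKey = "<primary-or-secondary-read-only-key>";

// A client built with a read-write key can perform any data or administrative operation.
CosmosClient readWriteClient = new CosmosClient(endpoint, readWriteKey);

// A client built with a read-only key can only perform read operations;
// write requests fail with an authorization error.
CosmosClient readOnlyClient = new CosmosClient(endpoint, readOnlyKey);
```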
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/dedicated-gateway.md
Previously updated : 05/25/2021 Last updated : 08/20/2021
Diagram of gateway mode connection with a dedicated gateway:
A dedicated gateway cluster can be provisioned in Core (SQL) API accounts. A dedicated gateway cluster can have up to five nodes and you can add or remove nodes at any time. All dedicated gateway nodes within your account [share the same connection string](how-to-configure-integrated-cache.md#configuring-the-integrated-cache).
-Dedicated gateway nodes are independent from one another. When you provision multiple dedicated gateway nodes, any single node can route any given request. In addition, each node has a separate cache from the others. The cached data within each node depends on the data that was recently [written or read](integrated-cache.md#item-cache) through that specific node. In other words, if an item or query is cached on one node, it isn't necessarily cached on the others.
+Dedicated gateway nodes are independent from one another. When you provision multiple dedicated gateway nodes, any single node can route any given request. In addition, each node has a separate integrated cache from the others. The cached data within each node depends on the data that was recently [written or read](integrated-cache.md#item-cache) through that specific node. In other words, if an item or query is cached on one node, it isn't necessarily cached on the others.
For development, we recommend starting with one node but for production, you should provision three or more nodes for high availability. [Learn how to provision a dedicated gateway cluster with an integrated cache](how-to-configure-integrated-cache.md). Provisioning multiple dedicated gateway nodes allows the dedicated gateway cluster to continue to route requests and serve cached data, even when one of the dedicated gateway nodes is unavailable.
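As an illustration of what connecting through the dedicated gateway looks like with the .NET SDK, here's a minimal sketch; the endpoint shown is a placeholder (the dedicated gateway cluster has its own connection string, distinct from the account's standard endpoint), and requests must use gateway connection mode.

```csharp
using Microsoft.Azure.Cosmos;

// Placeholder values; use the dedicated gateway cluster's connection string,
// which is shared by all nodes in the cluster.
string dedicatedGatewayEndpoint = "https://<your-account>.sqlx.cosmos.azure.com/";
string authorizationKey = "<account-key>";

CosmosClient client = new CosmosClient(
    dedicatedGatewayEndpoint,
    authorizationKey,
    new CosmosClientOptions
    {
        // Gateway mode routes requests through the dedicated gateway nodes,
        // which can serve reads from their integrated cache.
        ConnectionMode = ConnectionMode.Gateway
    });
```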
The dedicated gateway has the following limitations during the public preview:
- Dedicated gateways are only supported on SQL API accounts. - You can't provision a dedicated gateway in Azure Cosmos DB accounts with [IP firewalls](how-to-configure-firewall.md) or [Private Link](how-to-configure-private-endpoints.md) configured. - You can't provision a dedicated gateway in Azure Cosmos DB accounts with [availability zones](high-availability.md#availability-zone-support) enabled.
+- You can't use [role-based access control (RBAC)](how-to-setup-rbac.md) to authenticate data plane requests routed through the dedicated gateway.
## Next steps
cosmos-db Graph Modeling Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/graph-modeling-tools.md
This tool provides the data modeling of vertices / edges and their respective pr
The animation in Figure 2 demonstrates reverse engineering: entities are extracted from the RDBMS, Hackolade discovers relations from the foreign-key relationships, and modifications can then be applied.
-Sample DDL for source as SQL Server available at [here](https://github.com/Azure-Samples/northwind-ddl-sample/nw.sql)
+A sample DDL for the SQL Server source is available [here](https://github.com/Azure-Samples/northwind-ddl-sample/blob/main/nw.sql).
:::image type="content" source="./media/graph-modeling-tools/hackolade-screenshot.jpg" alt-text="Graph Diagram":::
The following image demonstrates reverse engineering from RDBMS & Hackolade in a
- [Documentation of Hackolade](https://hackolade.com/help/CosmosDBGremlin.html) ## Next steps-- [Visualizing the data](/graph-visualization)
+- [Visualizing the data](/azure/cosmos-db/graph/graph-visualization-partners)
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/secure-access-to-data.md
Previously updated : 06/22/2021 Last updated : 08/20/2021
Azure Cosmos DB provides three ways to control access to your data.
| Access control type | Characteristics |
|---|---|
-| [Primary keys](#primary-keys) | Shared secret allowing any management or data operation. It comes in both read-write and read-only variants. |
+| [Primary/secondary keys](#primary-keys) | Shared secret allowing any management or data operation. It comes in both read-write and read-only variants. |
| [Role-based access control](#rbac) | Fine-grained, role-based permission model using Azure Active Directory (AAD) identities for authentication. |
| [Resource tokens](#resource-tokens) | Fine-grained permission model based on native Azure Cosmos DB users and permissions. |
-## <a id="primary-keys"></a> Primary keys
+## <a id="primary-keys"></a> Primary/secondary keys
-Primary keys provide access to all the administrative resources for the database account. Each account consists of two primary keys: a primary key and secondary key. The purpose of dual keys is to let you regenerate, or roll keys, providing continuous access to your account and data. To learn more about primary keys, see the [Database security](database-security.md#primary-keys) article.
+Primary/secondary keys provide access to all the administrative resources for the database account. Each account consists of two keys: a primary key and a secondary key. The purpose of dual keys is to let you regenerate, or roll, keys, providing continuous access to your account and data. To learn more about primary/secondary keys, see the [Database security](database-security.md#primary-keys) article.
-### <a id="key-rotation"></a> Key rotation
+### <a id="key-rotation"></a> Key rotation and regeneration
-The process of rotating your primary key is simple.
+The process of key rotation and regeneration is simple. First, make sure that your application is consistently using either the primary key or the secondary key to access your Azure Cosmos DB account. Then, follow the steps outlined below.
-1. Navigate to the Azure portal to retrieve your secondary key.
-2. Replace your primary key with your secondary key in your application. Make sure that all the Cosmos DB clients across all the deployments are promptly restarted and will start using the updated key.
-3. Rotate the primary key in the Azure portal.
-4. Validate the new primary key works against all resource. Key rotation process can take anywhere from less than a minute to hours depending on the size of the Cosmos DB account.
-5. Replace the secondary key with the new primary key.
+# [If your application is currently using the primary key](#tab/using-primary-key)
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
+
+ :::image type="content" source="./media/secure-access-to-data/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your primary key with the secondary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the primary key.
+
+ :::image type="content" source="./media/secure-access-to-data/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+# [If your application is currently using the secondary key](#tab/using-secondary-key)
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
+
+ :::image type="content" source="./media/secure-access-to-data/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your secondary key with the primary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the secondary key.
+
+ :::image type="content" source="./media/secure-access-to-data/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
++ ### Code sample to use a primary key
-The following code sample illustrates how to use a Cosmos DB account endpoint and primary key to instantiate a DocumentClient and create a database:
+The following code sample illustrates how to use a Cosmos DB account endpoint and primary key to instantiate a CosmosClient:
```csharp
-//Read the Azure Cosmos DB endpointUrl and authorization keys from config.
-//These values are available from the Azure portal on the Azure Cosmos DB account blade under "Keys".
-//Keep these values in a safe and secure location. Together they provide Administrative access to your Azure Cosmos DB account.
+// Read the Azure Cosmos DB endpointUrl and authorization keys from config.
+// These values are available from the Azure portal on the Azure Cosmos DB account blade under "Keys".
+// Keep these values in a safe and secure location. Together they provide Administrative access to your Azure Cosmos DB account.
private static readonly string endpointUrl = ConfigurationManager.AppSettings["EndPointUrl"];
private static readonly string authorizationKey = ConfigurationManager.AppSettings["AuthorizationKey"];

CosmosClient client = new CosmosClient(endpointUrl, authorizationKey);
```
-The following code sample illustrates how to use the Azure Cosmos DB account endpoint and primary key to instantiate a `CosmosClient` object:
-- ## <a id="rbac"></a> Role-based access control Azure Cosmos DB exposes a built-in role-based access control (RBAC) system that lets you:
data-factory Data Factory Json Scripting Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-json-scripting-reference.md
The following table describes the properties within the activity JSON definition:
| type |Specifies the type of the activity. See the [DATA STORES](#data-stores) and [DATA TRANSFORMATION ACTIVITIES](#data-transformation-activities) sections for different types of activities. |Yes |
| inputs |Input tables used by the activity<br/><br/>`// one input table`<br/>`"inputs": [ { "name": "inputtable1" } ],`<br/><br/>`// two input tables` <br/>`"inputs": [ { "name": "inputtable1" }, { "name": "inputtable2" } ],` |No for HDInsightStreaming and SqlServerStoredProcedure activities <br/> <br/> Yes for all others |
| outputs |Output tables used by the activity.<br/><br/>`// one output table`<br/>`"outputs": [ { "name": "outputtable1" } ],`<br/><br/>`//two output tables`<br/>`"outputs": [ { "name": "outputtable1" }, { "name": "outputtable2" } ],` |Yes |
-| linkedServiceName |Name of the linked service used by the activity. <br/><br/>An activity may require that you specify the linked service that links to the required compute environment. |Yes for HDInsight activities, Azure Machine Learning Studio (classic) activities, and Stored Procedure Activity. <br/><br/>No for all others |
+| linkedServiceName |Name of the linked service used by the activity. <br/><br/>An activity may require that you specify the linked service that links to the required compute environment. |Yes for HDInsight activities, ML Studio (classic) activities, and Stored Procedure Activity. <br/><br/>No for all others |
| typeProperties |Properties in the typeProperties section depend on type of the activity. |No |
| policy |Policies that affect the run-time behavior of the activity. If it is not specified, default policies are used. |No |
| scheduler |The "scheduler" property is used to define desired scheduling for the activity. Its subproperties are the same as the ones in the [availability property in a dataset](data-factory-create-datasets.md#dataset-availability). |No |
The following table lists the compute environments supported by Data Factory and
| | | | [On-demand HDInsight cluster](#on-demand-azure-hdinsight-cluster) or [your own HDInsight cluster](#existing-azure-hdinsight-cluster) |[.NET custom activity](#net-custom-activity), [Hive activity](#hdinsight-hive-activity), [Pig activity](#hdinsight-pig-activity), [MapReduce activity](#hdinsight-mapreduce-activity), Hadoop streaming activity, [Spark activity](#hdinsight-spark-activity) | | [Azure Batch](#azure-batch) |[.NET custom activity](#net-custom-activity) |
-| [Azure Machine Learning Studio (classic)](#azure-machine-learning-studio-classic) | [Azure Machine Learning Studio (classic) Batch Execution Activity](#azure-machine-learning-studio-classic-batch-execution-activity), [Azure Machine Learning Studio (classic) Update Resource Activity](#azure-machine-learning-studio-classic-update-resource-activity) |
+| [Machine Learning Studio (classic)](#ml-studio-classic) | [ML Studio (classic) Batch Execution Activity](#ml-studio-classic-batch-execution-activity), [ML Studio (classic) Update Resource Activity](#ml-studio-classic-update-resource-activity) |
| [Azure Data Lake Analytics](#azure-data-lake-analytics) |[Data Lake Analytics U-SQL](#data-lake-analytics-u-sql-activity) | | [Azure SQL Database](#azure-sql-database), [Azure Synapse Analytics](#azure-synapse-analytics), [SQL Server](#sql-server-stored-procedure) |[Stored Procedure](#stored-procedure-activity) |
The following table provides descriptions for the properties used in the Azure J
} ```
-## Azure Machine Learning Studio (classic)
-You create an Azure Machine Learning Studio (classic) linked service to register a Studio (classic) batch scoring endpoint with a data factory. Two data transformation activities that can run on this linked service: [Azure Machine Learning Studio (classic) Batch Execution Activity](#azure-machine-learning-studio-classic-batch-execution-activity), [Azure Machine Learning Studio (classic) Update Resource Activity](#azure-machine-learning-studio-classic-update-resource-activity).
+## ML Studio (classic)
+You create an ML Studio (classic) linked service to register a Studio (classic) batch scoring endpoint with a data factory. Two data transformation activities that can run on this linked service: [ML Studio (classic) Batch Execution Activity](#ml-studio-classic-batch-execution-activity), [ML Studio (classic) Update Resource Activity](#ml-studio-classic-update-resource-activity).
### Linked service The following table provides descriptions for the properties used in the Azure JSON definition of a Studio (classic) linked service.
Activity | Description
[HDInsight MapReduce Activity](#hdinsight-mapreduce-activity) | The HDInsight MapReduce activity in a Data Factory pipeline executes MapReduce programs on your own or on-demand Windows/Linux-based HDInsight cluster. [HDInsight Streaming Activity](#hdinsight-streaming-activity) | The HDInsight Streaming Activity in a Data Factory pipeline executes Hadoop Streaming programs on your own or on-demand Windows/Linux-based HDInsight cluster. [HDInsight Spark Activity](#hdinsight-spark-activity) | The HDInsight Spark activity in a Data Factory pipeline executes Spark programs on your own HDInsight cluster.
-[Azure Machine Learning Studio (classic) Batch Execution Activity](#azure-machine-learning-studio-classic-batch-execution-activity) | Azure Data Factory enables you to easily create pipelines that use a published Studio (classic) web service for predictive analytics. Using the Batch Execution Activity in an Azure Data Factory pipeline, you can invoke a Studio (classic) web service to make predictions on the data in batch.
-[Azure Machine Learning Studio (classic) Update Resource Activity](#azure-machine-learning-studio-classic-update-resource-activity) | Over time, the predictive models in the Azure Machine Learning Studio (classic) scoring experiments need to be retrained using new input datasets. After you are done with retraining, you want to update the scoring web service with the retrained machine learning model. You can use the Update Resource Activity to update the web service with the newly trained model.
+[ML Studio (classic) Batch Execution Activity](#ml-studio-classic-batch-execution-activity) | Azure Data Factory enables you to easily create pipelines that use a published Studio (classic) web service for predictive analytics. Using the Batch Execution Activity in an Azure Data Factory pipeline, you can invoke a Studio (classic) web service to make predictions on the data in batch.
+[ML Studio (classic) Update Resource Activity](#ml-studio-classic-update-resource-activity) | Over time, the predictive models in the ML Studio (classic) scoring experiments need to be retrained using new input datasets. After you are done with retraining, you want to update the scoring web service with the retrained machine learning model. You can use the Update Resource Activity to update the web service with the newly trained model.
[Stored Procedure Activity](#stored-procedure-activity) | You can use the Stored Procedure activity in a Data Factory pipeline to invoke a stored procedure in one of the following data stores: Azure SQL Database, Azure Synapse Analytics, SQL Server Database in your enterprise or an Azure VM. [Data Lake Analytics U-SQL activity](#data-lake-analytics-u-sql-activity) | Data Lake Analytics U-SQL Activity runs a U-SQL script on an Azure Data Lake Analytics cluster. [.NET custom activity](#net-custom-activity) | If you need to transform data in a way that is not supported by Data Factory, you can create a custom activity with your own data processing logic and use the activity in the pipeline. You can configure the custom .NET activity to run using either an Azure Batch service or an Azure HDInsight cluster.
Note the following points:
For more information about the activity, see [Spark Activity](data-factory-spark.md) article.
-## Azure Machine Learning Studio (classic) Batch Execution Activity
-You can specify the following properties in an Azure Machine Learning Studio (classic) Batch Execution Activity JSON definition. The type property for the activity must be: **AzureMLBatchExecution**. You must create a Studio (classic) linked service first and specify the name of it as a value for the **linkedServiceName** property. The following properties are supported in the **typeProperties** section when you set the type of activity to AzureMLBatchExecution:
+## ML Studio (classic) Batch Execution Activity
+You can specify the following properties in an ML Studio (classic) Batch Execution Activity JSON definition. The type property for the activity must be: **AzureMLBatchExecution**. You must create a Studio (classic) linked service first and specify the name of it as a value for the **linkedServiceName** property. The following properties are supported in the **typeProperties** section when you set the type of activity to AzureMLBatchExecution:
Property | Description | Required
-- | -- | --
In the JSON example, the deployed Studio (classic) Web service uses a reader and
> [!NOTE] > Only inputs and outputs of the AzureMLBatchExecution activity can be passed as parameters to the Web service. For example, in the above JSON snippet, MLSqlInput is an input to the AzureMLBatchExecution activity, which is passed as an input to the Web service via webServiceInput parameter.
-## Azure Machine Learning Studio (classic) Update Resource Activity
-You can specify the following properties in an Azure Machine Learning Studio (classic) Update Resource Activity JSON definition. The type property for the activity must be: **AzureMLUpdateResource**. You must create a Studio (classic) linked service first and specify the name of it as a value for the **linkedServiceName** property. The following properties are supported in the **typeProperties** section when you set the type of activity to AzureMLUpdateResource:
+## ML Studio (classic) Update Resource Activity
+You can specify the following properties in an ML Studio (classic) Update Resource Activity JSON definition. The type property for the activity must be: **AzureMLUpdateResource**. You must create a Studio (classic) linked service first and specify the name of it as a value for the **linkedServiceName** property. The following properties are supported in the **typeProperties** section when you set the type of activity to AzureMLUpdateResource:
Property | Description | Required
-- | -- | --
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/diagnostic-logging.md
The following table lists the field names and descriptions:
| | |
| **TimeGenerated** | The date and time in UTC when the report was created. |
| **ResourceId** | The resource ID of your public IP address. |
-| **Category** | For notifications, this will be `DDoSProtectionNotifications`.|
+| **Category** | For notifications, this will be `DDoSMitigationReports`.|
| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
| **SubscriptionId** | Your DDoS protection plan subscription ID. |
| **Resource** | The name of your public IP address. |
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/authenticate-with-active-directory.md
Following are the prerequisites to authenticate to Event Grid.
- Install the SDK on your application. - [Java](/java/api/overview/azure/messaging-eventgrid-readme#include-the-package) - [.NET](/dotnet/api/overview/azure/messaging.eventgrid-readme-pre#install-the-package)
- - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid.md#install-the-azureeventgrid-package)
+ - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid#install-the-azureeventgrid-package)
- [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid#install-the-package) - Install the Azure Identity client library. The Event Grid SDK depends on the Azure Identity client library for authentication. - [Azure Identity client library for Java](/java/api/overview/azure/identity-readme)
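Once the SDKs are installed, publishing with an Azure AD token instead of an access key is a small change on the client. Here's a minimal .NET sketch; the topic endpoint is a placeholder, and the publishing identity is assumed to hold an Event Grid data sender role.

```csharp
using System;
using Azure.Identity;
using Azure.Messaging.EventGrid;

// Placeholder endpoint; use your own topic's endpoint.
var client = new EventGridPublisherClient(
    new Uri("https://<topic-name>.<region>-1.eventgrid.azure.net/api/events"),
    new DefaultAzureCredential()); // Azure AD authentication via the Azure Identity library

// Publish a sample event.
await client.SendEventAsync(new EventGridEvent(
    subject: "example/subject",
    eventType: "Contoso.Items.ItemReceived",
    dataVersion: "1.0",
    data: new { itemSku = "12345" }));
```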
New-AzResource -ResourceGroupName <ResourceGroupName> -ResourceType Microsoft.Ev
- Java SDK: [github](https://github.com/Azure/azure-sdk-for-jav) - .NET SDK: [github](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventgrid/Azure.Messaging.EventGrid) | [samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventgrid/Azure.Messaging.EventGrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventgrid/Azure.Messaging.EventGrid/MigrationGuide.md) - Python SDK: [github](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventgrid/azure-eventgrid) | [samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventgrid/azure-eventgrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventgrid/azure-eventgrid/migration_guide.md)
- - JavaScript SDK: [github](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/) | [samples](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/migration.md)
+ - JavaScript SDK: [github](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/) | [samples](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventgrid/eventgrid/MIGRATION.md)
- [Event Grid SDK blog](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) - Azure Identity client library - [Java](https://github.com/Azure/azure-sdk-for-jav)
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
Previously updated : 08/17/2021 Last updated : 08/20/2021
> [!IMPORTANT] > Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-In this article, you'll learn how to enable diagnostic settings in the FHIR service and review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements, such as Health Insurance Portability and Accountability Act (HIPAA), is a must. To access this feature in the Azure portal, refer to the steps below.
+In this article, you'll learn how to enable diagnostic settings in the FHIR service and review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service. Compliance with regulatory requirements, like the Health Insurance Portability and Accountability Act (HIPAA), is a must. To access this feature in the Azure portal, refer to the steps below.
## Enable audit logs
-1. Select your FHIR service in the Azure portal
+1. Select your FHIR service in the Azure portal.
2. Browse to **Diagnostic** settings under the **Monitoring** menu option.
- [ ![Add Azure FHIR diagnostic settings.](media/diagnostic-logs/fhir-diagnostic-settings-screen.png) ](media/diagnostic-logs/fhir-diagnostic-settings-screen.png#lightbox)
+ [ ![Screenshot of the diagnostic settings page in the Azure portal.](media/diagnostic-logs/fhir-diagnostic-settings-screen.png) ](media/diagnostic-logs/fhir-diagnostic-settings-screen.png#lightbox)
3. Select **+ Add diagnostic settings**. 4. Enter a name for the setting.
-5. Select the method you want to use to access your diagnostic logs:
+5. Select the method you want to use to access your diagnostic logs.
-**Archive to a storage account** for auditing or manual inspection.
+**Archive to a storage account** is used for auditing or manual inspection.
The storage account you want to use needs to be already created.
-**Stream to event hub** for ingestion by a third-party service or custom analytic solution.
+**Stream to event hub** is used for ingestion by a third-party service or custom analytic solution.
You will need to create an event hub namespace and event hub policy before you can configure this step.
-**Stream to the Log Analytics** workspace in Azure Monitor.
-You will need to create your Logs Analytics Workspace before you can select this option.
+**Stream to Log Analytics** is used for sending logs and metrics to a Log Analytics workspace in Azure Monitor.
+You will need to create your Log Analytics workspace before you can select this option.
6. Select **AuditLogs**.
- [ ![Azure FHIR diagnostic settings audit logs.](media/diagnostic-logs/fhir-diagnostic-settings-add.png) ](media/diagnostic-logs/fhir-diagnostic-settings-add.png#lightbox)
+ [ ![Screenshot of checkbox used for enabling or disabling audit logs.](media/diagnostic-logs/fhir-diagnostic-settings-add.png) ](media/diagnostic-logs/fhir-diagnostic-settings-add.png#lightbox)
7. Select **Save**.
At this time, the FHIR service returns the following fields in the audit log:
Listed below are a few basic Application Insights queries you can use to explore your log data.
-Run this query to see the **100 most recent** logs:
+Run the following query to view the **100 most recent** logs.
MicrosoftHealthcareApisAuditLogs | limit 100
-Run this query to group operations by **FHIR Resource Type**:
+Run the following query to group operations by **FHIR Resource Type**.
MicrosoftHealthcareApisAuditLogs | summarize count() by FhirResourceType
-Run this query to get all the **failed results**:
+Run the following query to get all the **failed results**.
MicrosoftHealthcareApisAuditLogs
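If you'd rather run these queries programmatically than in the Log Analytics portal, the following is a minimal sketch using the Azure.Monitor.Query .NET library; the workspace ID and the printed column are placeholders you'd adapt to your workspace.

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var client = new LogsQueryClient(new DefaultAzureCredential());

// Placeholder workspace ID; find yours on the Log Analytics workspace overview page.
Response<LogsQueryResult> response = await client.QueryWorkspaceAsync(
    "<log-analytics-workspace-id>",
    "MicrosoftHealthcareApisAuditLogs | limit 100",
    new QueryTimeRange(TimeSpan.FromDays(1)));

// Print one column from each returned row.
foreach (LogsTableRow row in response.Value.Table.Rows)
{
    Console.WriteLine(row["OperationName"]);
}
```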
## Conclusion
-Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. The FHIR service of the Azure Healthcare APIs allows you to do these actions through diagnostic logs.
+Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. The Azure Healthcare APIs FHIR service allows you to do these actions through diagnostic logs.
FHIR is the registered trademark of [HL7](https://www.hl7.org/fhir/https://docsupdatetracker.net/index.html) and is used with the permission of HL7.
load-balancer Egress Only https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/egress-only.md
+
+ Title: Outbound-only load balancer configuration
+
+description: In this article, learn about how to create an internal load balancer with outbound NAT
+Last updated : 08/21/2021
+# Outbound-only load balancer configuration
+
+Use a combination of internal and external standard load balancers to create outbound connectivity for VMs behind an internal load balancer.
+
+This configuration provides outbound NAT for an internal load balancer scenario, producing an "egress only" setup for your backend pool.
+
+> [!NOTE]
+> **Azure Virtual Network NAT** is the recommended configuration for outbound connectivity in production deployments. For more information about **Virtual Network NAT** and the **NAT gateway** resource, see **[What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)**.
+>
+> To deploy an outbound only load balancer configuration with Azure Virtual Network NAT and a NAT gateway, see [Tutorial: Integrate NAT gateway with an internal load balancer - Azure portal](../virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md).
+>
+> For more information about outbound connections in Azure and default outbound access, see [Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md) and [Default outbound access](../virtual-network/default-outbound-access.md).
++
+*Figure: Egress only load balancer configuration*
+
+In this how-to article, you'll:
+
+1. Create a virtual network with a bastion host.
+
+2. Create both internal and public standard load balancers with backend pools.
+
+3. Create a virtual machine with only a private IP and add to the internal load balancer backend pool.
+
+4. Add virtual machine to public load balancer backend pool.
+
+5. Connect to your VM through the bastion host and:
+
+ 1. Test outbound connectivity.
+
+ 2. Configure an outbound rule on the public load balancer.
+
+ 3. Retest outbound connectivity.
+
+## Create virtual network and load balancers
+
+In this section, you'll create a virtual network and subnet for the load balancers and the virtual machine. You'll next create the load balancers.
+
+### Create the virtual network
+
+In this section, you'll create the virtual network and subnets for the virtual machine, load balancer, and bastion host.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
+
+2. In **Virtual networks**, select **+ Create**.
+
+3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new**. </br> In **Name** enter **myResourceGroupLB**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **(US) East US 2** |
+
+4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+5. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
+
+6. Under **Subnet name**, select the word **default**.
+
+7. In **Edit subnet**, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+
+8. Select **Save**.
+
+9. Select the **Security** tab.
+
+10. Under **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+
+11. Select the **Review + create** tab or select the **Review + create** button.
+
+12. Select **Create**.
+
+### Create internal load balancer
+
+In this section, you'll create the internal load balancer.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancer** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroupLB**. |
+ | **Instance details** | |
+ | Name | Enter **myInternalLoadBalancer** |
+ | Region | Select **(US) East US 2**. |
+ | Type | Select **Internal**. |
+ | SKU | Leave the default **Standard**. |
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+
+6. Enter **LoadBalancerFrontend** in **Name**.
+
+7. Select **myBackendSubnet** in **Subnet**.
+
+8. Select **Dynamic** for **Assignment**.
+
+9. Select **Zone-redundant** in **Availability zone**.
+
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+
+10. Select **Add**.
+
+11. Select **Next: Backend pools** at the bottom of the page.
+
+12. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+13. Enter **myInternalBackendPool** for **Name** in **Add backend pool**.
+
+14. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
+
+15. Select **IPv4** or **IPv6** for **IP version**.
+
+16. Select **Add**.
+
+17. Select the blue **Review + create** button at the bottom of the page.
+
+18. Select **Create**.
+
+### Create public load balancer
+
+In this section, you'll create the public load balancer.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancer** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroupLB**. |
+ | **Instance details** | |
+ | Name | Enter **myPublicLoadBalancer** |
+ | Region | Select **(US) East US 2**. |
+ | Type | Select **Public**. |
+ | SKU | Leave the default **Standard**. |
+ | Tier | Leave the default **Regional**. |
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+
+6. Enter **LoadBalancerFrontend** in **Name**.
+
+7. Select **IPv4** or **IPv6** for the **IP version**.
+
+ > [!NOTE]
+ > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
+
+8. Select **IP address** for the **IP type**.
+
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/public-ip-address-prefix.md).
+
+9. Select **Create new** in **Public IP address**.
+
+10. In **Add a public IP address**, enter **myPublicIP** for **Name**.
+
+11. Select **Zone-redundant** in **Availability zone**.
+
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+
+12. Leave the default of **Microsoft Network** for **Routing preference**.
+
+13. Select **OK**.
+
+14. Select **Add**.
+
+15. Select **Next: Backend pools** at the bottom of the page.
+
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+17. Enter **myPublicBackendPool** for **Name** in **Add backend pool**.
+
+18. Select **myVNet** in **Virtual network**.
+
+19. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
+
+20. Select **IPv4** or **IPv6** for **IP version**.
+
+21. Select **Add**.
+
+22. Select the blue **Review + create** button at the bottom of the page.
+
+23. Select **Create**.
+
+## Create virtual machine
+
+You'll create a virtual machine in this section. During creation, you'll add it to the backend pool of the internal load balancer. After the virtual machine is created, you'll add the virtual machine to the backend pool of the public load balancer.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. In **Virtual machines**, select **+ Create** > **Virtual machine**.
+
+3. In **Create a virtual machine**, enter or select the values in the **Basics** tab:
+
+ | Setting | Value |
+ |--|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **myResourceGroupLB** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM** |
+ | Region | Select **(US) East US 2** |
+ | Availability Options | Select **No infrastructure redundancy required** |
+ | Image | Select **Windows Server 2019 Datacenter - Gen1** |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
+
+4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+5. In the Networking tab, select or enter:
+
+ | Setting | Value |
+ |-|-|
+ | **Network interface** | |
+ | Virtual network | **myVNet** |
+ | Subnet | **myBackendSubnet** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**|
+ | Configure network security group | Leave the default of **Basic**. |
+ | **Load balancing** |
+ | Place this virtual machine behind an existing load-balancing solution? | Select the box. |
+ | **Load balancing settings** |
+ | Load-balancing options | Select **Azure load balancing** |
+ | Select a load balancer | Select **myInternalLoadBalancer** |
+ | Select a backend pool | Select **myInternalBackendPool** |
+
+6. Select **Review + create**.
+
+7. Review the settings, and then select **Create**.
+
+### Add VM to backend pool of public load balancer
+
+In this section, you'll add the virtual machine you created previously to the backend pool of the public load balancer.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Select **myPublicLoadBalancer**.
+
+3. Select **Backend pools** in **Settings** in **myPublicLoadBalancer**.
+
+4. Select **myPublicBackendPool** under **Backend pool** in the **Backend pools** page.
+
+5. In **myPublicBackendPool**, select **myVNet** in **Virtual network**.
+
+6. In **Virtual machines**, select the blue **+ Add** button.
+
+7. Select the box next to **myVM** in **Add virtual machines to backend pool**.
+
+8. Select **Add**.
+
+9. Select **Save**.
+## Test connectivity before outbound rule
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM**.
+
+3. In the **Overview** page, select **Connect**, then **Bastion**.
+
+4. Enter the username and password entered during VM creation.
+
+5. Select **Connect**.
+
+6. Open Internet Explorer.
+
+7. Enter **https://whatsmyip.org** in the address bar.
+
+8. The connection should fail. By default, standard public load balancer [doesn't allow outbound traffic without a defined outbound rule](load-balancer-overview.md#securebydefault).
+
+## Create a public load balancer outbound rule
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Select **myPublicLoadBalancer**.
+
+3. Select **Outbound rules** in **Settings** in **myPublicLoadBalancer**.
+
+4. Select **+ Add** in **Outbound rules**.
+
+5. Enter or select the following information to configure the outbound rule.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myOutboundRule**. |
+ | Frontend IP address | Select **LoadBalancerFrontEnd**.|
+ | Protocol | Leave the default of **All**. |
+ | Idle timeout (minutes) | Move slider to **15 minutes**.|
+ | TCP Reset | Select **Enabled**.|
+ | Backend pool | Select **myPublicBackendPool**.|
+ | **Port allocation** | |
+ | Port allocation | Select **Manually choose number of outbound ports**. |
+ | **Outbound ports** | |
+ | Choose by | Select **Ports per instance**. |
+ | Ports per instance | Enter **10000**. |
+
+6. Select **Add**.
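+With this **Ports per instance** value, you can sanity check capacity: each frontend public IP address provides 64,000 SNAT ports, so allocating 10,000 ports per instance supports at most 64,000 / 10,000 = 6 backend instances behind a single frontend IP. For larger pools, choose a smaller per-instance allocation or add frontend IP addresses.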
+
+## Test connectivity after outbound rule
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM**.
+
+3. On the **Overview** page, select **Connect**, then **Bastion**.
+
+4. Enter the username and password entered during VM creation.
+
+5. Select **Connect**.
+
+6. Open Internet Explorer.
+
+7. Enter **https://whatsmyip.org** in the address bar.
+
+8. The connection should succeed.
+
+9. The IP address displayed should be the frontend IP address of **myPublicLoadBalancer**.
+
+## Clean up resources
+
+When no longer needed, delete the resource group, load balancers, VM, and all related resources.
+
+To do so, select the resource group **myResourceGroupLB** and then select **Delete**.
+
+## Next steps
+
+In this tutorial, you created an "egress only" configuration with a combination of public and internal load balancers.
+
+This configuration allows you to load balance incoming internal traffic to your backend pool while still preventing any public inbound connections.
+
+- Learn about [Azure Load Balancer](load-balancer-overview.md).
+- Learn about [outbound connections in Azure](load-balancer-outbound-connections.md).
+- Load balancer [FAQs](load-balancer-faqs.yml).
+- Learn about [Azure Bastion](../bastion/bastion-overview.md).
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-kubernetes.md
Result
1.16.13 ```
-If you'd like to **programmatically check the available versions**, use the [Container Service Client - List Orchestrators](/rest/api/container-service/container-service-client/list-orchestrators) REST API. To find the available versions, look at the entries where `orchestratorType` is `Kubernetes`. The associated `orchestrationVersion` entries contain the available versions that can be **attached** to your workspace.
+If you'd like to **programmatically check the available versions**, use the Container Service Client - List Orchestrators REST API. To find the available versions, look at the entries where `orchestratorType` is `Kubernetes`. The associated `orchestrationVersion` entries contain the available versions that can be **attached** to your workspace.
To find the default version that is used when **creating** a cluster through Azure Machine Learning, find the entry where `orchestratorType` is `Kubernetes` and `default` is `true`. The associated `orchestratorVersion` value is the default version. The following JSON snippet shows an example entry:
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
The inference configuration below specifies that the machine learning deployment
You can use any [Azure Machine Learning inference curated environments](concept-prebuilt-docker-images-inference.md#list-of-prebuilt-docker-images-for-inference) as the base Docker image when creating your project environment. We will install the required dependencies on top and store the resulting Docker image into the repository that is associated with your workspace. > [!NOTE]
-> Azure machine learning [inference source directory](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py#constructor&preserve-view=true) upload does not respect **.gitignore** or **.amlignore**
+> Azure machine learning [inference source directory](/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py#constructor&preserve-view=true) upload does not respect **.gitignore** or **.amlignore**
# [Azure CLI](#tab/azcli)
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
To deploy a model to Azure Kubernetes Service, create a __deployment configurati
```python
from azureml.core.webservice import AksWebservice, Webservice
from azureml.core.model import Model
+from azureml.core.compute import AksCompute
aks_target = AksCompute(ws,"myaks") # If deploying to a cluster configured for dev/test, ensure that it was created with enough
Azure Security Center provides unified security management and advanced threat p
* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md) * [Consume a ML Model deployed as a web service](how-to-consume-web-service.md) * [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
-* [Collect data for models in production](how-to-enable-data-collection.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-web-service.md
For either AKS deployment with custom certificate or ACI deployment, you must up
> When you use a certificate from Microsoft for AKS deployment, you don't need to manually update the DNS value for the cluster. The value should be set automatically. You can follow these steps to update the DNS record for your custom domain name:
-1. Get scoring endpoint IP address from scoring endpoint URI, which is usually in the format of *http://104.214.29.152:80/api/v1/service/<service-name>/score*. In this example, the IP address is 104.214.29.152.
+1. Get the scoring endpoint IP address from the scoring endpoint URI, which is usually in the format *http://104.214.29.152:80/api/v1/service/service-name/score*. In this example, the IP address is 104.214.29.152.
1. Use the tools from your domain name registrar to update the DNS record for your domain name. The record maps the FQDN (for example, www\.contoso.com) to the IP address. The record must point to the IP address of the scoring endpoint. > [!TIP]
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-distributed-gpu.md
Title: Distributed GPU training guide
-description: Distributed training with MPI, Horovod, DeepSpeed, PyTorch, PyTorch Lightning, Hugging Face Transformers, TensorFlow, and InfiniBand.
+description: Learn the best practices for performing distributed training with Azure Machine Learning supported frameworks, such as MPI, Horovod, DeepSpeed, PyTorch, PyTorch Lightning, Hugging Face Transformers, TensorFlow, and InfiniBand.
Previously updated : 08/12/2021 Last updated : 08/19/2021 # Distributed GPU training guide
media-services Transform Create Overlay How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-create-overlay-how-to.md
Title: How to create an overlay with Media Encoder Standard
-description: Learn how to create an overlay with Media Encoder Standard.
+ Title: How to create an image overlay
+description: Learn how to create an image overlay
Last updated 08/31/2020
-# How to create an overlay with Media Encoder Standard
+# How to create an image overlay
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-The Media Encoder Standard allows you to overlay an image, audio file, or another video onto another video. The input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file.
+Media Services allows you to overlay an image, audio file, or another video on top of a video. The input must specify exactly one image file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file in a supported file format.
## Prerequisites * Collect the account information that you need to configure the *appsettings.json* file in the sample. If you're not sure how to do that, see [Quickstart: Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md). The following values are expected in the *appsettings.json* file.
- ```json
+```json
{ "AadClientId": "", "AadEndpoint": "https://login.microsoftonline.com",
The Media Encoder Standard allows you to overlay an image, audio file, or anothe
"ResourceGroup": "", "SubscriptionId": "" }
- ```
+```
-If you aren't already familiar with Transforms, it is recommended that you complete the following activities:
+If you aren't already familiar with the creation of Transforms, it is recommended that you complete the following activities:
* Read [Encoding video and audio with Media Services](encode-concept.md) * Read [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md). Follow the steps in that article to set up the .NET needed to work with transforms, then return here to try out an overlays preset sample.
Once you are familiar with Transforms, download the overlays sample.
## Overlays preset sample
-Download the [media-services-overlay sample](https://github.com/Azure-Samples/media-services-overlays) to get started with overlays.
+Clone the Media Services .NET sample repository.
+
+```bash
+ git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
+```
+
+Navigate into the solution folder, and launch Visual Studio Code, or Visual Studio 2019.
+
+A number of encoding samples are available in the VideoEncoding folder. Open the project in the [VideoEncoding/Encoding_OverlayImage](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_OverlayImage) solution folder to get started learning how to use overlays.
+
+The sample project contains two media files: a video file and a logo image to overlay on top of the video.
+* ignite.mp4
+* cloud.png
+
+In this sample, the *CreateCustomTransform* method first creates a custom Transform that can overlay the image on top of the video. Using the *[Filters](/rest/api/media/transforms/create-or-update#filters)* property of the *[StandardEncoderPreset](/rest/api/media/transforms/create-or-update#standardencoderpreset)*, we assign a new Filters collection that contains the video overlay settings.
+
+A [VideoOverlay](/rest/api/media/transforms/create-or-update#videooverlay) contains a property called *InputLabel* that is required to map from the list of job input files submitted into the job and locate the right input source file intended for use as the overlay image or video. When submitting the job, this same label name is used to match up to the setting here in the Transform. In the sample, we use the label name "logo", as seen in the string constant *OverlayLabel*.
+
+The following code snippet shows how the Transform is formatted to use an overlay.
+
+```csharp
+new TransformOutput
+ {
+ Preset = new StandardEncoderPreset
+ {
+ Filters = new Filters
+ {
+ Overlays = new List<Overlay>
+ {
+ new VideoOverlay
+ {
+ InputLabel = OverlayLabel, // same as the one used in the JobInput to identify which asset is the overlay image
+ Position = new Rectangle("1200", "670") // left, top position of the overlay in absolute pixels, relative to the source video's resolution.
+
+ }
+ }
+ },
+ Codecs = new List<Codec>
+ {
+ new AacAudio
+ {
+ },
+ new H264Video
+ {
+ KeyFrameInterval = TimeSpan.FromSeconds(2),
+ Layers = new List<H264Layer>
+ {
+ new H264Layer
+ {
+ Profile = H264VideoProfile.Baseline,
+ Bitrate = 1000000, // 1Mbps
+ Width = "1280",
+ Height = "720"
+ },
+ new H264Layer // Adding a second layer to see that the image also is scaled and positioned the same way on this layer.
+ {
+ Profile = H264VideoProfile.Baseline,
+ Bitrate = 600000, // 600 kbps
+ Width = "480",
+ Height = "270"
+ }
+ }
+ }
+ },
+ Formats = new List<Format>
+ {
+ new Mp4Format
+ {
+ FilenamePattern = "{Basename}_{Bitrate}{Extension}",
+ }
+ }
+ }
+ }
+```
+
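+The TransformOutput shown above takes effect once it's registered by creating the Transform itself. The following is a minimal sketch of the general shape of that call in the v3 .NET SDK; `transformName` and `overlayTransformOutput` are illustrative variable names, and in the sample this is done inside the *CreateCustomTransform* method.
+
+```csharp
+// Create or update the Transform that contains the overlay TransformOutput defined above.
+// Assumes: client, resourceGroup, accountName, an illustrative transformName, and
+// overlayTransformOutput holding the TransformOutput shown in the previous snippet.
+Transform transform = await client.Transforms.CreateOrUpdateAsync(
+    resourceGroup,
+    accountName,
+    transformName,
+    new List<TransformOutput> { overlayTransformOutput });
+```
+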
+When submitting the Job to the Transform, you must first create the two input assets.
+
+* Asset 1: the first asset created is from the local video file *ignite.mp4*. This is the video used as the background of the composite, onto which the logo image is overlaid.
+* Asset 2: the second asset (stored in the `overlayImageAsset` variable) contains the .png file to be used for the logo. This image is positioned onto the video during encoding.
+
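+The sample handles asset creation and file upload for you; for reference, the following is a minimal sketch of the common v3 .NET SDK pattern for creating an input asset and uploading a local file into it. The helper name `CreateInputAssetSketchAsync` is illustrative, not part of the sample.
+
+```csharp
+using System;
+using System.IO;
+using System.Linq;
+using System.Threading.Tasks;
+using Azure.Storage.Blobs;
+using Microsoft.Azure.Management.Media;
+using Microsoft.Azure.Management.Media.Models;
+
+public static class AssetHelpers
+{
+    // Creates (or updates) an Asset and uploads one local file into its backing container.
+    public static async Task<Asset> CreateInputAssetSketchAsync(
+        IAzureMediaServicesClient client, string resourceGroup, string accountName,
+        string assetName, string fileToUpload)
+    {
+        // Create the Asset that will hold the uploaded file.
+        Asset asset = await client.Assets.CreateOrUpdateAsync(resourceGroup, accountName, assetName, new Asset());
+
+        // Get a short-lived, writable SAS URL for the asset's storage container.
+        AssetContainerSas sas = await client.Assets.ListContainerSasAsync(
+            resourceGroup, accountName, assetName,
+            permissions: AssetContainerPermission.ReadWrite,
+            expiryTime: DateTime.UtcNow.AddHours(4).ToUniversalTime());
+
+        // Upload the local media file (for example, ignite.mp4 or cloud.png).
+        var container = new BlobContainerClient(new Uri(sas.AssetContainerSasUrls.First()));
+        await container.GetBlobClient(Path.GetFileName(fileToUpload))
+                       .UploadAsync(fileToUpload, overwrite: true);
+
+        return asset;
+    }
+}
+```
+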
+When the Job is created in the *SubmitJobAsync* method, we first construct the job inputs as a `List<JobInput>` object that contains the references to the two source assets.
+
+To identify which input asset is to be used as the overlay by the Filter defined in the Transform above, we again use the "logo" label name to handle the matching. The label name is added to the JobInputAsset for the .png image, which tells the Transform which asset to use in the overlay operation. You can reuse this same Transform with different assets stored in Media Services that contain various logos or graphics to overlay; simply change the asset name passed into the Job, while keeping the label name "logo" for the Transform to match it to.
+
+```csharp
+ // Add both the Video and the Overlay image assets here as inputs to the job.
+ List<JobInput> jobInputs = new List<JobInput>() {
+ new JobInputAsset(assetName: inputAssetName),
+ new JobInputAsset(assetName: overlayAssetName, label: OverlayLabel)
+ };
+```
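+
+With the job inputs in place, the Job itself is created against the Transform. The following is a minimal sketch of the general shape of that call in the v3 .NET SDK; `jobName` and `outputAssetName` are illustrative, and in the sample this logic lives in *SubmitJobAsync*.
+
+```csharp
+// Assumes: client, resourceGroup, accountName, transformName, the jobInputs list
+// built above, an illustrative jobName, and an existing output asset outputAssetName.
+Job job = await client.Jobs.CreateAsync(
+    resourceGroup,
+    accountName,
+    transformName,
+    jobName,
+    new Job
+    {
+        Input = new JobInputs(inputs: jobInputs),
+        Outputs = new List<JobOutput>
+        {
+            // The encoded video, with the overlay composited in, lands in this asset.
+            new JobOutputAsset(outputAssetName)
+        }
+    });
+```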
+
+Run the sample by selecting the project in the Run and Debug window in Visual Studio Code. The sample outputs the progress of the encoding operation and finally downloads the results into the */Output* folder in your project root (or, in full Visual Studio, possibly your */bin/Output* folder).
+
+The sample also publishes the content for streaming and outputs the full HLS, DASH, and Smooth Streaming manifest URLs, which can be used in any compatible player. You can also copy the manifest URL (the one that ends with */manifest*) into the URL box of the [Azure Media Player demo](http://ampdemo.azureedge.net/) and then select *Update Player*.
+
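+If you want to see how those streaming URLs are produced, the general v3 .NET pattern is to create a streaming locator for the output asset and combine its paths with the streaming endpoint's host name. A minimal sketch follows; `locatorName` and `outputAssetName` are illustrative.
+
+```csharp
+// Create a streaming locator on the output asset using the predefined clear (unencrypted) streaming policy.
+StreamingLocator locator = await client.StreamingLocators.CreateAsync(
+    resourceGroup, accountName, locatorName,
+    new StreamingLocator
+    {
+        AssetName = outputAssetName,
+        StreamingPolicyName = PredefinedStreamingPolicy.ClearStreamingOnly
+    });
+
+// Combine the streaming endpoint's host name with each locator path to get
+// the HLS, DASH, and Smooth Streaming manifest URLs.
+StreamingEndpoint endpoint = await client.StreamingEndpoints.GetAsync(resourceGroup, accountName, "default");
+ListPathsResponse paths = await client.StreamingLocators.ListPathsAsync(resourceGroup, accountName, locatorName);
+foreach (StreamingPath path in paths.StreamingPaths)
+{
+    Console.WriteLine($"https://{endpoint.HostName}{path.Paths.First()}");
+}
+```
+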
+## API references
+
+* [VideoOverlay object](/rest/api/media/transforms/create-or-update#videooverlay)
+* [Filters](/rest/api/media/transforms/create-or-update#filters)
+* [StandardEncoderPreset](/rest/api/media/transforms/create-or-update#standardencoderpreset)
+
## Next steps
security Cyber Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/cyber-services.md
Our team of technical professionals consists of highly trained experts who offer
Learn more about services provided by Microsoft:
* [Security Risk Assessment](https://download.microsoft.com/download/5/D/0/5D06F4EA-EAA1-4224-99E2-0C0F45E941D0/Microsoft%20Security%20Risk%20Asessment%20Datasheet.pdf)
+* Dynamic Identity Framework Assessment
* [Offline Assessment for Active Directory Services](https://download.microsoft.com/download/1/C/1/1C15BA51-840E-498D-86C6-4BD35D33C79E/Prerequisites_Offline_AD.pdf)
* [Enhanced Security Administration Environment](https://download.microsoft.com/download/A/C/5/AC5D21A6-E04B-4DC4-B1F2-AE060319A4D7/Premier_Support_for_Security/Popis/Enhanced-Security-Admin-Environment-Solution-Datasheet-%5BEN%5D.pdf)
+* Azure AD Implementation Services
* [Securing Against Lateral Account Movement](/azure-advanced-threat-protection/use-case-lateral-movement-path)
* [Incident Response and Recovery](/microsoft-365/compliance/gdpr-breach-microsoft-support-professional-services#data-protection-incident-response-overview)
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-deploy-solution.md
If you have a Docker container already running with an earlier version of the SA
```bash
./sapcon-instance-update.sh
```
-The SAP data connector Docker container on your machine is updated.
+1. Restart the Docker container:
+
+ ```bash
+ docker restart sapcon-[SID]
+ ```
+
+The SAP data connector Docker container on your machine is updated.
+
+Make sure to check for any other updates available:
+
+- Relevant SAP change requests, in the [Azure Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR).
+- Azure Sentinel SAP security content, in the **Azure Sentinel Continuous Threat Monitoring for SAP** solution
+- Relevant watchlists, in the [Azure Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists)
+ ## Collect SAP HANA audit logs
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-deploy-troubleshoot.md
Previously updated : 07/29/2021 Last updated : 08/09/2021
For more information, see the [Docker CLI documentation](https://docs.docker.com
## Review system logs
-We highly recommend that you review the system logs after installing or resetting the data connector.
+We highly recommend that you review the system logs after installing or [resetting the data connector](#reset-the-sap-data-connector).
Run:

```bash
docker logs -f sapcon-[SID]
```
+
## Enable debug mode printing
-To enable debug mode printing:
+**To enable debug mode printing**:
1. Copy the following file to your **sapcon/[SID]** directory, and then rename it as `loggingconfig.yaml`: https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/loggingconfig_DEV.yaml
1. [Reset the SAP data connector](#reset-the-sap-data-connector).
-For example, for SID A4H:
+For example, for SID `A4H`:
```bash
# Download the debug logging configuration into the connector directory,
# rename it to loggingconfig.yaml, and restart the container.
cd ~/sapcon/A4H
wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/loggingconfig_DEV.yaml
mv loggingconfig_DEV.yaml loggingconfig.yaml
docker restart sapcon-A4H
```
+**To disable debug mode printing again, run**:
+
+```bash
+mv loggingconfig.yaml loggingconfig.old
+ls
+docker restart sapcon-[SID]
+```
+
## View all Docker execution logs

To view all Docker execution logs for your Azure Sentinel SAP data connector deployment, run one of the following commands:
The following steps reset the connector and re-ingest SAP logs from the last 24
```bash
docker stop sapcon-[SID]
```
-1. Delete the **metadata.db** file from the **sapcon/[SID]** directory.
+1. Delete the **metadata.db** file from the **sapcon/[SID]** directory. Run:
+
+ ```bash
+ cd ~/sapcon/<SID>
+ ls
+ mv metadata.db metadata.old
+ ```
> [!NOTE]
> The **metadata.db** file contains the last timestamp for each of the logs, and works to prevent duplication.
This occurs when the connector fails to boot with PyRfc, or zip-related error me
1. Reinstall the SAP SDK.
1. Verify that you're using the correct Linux 64-bit version. As of the current date, the release filename is: **nwrfc750P_8-70002752.zip**.
-If you'd installed the data connector manually, make sure that you'd copied the SDK file into the docker container.
+If you'd installed the data connector manually, make sure that you'd copied the SDK file into the Docker container.
Run:
If ABAP runtime errors appear on large systems, try setting a smaller chunk size
### Empty or no audit log retrieved, with no special error messages

1. Check that audit logging is enabled in SAP.
-1. Verify transactions **SM19** and **RASU_CONFIG**.
+1. Verify the **SM19** or **RSAU_CONFIG** transactions.
1. Enable any events as needed.
1. Verify whether messages arrive and exist in the SAP **SM20** or **RSAU_READ_LOG** transactions, without any special errors appearing on the connector log.
If ABAP runtime errors appear on large systems, try setting a smaller chunk size
If you realize that you've entered an incorrect workspace ID or key in your [deployment script](sap-deploy-solution.md#create-key-vault-for-your-sap-credentials), update the credentials stored in Azure Key Vault.
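+
+You can update the stored secrets with the Azure CLI, as in the following minimal sketch. The vault name `sapcon-kv` and the secret names shown are illustrative only; use the names that your deployment script actually created.
+
+```bash
+# Illustrative vault and secret names; replace them with the ones created by your deployment script.
+az keyvault secret set --vault-name sapcon-kv --name "<SID>-LOGWSID" --value "<workspace-id>"
+az keyvault secret set --vault-name sapcon-kv --name "<SID>-LOGWSPUBLICKEY" --value "<workspace-key>"
+```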
+After verifying your credentials in Azure Key Vault, restart the container:
+
+```bash
+docker restart sapcon-[SID]
+```
### Incorrect SAP ABAP user credentials in a fixed configuration

A fixed configuration is when the password is stored directly in the **systemconfig.ini** configuration file.
Use base64 encoding to encode the user and password. You can use online encry
### Incorrect SAP ABAP user credentials in key vault
-Check your credentials and fix them as needed.
+Check your credentials and fix them as needed, applying the correct values to the **ABAPUSER** and **ABAPPASS** values in Azure Key Vault.
-Apply the correct values to the **ABAPUSER** and **ABAPPASS** values in Azure Key Vault.
+Then, restart the container:
+```bash
+docker restart sapcon-[SID]
+```
### Missing ABAP (SAP user) permissions
If your attempt to retrieve an audit log, without the [required change request](
While your system should automatically switch to compatibility mode if needed, you may need to switch it manually. To switch to compatibility mode manually:

1. In the **sapcon/[SID]** directory, edit the **systemconfig.ini** file.
1. Define: `auditlogforcexal = True`
+1. Restart the Docker container:
+
+ ```bash
+ docker restart sapcon-[SID]
+ ```
### SAPCONTROL or JAVA subsystems unable to connect

Check that the OS user is valid and can run the following command on the target SAP system:
For example, use `javatz = GMT+12` or `abaptz = GMT-3`.
If you're not able to import the [required SAP log change requests](sap-solution-detailed-requirements.md#required-sap-log-change-requests) and are getting an error about an invalid component version, add `ignore invalid component version` when you import the change request.
+### Audit log data not ingested past initial load
+
+If the SAP audit log data, visible in either the **RSAU_READ_LOG** or **SM20** transactions, is not ingested into Azure Sentinel past the initial load, you may have a misconfiguration of the SAP system and the SAP host operating system.
+
+- Initial loads are ingested after a fresh installation of the SAP data connector, or after the **metadata.db** file is deleted.
+- A sample misconfiguration might be when your SAP system timezone is set to **CET** in the **STZAC** transaction, but the SAP host operating system time zone is set to **UTC**.
+
+To check for misconfigurations, run the **RSDBTIME** report in transaction **SE38**. If you find a mismatch between the SAP system and the SAP host operating system:
+
+1. Stop the Docker container. Run:
+
+ ```bash
+ docker stop sapcon-[SID]
+ ```
+
+1. Delete the **metadata.db** file from the **sapcon/[SID]** directory. Run:
+
+ ```bash
+ rm ~/sapcon/[SID]/metadata.db
+ ```
+
+1. Update the SAP system and the SAP host operating system to have matching settings, such as the same time zone. For more information, see the [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/Basis/Time+zone+settings%2C+SAP+vs.+OS+level).
+
+1. Start the container again. Run:
+
+ ```bash
+ docker start sapcon-[SID]
+ ```
## Next steps
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-troubleshoot.md
If files fail to be recalled:
| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to recall due to insufficient disk space. | To resolve this issue, free up space on the volume by moving files to a different volume, increase the size of the volume, or force files to tier by using the Invoke-StorageSyncCloudTiering cmdlet. | | 0x80072f8f | -2147012721 | WININET_E_DECODING_FAILED | The file failed to recall because the server was unable to decode the response from the Azure File Sync service. | This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration. | | 0x80090352 | -2146892974 | SEC_E_ISSUING_CA_UNTRUSTED | The file failed to recall because your organization is using a TLS terminating proxy or a malicious entity is intercepting the traffic between your server and the Azure File Sync service. | If you are certain this is expected (because your organization is using a TLS terminating proxy), follow the steps documented for error [CERT_E_UNTRUSTEDROOT](https://docs.microsoft.com/azure/storage/file-sync/file-sync-troubleshoot?tabs=portal1%2Cazure-portal#-2146762487) to resolve this issue. |
-| 0x80c86047 | -2134351801 | ECS_E_AZURE_SHARE_SNAPSHOT_NOT_FOUND | The file failed to recall because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from backup. To resolve this issue, please open a support request. |
+| 0x80c86047 | -2134351801 | ECS_E_AZURE_SHARE_SNAPSHOT_NOT_FOUND | The file failed to recall because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
### Tiered files are not accessible on the server after deleting a server endpoint Tiered files on a server will become inaccessible if the files are not recalled prior to deleting a server endpoint.
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/understanding-billing.md
When you provision a premium file share, you specify how many GiBs your workload
| Provisioning unit | 1 GiB |
| Baseline IOPS formula | `MIN(400 + 1 * ProvisionedGiB, 100000)` |
| Burst limit | `MIN(MAX(4000, 3 * BaselineIOPS), 100000)` |
+| Burst credits | `BurstLimit * 3600` |
| Ingress rate | `40 MiB/sec + 0.04 * ProvisionedGiB` |
| Egress rate | `60 MiB/sec + 0.06 * ProvisionedGiB` |

The following table illustrates a few examples of these formulae for the provisioned share sizes:
-| Capacity (GiB) | Baseline IOPS | Burst IOPS | Egress (MiB/s) | Ingress (MiB/s) |
-|-|-|-|-|-|
-| 100 | 500 | Up to 4,000 | 66 | 44 |
-| 500 | 900 | Up to 4,000 | 90 | 60 |
-| 1,024 | 1,424 | Up to 4,000 | 122 | 81 |
-| 5,120 | 5,520 | Up to 15,360 | 368 | 245 |
-| 10,240 | 10,640 | Up to 30,720 | 675 | 450 |
-| 33,792 | 34,192 | Up to 100,000 | 2,088 | 1,392 |
-| 51,200 | 51,600 | Up to 100,000 | 3,132 | 2,088 |
-| 102,400 | 100,000 | Up to 100,000 | 6,204 | 4,136 |
+| Capacity (GiB) | Baseline IOPS | Burst IOPS | Burst credits | Ingress (MiB/sec) | Egress (MiB/sec) |
+|-|-|-|-|-|-|
+| 100 | 500 | Up to 4,000 | 14,400,000 | 44 | 66 |
+| 500 | 900 | Up to 4,000 | 14,400,000 | 60 | 90 |
+| 1,024 | 1,424 | Up to 4,000 | 14,400,000 | 81 | 122 |
+| 5,120 | 5,520 | Up to 15,360 | 55,296,000 | 245 | 368 |
+| 10,240 | 10,640 | Up to 30,720 | 110,592,000 | 450 | 675 |
+| 33,792 | 34,192 | Up to 100,000 | 360,000,000 | 1,392 | 2,088 |
+| 51,200 | 51,600 | Up to 100,000 | 360,000,000 | 2,088 | 3,132 |
+| 102,400 | 100,000 | Up to 100,000 | 360,000,000 | 4,136 | 6,204 |
-Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, parallelism, among many other factors. For example, based on internal testing with 8 KiB read/write IO sizes, a single Windows virtual machine without SMB Multichannel enabled, *Standard F16s_v2*, connected to premium file share over SMB could achieve 20K read IOPS and 15K write IOPS. With 512 MiB read/write IO sizes, the same VM could achieve 1.1 GiB/s egress and 370 MiB/s ingress throughput. The same client can achieve up to \~3x performance if SMB Multichannel is enabled on the premium shares. To achieve maximum performance scale, [enable SMB Multichannel](storage-files-enable-smb-multichannel.md) and spread the load across multiple VMs. Refer to [SMB multichannel performance](storage-files-smb-multichannel-performance.md) and [troubleshooting guide](storage-troubleshooting-files-performance.md) for some common performance issues and workarounds.
+Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, parallelism, among many other factors. For example, based on internal testing with 8 KiB read/write IO sizes, a single Windows virtual machine without SMB Multichannel enabled, *Standard F16s_v2*, connected to premium file share over SMB could achieve 20K read IOPS and 15K write IOPS. With 512 MiB read/write IO sizes, the same VM could achieve 1.1 GiB/s egress and 370 MiB/s ingress throughput. The same client can achieve up to \~3x performance if SMB Multichannel is enabled on the premium shares. To achieve maximum performance scale, [enable SMB Multichannel](storage-files-enable-smb-multichannel.md) and spread the load across multiple VMs. Refer to [SMB Multichannel performance](storage-files-smb-multichannel-performance.md) and [troubleshooting guide](storage-troubleshooting-files-performance.md) for some common performance issues and workarounds.
### Bursting
-If your workload needs the extra performance to meet peak demand, your share can use burst credits to go above share baseline IOPS limit to offer share performance it needs to meet the demand. Premium file shares can burst their IOPS up to 4,000 or up to a factor of three, whichever is a higher value. Bursting is automated and operates based on a credit system. Bursting works on a best effort basis and the burst limit is not a guarantee, file shares can burst *up to* the limit for a max duration of 60 minutes.
+If your workload needs extra performance to meet peak demand, your share can use burst credits to go above its baseline IOPS limit, offering the share performance it needs to meet the demand. Premium file shares can burst their IOPS up to 4,000 or up to a factor of three, whichever is the higher value. Bursting is automated and operates based on a credit system. Bursting works on a best-effort basis, and the burst limit is not a guarantee.
Credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. For example, a 100 GiB share has 500 baseline IOPS. If actual traffic on the share was 100 IOPS for a specific 1-second interval, then the 400 unused IOPS are credited to a burst bucket. Similarly, an idle 1 TiB share, accrues burst credit at 1,424 IOPS. These credits will then be used later when operations would exceed the baseline IOPS.
-Whenever a share exceeds the baseline IOPS and has credits in a burst bucket, it will burst at the max allowed peak burst rate. Shares can continue to burst as long as credits are remaining, up to max 60 minutes duration but, this is based on the number of burst credits accrued. Each IO beyond baseline IOPS consumes one credit and once all credits are consumed the share would return to the baseline IOPS.
+Whenever a share exceeds the baseline IOPS and has credits in a burst bucket, it bursts at the maximum allowed peak burst rate. Shares can continue to burst as long as credits remain; the achievable burst duration is based on the number of burst credits accrued. Each IO beyond the baseline IOPS consumes one credit, and once all credits are consumed, the share returns to the baseline IOPS. For example, a 100 GiB share bursting at its 4,000 IOPS limit against its 500 baseline IOPS consumes 3,500 credits per second, so a full bucket of 14,400,000 credits would sustain that burst for roughly 68 minutes.
Share credits have three states:
synapse-analytics Performance Tuning Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/performance-tuning-materialized-views.md
The materialized views implemented in dedicated SQL pool also provide the follow
Compared to other data warehouse providers, the materialized views implemented in dedicated SQL pool also provide the following benefits:
+- Broad aggregate function support. See [CREATE MATERIALIZED VIEW AS SELECT (Transact-SQL)](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql).
+- Support for query-specific materialized view recommendations. See [EXPLAIN (Transact-SQL)](/sql/t-sql/queries/explain-transact-sql).
- Automatic and synchronous data refresh with data changes in base tables. No user action is required.
-- Broad aggregate function support. See [CREATE MATERIALIZED VIEW AS SELECT (Transact-SQL)](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql?view=azure-sqldw-latest).
-- The support for query-specific materialized view recommendation. See [EXPLAIN (Transact-SQL)](/sql/t-sql/queries/explain-transact-sql?view=azure-sqldw-latest).
+> [!NOTE]
+> A materialized view created with CASE expressions stores values that meet the CASE criteria at the time of the view creation only. The materialized view does not reflect incremental data changes resulting from the CASE expressions after the view is created.
+
## Common scenarios

Materialized views are typically used in the following scenarios:
synapse-analytics Sql Data Warehouse Partner Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![Segment](./media/sql-data-warehouse-partner-data-integration/segment_logo.png) |**Segment**<br>Segment is a data management and analytics solution that helps you make sense of customer data coming from various sources. It allows you to connect your data to over 200 tools to create better decisions, products, and experiences. Segment will transform and load multiple data sources into your warehouse for you using its built-in data connectors|[Product page](https://segment.com/)<br> | | ![Skyvia](./media/sql-data-warehouse-partner-data-integration/skyvia_logo.png) |**Skyvia (data integration)**<br>Skyvia data integration provides a wizard that automates data imports. This wizard allows you to migrate data between different kinds of sources - CRMs, application database, CSV files, and more. |[Product page](https://skyvia.com/)<br> | | ![SnapLogic](./media/sql-data-warehouse-partner-data-integration/snaplogic_logo.png) |**SnapLogic**<br>The SnapLogic Platform enables customers to quickly transfer data into and out of an Azure Synapse data warehouse. It offers the ability to integrate hundreds of applications, services, and IoT scenarios in one solution.|[Product page](https://www.snaplogic.com/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/snaplogic.snaplogic-elastic-integration-windows)<br> |
+| ![SnowMirror](./media/sql-data-warehouse-partner-data-integration/snowmirror-logo.png) |**SnowMirror by GuideVision**<br>SnowMirror is a smart data replication tool for ServiceNow. It loads data from a ServiceNow instance and stores it in an on-premises or cloud database. You can then use your replicated data for custom reporting and dashboards with tools like Power BI. Because your data is replicated, the load on your ServiceNow cloud instance is reduced. SnowMirror can be used for system integration, disaster recovery, and more. It runs either on-premises or in the cloud, and is compatible with all leading databases, including Microsoft SQL Server and Azure Synapse.|[Product page](https://www.snow-mirror.com/)|
| ![StreamSets](./media/sql-data-warehouse-partner-data-integration/streamsets_logo.png) |**StreamSets**<br>StreamSets provides a data integration platform for DataOps. It operationalizes the full design-deploy-operate lifecycle of integrating data into an Azure Synapse data warehouse. You can quickly ingest and integrate data to and from the warehouse via streaming, batch, or changed data capture. Also, you can ensure continuous operations with smart data pipelines that provide end-to-end data flow visibility and resiliency.|[Product page](https://streamsets.com/partners/microsoft)| | ![Talend](./media/sql-data-warehouse-partner-data-integration/talend-logo.png) |**Talend Cloud**<br>Talend Cloud is an enterprise data integration platform to connect, access, and transform any data across the cloud or on-premises. It's an integration platform-as-a-service that provides broad connectivity, built-in data quality, and native support for the latest big data and cloud technologies. |[Product page](https://www.talend.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/talend.talendremoteengine?source=datamarket&tab=Overview) | | ![TimeXtender](./media/sql-data-warehouse-partner-data-integration/timextender-logo.png) |**TimeXtender**<br>TimeXtender's Discovery Hub helps companies build a modern data estate by providing an integrated data management platform that accelerates time to data insights by up to 10 times. Going beyond everyday ETL and ELT, it provides capabilities for data access, data modeling, and compliance in a single platform. Discovery Hub provides a cohesive data fabric for cloud scale analytics. It allows you to connect and integrate various data silos, catalog, model, move, and document data for analytics and AI. | [Product page](https://www.timextender.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=timextender&page=1) |
virtual-desktop Troubleshoot Azure Ad Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-azure-ad-connections.md
Previously updated : 08/11/2021 Last updated : 08/20/2021 # Connections to Azure AD-joined VMs
If you come across an error saying **The logon attempt failed** on the Windows S
- You are on a device that is Azure AD-joined or hybrid Azure AD-joined to the same Azure AD tenant as the session host, OR
- You are on a device running Windows 10 2004 or later that is Azure AD registered to the same Azure AD tenant as the session host
- The [PKU2U protocol is enabled](/windows/security/threat-protection/security-policy-settings/network-security-allow-pku2u-authentication-requests-to-this-computer-to-use-online-identities) on both the local PC and the session host
+- [Per-user MFA is disabled](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) for the user account, as it's not supported for Azure AD-joined VMs.
### The sign-in method you're trying to use isn't allowed
virtual-machines Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disk-encryption.md
Customer-managed keys are available in all regions that managed disks are availa
> [!IMPORTANT] > Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD). When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to another, the managed identity associated with managed disks isn't transferred to the new tenant, so customer-managed keys may no longer work. For more information, see [Transferring a subscription between Azure AD directories](../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
-To enable customer-managed keys for managed disks, see our articles covering how to enable it with either the [Azure PowerShell module](windows/disks-enable-customer-managed-keys-powershell.md), the [Azure CLI](linux/disks-enable-customer-managed-keys-cli.md) or the [Azure portal](disks-enable-customer-managed-keys-portal.md). To learn how to enable customer-managed keys with automatic key rotation, see [Set up an Azure Key Vault and DiskEncryptionSet with automatic key rotation (preview)](windows/disks-enable-customer-managed-keys-powershell.md#set-up-an-azure-key-vault-and-diskencryptionset-with-automatic-key-rotation-preview).
+To enable customer-managed keys for managed disks, see our articles covering how to enable it with either the [Azure PowerShell module](windows/disks-enable-customer-managed-keys-powershell.md), the [Azure CLI](linux/disks-enable-customer-managed-keys-cli.md) or the [Azure portal](disks-enable-customer-managed-keys-portal.md).
## Encryption at host - End-to-end encryption for your VM data