Updates from: 05/13/2022 01:10:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
active-directory Howto Conditional Access Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-apis.md
Many of the following examples use tools like [Managed Identities](../managed-id
### PowerShell
+> [!IMPORTANT]
+> Due to the planned deprecation of PowerShell modules (MSOL & AAD) after December 2022, no further updates are planned for these modules to support new Conditional Access features. See recent announcements for more information: https://aka.ms/AzureADPowerShellDeprecation. New Conditional Access features may not be available or may not be functional within these PowerShell modules as a result of this announcement. Please consider [migrating to Microsoft Graph PowerShell](https://aka.ms/MigrateMicrosoftGraphPowerShell). Additional guidance and examples will be released soon.
+
For many administrators, PowerShell is already an understood scripting tool. The following example shows how to use the [Azure AD PowerShell module](https://www.powershellgallery.com/packages/AzureAD) to manage Conditional Access policies.

- [Configure Conditional Access policies with Azure AD PowerShell commands](https://github.com/Azure-Samples/azure-ad-conditional-access-apis/tree/main/01-configure/powershell)
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
These claims are always included in v1.0 Azure AD tokens, but not included in v2
|---|---|---|---|
| `ipaddr` | IP Address | The IP address the client logged in from. | |
| `onprem_sid` | On-Premises Security Identifier | | |
-| `pwd_exp` | Password Expiration Time | The datetime at which the password expires. | |
-| `pwd_url` | Change Password URL | A URL that the user can visit to change their password. | |
+| `pwd_exp` | Password Expiration Time | The number of seconds after the time in the iat claim at which the password expires. This claim is only included when the password is expiring soon (as defined by "notification days" in the password policy). | |
+| `pwd_url` | Change Password URL | A URL that the user can visit to change their password. This claim is only included when the password is expiring soon (as defined by "notification days" in the password policy). | |
| `in_corp` | Inside Corporate Network | Signals if the client is logging in from the corporate network. If they're not, the claim isn't included. | Based off of the [trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) settings in MFA. |
| `family_name` | Last Name | Provides the last name, surname, or family name of the user as defined in the user object. <br>"family_name":"Miller" | Supported in MSA and Azure AD. Requires the `profile` scope. |
| `given_name` | First name | Provides the first or "given" name of the user, as set on the user object.<br>"given_name": "Frank" | Supported in MSA and Azure AD. Requires the `profile` scope. |
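The updated `pwd_exp` semantics above (an offset in seconds from the `iat` claim, rather than an absolute datetime) can be sketched as follows; the claim values are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def password_expiry(iat: int, pwd_exp: int) -> datetime:
    # pwd_exp is the number of seconds after the iat (issued-at) time
    # at which the password expires.
    issued_at = datetime.fromtimestamp(iat, tz=timezone.utc)
    return issued_at + timedelta(seconds=pwd_exp)

# Hypothetical claims: token issued 2022-05-11 00:00 UTC, password expiring 10 days later
claims = {"iat": 1652227200, "pwd_exp": 864000}
print(password_expiry(claims["iat"], claims["pwd_exp"]).isoformat())
# 2022-05-21T00:00:00+00:00
```

Remember that, per the table, the claim is only emitted when the password is already within the policy's notification window.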
active-directory Access Reviews Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-overview.md
Here are some example license scenarios to help you determine the number of lice
| An administrator creates an access review of Group C with 50 member users and 25 guest users. Makes it a self-review. | 50 licenses for each user as self-reviewers.* | 50 |
| An administrator creates an access review of Group D with 6 member users and 108 guest users. Makes it a self-review. | 6 licenses for each user as self-reviewers. Guest users are billed on a monthly active user (MAU) basis. No additional licenses are required. * | 6 |
-\* Azure AD External Identities (guest user) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model. For more information, see Billing model for Azure AD External Identities.
+\* Azure AD External Identities (guest user) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model. For more information, see [Billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).
## Next steps
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
Here are some example license scenarios to help you determine the number of lice
| Scenario | Calculation | Number of licenses |
|---|---|---|
| A Global Administrator at Woodgrove Bank creates initial catalogs and delegates administrative tasks to 6 other users. One of the policies specifies that **All employees** (2,000 employees) can request a specific set of access packages. 150 employees request the access packages. | 2,000 employees who **can** request the access packages | 2,000 |
-| A Global Administrator at Woodgrove Bank creates initial catalogs and delegates administrative tasks to 6 other users. One of the policies specifies that **All employees** (2,000 employees) can request a specific set of access packages. Another policy specifies that some users from **Users from partner Contoso** (guests) can request the same access packages subject to approval. Contoso has 30,000 users. 150 employees request the access packages and 10,500 users from Contoso request access. | 2,000 employees + 500 guest users from Contoso that exceed the 1:5 ratio (10,500 - (2,000 * 5)) | 2,500 |
+| A Global Administrator at Woodgrove Bank creates initial catalogs and delegates administrative tasks to 6 other users. One of the policies specifies that **All employees** (2,000 employees) can request a specific set of access packages. Another policy specifies that some users from **Users from partner Contoso** (guests) can request the same access packages subject to approval. Contoso has 30,000 users. 150 employees request the access packages and 10,500 users from Contoso request access. | 2,000 employees need licenses; guest users are billed on a monthly active user basis, and no additional licenses are required for them. * | 2,000 |
+
+\* Azure AD External Identities (guest user) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model. For more information, see [Billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).
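The revised calculation in the table reflects retiring the 1:5 ratio model. Here is a quick sketch of both models, using the figures from the Woodgrove/Contoso scenario (the helper is illustrative, not part of any Azure tooling):

```python
def legacy_guest_overage(employees: int, guests_requesting: int, ratio: int = 5) -> int:
    # Old 1:5 model: each employee license covered up to five guest users,
    # so only guests beyond employees * ratio required extra licenses.
    return max(0, guests_requesting - employees * ratio)

employees, contoso_guests = 2000, 10500
print(legacy_guest_overage(employees, contoso_guests))  # old model: extra guest licenses
print(employees)  # MAU model: only the 2,000 employees need licenses
```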
+
## Next steps
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
For each month, we truncate the SLA attainment at three places after the decimal
|---|---|---|
| January | | 99.999% |
| February | 99.999% | 99.999% |
-| March | 99.568% | |
-| April | 99.999% | |
+| March | 99.568% | 99.999% |
+| April | 99.999% | 99.999% |
| May | 99.999% | |
| June | 99.999% | |
| July | 99.999% | |
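As stated above, monthly attainment is truncated, not rounded, at three decimal places, so an attainment of 99.9999% reports as 99.999%. A minimal sketch of that truncation:

```python
import math

def truncate_sla(attainment: float, places: int = 3) -> float:
    # Truncate (floor), rather than round, at the given number of decimal places.
    factor = 10 ** places
    return math.floor(attainment * factor) / factor

print(truncate_sla(99.9999))  # 99.999 (rounding would have reported 100.0)
```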
All incidents that seriously impact Azure AD performance are documented in the [
* [Azure AD reports overview](overview-reports.md)
* [Programmatic access to Azure AD reports](concept-reporting-api.md)
-* [Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md)
+* [Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md)
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
api-management Api Management Howto Manage Protocols Ciphers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-manage-protocols-ciphers.md
Azure API Management supports multiple versions of Transport Layer Security (TLS) protocol for:
* Client side
* Backend side
-* The 3DES ciphe
+* The 3DES cipher
This guide shows you how to manage protocols and ciphers configuration for an Azure API Management instance.
This guide shows you how to manage protocols and ciphers configuration for an Az
## Next steps
* Learn more about [TLS](/dotnet/framework/network-programming/tls).
-* Check out more [videos](https://azure.microsoft.com/documentation/videos/index/?services=api-management) about API Management.
api-management How To Configure Local Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md
Now that both StatsD and Prometheus have been deployed, we can update the config
Here is a sample configuration:
```yaml
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: contoso-gateway-environment
- data:
- config.service.endpoint: "<self-hosted-gateway-management-endpoint>"
- telemetry.metrics.local: "statsd"
- telemetry.metrics.local.statsd.endpoint: "10.0.41.179:8125"
- telemetry.metrics.local.statsd.sampling: "1"
- telemetry.metrics.local.statsd.tag-format: "dogStatsD"
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: contoso-gateway-environment
+data:
+ config.service.endpoint: "<self-hosted-gateway-management-endpoint>"
+ telemetry.metrics.local: "statsd"
+ telemetry.metrics.local.statsd.endpoint: "10.0.41.179:8125"
+ telemetry.metrics.local.statsd.sampling: "1"
+ telemetry.metrics.local.statsd.tag-format: "dogStatsD"
```
Update the YAML file of the self-hosted gateway deployment with the above configurations and apply the changes using the command below:
Now we have everything deployed and configured, the self-hosted gateway should r
Make some API calls through the self-hosted gateway. If everything is configured correctly, you should be able to view the following metrics:
-| Metric | Description |
+| Metric | Description |
| - | - |
-| Requests | Number of API requests in the period |
-| DurationInMS | Number of milliseconds from the moment gateway received request until the moment response sent in full |
-| BackendDurationInMS | Number of milliseconds spent on overall backend IO (connecting, sending and receiving bytes) |
-| ClientDurationInMS | Number of milliseconds spent on overall client IO (connecting, sending and receiving bytes) |
+| requests_total | Number of API requests in the period |
+| request_duration_seconds | Time, in seconds, from the moment the gateway received the request until the moment the response was sent in full |
+| request_backend_duration_seconds | Time, in seconds, spent on overall backend IO (connecting, sending, and receiving bytes) |
+| request_client_duration_seconds | Time, in seconds, spent on overall client IO (connecting, sending, and receiving bytes) |
## Logs
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
application-gateway Custom Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/custom-error.md
To create a custom error page, you must have:
- an HTTP response status code.
- corresponding location for the error page.
-- error page should be internet accessible.
+- error page should be internet accessible and return a 200 response.
- error page should be in \*.htm or \*.html extension type.
- error page size must be less than 1 MB.

You may reference either internal or external images/CSS for this HTML file. For externally referenced resources, use absolute URLs that are publicly accessible. Be aware of the HTML file size when using internal images (Base64-encoded inline images) or CSS. Relative links with files in the same location are currently not supported.
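The requirements above lend themselves to a simple pre-flight check before pointing the gateway at an error page. This is a hypothetical helper, not part of any Application Gateway tooling; it checks a fetched page's status code, extension, and size against the documented limits:

```python
MAX_SIZE_BYTES = 1024 * 1024  # error page must be less than 1 MB

def is_valid_error_page(status_code: int, path: str, size_bytes: int) -> bool:
    # Documented requirements: internet accessible with a 200 response,
    # an .htm/.html extension, and a size under 1 MB.
    return (
        status_code == 200
        and path.lower().endswith((".htm", ".html"))
        and size_bytes < MAX_SIZE_BYTES
    )

print(is_valid_error_page(200, "maintenance.html", 48_000))  # True
print(is_valid_error_page(302, "maintenance.html", 48_000))  # False: must return 200
```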
-After you specify an error page, the application gateway downloads it from the storage blob location and saves it to the local application gateway cache. Then, that HTML page is served by the application gateway, whereas the externally referenced resources are fetched directly by the client. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. The application gateway doesn't periodically check the blob location to fetch new versions.
+After you specify an error page, the application gateway downloads it from the defined location and saves it to the local application gateway cache. Then, that HTML page is served by the application gateway, whereas the externally referenced resources are fetched directly by the client. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. The application gateway doesn't periodically check the blob location to fetch new versions.
## Portal configuration
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
attestation Policy Version 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-2.md
issuancerules
// Verify if secureboot is enabled
c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
-c:[type=="efiConfigVariables", issuer="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+c:[type=="efiConfigVariables", issuer=="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
![type=="secureBootEnabled", issuer=="AttestationPolicy"] => add(type="secureBootEnabled", value=false);
// Verify that Defender ELAM is loaded.
automation Disable Local Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md
Azure Automation provides an optional feature to "Disable local authentication"
In the Azure portal, you may receive a warning message on the landing page for the selected Automation account if local authentication is disabled. To confirm whether the local authentication policy is enabled, use the PowerShell cmdlet [Get-AzAutomationAccount](/powershell/module/az.automation/get-azautomationaccount) and check the `DisableLocalAuth` property. A value of `true` means local authentication is disabled. Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests.
+>[!NOTE]
+> Currently, PowerShell support for the new API version (2021-06-22) or the `DisableLocalAuth` flag is not available. However, you can use the REST API with this API version to update the flag.
+To allowlist and enroll your subscription for this feature in your respective regions, follow the steps in [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
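Since the note above says the flag can currently only be set through the REST API, here is a sketch of the request body. The property name and resource path shape are assumptions based on the Automation account REST schema for API version 2021-06-22; verify them against the current API reference before use:

```python
import json

# Hypothetical PATCH against:
#   .../providers/Microsoft.Automation/automationAccounts/<account>?api-version=2021-06-22
body = {"properties": {"disableLocalAuth": True}}
print(json.dumps(body))
```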
## Re-enable local authentication
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
azure-app-configuration Manage Feature Flags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/manage-feature-flags.md
In the **Feature manager**, you can also change the state of a feature flag by c
## Access feature flags
-In the **Operations** menu, select **Feature manager**. You can select **Edits Columns** to add or remove columns, and change the column order.
+In the **Operations** menu, select **Feature manager**. You can select **Edit Columns** to add or remove columns, and change the column order.
create a label, lock or delete the feature flag.

:::image type="content" source="media/edit-columns-feature-flag.png" alt-text="Screenshot of the Azure platform. Edit feature flag columns." lightbox="media/edit-columns-feature-flag-expanded.png":::
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022 #
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Some common issues and troubleshooting steps for Azure Key Vault secrets provide
Additional troubleshooting steps specific to the Secrets Store CSI Driver are available in the [driver's troubleshooting guide](https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html).
+## Frequently asked questions
+
+### Is the extension of Azure Key Vault Secrets Provider zone redundant?
+
+Yes, all components of Azure Key Vault Secrets Provider are deployed across availability zones and are therefore zone redundant.
+
## Next steps

> **Just want to try things out?**
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
The key benefits of the Azure Storage provider include:
* Lowest-cost serverless billing model - Azure Storage has a consumption-based pricing model based entirely on usage ([more information](durable-functions-billing.md#azure-storage-transactions)). * Best tooling support - Azure Storage offers cross-platform local emulation and integrates with Visual Studio, Visual Studio Code, and the Azure Functions Core Tools. * Most mature - Azure Storage was the original and most battle-tested storage backend for Durable Functions.
+* Preview support for using identity instead of secrets for connecting to the storage provider.
The source code for the DTFx components of the Azure Storage storage provider can be found in the [Azure/durabletask](https://github.com/Azure/durabletask/tree/main/src/DurableTask.AzureStorage) GitHub repo.
If no storage provider is explicitly configured in host.json, the Azure Storage
The Azure Storage provider is the default storage provider and doesn't require any explicit configuration, NuGet package references, or extension bundle references. You can find the full set of **host.json** configuration options [here](durable-functions-bindings.md#hostjson-settings), under the `extensions/durableTask/storageProvider` path.
+#### Connections
+
+The `connectionName` property in host.json is a reference to environment configuration that specifies how the app should connect to Azure Storage. It may specify:
+
+- The name of an application setting containing a connection string. To obtain a connection string, follow the steps shown at [Manage storage account access keys](../../storage/common/storage-account-keys-manage.md).
+- The name of a shared prefix for multiple application settings, together defining an [identity-based connection](#identity-based-connections-preview).
+
+If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact match is used. If no value is specified in host.json, the default value is "AzureWebJobsStorage".
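The resolution rule described above (an exact setting match wins over a prefix group, and "AzureWebJobsStorage" is the default name) can be sketched as follows; the setting names and values are hypothetical:

```python
DEFAULT_CONNECTION_NAME = "AzureWebJobsStorage"

def resolve_connection(settings, connection_name=None):
    # Returns ("connection_string", value) for an exact setting match, or
    # ("identity", {suffix: value}) for settings sharing the "<name>__" prefix.
    name = connection_name or DEFAULT_CONNECTION_NAME
    if name in settings:  # an exact match takes precedence over prefix matches
        return ("connection_string", settings[name])
    prefix = name + "__"
    group = {k[len(prefix):]: v for k, v in settings.items() if k.startswith(prefix)}
    if group:
        return ("identity", group)
    raise KeyError(f"no configuration found for connection '{name}'")

settings = {
    "MyStorage__blobServiceUri": "https://contoso.blob.core.windows.net",
    "MyStorage__queueServiceUri": "https://contoso.queue.core.windows.net",
}
print(resolve_connection(settings, "MyStorage")[0])  # identity
```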
+
+##### Identity-based connections (preview)
+
+If you are using [version 2.7.0 or higher of the extension](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.7.0) and the Azure storage provider, instead of using a connection string with a secret, you can have the app use an [Azure Active Directory identity](../../active-directory/fundamentals/active-directory-whatis.md). To do this, you would define settings under a common prefix which maps to the `connectionName` property in the trigger and binding configuration.
+
+To use an identity-based connection for Durable Functions, configure the following app settings:
+
+|Property | Environment variable template | Description | Example value |
+|---|---|---|---|
+| Blob service URI | `<CONNECTION_NAME_PREFIX>__blobServiceUri` | The data plane URI of the blob service of the storage account, using the HTTPS scheme. | `https://<storage_account_name>.blob.core.windows.net` |
+| Queue service URI | `<CONNECTION_NAME_PREFIX>__queueServiceUri` | The data plane URI of the queue service of the storage account, using the HTTPS scheme. | `https://<storage_account_name>.queue.core.windows.net` |
+| Table service URI | `<CONNECTION_NAME_PREFIX>__tableServiceUri` | The data plane URI of the table service of the storage account, using the HTTPS scheme. | `https://<storage_account_name>.table.core.windows.net` |
+
+Additional properties may be set to customize the connection. See [Common properties for identity-based connections](../functions-reference.md#common-properties-for-identity-based-connections).
+++
### Configuring the Netherite storage provider

To use the Netherite storage provider, you must first add a reference to the [Microsoft.Azure.DurableTask.Netherite.AzureFunctions](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions) NuGet package in your **csproj** file (.NET apps) or your **extensions.proj** file (JavaScript, Python, and PowerShell apps).
There are many significant tradeoffs between the various supported storage provi
| Support for [extension bundles](../functions-bindings-register.md#extension-bundles) (recommended for non-.NET apps) | ✅ Fully supported | ❌ Not supported | ❌ Not supported |
| Price-performance configurable? | ❌ No | ✅ Yes (Event Hubs TUs and CUs) | ✅ Yes (SQL vCPUs) |
| Disconnected environment support | ❌ Azure connectivity required | ❌ Azure connectivity required | ✅ Fully supported |
+| Identity-based connections | ✅ Yes (preview) |❌ No | ❌ No |
## Next steps
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Identity-based connections are supported by the following components:
| Azure Service Bus triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-service-bus.md) |
| Azure Cosmos DB triggers and bindings - Preview | Elastic Premium | [Extension version 4.0.0-preview1 or later](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) |
| Azure Tables (when using Azure Storage) - Preview | All | [Table API extension](./functions-bindings-storage-table.md#table-api-extension) |
+| Durable Functions storage provider (Azure Storage) - Preview | All | [Extension version 2.7.0 or later](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.7.0) |
| Host-required storage ("AzureWebJobsStorage") - Preview | All | [Connecting to host storage with an identity](#connecting-to-host-storage-with-an-identity-preview) |
-> [!NOTE]
-> Identity-based connections are not supported with Durable Functions.
-
[!INCLUDE [functions-identity-based-connections-configuration](../../includes/functions-identity-based-connections-configuration.md)]

Choose a tab below to learn about permissions for each component:
Choose a tab below to learn about permissions for each component:
[!INCLUDE [functions-table-permissions](../../includes/functions-table-permissions.md)]
+# [Durable Functions storage provider (preview)](#tab/durable)
++
# [Functions host storage (preview)](#tab/azurewebjobsstorage)

[!INCLUDE [functions-azurewebjobsstorage-permissions](../../includes/functions-azurewebjobsstorage-permissions.md)]
To use an identity-based connection for "AzureWebJobsStorage", configure the fol
[Common properties for identity-based connections](#common-properties-for-identity-based-connections) may also be set as well.
-If you are configuring "AzureWebJobsStorage" using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The blob and queue endpoints will be inferred for this account. This will not work if the storage account is in a sovereign cloud or has a custom DNS.
+If you are configuring "AzureWebJobsStorage" using a storage account that uses the default DNS suffix and service name for global Azure, following the `https://<accountName>.blob/queue/file/table.core.windows.net` format, you can instead set `AzureWebJobsStorage__accountName` to the name of your storage account. The endpoints for each storage service will be inferred for this account. This will not work if the storage account is in a sovereign cloud or has a custom DNS.
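The endpoint inference described above can be sketched with a short helper (the function name and structure are illustrative, not the actual Functions host code): given only an account name, the host derives each service endpoint using the global Azure DNS suffix.

```python
# Sketch of how service endpoints can be inferred when only
# AzureWebJobsStorage__accountName is set (global Azure only).
# Helper name and structure are hypothetical, for illustration.

GLOBAL_DNS_SUFFIX = "core.windows.net"

def infer_endpoints(account_name: str) -> dict:
    """Build per-service endpoints for a storage account in global Azure."""
    return {
        service: f"https://{account_name}.{service}.{GLOBAL_DNS_SUFFIX}"
        for service in ("blob", "queue", "file", "table")
    }

endpoints = infer_endpoints("mystorageacct")
print(endpoints["blob"])  # https://mystorageacct.blob.core.windows.net
```

This is also why the shortcut fails for sovereign clouds or custom DNS: the fixed suffix assumption no longer holds, so explicit service URIs must be configured instead.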
| Setting | Description | Example value |
|--|--|--|
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
recommendations: false Previously updated : 04/08/2022 Last updated : 05/12/2022

# Azure guidance for secure isolation
All data in Azure irrespective of the type or storage location is associated wit
### Azure Active Directory

The separation of the accounts used to administer cloud applications is critical to achieving logical isolation. Account isolation in Azure is achieved using [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) and its capabilities to support granular [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC). Each Azure account is associated with one Azure AD tenant. Users, groups, and applications from that directory can manage resources in Azure. You can assign appropriate access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. Each Azure AD tenant is distinct and separate from other Azure AD tenants. An Azure AD instance is logically isolated using security boundaries to prevent customer data and identity information from commingling, thereby ensuring that users and administrators of one Azure AD instance can't access or compromise data in another Azure AD instance, either maliciously or accidentally. Azure AD runs physically isolated on dedicated servers that are logically isolated to a dedicated network segment, where host-level packet filtering and Windows Firewall services provide extra protection from untrusted traffic.
-Azure AD implements extensive **data protection features**, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available from a whitepaper [Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
+Azure AD implements extensive **data protection features**, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available from a whitepaper [Azure Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
Tenant isolation in Azure AD involves two primary elements:
- Preventing data leakage and access across tenants, which means that data belonging to Tenant A can't in any way be obtained by users in Tenant B without explicit authorization by Tenant A.
- Resource access isolation across tenants, which means that operations performed by Tenant A can't in any way impact access to resources for Tenant B.
-As shown in Figure 2, access via Azure AD requires user authentication through a Security Token Service (STS). The authorization system uses information on the user's existence and enabled state (through the Directory Services API) and Azure RBAC to determine whether the requested access to the target Azure AD instance is authorized for the user in the session. Aside from token-based authentication that is tied directly to the user, Azure AD further supports logical isolation in Azure through:
+As shown in Figure 2, access via Azure AD requires user authentication through a Security Token Service (STS). The authorization system uses information on the user's existence and enabled state through the Directory Services API and Azure RBAC to determine whether the requested access to the target Azure AD instance is authorized for the user in the session. Aside from token-based authentication that is tied directly to the user, Azure AD further supports logical isolation in Azure through:
-- Azure AD instances are discrete containers and there is no relationship between them.
+- Azure AD instances are discrete containers and there's no relationship between them.
- Azure AD data is stored in partitions and each partition has a pre-determined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Azure AD services to support identity separation and logical isolation.
- Access isn't permitted across Azure AD instances unless the Azure AD instance administrator grants it through federation or provisioning of user accounts from other Azure AD instances.
-- Physical access to servers that comprise the Azure AD service and direct access to Azure AD's back-end systems is [restricted to properly authorized Microsoft operational roles](./documentation-government-plan-security.md#restrictions-on-insider-access) using Just-In-Time (JIT) privileged access management system.
+- Physical access to servers that comprise the Azure AD service and direct access to Azure AD's back-end systems is [restricted to properly authorized Microsoft operational roles](./documentation-government-plan-security.md#restrictions-on-insider-access) using the Just-In-Time (JIT) privileged access management system.
- Azure AD users have no access to physical assets or locations, and therefore it isn't possible for them to bypass the logical Azure RBAC policy checks.

:::image type="content" source="./media/secure-isolation-fig2.png" alt-text="Azure Active Directory logical tenant isolation":::
Azure has extensive support to safeguard your data using [data encryption](../se
Data encryption provides isolation assurances that are tied directly to encryption (cryptographic) key access. Since Azure uses strong ciphers for data encryption, only entities with access to cryptographic keys can have access to data. Deleting or revoking cryptographic keys renders the corresponding data inaccessible. More information about **data encryption in transit** is provided in the *[Networking isolation](#networking-isolation)* section, whereas **data encryption at rest** is covered in the *[Storage isolation](#storage-isolation)* section.
-Azure enables you to enforce [double encryption](../security/fundamentals/double-encryption.md) for both data at rest and data in transit. With this model, two or more layers of encryption are enabled to protect against compromises of any layer of encryption.
+Azure enables you to enforce **[double encryption](../security/fundamentals/double-encryption.md)** for both data at rest and data in transit. With this model, two or more layers of encryption are enabled to protect against compromises of any layer of encryption.
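The value of layering can be seen in a toy sketch (these are not the ciphers Azure actually uses): with two independent keyed layers, removing one layer with a single compromised key still yields ciphertext, and recovering the plaintext requires both keys.

```python
import hashlib

def keystream(key: bytes):
    """Derive an endless keystream from a key via repeated hashing (toy only)."""
    block = key
    while True:
        block = hashlib.sha256(block).digest()
        yield from block

def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR stream cipher: applying it twice with the same key is the identity."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

plaintext = b"customer data at rest"
layer1 = xor_layer(plaintext, b"infrastructure-key")   # first encryption layer
layer2 = xor_layer(layer1, b"service-key")             # second, independent layer

# Compromising only the outer key strips one layer but still leaves ciphertext.
outer_stripped = xor_layer(layer2, b"service-key")
assert outer_stripped == layer1 and outer_stripped != plaintext

# Full recovery requires both keys, applied in reverse order.
recovered = xor_layer(outer_stripped, b"infrastructure-key")
assert recovered == plaintext
```

The toy XOR construction stands in for the real layered encryption (for example, platform-managed encryption at rest plus infrastructure-level encryption); the point it illustrates, that each layer fails independently, is the design goal of double encryption.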
### Azure Key Vault

Proper protection and management of cryptographic keys is essential for data security. **[Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets.** The Key Vault service supports two resource types that are described in the rest of this section:
You control access permissions and can extract detailed activity logs from the A
You can also use the [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) to review Key Vault logs. To use this solution, you need to enable logging of Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it isn't necessary to write logs to Azure Blob storage.

> [!NOTE]
-> For a comprehensive list of Azure Key Vault security recommendations, see the **[Security baseline for Azure Key Vault](../key-vault/general/security-baseline.md)**.
+> For a comprehensive list of Azure Key Vault security recommendations, see **[Azure security baseline for Key Vault](/security/benchmark/azure/baselines/key-vault-security-baseline)**.
#### Vault

**[Vaults](../key-vault/general/overview.md)** provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](../key-vault/general/about-keys-secrets-certificates.md). They can be either software-protected (standard tier) or HSM-protected (premium tier). For a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. If you require extra assurances, you can choose to safeguard your secrets, keys, and certificates in vaults protected by multi-tenant HSMs. The corresponding HSMs are validated according to the [FIPS 140 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating, which includes requirements for physical tamper evidence and role-based authentication.
-Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs, and use them to encrypt data at rest for many Azure services. As mentioned previously, you can [import or generate encryption keys](../key-vault/keys/hsm-protected-keys.md) in HSMs ensuring that keys never leave the HSM boundary to support *bring your own key (BYOK)* scenarios.
+Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs, and use them to encrypt data at rest for [many Azure services](../security/fundamentals/encryption-models.md#supporting-services). As mentioned previously, you can [import or generate encryption keys](../key-vault/keys/hsm-protected-keys.md) in HSMs ensuring that keys never leave the HSM boundary to support *bring your own key (BYOK)* scenarios.
Key Vault can handle requesting and renewing certificates in vaults, including Transport Layer Security (TLS) certificates, enabling you to enroll and automatically renew certificates from supported public Certificate Authorities. Key Vault certificate support provides for the management of your X.509 certificates, which are built on top of keys and provide an automated renewal feature. The certificate owner can [create a certificate](../key-vault/certificates/create-certificate.md) through Azure Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
When you create a key vault in a resource group, you can [manage access](../key-
> You should tightly control who has Contributor role access to your key vaults. If a user has Contributor permissions to a key vault management plane, the user can gain access to the data plane by setting a key vault access policy.
>
> *Extra resources:*
-> - How to **[secure access to a key vault](../key-vault/general/security-features.md)**
+> - How to **[secure access to a key vault](../key-vault/general/security-features.md)**.
#### Managed HSM
-**[Managed HSM](../key-vault/managed-hsm/overview.md)** provides a single-tenant, fully managed, highly available, zone-resilient (where available) HSM as a service to store and manage your cryptographic keys. It is most suitable for applications and usage scenarios that handle high value keys. It also helps you meet the most stringent security, compliance, and regulatory requirements. Managed HSM uses [FIPS 140 Level 3 validated HSMs](/azure/compliance/offerings/offering-fips-140-2) to protect your cryptographic keys. Each managed HSM pool is an isolated single-tenant instance with its own [security domain](../key-vault/managed-hsm/security-domain.md) controlled by you and isolated cryptographically from instances belonging to other customers. Cryptographic isolation relies on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that provides encrypted code and data to help ensure your control over cryptographic keys.
+**[Managed HSM](../key-vault/managed-hsm/overview.md)** provides a single-tenant, fully managed, highly available, zone-resilient (where available) HSM as a service to store and manage your cryptographic keys. It's most suitable for applications and usage scenarios that handle high value keys. It also helps you meet the most stringent security, compliance, and regulatory requirements. Managed HSM uses [FIPS 140 Level 3 validated HSMs](/azure/compliance/offerings/offering-fips-140-2) to protect your cryptographic keys. Each managed HSM pool is an isolated single-tenant instance with its own [security domain](../key-vault/managed-hsm/security-domain.md) controlled by you and isolated cryptographically from instances belonging to other customers. Cryptographic isolation relies on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that provides encrypted code and data to help ensure your control over cryptographic keys.
When a managed HSM is created, the requestor also provides a list of data plane administrators. Only these administrators are able to [access the managed HSM data plane](../key-vault/managed-hsm/access-control.md) to perform key operations and manage data plane role assignments (managed HSM local RBAC). The permission model for both the management and data planes uses the same syntax, but permissions are enforced at different levels, and role assignments use different scopes. Management plane Azure RBAC is enforced by Azure Resource Manager while data plane-managed HSM local RBAC is enforced by the managed HSM itself.
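The split between the two planes can be sketched as two independent authorization checks (the role names and data structures below are illustrative, not the actual Azure RBAC definitions): a management-plane role assignment grants nothing on the data plane, and vice versa.

```python
# Toy sketch: management-plane and data-plane role assignments are held and
# enforced separately — one is checked by Azure Resource Manager, the other
# by the managed HSM itself. Role names here are hypothetical.
management_assignments = {("alice", "Contributor")}
data_plane_assignments = {("bob", "Crypto User")}

def can_manage_resource(user: str) -> bool:
    """Management plane check (Azure Resource Manager, illustrative)."""
    return (user, "Contributor") in management_assignments

def can_use_keys(user: str) -> bool:
    """Data plane check (enforced by the managed HSM itself, illustrative)."""
    return (user, "Crypto User") in data_plane_assignments

print(can_manage_resource("alice"), can_use_keys("alice"))  # True False
print(can_manage_resource("bob"), can_use_keys("bob"))      # False True
```

The same syntax for both permission models, as the text notes, does not mean shared enforcement: each plane consults only its own assignments and scope.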
As mentioned previously, managed HSM supports [importing keys generated](../key-
Managed HSM enables you to use the established Azure Key Vault API and management interfaces. You can use the same application development and deployment patterns for all your applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM.

## Compute isolation
-Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that your code – whether it's deployed in a PaaS worker role or an IaaS virtual machine – executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there is a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as shown in Figure 4. Each node has one special Host VM, also known as Root VM, which runs the Host OS – a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
+Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that your code – whether it's deployed in a PaaS worker role or an IaaS virtual machine – executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there's a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as shown in Figure 4. Each node has one special Host VM, also known as Root VM, which runs the Host OS – a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
:::image type="content" source="./media/secure-isolation-fig4.png" alt-text="Isolation of Hypervisor, Root VM, and Guest VMs":::

**Figure 4.** Isolation of Hypervisor, Root VM, and Guest VMs
-Physical servers hosting VMs are grouped into clusters, and they're independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads, and it manages unidirectional communication from the Host to virtual machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
+Physical servers hosting VMs are grouped into clusters, and they're independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads, and it manages unidirectional communication from the Host to virtual machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
-The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines used by you and Azure cloud services. The Hypervisor/Host OS pairing uses decades of Microsoft's experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigation, and strong security assurance processes.
+The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines used by you and Azure cloud services. The Hypervisor/Host OS pairing uses decades of Microsoft's experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigations, and strong security assurance processes.
### Management network isolation

There are three Virtual Local Area Networks (VLANs) in each compute hardware cluster, as shown in Figure 5:
Communication is permitted from the FC VLAN to the main VLAN but can't be initia
Communication is also blocked from the main VLAN to the device VLAN. This way, even if a node running customer code is compromised, it can't attack nodes on either the FC or device VLANs.
-These controls ensure that the management consoles access to the Hypervisor is always valid and available.
+These controls ensure that management console's access to the Hypervisor is always valid and available.
:::image type="content" source="./media/secure-isolation-fig5.png" alt-text="VLAN isolation":::

**Figure 5.** VLAN isolation
-The Hypervisor and the Host OS provide network packet filters so untrusted VMs can't generate spoofed traffic or receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic. By default, traffic is blocked when a VM is created, and then the FC agent configures the packet filter to add rules and exceptions to allow authorized traffic. More detailed information about network traffic isolation and separation of tenant traffic is provided in *[Networking isolation](#networking-isolation)* section.
+The Hypervisor and the Host OS provide network packet filters, which help ensure that untrusted VMs can't generate spoofed traffic or receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic. By default, traffic is blocked when a VM is created, and then the FC agent configures the packet filter to add rules and exceptions to allow authorized traffic. More detailed information about network traffic isolation and separation of tenant traffic is provided in *[Networking isolation](#networking-isolation)* section.
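The default-deny model that the FC agent applies can be sketched as follows (the data structures and rule shape are illustrative, not the actual filter implementation): a new VM starts with everything blocked, and only explicit exceptions added by the agent permit traffic.

```python
from dataclasses import dataclass, field

@dataclass
class PacketFilter:
    """Toy default-deny filter: traffic is blocked unless a rule allows it."""
    allowed: set = field(default_factory=set)  # (src, dst) pairs

    def allow(self, src: str, dst: str) -> None:
        """FC agent adds an exception for authorized traffic."""
        self.allowed.add((src, dst))

    def permits(self, src: str, dst: str) -> bool:
        """Anything without an explicit allow rule is dropped."""
        return (src, dst) in self.allowed

pf = PacketFilter()                           # new VM: all traffic blocked
print(pf.permits("vm1", "vm2"))               # False
pf.allow("vm1", "storage-endpoint")           # authorized exception added later
print(pf.permits("vm1", "storage-endpoint"))  # True
```

The design choice to start from deny-all and add exceptions, rather than the reverse, is what prevents a newly created or compromised VM from generating spoofed traffic or reaching protected infrastructure endpoints by default.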
### Management console and management plane

The Azure Management Console and Management Plane follow strict security architecture principles of least privilege to secure and isolate tenant processing:
Figure 6 illustrates the management flow corresponding to a user command to stop
|**3.**|User issues &#8220;stop VM&#8221; request on Azure portal. Azure portal sends &#8220;stop VM&#8221; request to Azure Resource Manager and presents the user's token that was provided by Azure AD. Azure Resource Manager verifies the token using the token signature and valid signing keys and that the user is authorized to perform the requested operation.|JSON Web Token (Azure AD)|TLS 1.2|
|**4.**|Azure Resource Manager requests a token from the dSTS server based on the client certificate that Azure Resource Manager has, enabling dSTS to grant a JSON Web Token with the correct identity and roles.|Client Certificate|TLS 1.2|
|**5.**|Azure Resource Manager sends the request to CRP. The call is authenticated via OAuth using a JSON Web Token representing the Azure Resource Manager system identity from dSTS, thus transitioning from user to system context.|JSON Web Token (dSTS)|TLS 1.2|
-|**6.**|CRP validates the request and determines which fabric controller can complete the request. CRP requests a certificate from dSTS based on its client certificate so that it can connect to the specific Fabric Controller (FC) that is the target of the command. Token will grant permissions only to that specific FC if CRP is allowed to communicate to that FC.|Client Certificate|TLS 1.2|
+|**6.**|CRP validates the request and determines which fabric controller can complete the request. CRP requests a certificate from dSTS based on its client certificate so that it can connect to the specific Fabric Controller (FC) that is the target of the command. Token will grant permissions only to that specific FC if CRP is allowed to communicate to that FC.|Client Certificate|TLS 1.2|
|**7.**|CRP then sends the request to the correct FC with the JSON Web Token that was created by dSTS.|JSON Web Token (dSTS)|TLS 1.2|
-|**8.**|FC then validates the command is allowed and comes from a trusted source. Then it establishes a secure TLS connection to the correct Fabric Agent (FA) in the cluster that can execute the command by using a certificate that is unique to the target FA and the FC. Once the secure connection is established, the command is transmitted.|Mutual Certificate|TLS 1.2|
+|**8.**|FC then validates the command is allowed and comes from a trusted source. Then it establishes a secure TLS connection to the correct Fabric Agent (FA) in the cluster that can execute the command by using a certificate that is unique to the target FA and the FC. Once the secure connection is established, the command is transmitted.|Mutual Certificate|TLS 1.2|
|**9.**|The FA again validates the command is allowed and comes from a trusted source. Once validated, the FA will establish a secure connection using mutual certificate authentication and issue the command to the Hypervisor Agent that is only accessible by the FA.|Mutual Certificate|TLS 1.2|
|**10.**|Hypervisor Agent on the host executes an internal call to stop the VM.|System Context|N.A.|
-Commands generated through all steps of the process identified in this section and sent to the FC and FA on each node, are written to a local audit log and distributed to multiple analytics systems for stream processing in order to monitor system health and track security events and patterns. Tracking includes events that were processed successfully and events that were invalid. Invalid requests are processed by the intrusion detection systems to detect anomalies.
+Commands generated through all steps of the process identified in this section and sent to the FC and FA on each node, are written to a local audit log, and distributed to multiple analytics systems for stream processing in order to monitor system health and track security events and patterns. Tracking includes events that were processed successfully and events that were invalid. Invalid requests are processed by the intrusion detection systems to detect anomalies.
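The tracking described above can be sketched as a simple stream split: every command event is written to the audit log, and invalid events are additionally routed toward anomaly detection (the event shape and routing below are hypothetical, for illustration only).

```python
# Toy sketch of audit-event routing: all events are logged, invalid ones
# also go to the intrusion detection queue. Event fields are illustrative.
audit_log = []
anomaly_queue = []

def process(event: dict) -> None:
    """Log every command; route invalid ones to intrusion detection."""
    audit_log.append(event)
    if not event.get("valid", False):
        anomaly_queue.append(event)

for ev in [{"cmd": "stop-vm", "valid": True},
           {"cmd": "stop-vm", "valid": False}]:  # e.g. failed token validation
    process(ev)

print(len(audit_log), len(anomaly_queue))  # 2 1
```

The key property mirrored here is that valid and invalid requests alike are retained for stream processing, so anomaly detection sees the full pattern of activity rather than only failures.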
### Logical isolation implementation options

Azure provides isolation of compute processing through a multi-layered approach, including:

- **Hypervisor isolation** for services that provide cryptographically certain isolation by using separate virtual machines and using Azure Hypervisor isolation. Examples: *App Service, Azure Container Instances, Azure Databricks, Azure Functions, Azure Kubernetes Service, Azure Machine Learning, Cloud Services, Data Factory, Service Fabric, Virtual Machines, Virtual Machine Scale Sets.*
-- **Drawbridge isolation** inside a VM for services that provide cryptographically certain isolation to workloads running on the same virtual machine by using isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (library OS) inside a *pico-process*. A pico-process is a secured process with no direct access to services or resources of the Host system. Examples: *Automation, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL Database, Azure Stream Analytics.*
+- **Drawbridge isolation** inside a VM for services that provide cryptographically certain isolation to workloads running on the same virtual machine by using isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (library OS) inside a *pico-process*. A pico-process is a secured process with no direct access to services or resources of the Host system. Examples: *Automation, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL Database, Azure Stream Analytics.*
- **User context-based isolation** for services that are composed solely of Microsoft-controlled code and customer code isn't allowed to run. Examples: *API Management, Application Gateway, Azure Active Directory, Azure Backup, Azure Cache for Redis, Azure DNS, Azure Information Protection, Azure IoT Hub, Azure Key Vault, Azure portal, Azure Monitor (including Log Analytics), Microsoft Defender for Cloud, Azure Site Recovery, Container Registry, Content Delivery Network, Event Grid, Event Hubs, Load Balancer, Service Bus, Storage, Virtual Network, VPN Gateway, Traffic Manager.*

These logical isolation options are discussed in the rest of this section.
The Target of Evaluation (TOE) was composed of Microsoft Windows Server, Microso
More information is available from the [third-party certification report](https://www.niap-ccevs.org/MMO/Product/st_vid11087-vr.pdf). The critical Hypervisor isolation is provided through:

- Strongly defined security boundaries enforced by the Hypervisor
- Defense-in-depth exploit mitigations
- Strong security assurance processes
The Azure Hypervisor acts like a micro-kernel, passing all hardware access reque
- **Emulated devices** – The Host OS may expose a virtual device with an interface identical to what would be provided by a corresponding physical device. In this case, an operating system in a Guest partition would use the same device drivers as it does when running on a physical system. The Host OS would emulate the behavior of a physical device to the Guest partition.
- **Para-virtualized devices** – The Host OS may expose virtual devices with a virtualization-specific interface using the VMBus shared memory interface between the Host OS and the Guest. In this model, the Guest partition uses device drivers specifically designed to implement a virtualized interface. These para-virtualized devices are sometimes referred to as &#8220;synthetic&#8221; devices.
-- **Hardware-accelerated devices** – The Host OS may expose actual hardware peripherals directly to the Guest partition. This model allows for high I/O performance in a Guest partition, as the Guest partition can directly access hardware device resources without going through the Host OS. [Azure Accelerated Networking](../virtual-network/accelerated-networking-overview.md) is an example of a hardware accelerated device. Isolation in this model is achieved using input-output memory management units (I/O MMUs) to provide address space and interrupt isolation between each partition.
+- **Hardware-accelerated devices** – The Host OS may expose actual hardware peripherals directly to the Guest partition. This model allows for high I/O performance in a Guest partition, as the Guest partition can directly access hardware device resources without going through the Host OS. [Azure Accelerated Networking](../virtual-network/accelerated-networking-overview.md) is an example of a hardware accelerated device. Isolation in this model is achieved using input-output memory management units (I/O MMUs) to provide address space and interrupt isolation for each partition.
Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce isolation between partitions. The following fundamental CPU capabilities provide the hardware building blocks for Hypervisor isolation:
Listed below are some key design principles adopted by Microsoft to secure Hyper
- Every change going into Hyper-V is subject to design review.
- Eliminate common vulnerability classes with safer coding
  - Some components such as the VMSwitch use a formally proven protocol parser.
- - Many components use `gsl::span` instead of raw pointers, which eliminates the possibility of buffer overflows and/or out-of-bounds memory accesses. For more information, see the [Guidelines Support Library (GSL)](https://github.com/isocpp/CppCoreGuidelines/blob/master/docs/gsl-intro.md) documentation.
+ - Many components use `gsl::span` instead of raw pointers, which eliminates the possibility of buffer overflows and/or out-of-bounds memory accesses. For more information, see the [Guidelines Support Library (GSL)](https://github.com/isocpp/CppCoreGuidelines/blob/master/docs/gsl-intro.md) documentation.
- Many components use [smart pointers](/cpp/cpp/smart-pointers-modern-cpp) to eliminate the risk of [use-after-free](https://owasp.org/www-community/vulnerabilities/Using_freed_memory) bugs. - Most Hyper-V kernel-mode code uses a heap allocator that zeros on allocation to eliminate uninitialized memory bugs. - Eliminate common vulnerability classes with compiler mitigations
Listed below are some key design principles adopted by Microsoft to secure Hyper
- [Code Integrity Guard](https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) ΓÇô Only Microsoft signed code can be loaded in the VM Worker Process. - [Control Flow Guard (CFG)](/windows/win32/secbp/control-flow-guard) ΓÇô Provides course grained control flow protection to indirect calls and jumps. - NoChildProcess ΓÇô The worker process can't create child processes (useful for bypassing CFG).
- - NoLowImages / NoRemoteImages ΓÇô The worker process can't load DLLΓÇÖs over the network or DLLΓÇÖs that were written to disk by a sandboxed process.
+ - NoLowImages / NoRemoteImages ΓÇô The worker process can't load DLLs over the network or DLLs that were written to disk by a sandboxed process.
- NoWin32k ΓÇô The worker process can't communicate with Win32k, which makes sandbox escapes more difficult. - Heap randomization ΓÇô Windows ships with one of the most secure heap implementations of any operating system. - [Address Space Layout Randomization (ASLR)](https://en.wikipedia.org/wiki/Address_space_layout_randomization) ΓÇô Randomizes the layout of heaps, stacks, binaries, and other data structures in the address space to make exploitation less reliable.
Microsoft investments in Hyper-V security benefit Azure Hypervisor directly.
Moreover, Azure has adopted an assume-breach security strategy implemented via [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf). This approach relies on a dedicated team of security researchers and engineers who conduct continuous ongoing testing of Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the Azure infrastructure and platform engineering or operations teams. This approach tests security detection and response capabilities and helps identify production vulnerabilities in Azure Hypervisor and other systems, including configuration errors, invalid assumptions, or other security issues in a controlled manner. Microsoft invests heavily in these innovative security measures for continuous Azure threat mitigation.

##### *Strong security assurance processes*
The attack surface in Hyper-V is [well understood](https://msrc-blog.microsoft.com/2018/12/10/first-steps-in-hyper-v-research/). It has been the subject of [ongoing research](https://msrc-blog.microsoft.com/2019/09/11/attacking-the-vm-worker-process/) and thorough security reviews. Microsoft has been transparent about the Hyper-V attack surface and underlying security architecture as demonstrated during a public [presentation at a Black Hat conference](https://github.com/Microsoft/MSRC-Security-Research/blob/master/presentations/2018_08_BlackHatUSA/A%20Dive%20in%20to%20Hyper-V%20Architecture%20and%20Vulnerabilities.pdf) in 2018. Microsoft stands behind the robustness and quality of Hyper-V isolation with a [$250,000 bug bounty program](https://www.microsoft.com/msrc/bounty-hyper-v) for critical Remote Code Execution (RCE), information disclosure, and Denial of Service (DOS) vulnerabilities reported in Hyper-V. By using the same Hyper-V technology in Windows Server and Azure cloud platform, the publicly available documentation and bug bounty program ensure that security improvements will accrue to all users of Microsoft products and services. Table 4 summarizes the key attack surface points from the Black Hat presentation.
**Table 4.** Hyper-V attack surface details
|Attack surface|Potential security impact|Examples|
|---|---|---|
|**Host partition kernel-mode components**|System in kernel mode: full system compromise with the ability to compromise other Guests|- Virtual Infrastructure Driver (VID) intercept handling </br>- Kernel-mode client library </br>- Virtual Machine Bus (VMBus) channel messages </br>- Storage Virtualization Service Provider (VSP) </br>- Network VSP </br>- Virtual Hard Disk (VHD) parser </br>- Azure Networking Virtual Filtering Platform (VFP) and Virtual Network (VNet)|
|**Host partition user-mode components**|Worker process in user mode: limited compromise with ability to attack Host and elevate privileges|- Virtual devices (VDEVs)|
To protect these attack surfaces, Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee. As described in the *[Security assurance processes and practices](#security-assurance-processes-and-practices)* section later in this article, the approach includes purpose-built fuzzing, penetration testing, security development lifecycle, mandatory security training, security reviews, security intrusion detection based on Guest-Host threat indicators, and automated build alerting of changes to the attack surface area. This mature multi-dimensional assurance process helps augment the isolation guarantees provided by the Azure Hypervisor by mitigating the risk of security vulnerabilities.
> [!NOTE]
> Azure has adopted an industry-leading approach to ensure Hypervisor-based tenant separation that has been strengthened and improved over two decades of Microsoft investments in Hyper-V technology for virtual machine isolation. The outcome of this approach is a robust Hypervisor that helps ensure tenant separation via 1) strongly defined security boundaries, 2) defense-in-depth exploit mitigations, and 3) strong security assurance processes.

#### Drawbridge isolation
For services that provide small units of processing using customer code, requests from multiple tenants are executed within a single VM and isolated using Microsoft [Drawbridge](https://www.microsoft.com/research/project/drawbridge/) technology. To provide security isolation, Drawbridge runs a user process together with a lightweight version of the Windows kernel (Library OS) inside a *pico-process*. A pico-process is a lightweight, secure isolation container with minimal kernel API surface and no direct access to services or resources of the Host system. The only external calls the pico-process can make are to the Drawbridge Security Monitor through the Drawbridge Application Binary Interface (ABI), as shown in Figure 8.
:::image type="content" source="./media/secure-isolation-fig8.png" alt-text="Process isolation using Drawbridge":::

**Figure 8.** Process isolation using Drawbridge
The ABI is implemented within two components:
- The Platform Adaptation Layer (PAL) runs as part of the pico-process.
- The host implementation runs as part of the Host.
Pico-processes are grouped into isolation units called *sandboxes*. The sandbox defines the applications, file system, and external resources available to the pico-processes. When a process running inside a pico-process creates a new child process, it's run with its own Library OS in a separate pico-process inside the same sandbox. Each sandbox communicates to the Security Monitor, and isn't able to communicate with other sandboxes except via allowed I/O channels (sockets, named pipes, and so on), which need to be explicitly allowed by the configuration given the default opt-in approach depending on service needs. The outcome is that code running inside a pico-process can only access its own resources and can't directly attack the Host system or any colocated sandboxes. It's only able to affect objects inside its own sandbox.
When the pico-process needs system resources, it must call into the Drawbridge host to request them. The normal path for a virtual user process would be to call the Library OS to request resources, and the Library OS would then call into the ABI. Unless the policy for resource allocation is set up in the driver itself, the Security Monitor would handle the ABI request by checking policy to see if the request is allowed and then servicing the request. This mechanism is used for all system primitives, thereby ensuring that the code running in the pico-process can't abuse the resources of the Host machine.
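The policy-gated request path described above can be sketched as follows. The class and method names below are hypothetical illustrations, not the actual Drawbridge ABI; the sketch only models the rule that every resource request from a pico-process is checked against sandbox policy before the host services it:

```python
# Hypothetical sketch only: names are illustrative, not the Drawbridge API.
# It models the flow above: every resource request crossing the ABI is
# checked against the sandbox's policy before being serviced.

class SandboxPolicy:
    """Per-sandbox allow-list of resources the pico-process may request."""
    def __init__(self, allowed_resources):
        self.allowed = set(allowed_resources)

    def permits(self, resource):
        return resource in self.allowed

class SecurityMonitor:
    """Hypothetical stand-in for the Drawbridge Security Monitor."""
    def __init__(self, policy):
        self.policy = policy

    def handle_abi_call(self, resource):
        # Check policy first; only then service the request.
        if not self.policy.permits(resource):
            raise PermissionError(f"sandbox policy denies access to {resource!r}")
        return f"handle:{resource}"

monitor = SecurityMonitor(SandboxPolicy({"socket:127.0.0.1:8080", "file:/sandbox/data"}))
print(monitor.handle_abi_call("file:/sandbox/data"))  # allowed: returns a handle
try:
    monitor.handle_abi_call("file:/etc/passwd")       # outside the sandbox: denied
except PermissionError as e:
    print("denied:", e)
```

The key property is that the host, not the sandboxed code, owns the policy check: anything not explicitly allowed is refused.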
In addition to being isolated inside sandboxes, pico-processes are also substantially isolated from each other. Each pico-process resides in its own virtual memory address space and runs its own copy of the Library OS with its own user-mode kernel. Each time a user process is launched in a Drawbridge sandbox, a fresh Library OS instance is booted. While this task is more time-consuming compared to launching a non-isolated process on Windows, it's substantially faster than booting a VM while accomplishing logical isolation.
A normal Windows process can call more than 1200 functions that result in access to the Windows kernel; however, the entire interface for a pico-process consists of fewer than 50 calls down to the Host. Most application requests for operating system services are handled by the Library OS within the address space of the pico-process. By providing a significantly smaller interface to the kernel, Drawbridge creates a more secure and isolated operating environment in which applications are much less vulnerable to changes in the Host system and incompatibilities introduced by new OS releases. More importantly, a Drawbridge pico-process is a strongly isolated container within which untrusted code from even the most malicious sources can be run without risk of compromising the Host system. The Host assumes that no code running within the pico-process can be trusted. The Host validates all requests from the pico-process with security checks.
Like a virtual machine, the pico-process is much easier to secure than a traditional OS interface because it's significantly smaller, stateless, and has fixed and easily described semantics. Another added benefit of the small ABI / driver syscall interface is the ability to audit / fuzz the driver code with little effort. For example, syscall fuzzers can fuzz the ABI with high coverage numbers in a relatively short amount of time.
#### User context-based isolation

In cases where an Azure service is composed of Microsoft-controlled code and customer code isn't allowed to run, the isolation is provided by a user context. These services accept only user configuration inputs and data for processing – arbitrary code isn't allowed. For these services, a user context is provided to establish the data that can be accessed and what Azure role-based access control (Azure RBAC) operations are allowed. This context is established by Azure Active Directory (Azure AD) as described earlier in the *[Identity-based isolation](#identity-based-isolation)* section. Once the user has been identified and authorized, the Azure service creates an application user context that is attached to the request as it moves through execution, providing assurance that user operations are separated and properly isolated.
In addition to robust logical compute isolation available by design to all Azure tenants, if you desire physical compute isolation you can use Azure Dedicated Host or Isolated Virtual Machines, which are both dedicated to a single customer.

#### Azure Dedicated Host
[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place [Windows](../virtual-machines/windows/overview.md), [Linux](../virtual-machines/linux/overview.md), and [SQL Server on Azure](/azure/azure-sql/virtual-machines/) VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organization's workloads to meet corporate compliance requirements.
> [!NOTE]
> You can deploy a dedicated host using the **[Azure portal, Azure PowerShell, and Azure CLI](../virtual-machines/dedicated-hosts-how-to.md)**.
You can deploy both Windows and Linux virtual machines into dedicated hosts by selecting the server and CPU type, number of cores, and extra features. Dedicated Host enables control over platform maintenance events by allowing you to opt in to a maintenance window to reduce potential impact to your provisioned services. Most maintenance events have little to no impact on your VMs; however, if you're in a highly regulated industry or have a sensitive workload, you may want to have control over any potential maintenance impact.

> [!NOTE]
> Microsoft provides detailed customer guidance on **[Windows](../virtual-machines/windows/quick-create-portal.md)** and **[Linux](../virtual-machines/linux/quick-create-portal.md)** Azure Virtual Machine provisioning using the Azure portal, Azure PowerShell, and Azure CLI.
Table 5 summarizes the available security guidance for your virtual machines provisioned in Azure.
**Table 5.** Security guidance for Azure virtual machines
|**Linux**|[Secure policies](../virtual-machines/security-policy.md) <br/> [Azure Disk Encryption](../virtual-machines/linux/disk-encryption-overview.md) <br/> [Built-in security controls](../virtual-machines/linux/security-baseline.md) <br/> [Security recommendations](../virtual-machines/security-recommendations.md)|

#### Isolated Virtual Machines
Azure Compute offers virtual machine sizes that are [isolated to a specific hardware type](../virtual-machines/isolation.md) and dedicated to a single customer. These VM instances allow your workloads to be deployed on dedicated physical servers. Using Isolated VMs essentially guarantees that your VM will be the only one running on that specific server node. You can also choose to further subdivide the resources on these Isolated VMs by using [Azure support for nested Virtual Machines](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization).
## Networking isolation

The logical isolation of tenant infrastructure in a public multi-tenant cloud is [fundamental to maintaining security](https://azure.microsoft.com/resources/azure-network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet can't communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements.
Azure provides network isolation for each deployment and enforces the following:
- When you put a VM on a VNet, that VM gets its own address space that is invisible, and hence, not reachable from VMs outside of a deployment or VNet (unless configured to be visible via public IP addresses). Your environment is open only through the ports that you specify for public access; if the VM is defined to have a public IP address, then all ports are open for public access.

#### Packet flow and network path protection
Azure's hyperscale network is designed to provide:

- Uniform high capacity between servers.
- Performance isolation between services, including customers.
- Ethernet Layer-2 semantics.

Azure uses several networking implementations to achieve these goals:

- Flat addressing to allow service instances to be placed anywhere in the network.
- Load balancing to spread traffic uniformly across network paths.
- End-system based address resolution to scale to large server pools, without introducing complexity to the network control plane.
These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch – a Virtual Layer 2 (VL2) – and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that it isn't possible for the traffic of one service to be affected by the traffic of any other service, as if each service were connected by a separate physical switch.
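One of the goals above, load balancing to spread traffic uniformly across network paths, is commonly realized with flow hashing (for example, ECMP-style hashing). The sketch below is a generic illustration of that technique, not Azure's actual implementation; the 8-path count and hash inputs are arbitrary examples:

```python
# Generic sketch of ECMP-style flow hashing, a common way to spread traffic
# uniformly across equal-cost network paths. Not Azure's implementation.
import hashlib

def pick_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Hash the flow 5-tuple: packets of one flow always take the same path
    (preserving in-order delivery), while different flows spread evenly."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_paths

# Same flow -> same path, deterministically.
assert pick_path("10.0.0.4", "10.0.1.5", 51000, 443, "tcp", 8) == \
       pick_path("10.0.0.4", "10.0.1.5", 51000, 443, "tcp", 8)

# Many flows (varying source port) spread across the 8 available paths.
paths = [pick_path("10.0.0.4", "10.0.1.5", p, 443, "tcp", 8)
         for p in range(49152, 50152)]
print(sorted((path, paths.count(path)) for path in set(paths)))
```

Because the hash is keyed on the flow, no single flow is split across paths, yet the aggregate load stays close to uniform.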
This section explains how packets flow through the Azure network, and how the topology, routing design, and directory system combine to virtualize the underlying network fabric, creating the illusion that servers are connected to a large, non-interfering datacenter-wide Layer-2 switch.
The Azure network uses [two different IP-address families](/windows-server/networking/sdn/technologies/hyper-v-network-virtualization/hyperv-network-virtualization-technical-details-windows-server#packet-encapsulation):

- **Customer address (CA)** is the customer defined/chosen VNet IP address, also referred to as Virtual IP (VIP). The network infrastructure operates using CAs, which are externally routable. All switches and interfaces are assigned CAs, and switches run an IP-based (Layer-3) link-state routing protocol that disseminates only these CAs. This design allows switches to obtain the complete switch-level topology, and forward packets encapsulated with CAs along shortest paths.
- **Provider address (PA)** is the Azure assigned internal fabric address that isn't visible to users and is also referred to as Dynamic IP (DIP). No traffic goes directly from the Internet to a server; all traffic from the Internet must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports. Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses [RFC 1918](https://datatracker.ietf.org/doc/rfc1918/) address space or private address space – the provider addresses (PAs) – that isn't externally routable. The translation is performed at the SLBs. Customer addresses (CAs) that are externally routable are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their servers' locations change due to virtual-machine migration or reprovisioning.
Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs, and this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory system's resolution service to learn the actual location of the destination and then tunnels the original packet there.
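The resolve-then-tunnel flow described above can be sketched as a simple mapping lookup. All names and addresses below are hypothetical illustrations, not the actual VL2 agent or directory service code:

```python
# Hypothetical sketch of the resolve-then-tunnel step: the directory system
# stores the mapping from a destination server's PA to the CA of the ToR
# switch it sits behind, and the agent wraps the original packet in an outer
# header addressed to that ToR switch. Addresses are illustrative only.

directory = {
    # destination PA : CA of the ToR switch it sits behind
    "10.1.0.5": "192.0.2.10",
    "10.1.0.6": "192.0.2.11",
}

def tunnel(packet, dst_pa):
    """Resolve the destination's location via the directory, then
    encapsulate the packet for delivery to that tunnel endpoint."""
    tor_ca = directory[dst_pa]               # directory resolution
    return {"outer_dst": tor_ca, "inner": packet}

frame = tunnel({"dst": "10.1.0.5", "payload": b"hello"}, "10.1.0.5")
print(frame["outer_dst"])  # 192.0.2.10
```

Because only the directory holds the PA-to-CA mapping, servers can move (migration, reprovisioning) by updating one entry, without touching the switch-level routing.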
To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent at each server traps outgoing packets and encapsulates them with the CA address of the ToR switch of the destination.
:::image type="content" source="./media/secure-isolation-fig10.png" alt-text="Separation of tenant network traffic using VNets":::

**Figure 10.** Separation of tenant network traffic using VNets
**External traffic (orange line)** – For external traffic, Azure provides multiple layers of assurance to enforce isolation depending on traffic patterns. When you place a public IP on your VNet gateway, traffic from the public Internet or your on-premises network that is destined for that IP address will be routed to an Internet Edge Router. Alternatively, when you establish private peering over an ExpressRoute connection, it's connected with an Azure VNet via VNet Gateway. This set-up aligns connectivity from the physical circuit and makes the private IP address space from the on-premises location addressable. Azure then uses Border Gateway Protocol (BGP) to share routing details with the on-premises network to establish end-to-end connectivity. When communication begins with a resource within the VNet, the network traffic traverses as normal until it reaches a Microsoft ExpressRoute Edge (MSEE) Router. In both cases, VNets provide the means for Azure VMs to act as part of your on-premises network. A cryptographically protected [IPsec/IKE tunnel](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) is established between Azure and your internal network (for example, via [Azure VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md) or [Azure ExpressRoute Private Peering](../virtual-wan/vpn-over-expressroute.md)), enabling the VM to connect securely to your on-premises resources as though it was directly on that network.
-At the Internet Edge Router or the MSEE Router, the packet is encapsulated using Generic Routing Encapsulation (GRE). This encapsulation uses a unique identifier specific to the VNet destination and the destination address, which is used to appropriately route the traffic to the identified VNet. Upon reaching the VNet Gateway, which is a special VNet used only to accept traffic from outside of an Azure VNet, the encapsulation is verified by the Azure network fabric to ensure: a) the endpoint receiving the packet is a match to the unique VNet ID used to route the data, and b) the destination address requested exists in this VNet. Once verified, the packet is routed as internal traffic from the VNet Gateway to the final requested destination address within the VNet. This approach ensures that traffic from external networks travels only to Azure VNet for which it is destined, enforcing isolation.
+At the Internet Edge Router or the MSEE Router, the packet is encapsulated using Generic Routing Encapsulation (GRE). This encapsulation uses a unique identifier specific to the VNet destination and the destination address, which is used to appropriately route the traffic to the identified VNet. Upon reaching the VNet Gateway, which is a special VNet used only to accept traffic from outside of an Azure VNet, the encapsulation is verified by the Azure network fabric to ensure: a) the endpoint receiving the packet is a match to the unique VNet ID used to route the data, and b) the destination address requested exists in this VNet. Once verified, the packet is routed as internal traffic from the VNet Gateway to the final requested destination address within the VNet. This approach ensures that traffic from external networks travels only to the Azure VNet for which it's destined, enforcing isolation.
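The key-based GRE routing described above can be sketched in a few lines. This is an illustrative model only, not Azure's implementation: it packs a minimal GRE header using the Key extension (RFC 2890), carries a hypothetical VNet identifier in the Key field, and drops any frame whose key doesn't match the destination VNet.

```python
import struct

GRE_KEY_PRESENT = 0x2000          # K bit in the GRE flags field (RFC 2890)
PROTO_IPV4 = 0x0800               # EtherType of the encapsulated payload

def gre_encapsulate(vnet_id: int, inner_packet: bytes) -> bytes:
    """Wrap an inner packet in a GRE header whose Key field carries the VNet ID."""
    header = struct.pack("!HHI", GRE_KEY_PRESENT, PROTO_IPV4, vnet_id)
    return header + inner_packet

def gre_verify(expected_vnet_id: int, frame: bytes):
    """Return the inner packet only if the GRE key matches the expected VNet ID."""
    flags, proto, key = struct.unpack("!HHI", frame[:8])
    if not flags & GRE_KEY_PRESENT or key != expected_vnet_id:
        return None               # mismatched or missing key -> dropped traffic
    return frame[8:]
```

Because only the network fabric can write valid keys, a mismatched or missing key yields dropped traffic, which is the isolation property the text describes.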
-**Internal traffic (blue line)** – Internal traffic also uses GRE encapsulation/tunneling. When two resources in an Azure VNet attempt to establish communications between each other, the Azure network fabric reaches out to the Azure VNet routing directory service that is part of the Azure network fabric. The directory services use the customer address (CA) and the requested destination address to determine the provider address (PA). This information, including the VNet identifier, CA, and PA, is then used to encapsulate the traffic with GRE. The Azure network uses this information to properly route the encapsulated data to the appropriate Azure host using the PA. The encapsulation is reviewed by the Azure network fabric to confirm: (1) the PA is a match, (2) the CA is located at this PA, and (3) the VNet identifier is a match. Once all three are verified, the encapsulation is removed and routed to the CA as normal traffic (for example, to a VM endpoint). This approach provides VNet isolation assurance based on correct traffic routing between cloud services.
+**Internal traffic (blue line)** – Internal traffic also uses GRE encapsulation/tunneling. When two resources in an Azure VNet attempt to establish communications between each other, the Azure network fabric reaches out to the Azure VNet routing directory service that is part of the Azure network fabric. The directory services use the customer address (CA) and the requested destination address to determine the provider address (PA). This information, including the VNet identifier, CA, and PA, is then used to encapsulate the traffic with GRE. The Azure network uses this information to properly route the encapsulated data to the appropriate Azure host using the PA. The encapsulation is reviewed by the Azure network fabric to confirm:
+
+1. The PA is a match,
+2. The CA is located at this PA, and
+3. The VNet identifier is a match.
+
+Once all three are verified, the encapsulation is removed and the packet is routed to the CA as normal traffic, for example, to a VM endpoint. This approach provides VNet isolation assurance based on correct traffic routing between cloud services.
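As a rough sketch of the directory lookup and the three checks above (all names and addresses here are hypothetical — the real routing directory service is internal to the Azure fabric):

```python
# Toy model of the VNet routing directory: (vnet_id, customer_address) -> provider_address.
DIRECTORY = {
    ("vnet-a", "10.0.0.4"): "100.64.1.7",   # invented addresses for illustration
    ("vnet-a", "10.0.0.5"): "100.64.2.9",
    ("vnet-b", "10.0.0.4"): "100.64.3.1",   # overlapping CA space, different VNet
}

def encapsulate(vnet_id, ca_dst, payload):
    """Look up the PA for a destination CA and build the encapsulation metadata."""
    pa = DIRECTORY[(vnet_id, ca_dst)]
    return {"vnet": vnet_id, "ca": ca_dst, "pa": pa, "data": payload}

def receive(host_pa, hosted, packet):
    """Apply the fabric's three checks before decapsulating; drop on any mismatch."""
    if packet["pa"] != host_pa:                       # (1) the PA is a match
        return None
    if (packet["vnet"], packet["ca"]) not in hosted:  # (2) the CA lives at this PA
        return None                                   # (3) and the VNet ID matches
    return packet["data"]
```

A packet that reaches the wrong host, or carries the wrong VNet identifier, fails one of the checks and is dropped rather than decapsulated.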
Azure VNets implement several mechanisms to ensure secure traffic between tenants. These mechanisms align to existing industry standards and security practices, and prevent well-known attack vectors including:

- **Prevent IP address spoofing** – Whenever encapsulated traffic is transmitted by a VNet, the service reverifies the information on the receiving end of the transmission. The traffic is looked up and encapsulated independently at the start of the transmission, and reverified at the receiving endpoint to ensure the transmission was performed appropriately. This verification is done with an internal VNet feature called SpoofGuard, which verifies that the source and destination are valid and allowed to communicate, thereby preventing mismatches in expected encapsulation patterns that might otherwise permit spoofing. The GRE encapsulation processes prevent spoofing, as any GRE encapsulation and encryption not done by the Azure network fabric is treated as dropped traffic.
-- **Provide network segmentation across customers with overlapping network spaces** – Azure VNet's implementation relies on established tunneling standards such as the GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that you're always operating within your unique address space, overlapping address spaces between tenants, and the Azure network fabric. Anything that hasn't been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described above, any encapsulated traffic not performed by the Azure network fabric is discarded.
+- **Provide network segmentation across customers with overlapping network spaces** – Azure VNet's implementation relies on established tunneling standards such as GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that you're always operating within your unique address space, overlapping address spaces between tenants and the Azure network fabric. Anything that hasn't been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described previously, any encapsulated traffic not performed by the Azure network fabric is discarded.
- **Prevent traffic from crossing between VNets** – Preventing traffic from crossing between VNets is done through the same mechanisms that handle address overlap and prevent spoofing. Traffic crossing between VNets is rendered infeasible by using unique VNet IDs established per tenant in combination with verification of all traffic at the source and destination. Users don't have access to the underlying transmission mechanisms that rely on these IDs to perform the encapsulation. Therefore, any attempt to encapsulate and simulate these mechanisms would lead to dropped traffic.

In addition to these key protections, all unexpected traffic originating from the Internet is dropped by default. Any packet entering the Azure network will first encounter an Edge router. Edge routers intentionally allow all inbound traffic into the Azure network except spoofed traffic. This basic traffic filtering protects the Azure network from known bad malicious traffic. Azure also implements DDoS protection at the network layer, collecting logs to throttle or block traffic based on real-time and historical data analysis, and mitigates attacks on demand.
Azure provides many options for [encrypting data in transit](../security/fundame
- Microsoft datacenters as part of expected Azure service operation

#### End user's connection to Azure service
-**Transport Layer Security (TLS)** – Azure uses the TLS protocol to help protect data when it is traveling between your end users and Azure services. Most of your end users will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft doesn't control or limit the regions from which you or your end users may access or move customer data.
+**Transport Layer Security (TLS)** – Azure uses the TLS protocol to help protect data when it's traveling between your end users and Azure services. Most of your end users will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Products and Services [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft doesn't control or limit the regions from which you or your end users may access or move customer data.
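On the client side, you can enforce a TLS floor yourself when your code calls Azure endpoints. A minimal sketch using Python's standard `ssl` module (your own policy may require TLS 1.3 instead):

```python
import ssl

# The default context already verifies the server certificate and hostname;
# additionally refuse anything older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# A client would then wrap its socket with this context, for example:
# with socket.create_connection((host, 443)) as s:
#     with ctx.wrap_socket(s, server_hostname=host) as tls:
#         ...
```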
> [!IMPORTANT]
> You can increase security by enabling encryption in transit. For example, you can use **[Application Gateway](../application-gateway/ssl-overview.md)** to configure **[end-to-end encryption](../application-gateway/application-gateway-end-to-end-ssl-powershell.md)** of network traffic and rely on **[Key Vault integration](../application-gateway/key-vault-certs.md)** for TLS termination.
TLS provides strong authentication, message privacy, and integrity. [Perfect For
> [!IMPORTANT]
> You should review best practices for network security, including guidance for **[disabling RDP/SSH access to Virtual Machines](../security/fundamentals/network-best-practices.md#disable-rdpssh-access-to-virtual-machines)** from the Internet to mitigate brute force attacks to gain access to Azure Virtual Machines. Accessing VMs for remote management can then be accomplished via **[point-to-site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)**, **[site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md)**, or **[Azure ExpressRoute](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)**.
+**Azure Storage transactions** ΓÇô When interacting with Azure Storage through the Azure portal, all transactions take place over HTTPS. Moreover, you can configure your storage accounts to accept requests only from secure connections by setting the &#8220;[secure transfer required](../storage/common/storage-require-secure-transfer.md)&#8221; property for the storage account. The &#8220;secure transfer required&#8221; option is enabled by default when creating a Storage account in the Azure portal.
-[Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry-standard [Server Message Block](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) (SMB) protocol. By default, all Azure storage accounts [have encryption in transit enabled](../storage/files/storage-files-planning.md#encryption-in-transit). Therefore, when mounting a share over SMB or accessing it through the Azure portal (or PowerShell, CLI, and Azure SDKs), Azure Files will only allow the connection if it is made with SMB 3.0+ with encryption or over HTTPS.
+[Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry-standard [Server Message Block](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) (SMB) protocol. By default, all Azure storage accounts [have encryption in transit enabled](../storage/files/storage-files-planning.md#encryption-in-transit). Therefore, when mounting a share over SMB or accessing it through the Azure portal (or Azure PowerShell, Azure CLI, and Azure SDKs), Azure Files will only allow the connection if it's made with SMB 3.0+ with encryption or over HTTPS.
#### Datacenter connection to Azure region

**VPN encryption** – [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure Virtual Machines (VMs) to act as part of your internal (on-premises) network. With VNet, you choose the address ranges of non-globally-routable IP addresses to be assigned to the VMs so that they won't collide with addresses you're using elsewhere. You have options to securely connect to a VNet from your on-premises infrastructure or remote locations.
TLS provides strong authentication, message privacy, and integrity. [Perfect For
In addition to controlling the type of algorithm that is supported for VPN connections, Azure provides you with the ability to enforce that all traffic leaving a VNet may only be routed through a VNet Gateway (for example, Azure VPN Gateway). This enforcement allows you to ensure that traffic may not leave a VNet without being encrypted. A VPN Gateway can be used for [VNet-to-VNet](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses [Pre-Shared Key (PSK) authentication](../vpn-gateway/vpn-gateway-vpn-faq.md#how-does-my-vpn-tunnel-get-authenticated) whereby Microsoft generates the PSK when the VPN tunnel is created. You can change the autogenerated PSK to your own.
-**Azure ExpressRoute encryption** – [Azure ExpressRoute](../expressroute/expressroute-introduction.md) allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections don't go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet. You can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/) that enable you to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, that is, data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). You can use MACsec to encrypt the physical links between your network devices and Microsoft network devices when you connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering locations.
+**Azure ExpressRoute encryption** – [Azure ExpressRoute](../expressroute/expressroute-introduction.md) allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections don't go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it's guaranteed to traverse that private networking infrastructure instead of the public Internet. You can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/), which enables you to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, that is, data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). You can use MACsec to encrypt the physical links between your network devices and Microsoft network devices when you connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering locations.
You can enable IPsec in addition to MACsec on your ExpressRoute Direct ports, as shown in Figure 11. Using VPN Gateway, you can set up an [IPsec tunnel over Microsoft Peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md) of your ExpressRoute circuit between your on-premises network and your Azure VNet. MACsec secures the physical connection between your on-premises network and Microsoft. IPsec secures the end-to-end connection between your on-premises network and your VNets in Azure. MACsec and IPsec can be enabled independently.
Azure services such as Storage and SQL Database can be configured for geo-replic
Moreover, all Azure traffic traveling within a region or between regions is [encrypted by Microsoft using MACsec](../security/fundamentals/encryption-overview.md#data-link-layer-encryption-in-azure), which relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 250,000 km of lit fiber optic and undersea cable systems.

> [!IMPORTANT]
-> You should review Azure **[best practices](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit)** for the protection of data in transit to help ensure that all data in transit is encrypted. For key Azure PaaS storage services (for example, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics), data encryption in transit is **[enforced by default](/azure/azure-sql/database/security-overview#information-protection-and-encryption)**.
+> You should review Azure **[best practices for the protection of data in transit](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit)** to help ensure that all data in transit is encrypted. For key Azure PaaS storage services (for example, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics), data encryption in transit is **[enforced by default](/azure/azure-sql/database/security-overview#transport-layer-security-encryption-in-transit)**.
### Third-party network virtual appliances

Azure provides you with many features to help you achieve your security and isolation goals, including [Microsoft Defender for Cloud](../defender-for-cloud/defender-for-cloud-introduction.md), [Azure Monitor](../azure-monitor/overview.md), [Azure Firewall](../firewall/overview.md), [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md), [network security groups](../virtual-network/network-security-groups-overview.md), [Application Gateway](../application-gateway/overview.md), [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md), [Network Watcher](../network-watcher/network-watcher-monitoring-overview.md), [Microsoft Sentinel](../sentinel/overview.md), and [Azure Policy](../governance/policy/overview.md). In addition to the built-in capabilities that Azure provides, you can use third-party [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) to accommodate your specific network isolation requirements while at the same time applying existing in-house skills. Azure supports many appliances, including offerings from F5, Palo Alto Networks, Cisco, Check Point, Barracuda, Citrix, Fortinet, and many others. Network appliances support network functionality and services in the form of VMs in your virtual networks and deployments.
The cumulative effect of network isolation restrictions is that each cloud servi
> - **[Azure network security white paper](https://azure.microsoft.com/resources/azure-network-security/)**

## Storage isolation
-Microsoft Azure separates your VM-based computation resources from storage as part of its [fundamental design](../security/fundamentals/isolation-choices.md#storage-isolation). The separation allows computation and storage to scale independently, making it easier to provide multi-tenancy and isolation. So, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically.
+Microsoft Azure separates your VM-based compute resources from storage as part of its [fundamental design](../security/fundamentals/isolation-choices.md#storage-isolation). The separation allows compute and storage to scale independently, making it easier to provide multi-tenancy and isolation. Therefore, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically.
Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscriptions/) can have one or more storage accounts. Azure storage supports various [authentication options](/rest/api/storageservices/authorize-requests-to-azure-storage), including:

-- **Shared symmetric keys:** Upon storage account creation, Azure generates two 512-bit storage account keys that control access to the storage account. You can rotate and regenerate these keys at any point thereafter without coordination with your applications.
-- **Azure AD-based authentication:** Access to Azure Storage can be controlled by Azure Active Directory (Azure AD), which enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, including Microsoft insiders. More information about Azure AD tenant isolation is available from a white paper [Azure Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
-- **Shared access signatures (SAS):** Shared access signatures or "pre-signed URLs" can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
-- **User delegation SAS:** Delegated authentication is similar to SAS but is [based on Azure AD tokens](/rest/api/storageservices/create-user-delegation-sas) rather than the shared symmetric keys. This approach allows a service that authenticates with Azure AD to create a pre signed URL with limited scope and grant temporary access to another user, service, or device.
-- **Anonymous public read access:** You can allow a small portion of your storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level if you desire more stringent control.
+- **Shared symmetric keys** – Upon storage account creation, Azure generates two 512-bit storage account keys that control access to the storage account. You can rotate and regenerate these keys at any point thereafter without coordination with your applications.
+- **Azure AD-based authentication** – Access to Azure Storage can be controlled by Azure Active Directory (Azure AD), which enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, including Microsoft insiders. More information about Azure AD tenant isolation is available from a white paper [Azure Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
+- **Shared access signatures (SAS)** – Shared access signatures or "pre-signed URLs" can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
+- **User delegation SAS** – Delegated authentication is similar to SAS but is [based on Azure AD tokens](/rest/api/storageservices/create-user-delegation-sas) rather than the shared symmetric keys. This approach allows a service that authenticates with Azure AD to create a pre-signed URL with limited scope and grant temporary access to another user, service, or device.
+- **Anonymous public read access** – You can allow a small portion of your storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level if you desire more stringent control.
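The shared-key options above all reduce to HMAC signing: a SAS token is a set of scope-limiting query parameters plus an HMAC-SHA256 signature computed with the account key. The sketch below is illustrative only — the field names are borrowed from SAS (`sp` permissions, `se` expiry, `sr` resource), but the real string-to-sign format is defined by the Storage REST API, and the key here is a made-up value.

```python
import base64, hashlib, hmac
from urllib.parse import urlencode

def sign(account_key_b64: str, string_to_sign: str) -> str:
    """HMAC-SHA256 over the string-to-sign with the decoded account key."""
    key = base64.b64decode(account_key_b64)
    sig = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(sig).decode()

# Hypothetical, simplified fields: read-only permission, expiry, blob resource.
fields = {"sp": "r", "se": "2022-12-31T00:00:00Z", "sr": "b"}
string_to_sign = "\n".join(fields.values())
token = urlencode({**fields, "sig": sign("c2VjcmV0LWtleQ==", string_to_sign)})
```

Because the signature covers the scoping fields, tampering with permissions or expiry invalidates the token, which is what lets a pre-signed URL be handed to another user or device safely.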
Azure Storage provides storage for a wide variety of workloads, including:
Transferring large volumes of data across the Internet is inherently unreliable.
You can upload blocks in any order and determine their sequence in the final blocklist commitment step. You can also upload a new block to replace an existing uncommitted block of the same block ID.

#### Partition layer
-The partition layer is responsible for a) managing higher-level data abstractions (Blob, Table, Queue), b) providing a scalable object namespace, c) providing transaction ordering and strong consistency for objects, d) storing object data on top of the stream layer, and e) caching object data to reduce disk I/O. This layer also provides asynchronous geo-replication of data and is focused on replicating data across stamps. Inter-stamp replication is done in the background to keep a copy of the data in two locations for disaster recovery purposes.
+The partition layer is responsible for:
+
+- Managing higher-level data abstractions (Blob, Table, Queue),
+- Providing a scalable object namespace,
+- Providing transaction ordering and strong consistency for objects,
+- Storing object data on top of the stream layer, and
+- Caching object data to reduce disk I/O.
+
+This layer also provides asynchronous geo-replication of data and is focused on replicating data across stamps. Inter-stamp replication is done in the background to keep a copy of the data in two locations for disaster recovery purposes.
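A toy model of how a scalable object namespace maps to partition servers: each server owns a contiguous key range, and routing an object is a sorted-boundary lookup. Server names and range boundaries here are invented for illustration; the actual partitioning scheme is internal to the service.

```python
import bisect

# Each partition server owns a contiguous key range; BOUNDARIES holds the
# exclusive upper key of each range, ending with a maximal sentinel.
BOUNDARIES = ["g", "p", "\uffff"]        # ranges: [..g), [g..p), [p..)
SERVERS = ["ps-01", "ps-02", "ps-03"]    # hypothetical partition server names

def server_for(object_key: str) -> str:
    """Route an object name to the partition server owning its key range."""
    return SERVERS[bisect.bisect_right(BOUNDARIES, object_key)]
```

Splitting a hot range simply adds a boundary, which is how load for large namespaces (or, with HTBB, a single large blob) can be spread across more servers.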
-Once a blob is ingested by the FE layer, the partition layer is responsible for tracking and storing where data is placed in the stream layer. Each storage tenant can have approximately 200 – 300 individual partition layer nodes and each node is responsible for tracking and serving a partition of the data stored in that Storage tenant. The high throughput block blob (HTBB) feature enables data to be sharded within a single blob, which allows the workload for large blobs to be shared across multiple partition layer servers (Figure 14). Distributing the load among multiple partition layer servers greatly improves availability, throughput, and durability.
+Once a blob is ingested by the FE layer, the partition layer is responsible for tracking and storing where data is placed in the stream layer. Each storage tenant can have approximately 200 - 300 individual partition layer nodes, and each node is responsible for tracking and serving a partition of the data stored in that Storage tenant. The high throughput block blob (HTBB) feature enables data to be sharded within a single blob, which allows the workload for large blobs to be shared across multiple partition layer servers (Figure 14). Distributing the load among multiple partition layer servers greatly improves availability, throughput, and durability.
:::image type="content" source="./media/secure-isolation-fig14.png" alt-text="High throughput block blobs spread traffic and data across multiple partition servers and streams":::
**Figure 14.** High throughput block blobs spread traffic and data across multiple partition servers and streams

#### Stream layer
-The stream layer stores the bits on disk and is responsible for distributing and replicating the data across many servers to keep data durable within a storage stamp. It acts as a distributed file system layer within a stamp. It handles files, called streams, which are ordered lists of data blocks called extents that are analogous to extents on physical hard drives. Large blob objects can be stored in multiple extents, potentially on multiple physical extent nodes (ENs). The data is stored in the stream layer, but it is accessible from the partition layer. Partition servers and stream servers are colocated on each storage node in a stamp.
+The stream layer stores the bits on disk, and is responsible for distributing and replicating the data across many servers to keep data durable within a storage stamp. It acts as a distributed file system layer within a stamp. It handles files, called streams, which are ordered lists of data blocks called extents that are analogous to extents on physical hard drives. Large blob objects can be stored in multiple extents, potentially on multiple physical extent nodes (ENs). The data is stored in the stream layer, but it's accessible from the partition layer. Partition servers and stream servers are colocated on each storage node in a stamp.
-The stream layer provides synchronous replication (intra-stamp) across different nodes in different fault domains to keep data durable within the stamp. It is responsible for creating the three local replicated copies of each extent. The stream layer manager makes sure that all three copies are distributed across different physical racks and nodes on different fault and upgrade domains so that copies are resilient to individual disk/node/rack failures and planned downtime due to upgrades.
+The stream layer provides synchronous replication (intra-stamp) across different nodes in different fault domains to keep data durable within the stamp. It's responsible for creating the three local replicated copies of each extent. The stream layer manager makes sure that all three copies are distributed across different physical racks and nodes on different fault and upgrade domains so that copies are resilient to individual disk/node/rack failures and planned downtime due to upgrades.
**Erasure Coding** – Azure Storage uses a technique called [Erasure Coding](https://www.microsoft.com/research/wp-content/uploads/2016/02/LRC12-cheng20webpage.pdf), which allows for the reconstruction of data even if some of the data is missing due to disk failure. This approach is similar to the concept of RAID striping for individual disks where data is spread across multiple disks so that if a disk is lost, the missing data can be reconstructed using the parity bits from the data on the other disks. Erasure Coding splits an extent into equal data and parity fragments that are stored on separate ENs, as shown in Figure 15.

:::image type="content" source="./media/secure-isolation-fig15.png" alt-text="Erasure Coding further shards extent data across EN servers to protect against failure":::
**Figure 15.** Erasure Coding further shards extent data across EN servers to protect against failure
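Azure's production codes are more sophisticated (local reconstruction codes rather than simple parity), but the core idea — recomputing a lost fragment from the survivors plus redundancy — can be shown with single-parity XOR over equal-length fragments:

```python
def xor_parity(fragments):
    """Compute a parity fragment as the XOR of equal-length data fragments."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving, parity):
    """Recover the single missing fragment from the survivors and the parity."""
    # XOR of all survivors with the parity cancels them out, leaving the lost one.
    return xor_parity(surviving + [parity])
```

Single XOR parity tolerates one lost fragment per stripe; codes with more parity fragments, like the ones the linked paper describes, tolerate more.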
-All data blocks stored in stream extent nodes have a 64-bit cyclic redundancy check (CRC) and a header protected by a hash signature to provide extent node (EN) data integrity. The CRC and signature are checked before every disk write, disk read, and network receive. In addition, scrubber processes read all data at regular intervals verifying the CRC and looking for &#8220;bit rot&#8221;. If a bad extent is found a new copy of that extent is created to replace the bad extent.
+All data blocks stored in stream extent nodes have a 64-bit cyclic redundancy check (CRC) and a header protected by a hash signature to provide extent node (EN) data integrity. The CRC and signature are checked before every disk write, disk read, and network receive. In addition, scrubber processes read all data at regular intervals, verifying the CRC and looking for *bit rot*. If a bad extent is found, a new copy of that extent is created to replace the bad extent.
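A scrubber of this kind is straightforward to model: store a checksum with each block at write time, then periodically recompute and compare. The sketch below uses the standard library's 32-bit CRC as a stand-in for the 64-bit CRC the text describes:

```python
import zlib

def checksummed(block: bytes):
    """Pair a block with its CRC as computed at write time."""
    return block, zlib.crc32(block)

def scrub(extents):
    """Return indices of extents whose stored CRC no longer matches the data (bit rot)."""
    return [i for i, (data, crc) in enumerate(extents) if zlib.crc32(data) != crc]
```

In the real service, an extent flagged by the scrubber is replaced by copying one of its healthy replicas.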
Your data in Azure Storage relies on data encryption at rest to provide cryptographic certainty for logical data isolation. You can choose between Microsoft-managed encryption keys (also known as platform-managed encryption keys) or customer-managed encryption keys (CMK). The handling of data encryption and decryption is transparent to customers, as discussed in the next section.
However, you can also choose to manage encryption with your own keys by specifyi
- [Customer-provided key](../storage/blobs/encryption-customer-provided-keys.md) for encrypting and decrypting Blob storage only whereby the key can be stored in Azure Key Vault or in another key store on your premises to meet regulatory compliance requirements. Customer-provided keys enable you to pass an encryption key to Storage service using Blob APIs as part of read or write operations.

> [!NOTE]
> You can configure customer-managed keys (CMK) with Azure Key Vault using the **[Azure portal, Azure PowerShell, or Azure CLI](../storage/common/customer-managed-keys-configure-key-vault.md)**. You can **[use .NET to specify a customer-provided key](../storage/blobs/storage-blob-customer-provided-key.md)** on a request to Blob storage.
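At the REST level, a customer-provided key travels with each Blob request as an AES-256 key plus its SHA-256 digest, base64-encoded in the documented `x-ms-encryption-key*` headers. A minimal sketch of preparing those headers (the key value here is randomly generated for illustration):

```python
# Sketch of preparing a customer-provided key (CPK) for a Blob storage request.
# Header names are from the Blob REST API; the key itself stays with the caller.
import base64
import hashlib
import os

key = os.urandom(32)   # caller-held AES-256 key; never persisted by the service

cpk_headers = {
    "x-ms-encryption-key": base64.b64encode(key).decode(),
    "x-ms-encryption-key-sha256": base64.b64encode(hashlib.sha256(key).digest()).decode(),
    "x-ms-encryption-algorithm": "AES256",
}

# The same headers (same key) must accompany subsequent reads of that blob.
assert len(base64.b64decode(cpk_headers["x-ms-encryption-key"])) == 32
```

Because the service keeps only a hash of the key for validation, losing the key means losing access to the data encrypted with it.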
Storage service encryption is enabled by default for all new and existing storage accounts and it [can't be disabled](../storage/common/storage-service-encryption.md#about-azure-storage-encryption). As shown in Figure 17, the encryption process uses the following keys to help ensure cryptographic certainty of data isolation at rest:
- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption, and it's unique per storage account in Azure Storage. It's generated by the Azure Storage service as part of the storage account creation. This key is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key that is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. It's never exposed directly to the Azure Storage service or other services. You must use Azure Key Vault to store your customer-managed keys for Storage service encryption.
- *Stamp Key (SK)* is a symmetric AES-256 key that provides a third layer of encryption key security and is unique to each Azure Storage stamp, that is, cluster of storage hardware. This key is used to perform the final wrap of the DEK that results in the following key chain hierarchy: SK(KEK(DEK)).

These three keys are combined to protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure Storage service encryption is enabled by default and it can't be disabled.
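The SK(KEK(DEK)) nesting can be sketched with a toy wrap function. XOR with an equal-length random key stands in for real key wrapping here (the actual KEK is an asymmetric RSA-2048 key in Key Vault); the sketch only shows the layering and why deleting the KEK renders the DEK, and hence the data, unrecoverable:

```python
# Toy sketch of the SK(KEK(DEK)) key chain. XOR with an equal-length random
# key stands in for real key wrapping (the real KEK is RSA-2048 in Key Vault);
# only the layering is illustrated, not the actual cryptography.
import os

def wrap(wrapping_key, payload):
    """One-time-pad-style wrap; XOR is its own inverse, so unwrap == wrap."""
    return bytes(a ^ b for a, b in zip(wrapping_key, payload))

unwrap = wrap

dek = os.urandom(32)   # per-storage-account Data Encryption Key (AES-256)
kek = os.urandom(32)   # customer-controlled Key Encryption Key, held in Key Vault
sk = os.urandom(32)    # per-stamp Stamp Key

stored = wrap(sk, wrap(kek, dek))   # SK(KEK(DEK)) is what gets persisted

# Recovering the DEK requires BOTH the SK and the KEK:
assert unwrap(kek, unwrap(sk, stored)) == dek

# Crypto-erasure: if the KEK is deleted from Key Vault, the persisted value
# alone no longer yields the DEK, so the encrypted data becomes unreadable.
```

This is the mechanism behind the later observation that the KEK is a single point by which the DEK can be deleted.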
Storage accounts are encrypted regardless of their performance tier (standard or
Because data encryption is performed by the Storage service, server-side encryption with CMK enables you to use any operating system types and images for your VMs. For your Windows and Linux IaaS VMs, Azure also provides Azure Disk encryption that enables you to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining Azure Storage service encryption and Disk encryption effectively enables [double encryption of data at rest](../virtual-machines/disks-enable-double-encryption-at-rest-portal.md).

#### Azure Disk encryption
Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Moreover, you may optionally use [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure disk encryption uses the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys – it's commonly pre-installed on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to the computer boot process.
For managed disks, Azure Disk encryption allows you to encrypt the OS and data disks used by an IaaS virtual machine; however, data disks can't be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help you control and manage the disk encryption keys in key vaults. You can supply your own encryption keys, which are safeguarded in Azure Key Vault to support *bring your own key (BYOK)* scenarios, as described previously in the *[Data encryption key management](#data-encryption-key-management)* section.
For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.yml), Azure Di
Customer-managed keys (CMK) enable you to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over your encryption keys. You can grant access to managed disks in your Azure Key Vault so that your keys can be used for encrypting and decrypting the DEK. You can also disable your keys or revoke access to managed disks at any time. Finally, you have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing your encryption keys.

##### *Encryption at host*
Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled aren't encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
You're [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own your customer data. A common concern upon data deletion or subscription termination is whether another customer or Azure administrator can access your deleted data. The following sections explain how data deletion, retention, and destruction work in Azure.

### Data deletion
Storage is allocated sparsely, which means that when a virtual disk is created, disk space isn't allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. The first time you write data on the virtual disk, space on the physical disk is allocated and a pointer to it is placed in the table. For more information, see [Azure data security – data cleansing and leakage](/archive/blogs/walterm/microsoft-azure-data-security-data-cleansing-and-leakage).
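A minimal sketch of that sparse mapping table shows why a read of a never-written region can only return zeroes, there is simply no physical space mapped behind it (names and block size here are illustrative):

```python
# Sketch of sparse virtual-disk allocation: the mapping table starts empty,
# physical space is mapped on first write, and reads of never-written
# regions return zeroes because nothing physical backs them.
BLOCK = 512

class SparseDisk:
    def __init__(self):
        self.table = {}          # virtual block number -> allocated data

    def write(self, block, data):
        # First write to a block allocates space and records the pointer.
        self.table[block] = data.ljust(BLOCK, b"\x00")

    def read(self, block):
        # Unmapped region: no allocation ever happened, so only zeroes return.
        return self.table.get(block, b"\x00" * BLOCK)

disk = SparseDisk()
disk.write(7, b"tenant data")
assert disk.read(7).startswith(b"tenant data")
assert disk.read(8) == b"\x00" * BLOCK   # never written: zeroes, not stale data
```

This is the property the later bullet list relies on: a tenant probing unwritten regions of its own virtual disk can never surface another tenant's residual bytes.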
When you delete a blob or table entity, it will immediately get deleted from the index used to locate and access the data on the primary location, and then the deletion is done asynchronously at the geo-replicated copy of the data, if you provisioned [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage). At the primary location, you can immediately try to access the blob or entity, and you won't find it in your index, since Azure provides strong consistency for the delete. So, you can verify directly that the data has been deleted.
In Azure Storage, all disk writes are sequential. This approach minimizes the number of disk "seeks" but requires updating the pointers to objects every time they're written – new versions of pointers are also written sequentially. A side effect of this design is that it isn't possible to ensure that a secret on disk is gone by overwriting it with other data. The original data will remain on the disk and the new value will be written sequentially. Pointers will be updated such that there's no way to find the deleted value anymore. Once the disk is full, however, the system has to write new logs onto disk space that has been freed up by the deletion of old data. Instead of allocating log files directly from disk sectors, log files are created in a file system running NTFS. A background thread running on Azure Storage nodes frees up space by going through the oldest log file, copying blocks that are still referenced from that oldest log file to the current log file, and updating all pointers as it goes. It then deletes the oldest log file. Therefore, there are two categories of free disk space on the disk: (1) space that NTFS knows is free, where it allocates new log files from this pool; and (2) space within those log files that Azure Storage knows is free since there are no current pointers to it.
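The append-then-compact lifecycle described above can be sketched with a tiny log-structured store: an overwrite appends a new record and moves the pointer, the stale bytes linger until compaction copies live records forward, which is the behavior of the background thread (class and method names here are illustrative):

```python
# Sketch of the log-structured pattern: overwrites append sequentially and
# move a pointer; stale data stays on "disk" until a compaction pass copies
# live records into a new log and drops the old one.
class LogStore:
    def __init__(self):
        self.log = []          # sequential records: (key, value)
        self.pointers = {}     # key -> index of the live record

    def put(self, key, value):
        self.log.append((key, value))              # always written sequentially
        self.pointers[key] = len(self.log) - 1     # old record remains on disk

    def get(self, key):
        return self.log[self.pointers[key]][1]     # only the pointer is live

    def compact(self):
        """Copy only still-referenced records forward, like the background thread."""
        live = [(k, self.log[i][1]) for k, i in self.pointers.items()]
        self.log = live
        self.pointers = {k: i for i, (k, _) in enumerate(live)}

store = LogStore()
store.put("secret", "v1")
store.put("secret", "v2")                  # "overwrite" is logical, not physical
assert store.get("secret") == "v2"
assert ("secret", "v1") in store.log       # stale bytes still physically present
store.compact()
assert ("secret", "v1") not in store.log   # gone only after compaction
```

As in the real system, no reader can reach the stale record once the pointer moves, even though its bytes persist until the old log is reclaimed.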
The sectors on the physical disk associated with the deleted data become immediately available for reuse and are overwritten when the corresponding storage block is reused for storing other data. The time to overwrite varies depending on disk utilization and activity. This process is consistent with the operation of a log-structured file system where all writes are written sequentially to disk. This process isn't deterministic and there's no guarantee when particular data will be gone from physical storage. **However, when exactly deleted data gets overwritten or the corresponding physical storage allocated to another customer is irrelevant for the key isolation assurance that no data can be recovered after deletion:**
- A customer can't read deleted data of another customer.
- If anyone tries to read a region on a virtual disk that they haven't yet written to, physical space won't have been allocated for that region and therefore only zeroes would be returned.
Customers aren't provided with direct access to the underlying physical storage. Since customer software only addresses virtual disks, there's no way for another customer to express a request to read from or write to a physical address that is allocated to you or a physical address that is free.
Conceptually, this rationale applies regardless of the software that keeps track of reads and writes. For [Azure SQL Database](../security/fundamentals/isolation-choices.md#sql-database-isolation), it's the SQL Database software that does this enforcement. For Azure Storage, it's the Azure Storage software. For non-durable drives of a VM, it's the VHD handling code of the Host OS. The mapping from virtual to physical address takes place outside of the customer VM.
Finally, as described in the *[Data encryption at rest](#data-encryption-at-rest)* section and depicted in Figure 16, the encryption key hierarchy relies on the Key Encryption Key (KEK), which can be kept in Azure Key Vault under your control (that is, customer-managed key – CMK) and used to encrypt the Data Encryption Key (DEK), which in turn encrypts data at rest using AES-256 symmetric encryption. Data in Azure Storage is encrypted at rest by default and you can choose to have encryption keys under your own control. In this manner, you can also prevent access to your data stored in Azure. Moreover, since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which the DEK can be deleted via deletion of the KEK.

### Data retention
During the term of your Azure subscription, you always have the ability to access, extract, and delete your data stored in Azure.
If your subscription expires or is terminated, Microsoft will preserve your customer data for a 90-day retention period to permit you to extract your data or renew your subscriptions. After this retention period, Microsoft will delete all your customer data within another 90 days, that is, your customer data will be permanently deleted 180 days after expiration or termination. Given the data retention procedure, you can control how long your data is stored by timing when you end the service with Microsoft. It's recommended that you don't terminate your service until you've extracted all data so that the initial 90-day retention period can act as a safety buffer should you later realize you missed something.
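The 90 + 90 day timeline can be made concrete with a short date calculation (the termination date below is purely illustrative):

```python
# The retention timeline above, made concrete: data remains recoverable for
# 90 days after termination and is permanently deleted within another 90.
from datetime import date, timedelta

termination = date(2022, 5, 11)                       # illustrative date
retention_ends = termination + timedelta(days=90)     # last day to extract data
deletion_deadline = retention_ends + timedelta(days=90)

assert (deletion_deadline - termination).days == 180  # permanent deletion by day 180
```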
If you deleted an entire storage account by mistake, you should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it's permanently deleted. However, when a storage object (for example, blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless you made a backup, deleted storage objects can't be recovered. For Blob storage, you can implement extra protection against accidental or erroneous modifications or deletions by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they're deleted, within a retention period that you specified. To avoid retention of data after storage account or subscription deletion, you can delete storage objects individually before deleting the storage account or subscription.
For accidental deletion involving Azure SQL Database, you should check backups that the service makes automatically and use point-in-time restore. For example, full database backup is done weekly, and differential database backups are done hourly. Also, individual services (such as Azure DevOps) can have their own policies for [accidental data deletion](/azure/devops/organizations/security/data-protection#mistakes-happen).

### Data destruction
If a disk drive used for storage suffers a hardware failure, it's securely [erased or destroyed](https://www.microsoft.com/trust-center/privacy/data-management) before decommissioning. The data on the drive is erased to ensure that the data can't be recovered by any means. When such devices are decommissioned, Microsoft follows the [NIST SP 800-88 R1](https://csrc.nist.gov/publications/detail/sp/800-88/rev-1/final) disposal process with data classification aligned to FIPS 199 Moderate. Magnetic, electronic, or optical media are purged or destroyed in accordance with the requirements established in NIST SP 800-88 R1 where the terms are defined as follows:
- **Purge** – "a media sanitization process that protects the confidentiality of information against a laboratory attack", which involves "resources and knowledge to use nonstandard systems to conduct data recovery attempts on media outside their normal operating environment" using "signal processing equipment and specially trained personnel." Note: For hard disk drives (including ATA, SCSI, SATA, SAS, and so on) a firmware-level secure-erase command (single-pass) is acceptable, or a software-level three-pass overwrite and verification (ones, zeros, random) of the entire physical media including recovery areas, if any. For solid state disks (SSD), a firmware-level secure-erase command is necessary.
- **Destroy** – "a variety of methods, including disintegration, incineration, pulverizing, shredding, and melting" after which the media "can't be reused as originally intended."
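The software-level three-pass overwrite mentioned under Purge can be sketched against an in-memory stand-in for a disk; real sanitization operates on the entire physical medium, including recovery areas, via device-level commands:

```python
# Sketch of the three-pass overwrite with verification (ones, zeros, random)
# described above, applied to an in-memory stand-in for a disk. Real
# sanitization targets the whole physical medium via device-level commands.
import os

def three_pass_overwrite(media):
    """Overwrite a bytearray with ones, then zeros, then random; verify each pass."""
    for pattern in (b"\xff", b"\x00", None):          # ones, zeros, random
        fill = os.urandom(len(media)) if pattern is None else pattern * len(media)
        media[:] = fill
        assert bytes(media) == fill                    # verification step per pass

disk = bytearray(b"customer secret data")
three_pass_overwrite(disk)
assert b"secret" not in bytes(disk)                    # original content overwritten
```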
Purge and Destroy operations must be performed using tools and processes approved by Microsoft Security. Records must be kept of the erasure and destruction of assets. Devices that fail to complete the Purge successfully must be degaussed (for magnetic media only) or destroyed.
In addition to technical implementation details that enable Azure compute, networking, and storage isolation, Microsoft has invested heavily in security assurance processes and practices to correctly develop logically isolated services and systems, as described in the next section.

## Security assurance processes and practices
Azure isolation assurances are further enforced by Microsoft's internal use of the [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee.
- **Security Development Lifecycle (SDL)** – The Microsoft SDL introduces security and privacy considerations throughout all phases of the development process, helping developers build highly secure software, address security compliance requirements, and reduce development costs. The guidance, best practices, [tools](https://www.microsoft.com/securityengineering/sdl/resources), and processes in the Microsoft SDL are [practices](https://www.microsoft.com/securityengineering/sdl/practices) used internally to build all Azure services and create more secure products and services. This process is also publicly documented to share Microsoft's learnings with the broader industry and incorporate industry feedback to create a stronger security development process.
- **Tooling and processes** – All Azure code is subject to an extensive set of both static and dynamic analysis tools that identify potential vulnerabilities, ineffective security patterns, memory corruption, user privilege issues, and other critical security problems.
- *Threat modeling* – A core element of the Microsoft SDL. It's an engineering technique used to help identify threats, attacks, vulnerabilities, and countermeasures that could affect applications and services. [Threat modeling](../security/develop/threat-modeling-tool-getting-started.md) is part of the Azure routine development lifecycle.
- *Automated build alerting of changes to attack surface area* – [Attack Surface Analyzer](https://github.com/microsoft/attacksurfaceanalyzer) is a Microsoft-developed open-source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. The core feature of Attack Surface Analyzer is the ability to "diff" an operating system's security configuration, before and after a software component is installed. This feature is important because most installation processes require elevated privileges, and once granted, they can lead to unintended system configuration changes.
- **Mandatory security training** – The Microsoft Azure security training and awareness program requires all personnel responsible for Azure development and operations to take essential training and any extra training based on individual job requirements. These procedures provide a standard approach, tools, and techniques used to implement and sustain the awareness program. Microsoft has implemented a security awareness program called STRIKE that provides monthly e-mail communication to all Azure engineering personnel about security awareness and allows employees to register for in-person or online security awareness training.
STRIKE offers a series of security training events throughout the year plus STRIKE Central, which is a centralized online resource for security awareness, training, documentation, and community engagement.
- **Bug Bounty Program** – Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for you and your data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The [Microsoft Bug Bounty Program](https://www.microsoft.com/msrc/bounty) is designed to supplement and encourage research in relevant technologies (for example, encryption, spoofing, hypervisor isolation, elevation of privileges, and so on) to better protect Azure's infrastructure and your data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 – a significant amount to incentivize participation and vulnerability disclosure. The bounty range for [vulnerability reports on Azure services](https://www.microsoft.com/msrc/bounty-microsoft-azure) is up to $60,000.
+- **Red Team activities** – Microsoft uses [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), a form of live site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve Azure security. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams. This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
-If you're accustomed to traditional on-premises data center deployment, you would typically conduct a risk assessment to gauge your threat exposure and formulate mitigating measures when migrating to the cloud. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help you with this comparison.
+If you're accustomed to a traditional on-premises data center deployment, you would typically conduct a risk assessment to gauge your threat exposure and formulate mitigating measures when migrating to the cloud. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help you with this comparison.
## Logical isolation considerations A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep other customers from accessing your data or applications. If you're migrating from traditional on-premises physically isolated infrastructure to the cloud, this section addresses concerns that may be of interest to you.
Table 6 provides a summary of key security considerations for physically isolate
|Security consideration|On-premises|Azure| ||||
-|**Firewalls, networking**|- Physical network enforcement (switches, and so on) </br>- Physical host-based firewall can be manipulated by compromised application </br>- 2 layers of enforcement|- Physical network enforcement (switches, and so on) </br>- Hyper-V host virtual network switch enforcement can't be changed from inside VM </br>- VM host-based firewall can be manipulated by compromised application </br>- 3 layers of enforcement|
+|**Firewalls, networking**|- Physical network enforcement (switches, and so on) </br>- Physical host-based firewall can be manipulated by compromised application </br>- two layers of enforcement|- Physical network enforcement (switches, and so on) </br>- Hyper-V host virtual network switch enforcement can't be changed from inside VM </br>- VM host-based firewall can be manipulated by compromised application </br>- three layers of enforcement|
|**Attack surface area**|- Large hardware attack surface exposed to complex workloads, enables firmware based advanced persistent threat (APT)|- Hardware not directly exposed to VM, no potential for APT to persist in firmware from VM </br>- Small software-based Hyper-V attack surface area with low historical bug counts exposed to VM| |**Side channel attacks**|- Side channel attacks may be a factor, although reduced vs. shared hardware|- Side channel attacks assume control over VM placement across applications; may not be practical in large cloud service| |**Patching**|- Varied effective patching policy applied across host systems </br>- Highly varied/fragile updating for hardware and firmware|- Uniform patching policy applied across host and VMs| |**Security analytics**|- Security analytics dependent on host-based security solutions, which assume host/security software hasn't been compromised|- Outside VM (hypervisor based) forensics/snapshot capability allows assessment of potentially compromised workloads| |**Security policy**|- Security policy verification (patch scanning, vulnerability scanning, and so on) subject to tampering by compromised host </br>- Inconsistent security policy applied across customer entities|- Outside VM verification of security policies </br>- Possible to enforce uniform security policies across customer entities| |**Logging and monitoring**|- Varied logging and security analytics solutions|- Common Azure platform logging and security analytics solutions </br>- Most existing on-premises / varied logging and security analytics solutions also work|
-|**Malicious insider**|- Persistent threat caused by system admins having elevated access rights typically for during employment|- Greatly reduced threat because admins have no default access rights|
+|**Malicious insider**|- Persistent threat caused by system admins having elevated access rights typically for duration of employment|- Greatly reduced threat because admins have no default access rights|
Listed below are key risks that are unique to shared cloud environments that may need to be addressed when accommodating sensitive data and workloads. ### Exploitation of vulnerabilities in virtualization technologies Compared to traditional on-premises hosted systems, Azure provides a greatly **reduced attack surface** by using a locked-down Windows Server core for the Host OS layered over the Hypervisor. Moreover, by default, guest PaaS VMs don't have any user accounts to accept incoming remote connections and the default Windows administrator account is disabled. Your software in PaaS VMs is restricted by default to running under a low-privilege account, which helps protect your service from attacks by its own end users. You can modify these permissions, and you can also choose to configure your VMs to allow remote administrative access.
-PaaS VMs offer more advanced **protection against persistent malware** infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it is a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that haven't even been detected. This approach makes it more difficult for a compromise to persist.
+PaaS VMs offer more advanced **protection against persistent malware** infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it's a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that haven't even been detected. This approach makes it more difficult for a compromise to persist.
-When VMs belonging to different customers are running on the same physical server, it is the Hypervisor's job to ensure that they can't learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. Known as **side-channel attacks**, these exploits have received plenty of attention in the academic press where researchers have been seeking to learn much more specific information about what is going on in a peer VM. Of particular interest are efforts to learn the cryptographic keys of a peer VM by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. There are several mitigations in Azure that reduce the risk of such an attack:
+When VMs belonging to different customers are running on the same physical server, it's the Hypervisor's job to ensure that they can't learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. Known as **side-channel attacks**, these exploits have received plenty of attention in the academic press where researchers have been seeking to learn much more specific information about what is going on in a peer VM. Of particular interest are efforts to learn the cryptographic keys of a peer VM by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. There are several mitigations in Azure that reduce the risk of such an attack:
- The standard Azure cryptographic libraries have been designed to resist such attacks by not having cache access patterns depend on the cryptographic keys being used. - Azure uses an advanced VM host placement algorithm that is highly sophisticated and nearly impossible to predict, which helps reduce the chances of adversary-controlled VM being placed on the same host as the target VM. - All Azure servers have at least eight physical cores and some have many more. Increasing the number of cores that share the load placed by various VMs adds noise to an already weak signal. - You can provision VMs on hardware dedicated to a single customer by using [Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) or [Isolated VMs](../virtual-machines/isolation.md), as described in *[Physical isolation](#physical-isolation)* section.
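The first mitigation above, cryptographic code whose cache and branch behavior doesn't depend on the secret key, can be illustrated with a small hedged sketch. This is standard-library Python, not Azure's actual cryptographic libraries; it only shows the constant-time-comparison principle those libraries apply.

```python
import hmac

def insecure_equals(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the comparison time
    # leaks how many leading bytes matched (a timing side channel).
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, so timing doesn't depend on secret content.
    return hmac.compare_digest(a, b)
```

Both functions agree on the result; only the insecure one's running time correlates with the secret being compared.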
-Overall, PaaS (or any workload that autocreates VMs) contributes to churn in VM placement that leads to randomized VM allocation. Random placement of your VMs makes it much harder for attackers to get on the same host. In addition, host access is hardened with greatly reduced attack surface that makes these types of exploits difficult to sustain.
+Overall, PaaS – or any workload that autocreates VMs – contributes to churn in VM placement that leads to randomized VM allocation. Random placement of your VMs makes it much harder for attackers to get on the same host. In addition, host access is hardened with greatly reduced attack surface that makes these types of exploits difficult to sustain.
## Summary A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
+
+ Title: Authentication and authorization best practices in Azure Maps
+
+description: Learn tips & tricks to optimize the use of Authentication and Authorization in your Azure Maps applications.
++ Last updated : 05/11/2022+++++
+# Authentication and authorization best practices
+
+The single most important part of your application is its security. No matter how good the user experience might be, if your application isn't secure, a hacker can ruin it.
+
+The following are some tips to keep your Azure Maps application secure. When using Azure, be sure to familiarize yourself with the security tools available to you. For more information, see the [introduction to Azure security](/azure/security/fundamentals/overview).
+
+## Understanding security threats
+
+If a hacker gains access to your Azure Maps account, they can potentially use it to make an unlimited number of unauthorized requests, which could result in decreased performance due to QPS limits and significant billable transactions to your account.
+
+When considering best practices for securing your Azure Maps applications, you'll need to understand the different authentication options available.
+++++
+## Authentication best practices in Azure Maps
+
+When creating a publicly facing client application with Azure Maps using any of the available SDKs, whether Android, iOS, or the Web SDK, you must ensure that your authentication secrets aren't publicly accessible.
+
+Subscription key-based authentication (Shared Key) can be used in either client-side applications or web services; however, it's the least secure approach to securing your application or web service. This is because the key grants access to all Azure Maps REST APIs available in the SKU (pricing tier) selected when creating the Azure Maps account, and the key can be easily obtained from an HTTP request. If you do use subscription keys, be sure to [rotate them regularly](how-to-manage-authentication.md#manage-and-rotate-shared-keys), and keep in mind that Shared Key doesn't support a configurable lifetime; rotation must be done manually. You should also consider using [Shared Key authentication with Azure Key Vault](how-to-secure-daemon-app.md#scenario-shared-key-authentication-with-azure-key-vault), which enables you to securely store your secret in Azure.
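To see why a shared key is so easy to harvest from client traffic, consider how a keyed request is built. This is a hedged sketch: the URL shape mirrors the Azure Maps REST style, but the key value and query are placeholders for illustration only.

```python
from urllib.parse import urlencode

SUBSCRIPTION_KEY = "contoso-demo-key"  # placeholder; never ship a real key in client code

params = {
    "api-version": "1.0",
    "query": "400 Broad St, Seattle",
    "subscription-key": SUBSCRIPTION_KEY,
}
url = "https://atlas.microsoft.com/search/address/json?" + urlencode(params)

# The key travels in plain sight in every request, so anyone who can see
# the URL (browser dev tools, proxies, access logs) can copy and reuse it.
print("subscription-key" in url)
```

That visibility is the core reason Shared Key belongs in trusted server-side code (or Key Vault), not in a public client.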
+
+If using [Azure Active Directory (Azure AD) authentication](/azure/active-directory/fundamentals/active-directory-whatis) or [Shared Access Signature (SAS) Token authentication](azure-maps-authentication.md#shared-access-signature-token-authentication) (preview), access to Azure Maps REST APIs is authorized using [role-based access control (RBAC)](azure-maps-authentication.md#authorization-with-role-based-access-control). RBAC enables you to control what access is given to the issued tokens. You should consider how long access should be granted for the tokens. Unlike Shared Key authentication, the lifetime of these tokens is configurable.
+
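Because these token lifetimes are configurable, it's worth being explicit about the validity window you grant. The sketch below is illustrative only: it computes a one-hour start/expiry pair in the ISO 8601 form such tokens use, but the field names are placeholders, not the REST API's exact parameters.

```python
from datetime import datetime, timedelta, timezone

# Grant access for one hour only (shorter windows limit the damage of a leaked token).
start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)

token_window = {
    "start": start.isoformat(timespec="seconds"),
    "expiry": expiry.isoformat(timespec="seconds"),
}
print(token_window)
```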
+> [!TIP]
+>
+> For more information on configuring token lifetimes, see:
+> - [Configurable token lifetimes in the Microsoft identity platform (preview)](/azure/active-directory/develop/active-directory-configurable-token-lifetimes)
+> - [Create SAS tokens](azure-maps-authentication.md#create-sas-tokens)
+
+### Public client and confidential client applications
+
+There are different security concerns between public and confidential client applications. See [Public client and confidential client applications](/azure/active-directory/develop/msal-client-applications) in the Microsoft identity platform documentation for more information about what is considered a *public* versus *confidential* client application.
+
+### Public client applications
+
+For apps that run on devices or desktop computers or in a web browser, you should consider defining which domains have access to your Azure Maps account using [Cross origin resource sharing (CORS)](azure-maps-authentication.md#cross-origin-resource-sharing-cors). CORS instructs clients' browsers which origins, such as "https://microsoft.com", are allowed to request resources from the Azure Maps account.
+
+> [!NOTE]
+> If you're developing a web server or service, your Azure Maps account does not need to be configured with CORS. If you have JavaScript code in the client-side web application, CORS does apply.
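The origin check that CORS performs can be sketched in a few lines. This is a hedged illustration of the mechanism, not Azure Maps code: the allowed-origins set stands in for whatever you configure on the account.

```python
# Origins you'd configure as allowed on the account (placeholders).
ALLOWED_ORIGINS = {"https://contoso.com", "https://www.contoso.com"}

def cors_headers(request_origin: str) -> dict:
    # A browser sends the page's origin in the Origin header; the service
    # echoes it in Access-Control-Allow-Origin only if it's on the list.
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}  # no CORS header, so the browser blocks the response

print(cors_headers("https://contoso.com"))
print(cors_headers("https://evil.example"))
```

Note that CORS protects browser users from other sites, not the resource itself; a non-browser client ignores it, which is why CORS complements rather than replaces token-based authorization.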
+
+### Confidential client applications
+
+For apps that run on servers (such as web services and service/daemon apps), if you prefer to avoid the overhead and complexity of managing secrets, consider [Managed Identities](/azure/active-directory/managed-identities-azure-resources/overview). Managed identities can provide an identity for your web service to use when connecting to Azure Maps using Azure Active Directory (Azure AD) authentication. In this case, your web service will use that identity to obtain the required Azure AD tokens. You should use Azure RBAC to configure what access the web service is given, using the [Least privileged roles](/azure/active-directory/roles/delegate-by-task) possible.
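As a hedged sketch of the token flow a managed identity credential handles for you, the cache below refreshes a token shortly before expiry. The `get_token`/`expires_on` names mirror the common credential shape, but the fetcher here is a stand-in, not a real Azure AD call.

```python
import time

class TokenCache:
    """Minimal token cache: reuse a token until it nears expiry."""

    def __init__(self, fetch):
        self._fetch = fetch        # callable returning (token, expires_on_epoch_seconds)
        self._token = None
        self._expires_on = 0.0

    def get_token(self, skew_s: float = 300.0) -> str:
        # Refresh when the cached token is missing or within skew_s of expiry,
        # so callers never present a token that's about to lapse.
        if self._token is None or time.time() >= self._expires_on - skew_s:
            self._token, self._expires_on = self._fetch()
        return self._token
```

In practice you'd let the platform's credential types do this for you; the point is that with managed identities no secret ever appears in your configuration, only short-lived tokens.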
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication with Azure Maps](azure-maps-authentication.md)
+
+> [!div class="nextstepaction"]
+> [Manage authentication in Azure Maps](how-to-manage-authentication.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Add app authentication to your web app running on Azure App Service](../app-service/scenario-secure-app-authentication-app-service.md)
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
Here are some example scenarios where custom roles can improve application secur
| :- | : | | A public facing or interactive sign-in web page with base map tiles and no other REST APIs. | `Microsoft.Maps/accounts/services/render/read` | | An application, which only requires reverse geocoding and no other REST APIs. | `Microsoft.Maps/accounts/services/search/read` |
-| A role for a security principal, which requests reading of Azure Maps Creator based map data and base map tile REST APIs. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/render/read` |
+| A role for a security principal, which requires reading of Azure Maps Creator based map data and base map tile REST APIs. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/render/read` |
| A role for a security principal, which requires reading, writing, and deleting of Creator based map data. This can be defined as a map data editor role, but does not allow access to other REST APIs like base map tiles. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/data/write`, `Microsoft.Maps/accounts/services/data/delete` | ### Understand scope
The first table shows 1 token which has a maximum request per second of 500, and
| :- | :- | : | : | :- | | token | 500 | 500 | 60 | ~15000 |
-For example, if two SAS tokens are created in, and use the same location as an Azure Maps account, each token now shares the default rate limit of 250 QPS. If each token are used at the same time with the same throughput token 1 and token 2 would successfully grant 7500 successful transactions each.
+For example, if two SAS tokens are created and use the same location as an Azure Maps account, each token now shares the default rate limit of 250 QPS. If each token is used at the same time with the same throughput, token 1 and token 2 would each successfully grant 7500 transactions.
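The arithmetic behind those figures can be sketched as follows. This is a hedged illustration of the numbers in the paragraph above, not a statement of the service's exact throttling algorithm.

```python
default_limit_qps = 250   # default rate limit shared by the tokens
tokens = 2                # two SAS tokens issued against the same account
duration_s = 60           # sustained load for one minute

# With both tokens driving traffic simultaneously, each realizes roughly
# half the shared limit.
actual_qps_each = default_limit_qps // tokens          # 125 QPS per token
transactions_each = actual_qps_each * duration_s       # 7500 per token

print(actual_qps_each, transactions_each)
```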
| Name | Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Approximate total successful transactions | | : | :- | : | : | :- |
After the account has been successfully created or updated with the managed iden
Next, you'll need to create a SAS token using the Azure Management SDK tooling, List SAS operation on Account Management API, or the Azure portal Shared Access Signature page of the Map account resource.
-SAS token parameters :
+SAS token parameters:
| Parameter Name | Example Value | Description | | : | :-- | :- |
After the application receives a SAS token, the Azure Maps SDK and/or applicatio
| : | :- | | Authorization | jwt-sas eyJ0e….HNIVN |
-> [!NOTE]
+> [!NOTE]
> `jwt-sas` is the authentication scheme to denote using SAS token. Do not include `x-ms-client-id` or other Authorization headers or `subscription-key` query string parameter. ## Cross origin resource sharing (CORS)
See [Azure Maps pricing](https://azure.microsoft.com/pricing/details/azure-maps)
## Next steps
+To learn more about security best practices, see
+
+> [!div class="nextstepaction"]
+> [Authentication and authorization best practices](authentication-best-practices.md)
+ To learn more about authenticating an application with Azure AD and Azure Maps, see
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Manage authentication in Azure Maps](./how-to-manage-authentication.md) To learn more about authenticating the Azure Maps Map Control with Azure AD, see
-> [!div class="nextstepaction"]
-> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
+> [!div class="nextstepaction"]
+> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
The Azure Maps Web SDK provides a powerful canvas for rendering large spatial da
Generally, when looking to improve performance of the map, look for ways to reduce the number of layers and sources, and the complexity of the data sets and rendering styles being used.
-## Security basics
+## Security best practices
-The single most important part of your application is its security. If your application isn't secure a hacker can ruin any application, no matter how good the user experience might be. The following are some tips to keep your Azure Maps application secure. When using Azure, be sure to familiarize yourself with the security tools available to you. See this document for an [introduction to Azure security](../security/fundamentals/overview.md).
-
-> [!IMPORTANT]
-> Azure Maps provides two methods of authentication.
->
-> * Subscription key-based authentication
-> * Azure Active Directory authentication
->
-> Use Azure Active Directory in all production applications.
-> Subscription key-based authentication is simple and what most mapping platforms use as a light way method to measure your usage of the platform for billing purposes. However, this is not a secure form of authentication and should only be used locally when developing apps. Some platforms provide the ability to restrict which IP addresses and/or HTTP referrer is in requests, however, this information can easily be spoofed. If you do use subscription keys, be sure to [rotate them regularly](how-to-manage-authentication.md#manage-and-rotate-shared-keys).
-> Azure Active Directory is an enterprise identity service that has a large selection of security features and settings for all sorts of application scenarios. Microsoft recommends that all production applications using Azure Maps use Azure Active Directory for authentication.
-> Learn more about [managing authentication in Azure Maps](how-to-manage-authentication.md) in this document.
-
-### Secure your private data
-
-When data is added to the Azure Maps interactive map SDKs, it is rendered locally on the end user's device and is never sent back out to the internet for any reason.
-
-If your application is loading data that should not be publicly accessible, make sure that the data is stored in a secure location, is accessed in a secure manner, and that the application itself is locked down and only available to your desired users. If any of these steps are skipped, an unauthorized person has the potential to access this data. Azure Active Directory can assist you with locking this down.
-
-See this tutorial on [adding authentication to your web app running on Azure App Service](../app-service/scenario-secure-app-authentication-app-service.md)
+For security best practices, see [Authentication and authorization best practices](authentication-best-practices.md).
### Use the latest versions of Azure Maps
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Internet Information Services (IIS) logs counts of all request reaching IIS and
## No data from my server * I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.* * A firewall issue is most likely the cause. [Set firewall exceptions for Application Insights to send data](../../azure-monitor/app/ip-addresses.md).
-* IIS Server might be missing some prerequisites, like .NET Extensibility 4.5 or ASP.NET 4.5.
*I [installed Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) on my web server to monitor existing apps. I don't see any results.*
On February 5 2018, we announced that we removed logging of the Client IP addres
The city, region, and country dimensions are derived from IP addresses and aren't always accurate. These IP addresses are processed for location first and then changed to 0.0.0.0 to be stored. ## Exception "method not found" on running in Azure Cloud Services
-Did you build for .NET 4.6? 4.6 isn't automatically supported in Azure Cloud Services roles. [Install 4.6 on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
+Did you build for .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)? Earlier versions aren't automatically supported in Azure Cloud Services roles. [Install LTS on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
## Troubleshooting Logs
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azur
# [Windows](#tab/Windows) > [!IMPORTANT]
-> The following versions of ASP.NET Core are supported for auto-instrumentation on Windows: ASP.NET Core 3.1, 5.0 and 6.0. Versions 2.0, 2.1, 2.2, and 3.0 have been retired and are no longer supported. Please upgrade to a [supported version](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core for auto-instrumentation to work.
+> Only .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) is supported for auto-instrumentation on Windows.
> [!NOTE] > Auto-instrumentation used to be known as "codeless attach" before October 2021.
The table below provides a more detailed explanation of what these values mean,
|Problem Value |Explanation |Fix | |- |-|| | `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. It can be due to a reference to `Microsoft.ApplicationInsights.AspNetCore`, or `Microsoft.ApplicationInsights` | Remove the references. Some of these references are added by default from certain Visual Studio templates, and older versions of Visual Studio may add references to `Microsoft.ApplicationInsights`. |
-|`AppAlreadyInstrumented:true` | If the application is targeting ASP.NET Core 2.1 or 2.2, this value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off | Customers on .NET Core 2.1,2.2 are [recommended](https://github.com/aspnet/Announcements/issues/287) to use Microsoft.AspNetCore.App meta-package instead. In addition, turn on "Interop with Application Insights SDK" in portal (see the instructions above).|
|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of Microsoft.ApplicationsInsights dll in the app folder from a previous deployment. | Clean the app folder to ensure that these dlls are removed. Check both your local app's bin directory, and the wwwroot directory on the App Service. (To check the wwwroot directory of your App Service web app: Advanced Tools (Kudu) > Debug console > CMD > home\site\wwwroot). | |`IKeyExists:false`|This value indicates that the instrumentation key is not present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: The values may have been accidentally removed, forgot to set the values in automation script, etc. | Make sure the setting is present in the App Service application settings. |
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
The table below provides a more detailed explanation of what these values mean,
|Problem Value|Explanation|Fix |- |-|| | `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. It can be due to a reference to `System.Diagnostics.DiagnosticSource`, `Microsoft.AspNet.TelemetryCorrelation`, or `Microsoft.ApplicationInsights` | Remove the references. Some of these references are added by default from certain Visual Studio templates, and older versions of Visual Studio may add references to `Microsoft.ApplicationInsights`.
-|`AppAlreadyInstrumented:true` | If the application is targeting ASP.NET Core 2.1 or 2.2, this value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off | Customers on .NET Core 2.1,2.2 are [recommended](https://github.com/aspnet/Announcements/issues/287) to use Microsoft.AspNetCore.App meta-package instead. In addition, turn on "Interop with Application Insights SDK" in portal (see the instructions above).|
|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of the above dlls in the app folder from a previous deployment. | Clean the app folder to ensure that these dlls are removed. Check both your local app's bin directory, and the wwwroot directory on the App Service. (To check the wwwroot directory of your App Service web app: Advanced Tools (Kudu) > Debug console > CMD > home\site\wwwroot). |`AppContainsAspNetTelemetryCorrelationAssembly: true` | This value indicates that extension detected references to `Microsoft.AspNet.TelemetryCorrelation` in the application, and will back-off. | Remove the reference. |`AppContainsDiagnosticSourceAssembly**:true`|This value indicates that extension detected references to `System.Diagnostics.DiagnosticSource` in the application, and will back-off.| For ASP.NET remove the reference.
azure-monitor Cloudservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/cloudservices.md
If you have a client mobile app, use [App Center](../app/mobile-center-quickstar
[The example](https://github.com/MohanGsk/ApplicationInsights-Home/tree/master/Samples/AzureEmailService) monitors a service that has a web role and two worker roles. ## Exception "method not found" on running in Azure cloud services
-Did you build for .NET 4.6? .NET 4.6 is not automatically supported in Azure cloud services roles. [Install .NET 4.6 on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
+Did you build for .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)? Earlier versions aren't automatically supported in Azure cloud services roles. [Install .NET LTS on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
## Next steps

* [Configure sending Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md)
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
You need a subscription with [Microsoft Azure](https://azure.com). Sign in with a Microsoft account, which you might have for Windows, Xbox Live, or other Microsoft cloud services. Your team might have an organizational subscription to Azure: ask the owner to add you to it using your Microsoft account.

> [!NOTE]
-> It is *highly recommended* to use the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package and associated instructions from [here](./worker-service.md) for any Console Applications. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), and hence can be used in .NET Core 2.1 or higher, and .NET Framework 4.7.2 or higher.
+> It is *highly recommended* to use the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package and associated instructions from [here](./worker-service.md) for any Console Applications. This package is compatible with [Long Term Support (LTS) versions](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core and .NET Framework, and with later versions.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
We do not recommend explicitly setting your application to only use TLS 1.2 unle
| | | |
| Azure App Services | Supported, configuration may be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
| Azure Function Apps | Supported, configuration may be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
-|.NET | Supported, configuration varies by version. | For detailed configuration info for .NET 4.7 and earlier versions, refer to [these instructions](/dotnet/framework/network-programming/tls#support-for-tls-12). |
+|.NET | Supported, Long Term Support (LTS) | For detailed configuration information, refer to [these instructions](/dotnet/framework/network-programming/tls). |
|Status Monitor | Supported, configuration required | Status Monitor relies on [OS Configuration](/windows-server/security/tls/tls-registry-settings) + [.NET Configuration](/dotnet/framework/network-programming/tls#support-for-tls-12) to support TLS 1.2. |
|Node.js | Supported, in v10.5.0, configuration may be required. | Use the [official Node.js TLS/SSL documentation](https://nodejs.org/api/tls.html) for any application-specific configuration. |
|Java | Supported, JDK support for TLS 1.2 was added in [JDK 6 update 121](https://www.oracle.com/technetwork/java/javase/overview-156328.html#R160_121) and [JDK 7](https://www.oracle.com/technetwork/java/javase/7u131-relnotes-3338543.html). | JDK 8 uses [TLS 1.2 by default](https://blogs.oracle.com/java-platform-group/jdk-8-will-use-tls-12-as-default). |
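The platform rows above all reduce to the same requirement: the client stack must be allowed to negotiate TLS 1.2. As a language-neutral illustration (not part of the Application Insights SDKs), here is a minimal sketch using Python's standard `ssl` module to pin a client context to TLS 1.2 as the floor:

```python
import ssl

# Build a client-side TLS context that refuses to negotiate anything
# older than TLS 1.2, mirroring the per-platform guidance in the table.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any connection wrapped with this context will now fail the handshake
# against servers that only offer TLS 1.0/1.1.
print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```

The same idea applies on each platform in the table: configure the floor once at the context/registry/JVM level rather than per connection.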
azure-monitor Eventcounters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md
[`EventCounter`](/dotnet/core/diagnostics/event-counters) is .NET/.NET Core mechanism to publish and consume counters or statistics. EventCounters are supported in all OS platforms - Windows, Linux, and macOS. It can be thought of as a cross-platform equivalent for the [PerformanceCounters](/dotnet/api/system.diagnostics.performancecounter) that is only supported in Windows systems.
-While users can publish any custom `EventCounters` to meet their needs, .NET Core 3.0 and higher runtime publishes a set of these counters by default. This document will walk through the steps required to collect and view `EventCounters` (system defined or user defined) in Azure Application Insights.
+While users can publish any custom `EventCounters` to meet their needs, the .NET Core runtime ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) versions and higher) publishes a set of these counters by default. This document will walk through the steps required to collect and view `EventCounters` (system defined or user defined) in Azure Application Insights.
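To make the counter idea concrete, here is a minimal, hypothetical sketch of a rate-style counter (illustrative names only, not the .NET `EventCounter` API): it accumulates increments and reports a value per collection interval, which is essentially what the runtime's published counters do.

```python
import time

# Hypothetical sketch: accumulate increments, then report a rate per
# collection interval and reset, like an incrementing event counter.
class RateCounter:
    def __init__(self):
        self._count = 0
        self._start = time.monotonic()

    def increment(self, n=1):
        self._count += n

    def collect(self):
        # Report events/second since the last collection, then reset.
        elapsed = time.monotonic() - self._start
        rate = self._count / elapsed if elapsed > 0 else 0.0
        self._count, self._start = 0, time.monotonic()
        return rate

counter = RateCounter()
counter.increment(5)
rate = counter.collect()
print(rate >= 0.0)  # True
```

A collector (such as the Application Insights `EventCounterCollectionModule`) polls `collect()` on a timer and forwards each sample as a metric.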
## Using Application Insights to collect EventCounters
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
In summary `GetMetric()` is the recommended approach since it does pre-aggregati
## Getting started with GetMetric
-For our examples, we're going to use a basic .NET Core 3.1 worker service application. If you would like to replicate the test environment used with these examples, follow steps 1-6 of the [monitoring worker service article](./worker-service.md#net-core-30-worker-service-application). These steps will add Application Insights to a basic worker service project template and the concepts apply to any general application where the SDK can be used including web apps and console apps.
-
+For our examples, we're going to use a basic .NET Core 3.1 worker service application. If you would like to replicate the test environment used with these examples, follow steps 1-6 of the [monitoring worker service article](worker-service.md#net-core-lts-worker-service-application). These steps will add Application Insights to a basic worker service project template and the concepts apply to any general application where the SDK can be used including web apps and console apps.
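As a rough illustration of the pre-aggregation `GetMetric()` performs, the sketch below (hypothetical names, not the SDK's API) accumulates min/max/sum/count locally and emits one aggregate per flush instead of one telemetry item per measurement:

```python
# Hypothetical sketch of local pre-aggregation: track_value() is cheap,
# and only flush() produces a single aggregated telemetry item.
class MetricAggregator:
    def __init__(self):
        self.count = 0
        self.sum = 0.0
        self.min = None
        self.max = None

    def track_value(self, value):
        self.count += 1
        self.sum += value
        self.min = value if self.min is None else min(self.min, value)
        self.max = value if self.max is None else max(self.max, value)

    def flush(self):
        # Emit one aggregate for the interval, then reset.
        aggregate = {"count": self.count, "sum": self.sum,
                     "min": self.min, "max": self.max}
        self.__init__()
        return aggregate

m = MetricAggregator()
for v in (1.0, 2.0, 3.0):
    m.track_value(v)
print(m.flush())  # {'count': 3, 'sum': 6.0, 'min': 1.0, 'max': 3.0}
```

Sending one aggregate per interval instead of every raw value is what keeps `GetMetric()` cheap under high-frequency measurement, at the cost of losing per-item sampling.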
### Sending metrics

Replace the contents of your `worker.cs` file with the following code:
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Download the [applicationinsights-agent-3.2.11.jar](https://github.com/microsoft
> [!WARNING]
>
-> If you're upgrading from 3.0 Preview:
+> If you're upgrading from 3.2.x to 3.3.0-BETA:
+>
+> - Starting from 3.3.0-BETA, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
>
-> - Review all [configuration options](./java-standalone-config.md) carefully. The JSON structure has completely changed. The file name is now all lowercase.
+> If you're upgrading from 3.1.x:
+>
+> - Starting from 3.2.0, controller "InProc" dependencies are not captured by default. For details on how to enable this, please see the [config options](./java-standalone-config.md#autocollect-inproc-dependencies-preview).
+> - Database dependency names are now more concise with the full (sanitized) query still present in the `data` field. HTTP dependency names are now more descriptive.
+> This change can affect custom dashboards or alerts if they relied on the previous values.
+> For details, see the [3.2.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0).
>
> If you're upgrading from 3.0.x:
>
Download the [applicationinsights-agent-3.2.11.jar](https://github.com/microsoft
> This change can affect custom dashboards or alerts if they relied on the previous values.
> For details, see the [3.1.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.0).
>
-> If you're upgrading from 3.1.x:
-> - Starting from 3.2.0, controller "InProc" dependencies are not captured by default. For details on how to enable this, please see the [config options](./java-standalone-config.md#autocollect-inproc-dependencies-preview).
-> - Database dependency names are now more concise with the full (sanitized) query still present in the `data` field. HTTP dependency names are now more descriptive.
-> This change can affect custom dashboards or alerts if they relied on the previous values.
-> For details, see the [3.2.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0).
+ #### Point the JVM to the jar file
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
These are the valid `level` values that you can specify in the `applicationinsig
> If an exception object is passed to the logger, then the log message (and exception object details)
> will show up in the Azure portal under the `exceptions` table instead of the `traces` table.
+### LoggingLevel
+
+Starting from version 3.3.0-BETA, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field.
+
+If needed, you can re-enable the previous behavior:
+
+```json
+{
+ "preview": {
+ "captureLoggingLevelAsCustomDimension": true
+ }
+}
+```
+
+We will remove this configuration option in 4.0.0.
+
## Auto-collected Micrometer metrics (including Spring Boot Actuator metrics)

If your application uses [Micrometer](https://micrometer.io),
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data such as customer names in your filters.

> [!IMPORTANT]
-> Monitoring ASP.NET Core 3.X applications require Application Insights version 2.8.0 or above. To enable Application Insights ensure it is both activated in the Azure Portal and that the Application Insights NuGet package is included. Without the NuGet package some telemetry is sent to Application Insights but that telemetry will not show in the Live Metrics Stream.
+> Monitoring ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications requires Application Insights version 2.8.0 or above. To enable Application Insights, ensure it is both activated in the Azure portal and that the Application Insights NuGet package is included. Without the NuGet package, some telemetry is sent to Application Insights, but that telemetry will not show in the Live Metrics Stream.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
However, if you recognize and trust all the connected servers, you can try the c
| Language | Basic Metrics | Performance metrics | Custom filtering | Sample telemetry | CPU split by process |
|-|:--|:--|:--|:--|:--|
-| .NET Framework | Supported (V2.7.2+) | Supported (V2.7.2+) | Supported (V2.7.2+) | Supported (V2.7.2+) | Supported (V2.7.2+) |
-| .NET Core (target=.NET Framework)| Supported (V2.4.1+) | Supported (V2.4.1+) | Supported (V2.4.1+) | Supported (V2.4.1+) | Supported (V2.4.1+) |
-| .NET Core (target=.NET Core) | Supported (V2.4.1+) | Supported* | Supported (V2.4.1+) | Supported (V2.4.1+) | **Not Supported** |
+| .NET Framework | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) |
+| .NET Core (target=.NET Framework)| Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) |
+| .NET Core (target=.NET Core) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported* | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | Supported ([LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)) | **Not Supported** |
| Azure Functions v2 | Supported | Supported | Supported | Supported | **Not Supported** |
| Java | Supported (V2.0.0+) | Supported (V2.0.0+) | **Not Supported** | Supported (V3.2.0+) | **Not Supported** |
| Node.js | Supported (V1.3.0+) | Supported (V1.3.0+) | **Not Supported** | Supported (V1.3.0+) | **Not Supported** |
Basic metrics include request, dependency, and exception rate. Performance metri
\* PerfCounters support varies slightly across versions of .NET Core that do not target the .NET Framework:

- PerfCounters metrics are supported when running in Azure App Service for Windows. (AspNetCore SDK Version 2.4.1 or higher)
-- PerfCounters are supported when app is running in ANY Windows machines (VM or Cloud Service or On-prem etc.) (AspNetCore SDK Version 2.7.1 or higher), but for apps targeting .NET Core 2.0 or higher.
-- PerfCounters are supported when app is running ANYWHERE (Linux, Windows, app service for Linux, containers, etc.) in the latest versions (i.e. AspNetCore SDK Version 2.8.0 or higher), but only for apps targeting .NET Core 2.0 or higher.
+- PerfCounters are supported when app is running in ANY Windows machines (VM or Cloud Service or On-prem etc.) (AspNetCore SDK Version 2.7.1 or higher), but for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+- PerfCounters are supported when app is running ANYWHERE (Linux, Windows, app service for Linux, containers, etc.) in the latest versions, but only for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
## Troubleshooting
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-servicefabric.md
Application Insights Profiler is included with Azure Diagnostics. You can instal
To set up your environment, take the following actions:
-1. Profiler supports .NET Framework and .Net Core. If you're using .NET Framework, make sure you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or later. It's sufficient to confirm that the deployed OS is `Windows Server 2012 R2` or later. Profiler supports .NET Core 2.1 and newer applications.
+1. Profiler supports .NET Framework and .NET Core. If you're using .NET Framework, make sure you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or later. It's sufficient to confirm that the deployed OS is `Windows Server 2012 R2` or later. Profiler supports .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and newer applications.
1. Search for the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) extension in the deployment template file.
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-troubleshooting.md
Profiler writes trace messages and custom events to your Application Insights re
### Other things to check

* Make sure that your app is running on .NET Framework 4.6.
-* If your web app is an ASP.NET Core application, it must be running at least ASP.NET Core 2.0.
+* If your web app is an ASP.NET Core application, it must be running at least ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
* If the data you're trying to view is older than a couple of weeks, try limiting your time filter and try again. Traces are deleted after seven days.
* Make sure that proxies or a firewall haven't blocked access to https://gateway.azureserviceprofiler.net.
* Profiler isn't supported on free or shared app service plans. If you're using one of those plans, try scaling up to one of the basic plans and Profiler should start working.
azure-monitor Resource Manager Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-function-app.md
resource site 'Microsoft.Web/sites@2021-03-01' = {
}
{
  name: 'AzureWebJobsStorage'
- value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0]};EndpointSuffix=core.windows.net'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value};EndpointSuffix=core.windows.net'
}
]
alwaysOn: alwaysOn
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = {
## Next steps * [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about classic Application Insights resources](../app/create-new-resource.md).
+* [Learn more about classic Application Insights resources](../app/create-new-resource.md).
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-vm.md
If your application runs in Azure Service Fabric, Cloud Service, Virtual Machine
4. Snapshots are collected only on exceptions that are reported to Application Insights. In some cases (for example, older versions of the .NET platform), you might need to [configure exception collection](./asp-net-exceptions.md#exceptions) to see exceptions with snapshots in the portal.
-## Configure snapshot collection for applications using ASP.NET Core 2.0 or above
+## Configure snapshot collection for applications using ASP.NET Core LTS or above
1. [Enable Application Insights in your ASP.NET Core web app](./asp-net-core.md), if you haven't done it yet.
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger.md
Debug snapshots are stored for 15 days. This retention policy is set on a per-ap
## Enable Application Insights Snapshot Debugger for your application

Snapshot collection is available for:
-* .NET Framework and ASP.NET applications running .NET Framework 4.5 or later.
-* .NET Core and ASP.NET Core applications running .NET Core 2.1 (LTS) or 3.1 (LTS) on Windows.
-* .NET 5.0 applications on Windows.
+* .NET Framework and ASP.NET applications running .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or later.
+* .NET Core and ASP.NET Core applications running .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) on Windows.
+* .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications on Windows.
-We don't recommend using .NET Core 2.0, 2.2 or 3.0 since they're out of support.
+We don't recommend using .NET Core versions earlier than the LTS releases since they're out of support.
The following environments are supported:
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito
ms.devlang: csharp Previously updated : 05/11/2020 Last updated : 05/12/2022 # Application Insights for Worker Service applications (non-HTTP applications)
The new SDK doesn't do any telemetry collection by itself. Instead, it brings in
## Supported scenarios
-The [Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is best suited for non-HTTP applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. This package can be used in the newly introduced [.NET Core 3.0 Worker Service](https://devblogs.microsoft.com/aspnet/dotnet-core-workers-in-azure-container-instances), [background tasks in ASP.NET Core 2.1/2.2](/aspnet/core/fundamentals/host/hosted-services), Console apps (.NET Core/ .NET Framework), etc.
+The [Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is best suited for non-HTTP applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. This package can be used in the newly introduced [.NET Core Worker Service](https://devblogs.microsoft.com/aspnet/dotnet-core-workers-in-azure-container-instances), [background tasks in ASP.NET Core](/aspnet/core/fundamentals/host/hosted-services), Console apps (.NET Core/ .NET Framework), etc.
## Prerequisites
A valid Application Insights connection string. This string is required to send
Specific instructions for each type of application are described in the following sections.
-## .NET Core 3.0 worker service application
+## .NET Core LTS worker service application
Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService)
-1. Download and install [.NET Core 3.0](https://dotnet.microsoft.com/download/dotnet-core/3.0)
+1. Download and install .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
2. Create a new Worker Service project either by using Visual Studio new project template or command line `dotnet new worker` 3. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
Typically `APPLICATIONINSIGHTS_CONNECTION_STRING` specifies the connection strin
## ASP.NET Core background tasks with hosted services
-[This](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio) document describes how to create backgrounds tasks in ASP.NET Core 2.1/2.2 application.
+[This](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio) document describes how to create background tasks in an ASP.NET Core application.
Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService)
Following is the code for `TimedHostedService` where the background task logic r
```

3. Set up the connection string.
- Use the same `appsettings.json` from the .NET Core 3.0 Worker Service example above.
+ Use the same `appsettings.json` from the .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service example above.
## .NET Core/.NET Framework Console application
-As mentioned in the beginning of this article, the new package can be used to enable Application Insights Telemetry from even a regular console application. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), and hence can be used for console apps in .NET Core 2.0 or higher, and .NET Framework 4.7.2 or higher.
+As mentioned in the beginning of this article, the new package can be used to enable Application Insights Telemetry from even a regular console application. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), and hence can be used for console apps in .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher, and .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp)
Dependency collection is enabled by default. [This](asp-net-dependencies.md#auto
### EventCounter
-`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from .NET Core 3.0 apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on customizing the list.
+`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on customizing the list.
### Manually tracking other telemetry
Visual Studio IDE onboarding is currently supported only for ASP.NET/ASP.NET Cor
### Can I enable Application Insights monitoring by using tools like Azure Monitor Application Insights Agent (formerly Status Monitor v2)?
-No, [Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) currently supports ASP.NET 4.x only.
+No, [Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) currently supports .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) only.
### Are all features supported if I run my application in Linux?
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
Use this sample if you're using a Console Application written in either .NET Core (2.0 or higher) or .NET Framework (4.7.2 or higher)

[ASP.NET Core background tasks with HostedServices](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService)
-Use this sample if you are in ASP.NET Core 2.1/2.2, and creating background tasks as per official guidance [here](/aspnet/core/fundamentals/host/hosted-services)
+Use this sample if you're using ASP.NET Core and creating background tasks per the official guidance [here](/aspnet/core/fundamentals/host/hosted-services)
-[.NET Core 3.0 Worker Service](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService)
-Use this sample if you have a .NET Core 3.0 Worker Service application as per official guidance [here](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio#worker-service-template)
+[.NET Core Worker Service](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService)
+Use this sample if you have a .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service application as per official guidance [here](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio#worker-service-template)
## Open-source SDK
azure-monitor Tutorial Outages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/tutorial-outages.md
+
+ Title: Tutorial - Track a web app outage using Change Analysis
+description: Tutorial on how to identify the root cause of a web app outage using Azure Monitor Change Analysis.
+++
+ms.contributor: cawa
+ Last updated : 05/12/2022++++
+# Tutorial: Track a web app outage using Change Analysis
+
+When issues happen, one of the first things to check is what changed in your application, configuration, and resources, so you can triage and root-cause the issue. Change Analysis provides a centralized view of the changes in your subscriptions for up to the past 14 days, giving you the history of changes for troubleshooting issues.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Enable Change Analysis to track changes for Azure resources and for Azure Web App configurations
+> * Troubleshoot a Web App issue using Change Analysis
+
+## Prerequisites
+
+An Azure Web App with a Storage account dependency. Follow instructions at [ChangeAnalysis-webapp-storage-sample](https://github.com/Azure-Samples/changeanalysis-webapp-storage-sample) if you haven't already deployed one.
+
+## Enable Change Analysis
+
+In the Azure portal, navigate to the Change Analysis service home page.
+
+If this is your first time using Change Analysis service, the page may take up to a few minutes to register the `Microsoft.ChangeAnalysis` resource provider in your selected subscriptions.
++
+Once the Change Analysis page loads, you can see resource changes in your subscriptions. To view detailed web app in-guest change data:
+
+- Select **Enable now** from the banner, or
+- Select **Configure** from the top menu.
+
+In the web app in-guest enablement pane, select the web app you'd like to enable:
++
+Now Change Analysis is fully enabled to track both resources and web app in-guest changes.
+
+## Simulate a web app outage
+
+In a typical team environment, multiple developers can work on the same application without notifying the other developers. Simulate this scenario and make a change to the web app setting:
+
+```azurecli
+az webapp config appsettings set -g {resourcegroup_name} -n {webapp_name} --settings AzureStorageConnection=WRONG_CONNECTION_STRING
+```
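Conceptually, what Change Analysis will surface for this edit is a before/after diff of the app settings. The sketch below is purely illustrative (hypothetical values, not the Change Analysis API): it diffs two snapshots of the settings and reports the keys whose values changed.

```python
# Illustrative only: diff two hypothetical app-settings snapshots to find
# what changed, as a Change Analysis overview would surface it.
before = {"AzureStorageConnection": "old-connection-string",
          "WEBSITE_RUN_FROM_PACKAGE": "1"}
after = {"AzureStorageConnection": "WRONG_CONNECTION_STRING",
         "WEBSITE_RUN_FROM_PACKAGE": "1"}

changed = {key: (before.get(key), after.get(key))
           for key in before.keys() | after.keys()
           if before.get(key) != after.get(key)}
print(changed)  # {'AzureStorageConnection': ('old-connection-string', 'WRONG_CONNECTION_STRING')}
```

In the real service the old and new values of secrets are hidden on the overview page and only shown in the details view to callers with sufficient permission.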
+
+Visit the web app URL to view the following error:
++
+## Troubleshoot the outage using Change Analysis
+
+In the Azure portal, navigate to the Change Analysis overview page. Since you've triggered a web app outage, you'll see a change entry for `AzureStorageConnection`:
++
+Since the connection string is a secret value, we hide this on the overview page for security purposes. With sufficient permission to read the web app, you can select the change to view details around the old and new values:
++
+The change details blade also shows important information, including who made the change.
+
+Now that you've discovered the web app in-guest change and understand next steps, you can proceed with troubleshooting the issue.
+
+## Next steps
+
+Learn more about [Change Analysis](./change-analysis.md).
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|cpu_percent|Yes|CPU percentage|Percent|Average|CPU percentage|No Dimensions|
|cpu_used|Yes|CPU used|Count|Average|CPU used. Applies to vCore-based databases.|No Dimensions|
|deadlock|Yes|Deadlocks|Count|Total|Deadlocks. Not applicable to data warehouses.|No Dimensions|
-|delta_num_of_bytes_read|Yes|Remote data reads|Bytes|Total|Remote data reads in bytes|No Dimensions|
-|delta_num_of_bytes_total|Yes|Total remote bytes read and written|Bytes|Total|Total remote bytes read and written by compute|No Dimensions|
-|delta_num_of_bytes_written|Yes|Remote log writes|Bytes|Total|Remote log writes in bytes|No Dimensions|
|diff_backup_size_bytes|Yes|Differential backup storage size|Bytes|Maximum|Cumulative differential backup storage size. Applies to vCore-based databases. Not applicable to Hyperscale databases.|No Dimensions|
|dtu_consumption_percent|Yes|DTU percentage|Percent|Average|DTU Percentage. Applies to DTU-based databases.|No Dimensions|
|dtu_limit|Yes|DTU Limit|Count|Average|DTU Limit. Applies to DTU-based databases.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
- [Read about metrics in Azure Monitor](../data-platform.md)
- [Create alerts on metrics](../alerts/alerts-overview.md)
-- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
+- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer (formerly Azure Video Analyzer for Media) release notes
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer (formerly Azure Video Analyzer for Media). Previously updated : 04/27/2022 Last updated : 05/12/2022
In order to upload a video from a URL, change your code to send null
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null); ```
-## April 2022 release updates
+## May 2022 release updates
+
+### Line breaking in transcripts
+
+Improved line break logic to better split the transcript into sentences. New editing capabilities are now available through the Azure Video Indexer portal, such as adding a new line and editing the line's timestamp.
+
+### Azure Monitor integration
+
+Azure Video Indexer now supports Diagnostics settings for Audit events. Logs of Audit events can now be exported through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
+
+The additions enable easier access to analyze the data, monitor resource operation, and create automated flows that act on an event. For more information, see [Monitor Azure Video Indexer]().
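Diagnostics export is enabled per resource through a diagnostic setting. As a rough sketch (the resource IDs and names here are hypothetical placeholders, and the exact log category name should be verified in the portal), creating such a setting with the Azure CLI could look like:

```shell
# Hypothetical names; assumes you're signed in with the Azure CLI (az login)
# and that the Video Indexer account exposes an "Audit" log category.
az monitor diagnostic-settings create \
  --name "vi-audit-export" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.VideoIndexer/accounts/<account-name>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category": "Audit", "enabled": true}]'
```

The same setting can instead target a storage account (`--storage-account`) or an event hub, matching the destinations listed above.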
+
+### Video Insights improvements
+
+Optical character recognition (OCR) is improved by 60%. Face detection is improved by 20%. Label accuracy is improved by 30% over a wide variety of videos. These improvements are available immediately in all regions and don't require any changes by the customer.
+
+### Service tag
+
+Azure Video Indexer is now part of [Network Service Tags](network-security.md). Video Indexer often needs to access other Azure resources (for example, Storage). If you secure your inbound traffic to your resources with a Network Security Group, you can now select Video Indexer as one of the built-in Service Tags. This simplifies security management, because we populate the Service Tag with our public IPs.
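As an illustration (the rule name, NSG name, and resource group below are hypothetical, and the exact service tag name should be confirmed in the portal's Service Tag picker), allowing inbound traffic from the Video Indexer service tag with the Azure CLI might look like:

```shell
# Hypothetical resource names; assumes an existing NSG and an az login session.
az network nsg rule create \
  --resource-group "<resource-group>" \
  --nsg-name "<nsg-name>" \
  --name "AllowVideoIndexerInbound" \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes "VideoIndexer" \
  --destination-port-ranges 443
```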
+
+### Celebrity recognition toggle
+
+You can now enable or disable the celebrity recognition model at the account level (on classic accounts only). To turn the model on or off, go to the account settings and use the toggle. Once you disable the model, Video Indexer insights won't include the output of the celebrity model and won't run the celebrity model pipeline.
+
+### Azure Video Indexer repository name
+
+As of May 1st, the updated Azure Video Indexer widget repository was renamed. Use https://www.npmjs.com/package/@azure/video-indexer-widgets instead.
+
+## April 2022
### Renamed **Azure Video Analyzer for Media** back to **Azure Video Indexer**
You can now create an Azure Video Indexer paid account in the US North Central,
### New source languages support for speech-to-text (STT), translation and search
-Azure Video Indexer now support STT, translation and search in Danish ('da-DK'), Norwegian('nb-NO'), Swedish('sv-SE'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'), Arabic ('ar-BH', 'ar-EG', 'ar-IQ', 'ar-JO', 'ar-KW', 'ar-LB', 'ar-OM', 'ar-QA', 'ar-S', and 'ar-SY'), and Turkish('tr-TR'). Those languages are available in both API and Azure Video Indexer website.
+Azure Video Indexer now supports STT, translation and search in Danish ('da-DK'), Norwegian('nb-NO'), Swedish('sv-SE'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'), Arabic ('ar-BH', 'ar-EG', 'ar-IQ', 'ar-JO', 'ar-KW', 'ar-LB', 'ar-OM', 'ar-QA', 'ar-S', and 'ar-SY'), and Turkish('tr-TR'). Those languages are available in both API and Azure Video Indexer website.
### Search by Topic in Azure Video Indexer Website
You can now create an Azure Video Indexer paid account in the India Central region.
### New Dark Mode for the Azure Video Indexer website experience
-The Azure Video Indexer website experiences is now available in dark mode.
+The Azure Video Indexer website experience is now available in dark mode.
To enable the dark mode open the settings panel and toggle on the **Dark Mode** option. :::image type="content" source="./media/release-notes/dark-mode.png" alt-text="Dark mode setting":::
The Azure Video Indexer website experience is now supporting mobile devices. The
### Accessibility improvements and bug fixes
-As part of WCAG (Web Content Accessibility guidelines), the Azure Video Indexer website experiences is aligned with grade C, as part of Microsoft Accessibility standards. Several bugs and improvements related to keyboard navigation, programmatic access, and screen reader were solved.
+As part of WCAG (Web Content Accessibility guidelines), the Azure Video Indexer website experience is aligned with grade C, as part of Microsoft Accessibility standards. Several bugs and improvements related to keyboard navigation, programmatic access, and screen reader were solved.
## July 2020
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 05/11/2022 Last updated : 05/12/2022
Archive tier supports the following clients:
| Workloads | Preview | Generally available | | | | | | SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | None | All regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West. |
-| Azure Virtual Machines | US Gov Virginia, US Gov Texas, US Gov Arizona, UAE North, China North 2, China East 2. | All public regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West, UAE North. |
+| Azure Virtual Machines | US Gov Virginia, US Gov Texas, US Gov Arizona, China North 2, China East 2. | All public regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West, UAE North. |
## How does Azure Backup move recovery points to the Vault-archive tier?
backup Backup Support Matrix Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-mars-agent.md
The operating systems must be 64 bit and should be running the latest services p
**Operating system** | **Files/folders** | **System state** | **Software/Module requirements** | | | Windows 10 (Enterprise, Pro, Home) | Yes | No | Check the corresponding server version for software/module requirements
+Windows Server 2022 (Standard, Datacenter, Essentials) | Yes | Yes | Check the corresponding server version for software/module requirements
Windows 8.1 (Enterprise, Pro)| Yes |No | Check the corresponding server version for software/module requirements Windows 8 (Enterprise, Pro) | Yes | No | Check the corresponding server version for software/module requirements Windows Server 2016 (Standard, Datacenter, Essentials) | Yes | Yes | - .NET 4.5 <br> - Windows PowerShell <br> - Latest Compatible Microsoft VC++ Redistributable <br> - Microsoft Management Console (MMC) 3.0
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/20/2022 Last updated : 05/10/2022
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Known issues on Linux:
| Capability Name | Shutdown-1.0 | | Target type | Microsoft-VirtualMachine | | Supported OS Types | Windows, Linux |
-| Description | Shuts down a VM for the duration of the fault and optionally restarts the VM at the end of the fault duration or if the experiment is canceled. Only Azure Resource Manager VMs are supported. |
+| Description | Shuts down a VM for the duration of the fault and restarts the VM at the end of the fault duration or if the experiment is canceled. Only Azure Resource Manager VMs are supported. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:virtualMachine:shutdown/1.0 | | Parameters (key, value) | |
Known issues on Linux:
| Capability Name | Shutdown-1.0 | | Target type | Microsoft-VirtualMachineScaleSet | | Supported OS Types | Windows, Linux |
-| Description | Shuts down or kill a virtual machine scale set instance for the duration of the fault and optionally restarts the VM at the end of the fault duration or if the experiment is canceled. |
+| Description | Shuts down or kills a virtual machine scale set instance for the duration of the fault and restarts the VM at the end of the fault duration or if the experiment is canceled. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:virtualMachineScaleSet:shutdown/1.0 | | Parameters (key, value) | |
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 4/30/2022 Last updated : 5/11/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+>[!NOTE]
+>The May Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the May Guest OS. This list is subject to change.
+
+## May 2022 Guest OS
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-05 | [5013941] | Latest Cumulative Update (LCU) | 6.44 | May 10, 2022 |
+| Rel 22-05 | [5011486] | IE Cumulative Updates | 2.123, 3.110, 4.103 | Mar 8, 2022 |
+| Rel 22-05 | [5013944] | Latest Cumulative Update (LCU) | 7.12 | May 10, 2022 |
+| Rel 22-05 | [5013952] | Latest Cumulative Update (LCU) | 5.68 | May 10, 2022 |
+| Rel 22-05 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | 2.123 | May 10, 2022 |
+| Rel 22-05 | [5012141] | .NET Framework 4.5.2 Security and Quality Rollup | 2.123 | Apr 12, 2022 |
+| Rel 22-05 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | 4.103 | May 10, 2022 |
+| Rel 22-05 | [5012142] | .NET Framework 4.5.2 Security and Quality Rollup | 4.103 | Apr 12, 2022 |
+| Rel 22-05 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | 3.110 | May 10, 2022 |
+| Rel 22-05 | [5012140] | .NET Framework 4.5.2 Security and Quality Rollup | 3.110 | Apr 12, 2022 |
+| Rel 22-05 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.44 | May 10, 2022 |
+| Rel 22-05 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.12 | May 10, 2022 |
+| Rel 22-05 | [5014012] | Monthly Rollup | 2.123 | May 10, 2022 |
+| Rel 22-05 | [5014017] | Monthly Rollup | 3.110 | May 10, 2022 |
+| Rel 22-05 | [5014011] | Monthly Rollup | 4.103 | May 10, 2022 |
+| Rel 22-05 | [5014027] | Servicing Stack update | 3.110 | May 10, 2022 |
+| Rel 22-05 | [5014025] | Servicing Stack update | 4.103 | May 10, 2022 |
+| Rel 22-05 | [4578013] | Standalone Security Update | 4.103 | Aug 19, 2020 |
+| Rel 22-05 | [5014026] | Servicing Stack update | 5.68 | May 10, 2022 |
+| Rel 22-05 | [5011649] | Servicing Stack update | 2.123 | Mar 8, 2022 |
+| Rel 22-05 | [4494175] | Microcode | 5.68 | Sep 1, 2020 |
+| Rel 22-05 | [4494174] | Microcode | 6.44 | Sep 1, 2020 |
+
+[5013941]: https://support.microsoft.com/kb/5013941
+[5011486]: https://support.microsoft.com/kb/5011486
+[5013944]: https://support.microsoft.com/kb/5013944
+[5013952]: https://support.microsoft.com/kb/5013952
+[5013637]: https://support.microsoft.com/kb/5013637
+[5012141]: https://support.microsoft.com/kb/5012141
+[5013638]: https://support.microsoft.com/kb/5013638
+[5012142]: https://support.microsoft.com/kb/5012142
+[5013635]: https://support.microsoft.com/kb/5013635
+[5012140]: https://support.microsoft.com/kb/5012140
+[5013641]: https://support.microsoft.com/kb/5013641
+[5013630]: https://support.microsoft.com/kb/5013630
+[5014012]: https://support.microsoft.com/kb/5014012
+[5014017]: https://support.microsoft.com/kb/5014017
+[5014011]: https://support.microsoft.com/kb/5014011
+[5014027]: https://support.microsoft.com/kb/5014027
+[5014025]: https://support.microsoft.com/kb/5014025
+[4578013]: https://support.microsoft.com/kb/4578013
+[5014026]: https://support.microsoft.com/kb/5014026
+[5011649]: https://support.microsoft.com/kb/5011649
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
+ ## April 2022 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md
Batch transcription jobs are scheduled on a best-effort basis. You can't estimat
## Prerequisites
-As with all features of the Speech service, you create a subscription key from the [Azure portal](https://portal.azure.com) by following our [Get started guide](overview.md#try-the-speech-service-for-free).
+As with all features of the Speech service, you create a Speech resource from the [Azure portal](https://portal.azure.com).
>[!NOTE]
-> To use batch transcription, you need a standard subscription (S0) for Speech service. Free subscription keys (F0) don't work. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> To use batch transcription, you need a standard Speech resource (S0) in your subscription. Free resources (F0) aren't supported. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-If you plan to customize models, follow the steps in [Acoustic customization](./how-to-custom-speech-train-model.md) and [Language customization](./how-to-custom-speech-train-model.md). To use the created models in batch transcription, you need their model location. You can retrieve the model location when you inspect the details of the model (the `self` property). A deployed custom endpoint is *not needed* for the batch transcription service.
+If you plan to customize models, follow the steps in [Acoustic customization](./how-to-custom-speech-train-model.md) and [Language customization](./how-to-custom-speech-train-model.md). To use the created models in batch transcription, you need their model location. You can retrieve the model location when you inspect the details of the model (the `self` property). A deployed custom endpoint isn't needed for the batch transcription service.
>[!NOTE] > As a part of the REST API, batch transcription has a set of [quotas and limits](speech-services-quotas-and-limits.md#batch-transcription). It's a good idea to review these. To take full advantage of the ability to efficiently transcribe a large number of audio files, send multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The service transcribes the files concurrently, which reduces the turnaround time. For more information, see the [Configuration](#configuration) section of this article.
For full details about the preceding calls, see the [Speech-to-text REST API v3.
This sample uses an asynchronous setup to post audio and receive transcription status. The `PostTranscriptions` method sends the audio file details, and the `GetTranscriptions` method receives the states. `PostTranscriptions` returns a handle, and `GetTranscriptions` uses it to create a handle to get the transcription status.
-This sample code doesn't specify a custom model. The service uses the baseline model for transcribing the file or files. To specify the model, you can pass on the same method the model reference for the custom model.
+This sample code doesn't specify a custom model. The service uses the base model for transcribing the file or files. To specify a custom model, pass its model reference to the same method.
> [!NOTE]
-> For baseline transcriptions, you don't need to declare the ID for the baseline model.
+> For baseline transcriptions, you don't need to declare the ID for the base model.
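As a rough sketch of what declaring (or omitting) the model reference looks like at the REST level — the region, URLs, and display name below are illustrative placeholders following the v3 transcription schema, not the sample's actual code:

```python
import json

# Sketch only: region and content URL are hypothetical placeholders.
REGION = "eastus"
ENDPOINT = f"https://{REGION}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"

def build_transcription_request(content_urls, locale="en-US", model_self_url=None):
    """Build the JSON body for creating a transcription job.

    Omitting `model` makes the service transcribe with the base model; to use
    a custom model, pass its location (the model's `self` property).
    """
    body = {
        "contentUrls": content_urls,
        "locale": locale,
        "displayName": "Batch transcription sketch",
    }
    if model_self_url is not None:
        body["model"] = {"self": model_self_url}
    return body

# Base-model request: no model reference declared.
print(json.dumps(build_transcription_request(["https://example.com/audio.wav"])))
# A real client would POST this body to ENDPOINT with an
# "Ocp-Apim-Subscription-Key" header, then poll the returned job for status.
```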
## Next steps
cognitive-services Call Center Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-transcription.md
# Speech service for telephony data
-Telephony data that's generated through landlines, mobile phones, and radios is ordinarily of low quality. This data is also narrowband, in the range of 8&nbsp;KHz, which can create challenges when you're converting speech to text.
+Telephony data that's generated through landlines, mobile phones, and radios is ordinarily of low quality. This data is also narrowband, in the range of 8 kHz, which can create challenges when you're converting speech to text.
The latest Speech service speech-recognition models excel at transcribing this telephony data, even when the data is difficult for a human to understand. These models are trained with large volumes of telephony data, and they have best-in-market recognition accuracy, even in noisy environments.
Internally, Microsoft uses the previously mentioned technologies to analyze Micr
## About interactive voice responses
-You can easily integrate Speech service into any solution by using either the [Speech SDK](speech-sdk.md) or the [REST API](./overview.md#reference-docs). However, call center transcription might require additional technologies. Ordinarily, a connection between an IVR system and Azure is required. Although we don't offer such components, the next paragraph describes what a connection to an IVR entails.
+You can easily integrate Speech service into any solution by using either the [Speech SDK](speech-sdk.md) or the [REST API](https://signature.centralus.cts.speech.microsoft.com/UI/index.html). However, call center transcription might require additional technologies. Ordinarily, a connection between an IVR system and Azure is required. Although we don't offer such components, the next paragraph describes what a connection to an IVR entails.
Several IVR or telephony service products (such as Genesys or AudioCodes) offer integration capabilities that can be applied to enable an inbound and outbound audio pass-through to an Azure service. Basically, a custom Azure service might provide a specific interface to define phone call sessions (such as Call Start or Call End) and expose a WebSocket API to receive inbound stream audio that's used with Speech service. Outbound responses, such as a conversation transcription or connections with the Bot Framework, can be synthesized with the Microsoft text-to-speech service and returned to the IVR for playback.
Another scenario is direct integration with the Session Initiation Protocol (SIP
The Speech service works well with built-in models. However, you might want to further customize and tune the experience for your product or environment. Customization options range from acoustic model tuning to unique voice fonts for your brand. After you've built a custom model, you can use it with any of the Speech service features in real-time or batch mode.
-| Speech service | Model | Description |
+| Speech feature | Model | Description |
| -- | -- | -- | | Speech-to-text | [Acoustic model](./how-to-custom-speech-train-model.md) | Create a custom acoustic model for applications, tools, or devices that are used in particular environments, such as in a car or on a factory floor, each with its own recording conditions. Examples include accented speech, background noises, or using a specific microphone for recording. | | | [Language model](./how-to-custom-speech-train-model.md) | Create a custom language model to improve transcription of industry-specific vocabulary and grammar, such as medical terminology or IT jargon. |
Sample code is available on GitHub for each of the Speech service features. Thes
## Next steps > [!div class="nextstepaction"]
-> [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
+> [Learn about batch transcription](./batch-transcription.md)
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
zone_pivot_groups: programming-languages-speech-sdk-cli
# Captioning with speech to text
-In this guide, you learn how to create captions with speech to text. This guide covers captioning for speech, but doesn't include speaker ID or sound effects such as bells ringing. Concepts include how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios.
+In this guide, you learn how to create captions with speech to text. Concepts include how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios. This guide covers captioning for speech, but doesn't include speaker ID or sound effects such as bells ringing.
Here are some common captioning scenarios: - Online courses and instructional videos
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Previously updated : 01/23/2022 Last updated : 05/08/2022 # What is Custom Speech?
-With Custom Speech, you can evaluate and improve the Microsoft speech-to-text accuracy for your applications and products. Follow the links in this article to start creating a custom speech-to-text experience.
+With Custom Speech, you can evaluate and improve the Microsoft speech-to-text accuracy for your applications and products.
-## What's in Custom Speech?
+Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This base model is pre-trained with dialects and phonetics representing a variety of common domains. As a result, consuming the base model requires no additional configuration and works very well in most scenarios. When you make a speech recognition request, the current base model for each [supported language](language-support.md) is used by default.
-Before you can do anything with Custom Speech, you'll need an Azure account and a Speech service subscription. After you have an account, you can prep your data, train and test your models, inspect recognition quality, evaluate accuracy, and ultimately deploy and use the custom speech-to-text model.
+A custom model augments the base model to improve recognition of domain-specific vocabulary in your application, by providing text data to train the model. It can also improve recognition for the specific audio conditions of the application, by providing audio data with reference transcriptions.
-This diagram highlights the pieces that make up the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech).
+For more information, see [Choose a model for Custom Speech](how-to-custom-speech-choose-model.md).
-![Diagram that highlights the components that make up the Custom Speech area of the Speech Studio.](./media/custom-speech/custom-speech-overview.png)
-
-Here's more information about the sequence of steps that the diagram shows:
-
-1. [Subscribe and create a project](#set-up-your-azure-account). Create an Azure account and subscribe to the Speech service. This unified subscription gives you access to speech-to-text, text-to-speech, speech translation, and the [Speech Studio](https://speech.microsoft.com/customspeech). Then use your Speech service subscription to create your first Custom Speech project.
-
-1. [Upload test data](./how-to-custom-speech-test-and-train.md). Upload test data (audio files) to evaluate the Microsoft speech-to-text offering for your applications, tools, and products.
-
-1. [Inspect recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://speech.microsoft.com/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data. For quantitative measurements, see [Inspect data](how-to-custom-speech-inspect-data.md).
-
-1. [Evaluate and improve accuracy](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The [Speech Studio](https://speech.microsoft.com/customspeech) provides a *word error rate*, which you can use to determine if additional training is required. If you're satisfied with the accuracy, you can use the Speech service APIs directly. If you want to improve accuracy by a relative average of 5 through 20 percent, use the **Training** tab in the portal to upload additional training data, like human-labeled transcripts and related text.
-
-1. [Train and deploy a model](how-to-custom-speech-train-model.md). Improve the accuracy of your speech-to-text model by providing written transcripts (from 10 to 1,000 hours), and related text (<200 MB), along with your audio test data. This data helps to train the speech-to-text model. After training, retest. If you're satisfied with the result, you can deploy your model to a custom endpoint.
+## How does it work?
-## Set up your Azure account
+With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.
-You need to have an Azure account and Speech service subscription before you can use the [Speech Studio](https://speech.microsoft.com/customspeech) to create a custom model. If you don't have an account and subscription, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
-
-If you plan to train a custom model with audio data, pick one of the following regions that have dedicated hardware available for training. This reduces the time it takes to train a model and allows you to use more audio for training. In these regions, the Speech service will use up to 20 hours of audio for training; in other regions, it will only use up to 8 hours.
-
-* Australia East
-* Canada Central
-* Central India
-* East US
-* East US 2
-* North Central US
-* North Europe
-* South Central US
-* Southeast Asia
-* UK South
-* US Gov Arizona
-* US Gov Virginia
-* West Europe
-* West US 2
-
-After you create an Azure account and a Speech service subscription, sign in to the [Speech Studio](https://speech.microsoft.com/customspeech), and connect your subscription.
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select the subscription in which you need to work, and create a speech project.
-1. If you want to modify your subscription, select the cog icon in the top menu.
-
-## Create a project
-
-Content like data, models, tests, and endpoints are organized into *projects* in the [Speech Studio](https://speech.microsoft.com/customspeech). Each project is specific to a domain and country or language. For example, you might create a project for call centers that use English in the United States.
+![Diagram that highlights the components that make up the Custom Speech area of the Speech Studio.](./media/custom-speech/custom-speech-overview.png)
-To create your first project, select **Speech-to-text/Custom speech**, and then select **New Project**. Follow the instructions provided by the wizard to create your project. After you create a project, you should see four tabs: **Data**, **Testing**, **Training**, and **Deployment**. Use the links provided in [Next steps](#next-steps) to learn how to use each tab.
+Here's more information about the sequence of steps shown in the previous diagram:
-## Model and endpoint lifecycle
+1. [Choose a model](how-to-custom-speech-choose-model.md) and create a Custom Speech project. Use a <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal.
+1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the Microsoft speech-to-text offering for your applications, tools, and products.
+1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://speech.microsoft.com/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
+1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required.
+1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
+1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint.
-Older models typically become less useful over time because the newest model usually has higher accuracy. Therefore, base models as well as custom models and endpoints created through the portal are subject to expiration after one year for adaptation, and two years for decoding. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+If you plan to train a custom model with audio data, choose a Speech resource [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware for training audio data. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After a model is trained, you can copy it to a Speech resource that's in another region as needed for deployment.
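The throughput figures above translate into a rough turnaround estimate. The following sketch is illustrative arithmetic only; the function name and rounding behavior are assumptions, not part of any Speech service API.

```python
import math

def estimate_processing_days(audio_hours: float, dedicated_hardware: bool) -> int:
    """Rough days to process audio training data, per the figures above.

    Illustrative assumption: dedicated-hardware regions use up to 20 hours of
    audio at ~10 hours/day; other regions use up to 8 hours at ~1 hour/day.
    """
    max_hours = 20 if dedicated_hardware else 8       # usable audio cap
    hours_per_day = 10 if dedicated_hardware else 1   # approximate throughput
    usable_hours = min(audio_hours, max_hours)
    return math.ceil(usable_hours / hours_per_day)

# Dedicated hardware: 20 hours of audio processes in about 2 days.
print(estimate_processing_days(20, dedicated_hardware=True))   # 2
# Elsewhere: only 8 of the 20 hours are used, at about 1 hour per day.
print(estimate_processing_days(20, dedicated_hardware=False))  # 8
```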
## Next steps
-* [Prepare and test your data](./how-to-custom-speech-test-and-train.md)
-* [Evaluate and improve model accuracy](how-to-custom-speech-evaluate-data.md)
-* [Train and deploy a model](how-to-custom-speech-train-model.md)
+* [Choose a model](how-to-custom-speech-choose-model.md)
+* [Upload test data](./how-to-custom-speech-upload-data.md)
+* [Train a model](how-to-custom-speech-train-model.md)
cognitive-services Customize Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/customize-pronunciation.md
Title: Create phonetic pronunciation data
+ Title: Structured text phonetic pronunciation data
description: Use phonemes to customize pronunciation of words in Speech-to-Text. Previously updated : 03/01/2022 Last updated : 05/08/2022
-# Create phonetic pronunciation data
+# Structured text phonetic pronunciation data
-Custom speech allows you to provide different pronunciations for specific words using the Universal Phone Set. The Universal Phone Set (UPS) is a machine-readable phone set that is based on the International Phonetic Set Alphabet (IPA). The IPA is used by linguists world-wide and is accepted as a standard.
+You can specify the phonetic pronunciation of words using the Universal Phone Set (UPS) in a [structured text data](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) file. The UPS is a machine-readable phone set that is based on the International Phonetic Alphabet (IPA). The IPA is a standard used by linguists worldwide.
UPS pronunciations consist of a string of UPS phones, each separated by whitespace. The phone set is case-sensitive. UPS phone labels are all defined using ASCII character strings.
-For steps on implementing UPS, see [Structured text data for training phone sets](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview). Structured text data is not the same as [pronunciation files](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training), and they cannot be used together.
+For steps on implementing UPS, see [Structured text phonetic pronunciation](how-to-custom-speech-test-and-train.md#structured-text-data-for-training).
+
+[Structured text phonetic pronunciation data](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) is specified per syllable in a markdown file. Separately, [pronunciation data](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training) is input on its own as "sounds-like" or spoken-form data, and trains the model what the spoken form sounds like. You can either use a pronunciation data file on its own, or you can add pronunciation within a structured text data file. The Speech service doesn't support training a model with both of those datasets as input.
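For illustration, a structured text data file is a Markdown (.md) file whose phonetic entries pair a word with a whitespace-separated string of UPS phones. The section marker, words, and phone strings below are assumptions for illustration only; check the structured text data reference for the exact syntax and the valid UPS phone labels for your locale.

```markdown
// Illustrative sketch of phonetic entries in a structured text data (.md) file.
@ speech:phoneticlexicon
- benzalkonium/b eh n z ae l k ow n iy ax m
- albuterol/ae l b y uw t er aa l
```

Each entry maps the written form (before the slash) to its pronunciation as case-sensitive UPS phones separated by whitespace.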
See the sections in this article for the Universal Phone Set for each locale.
## Next steps - [Upload your data](how-to-custom-speech-upload-data.md)-- [Inspect your data](how-to-custom-speech-inspect-data.md)
+- [Test recognition quality](how-to-custom-speech-inspect-data.md)
- [Train your model](how-to-custom-speech-train-model.md)
cognitive-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/direct-line-speech.md
Direct Line Speech offers the highest levels of customization and sophistication
## Getting started with Direct Line Speech
-The first steps for creating a voice assistant using Direct Line Speech are to [get a speech subscription key](overview.md#try-the-speech-service-for-free), create a new bot associated with that subscription, and connect the bot to the Direct Line Speech channel.
+To create a voice assistant using Direct Line Speech, create a Speech resource and Azure Bot resource in the [Azure portal](https://portal.azure.com). Then [connect the bot](/azure/bot-service/bot-service-channel-connect-directlinespeech) to the Direct Line Speech channel.
![Conceptual diagram of the Direct Line Speech orchestration service flow](media/voice-assistants/overview-directlinespeech.png "The Speech Channel flow")
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
The next sections cover how to create an Azure account and get a Speech resource
### Step 1: Create an Azure account
-To work with Audio Content Creation, you need a [Microsoft account](https://account.microsoft.com/account) and an [Azure account](https://azure.microsoft.com/free/ai/). To set up the accounts, see the "Try the Speech service for free" section in [What is the Speech service?](./overview.md#try-the-speech-service-for-free).
+To work with Audio Content Creation, you need a [Microsoft account](https://account.microsoft.com/account) and an [Azure account](https://azure.microsoft.com/free/ai/).
[The Azure portal](https://portal.azure.com/) is the centralized place for you to manage your Azure account. You can create the Speech resource, manage the product access, and monitor everything from simple web apps to complex cloud deployments. ### Step 2: Create a Speech resource
-After you sign up for the Azure account, you need to create a Speech resource in your Azure account to access Speech services. For instructions, see [how to create a Speech resource](./overview.md#create-the-azure-resource).
+After you sign up for the Azure account, you need to create a Speech resource in your Azure account to access Speech services. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
It takes a few moments to deploy your new Speech resource. After the deployment is complete, you can start using the Audio Content Creation tool.
You can get your content into the Audio Content Creation tool in either of two w
| File name | Each file must have a unique name. Duplicate files aren't supported. |
| Text length | Character limit is 20,000. If your files exceed the limit, split them according to the instructions in the tool. |
| SSML restrictions | Each SSML file can contain only a single piece of SSML. |
- | | |
+
\* **Plain text example**:
After you've reviewed your audio output and are satisfied with your tuning and a
| | | | | |
| wav | riff-8khz-16bit-mono-pcm | riff-16khz-16bit-mono-pcm | riff-24khz-16bit-mono-pcm | riff-48khz-16bit-mono-pcm |
| mp3 | N/A | audio-16khz-128kbitrate-mono-mp3 | audio-24khz-160kbitrate-mono-mp3 | audio-48khz-192kbitrate-mono-mp3 |
- | | |
+
1. To view the status of the task, select the **Export task** tab.
If you want to allow a user to grant access to other users, you need to assign t
1. Search for the user's Microsoft account, go to their detail page, and then select **Assigned roles**. 1. Select **Add assignments** > **Directory Readers**. If the **Add assignments** button is unavailable, it means that you don't have access. Only the global administrator of this directory can add assignments to users.
-## See also
+## Next steps
* [Long Audio API](./long-audio-api.md)
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Speech Studio](https://speech.microsoft.com)
cognitive-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-azure-ad-auth.md
This article shows how to use Azure AD authentication with the Speech SDK. You'l
> - Create the appropriate SDK configuration object. ## Create a Speech resource
-To create a Speech resource, see [Try Speech For Free](overview.md#try-the-speech-service-for-free).
+To create a Speech resource in the [Azure portal](https://portal.azure.com), see [Get the keys for your resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md#get-the-keys-for-your-resource).
## Configure the Speech resource for Azure AD authentication
cognitive-services How To Control Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-control-connections.md
ms.devlang: cpp, csharp, java
-# How to: Monitor and control service connections with the Speech SDK
+# How to monitor and control service connections with the Speech SDK
`SpeechRecognizer` and other objects in the Speech SDK automatically connect to the Speech Service when it's appropriate. Sometimes, you may either want additional control over when connections begin and end or want more information about when the Speech SDK establishes or loses its connection. The supporting `Connection` class provides this capability.
cognitive-services How To Custom Commands Send Activity To Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-send-activity-to-client.md
You complete the following tasks:
## Prerequisites > [!div class = "checklist"] > * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide uses Visual Studio 2019
-> * An Azure subscription key for Speech service:
-[Get one for free](overview.md#try-the-speech-service-for-free) or create it on the [Azure portal](https://portal.azure.com)
+> * An Azure Cognitive Services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
> * A previously [created Custom Commands app](quickstart-custom-commands-application.md) > * A Speech SDK enabled client app: [How-to: Integrate with a client application using Speech SDK](./how-to-custom-commands-setup-speech-sdk.md)
cognitive-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md
A Custom Commands application is required to complete this article. If you haven
You'll also need: > [!div class = "checklist"] > * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide is based on Visual Studio 2019.
-> * An Azure subscription key for Speech Services. [Get one for free](overview.md#try-the-speech-service-for-free) or create it on the [Azure portal](https://portal.azure.com)
+> * An Azure Cognitive Services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
> * [Enable your device for development](/windows/uwp/get-started/enable-your-device-for-development) ## Step 1: Publish Custom Commands application
cognitive-services How To Custom Commands Setup Web Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-web-endpoints.md
In this article, you will learn how to setup web endpoints in a Custom Commands
> [!div class = "checklist"] > * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/)
-> * An Azure subscription key for Speech service:
-[Get one for free](overview.md#try-the-speech-service-for-free) or create it on the [Azure portal](https://portal.azure.com)
+> * An Azure Cognitive Services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
> * A Custom Commands app (see [Create a voice assistant using Custom Commands](quickstart-custom-commands-application.md)) > * A Speech SDK enabled client app (see [Integrate with a client application using Speech SDK](how-to-custom-commands-setup-speech-sdk.md))
cognitive-services How To Custom Speech Choose Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-choose-model.md
+
+ Title: Choose a model for Custom Speech - Speech service
+
+description: Learn about how to choose a model for Custom Speech.
+ Last updated : 05/08/2022
+# Choose a model for Custom Speech
+
+Custom Speech models are created by customizing a base model with data from your particular customer scenario. Once you create a custom model, the speech recognition accuracy and quality will remain consistent, even when a new base model is released.
+
+Base models are updated periodically to improve accuracy and quality. If you use base models, we recommend using the latest default base model. With Custom Speech, you can also take a snapshot of a particular base model without training it. In this case, "custom" means that speech recognition is pinned to a base model from a particular point in time.
+
+We recommend that you choose the latest base model when creating your custom model. If a required customization capability is only available with an older model, then you can choose an older base model.
+
+> [!NOTE]
+> The name of the base model corresponds to the date when it was released, in YYYYMMDD format. The customization capabilities of the base model are listed in parentheses after the model name in Speech Studio.
+
+A model deployed to an endpoint using Custom Speech is fixed until you decide to update it. You can also choose to deploy a base model without training, which means that base model is fixed. This allows you to lock in the behavior of a specific model until you decide to use a newer model.
+
+Whether you train your own model or use a snapshot of a base model, you can use the model for a limited time. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+## Choose your model
+
+There are a few approaches to using speech-to-text models:
+- The base model provides accurate speech recognition out of the box for a range of [scenarios](overview.md#speech-scenarios).
+- A custom model augments the base model to include domain-specific vocabulary shared across all areas of the custom domain.
+- Multiple custom models can be used when the custom domain has multiple areas, each with a specific vocabulary.
+
+One recommended way to see if the base model will suffice is to analyze the transcription produced from the base model and compare it with a human-generated transcript for the same audio. You can use the Speech Studio, Speech CLI, or REST API to compare the transcripts and obtain a [word error rate (WER)](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) score. If there are multiple incorrect word substitutions when evaluating the results, then training a custom model to recognize those words is recommended.
+
+Multiple models are recommended if the vocabulary varies across the domain areas. For instance, Olympic commentators report on various events, each associated with its own vernacular. Because each Olympic event vocabulary differs significantly from others, building a custom model specific to an event increases accuracy by limiting the utterance data relative to that particular event. As a result, the model doesn't need to sift through unrelated data to make a match. Regardless, training still requires a decent variety of training data. Include audio from various commentators who have different accents, genders, and ages.
+
+## Create a Custom Speech project
+
+Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a country or language. For example, you might create a project for English in the United States.
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select the subscription and Speech resource to work with.
+1. Select **Custom speech** > **Create a new project**.
+1. Follow the instructions provided by the wizard to create your project.
+
+Select the new project by name or select **Go to project**. You will see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
+
+If you want to use a base model right away, you can skip the training and testing steps. See [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) to start using a base or custom model.
+
+## Next steps
+
+* [Training and testing datasets](./how-to-custom-speech-test-and-train.md)
+* [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+* [Train a model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-continuous-integration-continuous-deployment.md
Previously updated : 06/09/2020 Last updated : 05/08/2022
Along the way, the workflows should name and store data, tests, test files, mode
### CI workflow for testing data updates
-The principal purpose of the CI/CD workflows is to build a new model using the training data, and to test that model using the testing data to establish whether the [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-custom-speech-accuracy) (WER) has improved compared to the previous best-performing model (the "benchmark model"). If the new model performs better, it becomes the new benchmark model against which future models are compared.
+The principal purpose of the CI/CD workflows is to build a new model using the training data, and to test that model using the testing data to establish whether the [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) (WER) has improved compared to the previous best-performing model (the "benchmark model"). If the new model performs better, it becomes the new benchmark model against which future models are compared.
The CI workflow for testing data updates should retest the current benchmark model with the updated test data to calculate the revised WER. This ensures that when the WER of a new model is compared to the WER of the benchmark, both models have been tested against the same test data and you're comparing like with like.
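The benchmark-promotion logic described above can be sketched as follows. This is a minimal illustration of the decision, not part of any Speech DevOps tooling; the function and variable names, and the sample WER values, are assumptions.

```python
def should_promote(new_model_wer: float, benchmark_wer: float) -> bool:
    """True if the new model beats the current benchmark (lower WER is better)."""
    return new_model_wer < benchmark_wer

# Both WERs must come from tests against the SAME (updated) test data,
# so the comparison is like with like. Values below are made up.
benchmark_wer = 12.4   # benchmark model, retested on the updated test data
new_model_wer = 10.9   # candidate model trained on the updated training data

if should_promote(new_model_wer, benchmark_wer):
    benchmark_wer = new_model_wer  # the new model becomes the benchmark
```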
The [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Servic
- Copy the template repository to your GitHub account, then create Azure resources and a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the GitHub Actions CI/CD workflows. - Walk through the "[dev inner loop](/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow)." Update training and testing data from a feature branch, test the changes with a temporary development model, and raise a pull request to propose and review the changes. - When training data is updated in a pull request to *main*, train models with the GitHub Actions CI workflow.-- Perform automated accuracy testing to establish a model's [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-custom-speech-accuracy) (WER). Store the test results in Azure Blob.
+- Perform automated accuracy testing to establish a model's [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) (WER). Store the test results in Azure Blob.
- Execute the CD workflow to create an endpoint when the WER improves. ## Next steps
-Learn more about DevOps with Speech:
- - Use the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) to implement DevOps for Custom Speech with GitHub Actions.
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
+
+ Title: Deploy a Custom Speech model - Speech service
+
+description: Learn how to deploy Custom Speech models.
+ Last updated : 05/08/2022
+# Deploy a Custom Speech model
+
+In this article, you'll learn how to deploy an endpoint for a Custom Speech model. You must deploy a custom endpoint to use a Custom Speech model.
+
+> [!NOTE]
+> You can deploy an endpoint to use a base model without training or testing. For example, you might want to quickly create a custom endpoint to start testing your application. The endpoint can be [updated](#change-model-and-redeploy-endpoint) later to use a trained and tested model.
+
+## Add a deployment endpoint
+
+To create a custom endpoint, follow these steps:
+
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+
+ If this is your first endpoint, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
+
+1. Select **Deploy model** to start the new endpoint wizard.
+
+1. On the **New endpoint** page, enter a name and description for your custom endpoint.
+
+1. Select the custom model that you want to associate with the endpoint.
+
+1. Optionally, you can check the box to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic.
+
+ :::image type="content" source="./media/custom-speech/custom-speech-deploy-model.png" alt-text="Screenshot of the New endpoint page that shows the checkbox to enable logging.":::
+
+1. Select **Add** to save and deploy the endpoint.
+
+ > [!NOTE]
+ > Endpoints used by `F0` Speech resources are deleted after seven days.
+
+On the main **Deploy models** page, details about the new endpoint are displayed in a table, such as name, description, status, and expiration date. It can take up to 30 minutes to instantiate a new endpoint that uses your custom models. When the status of the deployment changes to **Succeeded**, the endpoint is ready to use.
+
+Note the model expiration date, and update the endpoint's model before that date to ensure uninterrupted service. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+Select the endpoint link to view information specific to it, such as the endpoint key, endpoint URL, and sample code.
+
+## Change model and redeploy endpoint
+
+An endpoint can be updated to use another model that was created by the same Speech resource. As previously mentioned, you must update the endpoint's model before the [model expires](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+To use a new model and redeploy the custom endpoint:
+
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select the link to an endpoint by name, and then select **Change model**.
+1. Select the new model that you want the endpoint to use.
+1. Select **Done** to save and redeploy the endpoint.
+
+The redeployment takes several minutes to complete. In the meantime, your endpoint will use the previous model without interruption of service.
+
+## View logging data
+
+Logging data is available for export if you configured it while creating the endpoint.
+
+To download the endpoint logs:
+
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select the link by endpoint name.
+1. Under **Content logging**, select **Download log**.
+
+Logging data is available on Microsoft-owned storage for 30 days, after which it will be removed. If your own storage account is linked to the Cognitive Services subscription, the logging data won't be automatically deleted.
+
+## Next steps
+
+- [CI/CD for Custom Speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
+- [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md)
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
Title: "Evaluate and improve Custom Speech accuracy - Speech service"
+ Title: Test accuracy of a Custom Speech model - Speech service
-description: "In this article, you learn how to quantitatively measure and improve the quality of our speech-to-text model or your custom model."
+description: In this article, you learn how to quantitatively measure and improve the quality of our speech-to-text model or your custom model.
Previously updated : 01/23/2022 Last updated : 05/08/2022
-# Evaluate and improve Custom Speech accuracy
+# Test accuracy of a Custom Speech model
-In this article, you learn how to quantitatively measure and improve the accuracy of the Microsoft speech-to-text model or your own custom models. Audio + human-labeled transcription data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided.
+In this article, you learn how to quantitatively measure and improve the accuracy of the Microsoft speech-to-text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided.
-## Evaluate Custom Speech accuracy
+## Create a test
+
+You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy with that of a Microsoft speech-to-text base model or another custom model.
+
+Follow these steps to create a test:
+
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select **Create new test**.
+1. Select **Evaluate accuracy** > **Next**.
+1. Select one audio + human-labeled transcription dataset, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
+
+ > [!NOTE]
+ > It's important to select an acoustic dataset that's different from the one you used with your model. This approach can provide a more realistic sense of the model's performance.
+
+1. Select up to two models to evaluate, and then select **Next**.
+1. Enter the test name and description, and then select **Next**.
+1. Review the test details, and then select **Save and close**.
+
+After your test has been successfully created, you can compare the [word error rate (WER)](#evaluate-word-error-rate) and recognition results side by side.
+
+## Side-by-side comparison
+
+After the test is complete, as indicated by the status change to *Succeeded*, you'll find a WER number for both models included in your test. Select the test name to view the test details page. This page lists all the utterances in your dataset and the recognition results of the two models, alongside the transcription from the submitted dataset.
+
+To inspect the side-by-side comparison, you can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, which display the human-labeled transcription and the results for two speech-to-text models, you can decide which model meets your needs and determine where additional training and improvements are required.
+
+## Evaluate word error rate
The industry standard for measuring model accuracy is [word error rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). WER counts the number of incorrect words identified during recognition, divides the sum by the total number of words provided in the human-labeled transcript (shown in the following formula as N), and then multiplies that quotient by 100 to calculate the error rate as a percentage.
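To make the formula concrete, here is a minimal Python sketch of a word-level edit-distance WER computation. This is an illustration, not the Speech service's own scorer (which also applies text normalization before comparison); the sample sentences are hypothetical.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER = (substitutions + deletions + insertions) / N * 100,
    where N is the number of words in the human-labeled reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (Levenshtein on words)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref) * 100

print(word_error_rate("the quick brown fox", "the quick brown dog"))  # 25.0
```

One substitution among four reference words yields a 25% WER, matching the formula above.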
How the errors are distributed is important. When many deletion errors are encou
By analyzing individual files, you can determine what type of errors exist, and which errors are unique to a specific file. Understanding issues at the file level will help you target improvements.
-## Create a test
-
-If you want to test the quality of the Microsoft speech-to-text baseline model or a custom model that you've trained, you can compare two models side by side. The comparison includes WER and recognition results. A custom model is ordinarily compared with the Microsoft baseline model.
-
-To evaluate models side by side, do the following:
-
-1. Sign in to the [Custom Speech portal](https://speech.microsoft.com/customspeech).
-
-1. Select **Speech-to-text** > **Custom Speech** > **\<name of project>** > **Testing**.
-1. Select **Add Test**.
-1. Select **Evaluate accuracy**. Give the test a name and description, and then select your audio + human-labeled transcription dataset.
-1. Select up to two models that you want to test.
-1. Select **Create**.
-
-After your test has been successfully created, you can compare the results side by side.
-
-### Side-by-side comparison
-
-After the test is complete, as indicated by the status change to *Succeeded*, you'll find a WER number for both models included in your test. Select the test name to view the test details page. This page lists all the utterances in your dataset and the recognition results of the two models, alongside the transcription from the submitted dataset.
-
-To inspect the side-by-side comparison, you can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, which display the human-labeled transcription and the results for two speech-to-text models, you can decide which model meets your needs and determine where additional training and improvements are required.
-
-## Improve Custom Speech accuracy
+## Example scenario outcomes
Speech recognition scenarios vary by audio quality and language (vocabulary and speaking style). The following table examines four common scenarios:
Speech recognition scenarios vary by audio quality and language (vocabulary and
| Voice assistant, such as Cortana, or a drive-through window | High, 16&nbsp;kHz | Entity-heavy (song titles, products, locations) | Clearly stated words and phrases |
| Dictation (instant message, notes, search) | High, 16&nbsp;kHz | Varied | Note-taking |
| Video closed captioning | Varied, including varied microphone use, added music | Varied, from meetings, recited speech, musical lyrics | Read, prepared, or loosely structured |
-| | |
-Different scenarios produce different quality outcomes. The following table examines how content from these four scenarios rates in the [WER](how-to-custom-speech-evaluate-data.md). The table shows which error types are most common in each scenario.
+Different scenarios produce different quality outcomes. The following table examines how content from these four scenarios rates in the [WER](how-to-custom-speech-evaluate-data.md). The table shows which error types are most common in each scenario. The insertion, substitution, and deletion error rates help you determine what kind of data to add to improve the model.
| Scenario | Speech recognition quality | Insertion errors | Deletion errors | Substitution errors |
| | | | | |
Different scenarios produce different quality outcomes. The following table exam
| Voice assistant | High<br>(can be <&nbsp;10%&nbsp;WER) | Low | Low | Medium, due to song titles, product names, or locations |
| Dictation | High<br>(can be <&nbsp;10%&nbsp;WER) | Low | Low | High |
| Video closed captioning | Depends on video type (can be <&nbsp;50%&nbsp;WER) | Low | Can be high because of music, noises, microphone quality | Jargon might cause these errors |
-| | |
-
-Determining the components of the WER (number of insertion, deletion, and substitution errors) helps determine what kind of data to add to improve the model. Use the [Custom Speech portal](https://speech.microsoft.com/customspeech) to view the quality of a baseline model. The portal reports insertion, substitution, and deletion error rates that are combined in the WER quality rate.
-## Improve model recognition
-
-You can reduce recognition errors by adding training data in the [Custom Speech portal](https://speech.microsoft.com/customspeech).
-
-Plan to maintain your custom model by adding source materials periodically. Your custom model needs additional training to stay aware of changes to your entities. For example, you might need updates to product names, song names, or new service locations.
-
-The following sections describe how each kind of additional training data can reduce errors.
-
-### Add plain text data
-
-When you train a new custom model, start by adding plain text sentences of related text to improve the recognition of domain-specific words and phrases. Related text sentences can primarily reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
-
-> [!NOTE]
-> Avoid related text sentences that include noise such as unrecognizable characters or words.
-
-### Add structured text data
-
-You can use structured text data in markdown format as you would with plain text sentences, but you would use structured text data when your data follows a particular pattern in particular utterances that differ only by words or phrases from a list. For more information, see [Structured text data for training](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview).
-
-> [!NOTE]
-> Training with structured text is supported only for these locales: en-US, de-DE, en-UK, en-IN, fr-FR, fr-CA, es-ES, and es-MX. You must use the latest base model for these locales. See [Language support](language-support.md) for a list of base models that support training with structured text data.
->
-> For locales that don't support training with structured text, the service will take any training sentences that don't reference classes as part of training with plain text data.
-
-### Add audio with human-labeled transcripts
-
-Audio with human-labeled transcripts offers the greatest accuracy improvements if the audio comes from the target use case. Samples must cover the full scope of speech. For example, a call center for a retail store would get the most calls about swimwear and sunglasses during summer months. Ensure that your sample includes the full scope of speech that you want to detect.
-
-Consider these details:
-
-* Training with audio will bring the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by using only related text.
-* If you use one of the most heavily used languages, such as US English, it's unlikely that you would need to train with audio data. For such languages, the base models already offer very good recognition results in most scenarios, so it's probably enough to train with related text.
-* Custom Speech can capture word context only to reduce substitution errors, not insertion or deletion errors.
-* Avoid samples that include transcription errors, but do include a diversity of audio quality.
-* Avoid sentences that are unrelated to your problem domain. Unrelated sentences can harm your model.
-* When the transcript quality varies, you can duplicate exceptionally good sentences (like excellent transcriptions that include key phrases) to increase their weight.
-* The Speech service automatically uses the transcripts to improve the recognition of domain-specific words and phrases, as though they were added as related text.
-* It can take several days for a training operation to finish. To improve the speed of training, be sure to create your Speech service subscription in a [region that has dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
--
-> [!NOTE]
-> Not all base models support training with audio. If a base model doesn't support audio, the Speech service will use only the text from the transcripts and ignore the audio. For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
-
-> [!NOTE]
-> When you change the base model that's used for training, and you have audio in the training dataset, *always* check to see whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model didn't support training with audio data, and the training dataset contains audio, training time with the new base model will *drastically* increase. The duration might easily go from several hours to several days or longer. This is especially true if your Speech service subscription is *not* in a [region that has the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
->
-> If you face this issue, you can decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. We recommend the latter option if your Speech service subscription is *not* in a region that has such dedicated hardware.
-
-### Add new words with pronunciation
-
-Words that are made up or highly specialized might have unique pronunciations. These words can be recognized if they can be broken down into smaller words to pronounce them. For example, to recognize *Xbox*, pronounce it as *X box*. This approach won't increase overall accuracy, but can improve recognition of this and other keywords.
-
-> [!NOTE]
-> This technique is available for only certain languages at this time. To see which languages support customization of pronunciation, search for "Pronunciation" in the **Customizations** column in the [speech-to-text table](language-support.md#speech-to-text).
-
-## Sources by scenario
-
-The following table shows voice recognition scenarios and lists source materials to consider within the three previously mentioned training content categories.
-
-| Scenario | Plain text data and <br> structured text data | Audio + human-labeled transcripts | New words with pronunciation |
-| | | | |
-| Call center | Marketing documents, website, product reviews related to call center activity | Call center calls transcribed by humans | Terms that have ambiguous pronunciations (see the *Xbox* example in the preceding section) |
-| Voice assistant | Lists of sentences that use various combinations of commands and entities | Recorded voices speaking commands into device, transcribed into text | Names (movies, songs, products) that have unique pronunciations |
-| Dictation | Written input, such as instant messages or emails | Similar to preceding examples | Similar to preceding examples |
-| Video closed captioning | TV show scripts, movies, marketing content, video summaries | Exact transcripts of videos | Similar to preceding examples |
-| | |
## Next steps
-* [Train and deploy a model](how-to-custom-speech-train-model.md)
-
-## Additional resources
-
-* [Prepare and test your data](./how-to-custom-speech-test-and-train.md)
-* [Inspect your data](how-to-custom-speech-inspect-data.md)
+* [Train a model](how-to-custom-speech-train-model.md)
+* [Deploy a model](how-to-custom-speech-deploy-model.md)
+* [Use the online transcription editor](how-to-custom-speech-transcription-editor.md)
cognitive-services How To Custom Speech Human Labeled Transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions.md
Previously updated : 02/12/2021 Last updated : 05/08/2022
# How to create human-labeled transcriptions
-Human-labeled transcriptions are word-by-word transcriptions of an audio file. You use human-labeled transcriptions to improve recognition accuracy, especially when words are deleted or incorrectly replaced.
+Human-labeled transcriptions are word-by-word transcriptions of an audio file. You use human-labeled transcriptions to improve recognition accuracy, especially when words are deleted or incorrectly replaced. This guide can help you create high-quality transcriptions.
-A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 20 hours of transcription data. The Speech service will use up to 20 hours of audio for training. On this page, we'll review guidelines designed to help you create high-quality transcriptions. This guide is broken up by locale, with sections for US English, Mandarin Chinese, and German.
+A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 20 hours of audio data. The Speech service will use up to 20 hours of audio for training. This guide is broken up by locale, with sections for US English, Mandarin Chinese, and German.
-> [!NOTE]
-> Not all base models support customization with audio files. If a base model does not support it, training will just use the text of the transcriptions in the same way as related text is used. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
+The transcriptions for all WAV files are contained in a single plain-text file. Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`).
-> [!NOTE]
-> In cases when you change the base model used for training, and you have audio in the training dataset, *always* check whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will **drastically** increase, and may easily go from several hours to several days and more. This is especially true if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
->
-> If you face the issue described in the paragraph above, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. The latter option is highly recommended if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+For example:
-## US English (en-US)
+```tsv
+speech01.wav speech recognition is awesome
+speech02.wav the quick brown fox jumped all over the place
+speech03.wav the lazy dog was not amused
+```
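For illustration, a transcript file in the format shown above (saved under a hypothetical name such as `trans.txt`) can be read with a few lines of Python; the split mirrors the tab (`\t`) separator described earlier. This is a sketch, not an official tool.

```python
def load_transcripts(path: str) -> dict[str, str]:
    """Read a tab-separated transcript file into a {file name: transcript} map."""
    transcripts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            # file name first, then the transcription, separated by a tab
            name, text = line.rstrip("\n").split("\t", 1)
            transcripts[name] = text
    return transcripts
```

For example, `load_transcripts("trans.txt")` on the sample above would map `speech01.wav` to `speech recognition is awesome`.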
+
+The transcriptions are text-normalized so that the system can process them. However, there are some important normalizations that you must apply before you upload the dataset.
+
+Human-labeled transcriptions for languages other than English and Mandarin Chinese must be UTF-8 encoded with a byte-order marker. For transcription requirements in other locales, see the following sections.
+
+## en-US
Human-labeled transcriptions for English audio must be provided as plain text, only using ASCII characters. Avoid the use of Latin-1 or Unicode punctuation characters. These characters are often inadvertently added when copying text from a word-processing application or scraping data from web pages. If these characters are present, make sure to update them with the appropriate ASCII substitution.
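A quick way to spot stray non-ASCII characters before upload is a small check like the following sketch (an illustration, not an official validation tool):

```python
def non_ascii_chars(text: str) -> set[str]:
    """Return characters outside the ASCII range, such as smart quotes
    inadvertently pasted from a word processor."""
    return {ch for ch in text if ord(ch) > 127}

# Flag transcripts that need ASCII substitutions before upload:
non_ascii_chars("don’t use “smart quotes”")  # -> {'’', '“', '”'}
```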
Here are a few examples of normalization automatically performed on the transcri
| Tune to 102.7 | tune to one oh two point seven |
| Pi is about 3.14 | pi is about three point one four |
-## Mandarin Chinese (zh-CN)
+## de-DE
+
+Human-labeled transcriptions for German audio must be UTF-8 encoded with a byte-order marker.
+
+### Text normalization for German
+
+Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to the text automatically. However, we recommend using these guidelines as you prepare your human-labeled transcription data:
+
+- Write decimal points as "," and not ".".
+- Write time separators as ":" and not "." (for example: 12:00 Uhr).
+- Abbreviations such as "ca." aren't replaced. We recommend that you use the full spoken form.
+- The four main mathematical operators (+, -, \*, and /) are removed. We recommend replacing them with the written form: "plus," "minus," "mal," and "geteilt."
+- Comparison operators are removed (=, <, and >). We recommend replacing them with "gleich," "kleiner als," and "grösser als."
+- Write fractions, such as 3/4, in written form (for example: "drei viertel" instead of 3/4).
+- Replace the "€" symbol with its written form "Euro."
+
+Here are a few examples of normalization that you should perform on the transcription:
+
+| Original text | Text after user normalization | Text after system normalization |
+| - | -- | - |
+| Es ist 12.23 Uhr | Es ist 12:23 Uhr | es ist zwölf uhr drei und zwanzig uhr |
+| {12.45} | {12,45} | zwölf komma vier fünf |
+| 2 + 3 - 4 | 2 plus 3 minus 4 | zwei plus drei minus vier |
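The user-side guidelines above can be sketched as a simple symbol-replacement pass. The map below is an assumption distilled from the bullet list, not an official normalizer, and context-dependent rules (decimal commas, time separators, fractions) still need manual review:

```python
# Hypothetical replacement map for the user-side German guidelines above.
REPLACEMENTS = {
    "+": " plus ", "-": " minus ", "*": " mal ", "/": " geteilt ",
    "=": " gleich ", "<": " kleiner als ", ">": " grösser als ",
    "€": " Euro ",
}

def normalize_german(text: str) -> str:
    """Replace operators and symbols with their written German form."""
    for symbol, spoken in REPLACEMENTS.items():
        text = text.replace(symbol, spoken)
    return " ".join(text.split())  # collapse the extra whitespace

print(normalize_german("2 + 3 - 4"))  # -> "2 plus 3 minus 4"
```

Note that a blind replacement of "-" or "/" would also mangle hyphenated words and fractions, which is why the guidelines recommend reviewing transcriptions rather than relying on automation alone.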
+
+The following normalization rules are automatically applied to transcriptions:
+
+- Use lowercase letters for all text.
+- Remove all punctuation, including various types of quotation marks ("test", 'test', "test„, and «test» are OK).
+- Discard rows with any special characters from this set: ¢ ¤ ¥ ¦ § © ª ¬ ® ° ± ² µ × ÿ ج¬.
+- Expand numbers to spoken form, including dollar or Euro amounts.
+- Accept umlauts only for a, o, and u. Others will be replaced by "th" or be discarded.
+
+Here are a few examples of normalization automatically performed on the transcription:
+
+| Original text | Text after normalization |
+| - | |
+| Frankfurter Ring | frankfurter ring |
+| ¡Eine Frage! | eine frage |
+| Wir, haben | wir haben |
+
+## ja-JP
+
+In Japanese (ja-JP), there's a maximum length of 90 characters for each sentence. Lines with longer sentences will be discarded. To add longer text, insert a period in between.
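As a rough pre-upload check (an assumption, not an official validator), you could flag transcript lines whose text exceeds this limit before the service discards them:

```python
MAX_JA_CHARS = 90  # per-sentence limit for ja-JP noted above

def too_long_lines(lines: list[str]) -> list[int]:
    """Return 1-based indices of transcript lines over the limit."""
    flagged = []
    for i, line in enumerate(lines, start=1):
        # file name and transcript are tab-separated; check only the transcript
        text = line.split("\t", 1)[-1]
        if len(text) > MAX_JA_CHARS:
            flagged.append(i)
    return flagged
```

Flagged lines can then be split by inserting periods, as described above.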
+
+## zh-CN
Human-labeled transcriptions for Mandarin Chinese audio must be UTF-8 encoded with a byte-order marker. Avoid the use of half-width punctuation characters. These characters can be included inadvertently when you prepare the data in a word-processing program or scrape data from web pages. If these characters are present, make sure to update them with the appropriate full-width substitution.
Here are some examples of automatic transcription normalization:
| 下午 5:00 的航班 | 下午 五点 的 航班 |
| 我今年 21 岁 | 我 今年 二十 一 岁 |
-## German (de-DE) and other languages
-
-Human-labeled transcriptions for German audio (and other non-English or Mandarin Chinese languages) must be UTF-8 encoded with a byte-order marker. One human-labeled transcript should be provided for each audio file.
-
-### Text normalization for German
-
-Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to text automatically, however, we recommend using these guidelines as you prepare your human-labeled transcription data:
-
-- Write decimal points as "," and not ".".
-- Write time separators as ":" and not "." (for example: 12:00 Uhr).
-- Abbreviations such as "ca." aren't replaced. We recommend that you use the full spoken form.
-- The four main mathematical operators (+, -, \*, and /) are removed. We recommend replacing them with the written form: "plus," "minus," "mal," and "geteilt."
-- Comparison operators are removed (=, <, and >). We recommend replacing them with "gleich," "kleiner als," and "grösser als."
-- Write fractions, such as 3/4, in written form (for example: "drei viertel" instead of 3/4).
-- Replace the "€" symbol with its written form "Euro."
-
-Here are a few examples of normalization that you should perform on the transcription:
-
-| Original text | Text after user normalization | Text after system normalization |
-| - | -- | - |
-| Es ist 12.23 Uhr | Es ist 12:23 Uhr | es ist zw├╢lf uhr drei und zwanzig uhr |
-| {12.45} | {12,45} | zw├╢lf komma vier f├╝nf |
-| 2 + 3 - 4 | 2 plus 3 minus 4 | zwei plus drei minus vier |
-
-The following normalization rules are automatically applied to transcriptions:
-
-- Use lowercase letters for all text.
-- Remove all punctuation, including various types of quotation marks ("test", 'test', "test„, and «test» are OK).
-- Discard rows with any special characters from this set: ¢ ¤ ¥ ¦ § © ª ¬ ® ° ± ² µ × ÿ ج¬.
-- Expand numbers to spoken form, including dollar or Euro amounts.
-- Accept umlauts only for a, o, and u. Others will be replaced by "th" or be discarded.
-
-Here are a few examples of normalization automatically performed on the transcription:
-
-| Original text | Text after normalization |
-| - | |
-| Frankfurter Ring | frankfurter ring |
-| ¡Eine Frage! | eine frage |
-| Wir, haben | wir haben |
-
-### Text normalization for Japanese
-
-In Japanese (ja-JP), there's a maximum length of 90 characters for each sentence. Lines with longer sentences will be discarded. To add longer text, insert a period in between.
-
## Next Steps
-- [Inspect your data](how-to-custom-speech-inspect-data.md)
-- [Evaluate your data](how-to-custom-speech-evaluate-data.md)
+- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+- [Test recognition quality](how-to-custom-speech-inspect-data.md)
- [Train your model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
Title: Inspect data quality for Custom Speech - Speech service
+ Title: Test recognition quality of a Custom Speech model - Speech service
-description: Custom Speech lets you visually inspect the recognition quality of a model. You can play back uploaded audio and determine if the provided recognition result is correct.
+description: Custom Speech lets you qualitatively inspect the recognition quality of a model. You can play back uploaded audio and determine if the provided recognition result is correct.
Previously updated : 02/12/2021 Last updated : 05/08/2022
-# Inspect Custom Speech data
+# Test recognition quality of a Custom Speech model
-Custom Speech lets you visually inspect the recognition quality of a model in the [Speech Studio](https://aka.ms/speechstudio/customspeech). You can play back uploaded audio and determine if the provided recognition result is correct. This tool helps you inspect quality of Microsoft's baseline speech-to-text model, inspect a trained custom model, or compare transcription by two models.
+You can inspect the recognition quality of a Custom Speech model in the [Speech Studio](https://aka.ms/speechstudio/customspeech). You can play back uploaded audio and determine if the provided recognition result is correct. After a test has been successfully created, you can see how a model transcribed the audio dataset, or compare results from two models side by side.
-This article describes how to visually inspect the quality of Microsoft's baseline speech-to-text model or custom models that you've trained. You'll also see how to use the online transcription editor to create and refine labeled audio datasets.
-
-## Prerequisites
-
-You've read [Prepare test data for Custom Speech](./how-to-custom-speech-test-and-train.md) and have uploaded a dataset for inspection.
+> [!TIP]
+> You can also use the [online transcription editor](how-to-custom-speech-transcription-editor.md) to create and refine labeled audio datasets.
## Create a test
Follow these instructions to create a test:
1. Navigate to **Speech Studio** > **Custom Speech** and select your project name from the list.
1. Select **Test models** > **Create new test**.
1. Select **Inspect quality (Audio-only data)** > **Next**.
-1. Choose an audio dataset that you'd like to use for testing, then select **Next**.
+1. Choose an audio dataset that you'd like to use for testing, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
   :::image type="content" source="media/custom-speech/custom-speech-choose-test-data.png" alt-text="Review your keyword":::
1. Choose one or two models to evaluate and compare accuracy.
-1. Enter the test name and description, then select **Next**.
-1. Review your settings, then select **Save and close**.
-
-After a test has been successfully created, you can see how a model transcribes the audio dataset you specified, or compare results from two models side by side.
+1. Enter the test name and description, and then select **Next**.
+1. Review your settings, and then select **Save and close**.
[!INCLUDE [service-pricing-advisory](includes/service-pricing-advisory.md)]
-## Side-by-side model comparisons
-
-When the test status is _Succeeded_, select the test item name to see details of the test. This detail page lists all the utterances in your dataset, and shows the recognition results of the two models you are comparing.
-
-To help inspect the side-by-side comparison, you can toggle various error types including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column (showing human-labeled transcription and the results of two speech-to-text models), you can decide which model meets your needs and where improvements are needed.
-
-Side-by-side model testing is useful to validate which speech recognition model is best for an application. For an objective measure of accuracy, requiring transcribed audio, follow the instructions found in [Evaluate Accuracy](how-to-custom-speech-evaluate-data.md).
-
-## Online transcription editor
-
-The online transcription editor allows you to easily work with audio transcriptions in Custom Speech. The main use cases of the editor are as follows:
-
-* You only have audio data, but want to build accurate audio + human-labeled datasets from scratch to use in model training.
-* You already have audio + human-labeled datasets, but there are errors or defects in the transcription. The editor allows you to quickly modify the transcriptions to get best training accuracy.
-
-The only requirement to use the transcription editor is to have audio data uploaded (either audio-only, or audio + transcription).
-
-### Import datasets to Editor
-To import data into the Editor, first navigate to **Custom Speech > [Your project] > Speech datasets > Editor**
-
-Next, use the following steps to import data.
-
-1. Select **Import data**
-1. Create a new dataset(s) and give it a description
-1. Select datasets. You can select audio data only, audio + human-labeled data, or both.
-1. For audio-only data, you can use the default models to automatically generate machine transcription after importing to the editor.
-1. Select **Import**
-
-After data has been successfully imported, you can select datasets and start editing.
-
-> [!TIP]
-> You can also import datasets into the Editor directly by selecting datasets and selecting **Export to Editor**
-
-### Edit transcription by listening to audio
-
-After the data upload has succeeded, select each item name to see details of the data. You can also use **Previous** and **Next** to move between each file.
-
-The detail page lists all the segments in each audio file, and you can select the desired utterance. For each utterance, you can play back the audio and examine the transcripts, and edit the transcriptions if you find any insertion, deletion, or substitution errors. See the [data evaluation how-to](how-to-custom-speech-evaluate-data.md) for more detail on error types.
--
-After you've made edits, select **Save**.
+## Side-by-side model comparisons
-### Export datasets from the Editor
+When the test status is *Succeeded*, select the test item name to see details of the test. This detail page lists all the utterances in your dataset, and shows the recognition results of the two models you are comparing.
-To export datasets back to the **Data** tab, navigate to the data detail page and select **Export** to export all the files as a new dataset. You can also filter the files by last edited time, audio durations, etc. to partially select the desired files.
+To help inspect the side-by-side comparison, you can toggle various error types including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column (showing human-labeled transcription and the results of two speech-to-text models), you can decide which model meets your needs and where improvements are needed.
-The files exported to Data will be used as a brand-new dataset and will not affect any of the existing data/training/testing entities.
+Side-by-side model testing is useful to validate which speech recognition model is best for an application. For an objective measure of accuracy, requiring transcribed audio, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
## Next steps
-- [Evaluate your data](how-to-custom-speech-evaluate-data.md)
+- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
- [Train your model](how-to-custom-speech-train-model.md)
-- [Improve your model](./how-to-custom-speech-evaluate-data.md)
- [Deploy your model](./how-to-custom-speech-train-model.md)
-## Additional resources
-
-- [Prepare test data for Custom Speech](./how-to-custom-speech-test-and-train.md)
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
Title: Model and Endpoint Lifecycle of Custom Speech - Speech service
+ Title: Model lifecycle of Custom Speech - Speech service
-description: Custom Speech provides base models for adaptation and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
+description: Custom Speech provides base models for training and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
Previously updated : 04/2/2021 Last updated : 05/08/2022
-# Model and endpoint lifecycle
+# Custom Speech model lifecycle
-Our standard (not customized) speech is built upon AI models that we call base models. In most cases, we train a different base model for each spoken language we support. We update the speech service with new base models every few months to improve accuracy and quality.
-With Custom Speech, custom models are created by adapting a chosen base model with data from your particular customer scenario. Once you create a custom model, that model will not be updated or changed, even if the corresponding base model from which it was adapted gets updated in the standard speech service.
-This policy allows you to keep using a particular custom model for a long time after you have a custom model that meets your needs. But we recommend that you periodically recreate your custom model so you can adapt from the latest base model to take advantage of the improved accuracy and quality.
+Speech recognition models that are provided by Microsoft are referred to as base models. When you make a speech recognition request, the current base model for each [supported language](language-support.md) is used by default. Base models are updated periodically to improve accuracy and quality.
-Other key terms related to the model lifecycle include:
+You can use a custom model for some time after it's trained and deployed. You must periodically recreate and train your custom model from the latest base model to take advantage of the improved accuracy and quality.
-* **Adaptation**: Taking a base model and customizing it to your domain/scenario by using text data and/or audio data.
-* **Decoding**: Using a model and performing speech recognition (decoding audio into text).
-* **Endpoint**: A user-specific deployment of either a base model or a custom model that's accessible *only* to a given user.
+Some key terms related to the model lifecycle include:
-### Expiration timeline
+* **Training**: Taking a base model and customizing it to your domain/scenario by using text data and/or audio data. In some contexts such as the REST API properties, training is also referred to as **adaptation**.
+* **Transcription**: Using a model and performing speech recognition (decoding audio into text).
+* **Endpoint**: A specific deployment of either a base model or a custom model that only you can access.
-As new models and new functionality become available and older, less accurate models are retired, see the following timelines for model and endpoint expiration:
+## Expiration timeline
-**Base models**
+When new models are made available, the older models are retired. Here are timelines for model adaptation and transcription expiration:
-* Adaptation: Available for one year. After the model is imported, it's available for one year to create custom models. After one year, new custom models must be created from a newer base model version.
-* Decoding: Available for two years after import. So you can create an endpoint and use batch transcription for two years with this model.
-* Endpoints: Available on the same timeline as decoding.
+- Training is available for one year after the quarter when the base model was created by Microsoft.
+- Transcription with a base model is available for two years after the quarter when the base model was created by Microsoft.
+- Transcription with a custom model is available for two years after the quarter when you created the custom model.
-**Custom models**
+In this context, quarters end on January 15th, April 15th, July 15th, and October 15th.
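As an illustration of this timeline only (not an official calculator), the expiration dates can be sketched in Python: snap the creation date forward to the next quarter boundary, then add one year for training (base models) or two years for transcription. The dates below match the example REST API responses in this article.

```python
from datetime import date

# Quarter boundaries used by the Custom Speech model lifecycle.
QUARTER_ENDS = [(1, 15), (4, 15), (7, 15), (10, 15)]

def quarter_end_on_or_after(d: date) -> date:
    """First quarter boundary (Jan/Apr/Jul/Oct 15) on or after date d."""
    for month, day in QUARTER_ENDS:
        boundary = date(d.year, month, day)
        if boundary >= d:
            return boundary
    return date(d.year + 1, 1, 15)

def expiration_dates(created: date, is_base_model: bool):
    """Return (adaptation, transcription) expiration dates for a model.

    Base models: training expires after 1 year, transcription after 2 years.
    Custom models: no adaptation date, transcription expires after 2 years.
    """
    q = quarter_end_on_or_after(created)
    adaptation = date(q.year + 1, q.month, q.day) if is_base_model else None
    transcription = date(q.year + 2, q.month, q.day)
    return adaptation, transcription
```

For example, a base model created on October 29, 2021 snaps to the January 15, 2022 boundary, so training expires on January 15, 2023 and transcription on January 15, 2024.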
-* Decoding: Available for two years after the model is created. So you can use the custom model for two years (batch/realtime/testing) after it's created. After two years, *you should retrain your model* because the base model will usually have been deprecated for adaptation.
-* Endpoints: Available on the same timeline as decoding.
+## What happens when models expire and how to update them
+
+When a custom model or base model expires, speech recognition requests typically fall back to the most recent base model for the same language. In this case, your implementation won't break, but recognition might not accurately transcribe your domain data.
+
+You can change the model that is used by your custom speech endpoint without downtime:
+ - In the Speech Studio, go to your Custom Speech project and select **Deploy models**. Select the endpoint name to see its details, and then select **Change model**. Choose a new model and select **Done**.
+ - Update the endpoint's model property via the [`UpdateEndpoint`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint) REST API.
+
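For the REST API route, here's a minimal sketch of the request body for pointing an endpoint at a different model. The IDs are placeholders, and the exact body schema is an assumption that should be confirmed against the `UpdateEndpoint` reference:

```python
import json

# Placeholder region and model ID for illustration only.
region = "westus2"
new_model_id = "00000000-0000-0000-0000-000000000000"

# Body for a PATCH request to
# https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/{endpoint-id}
# that switches the endpoint's model without downtime.
patch_body = {
    "model": {
        "self": f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/models/{new_model_id}"
    }
}
payload = json.dumps(patch_body)
```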
+[Batch transcription](batch-transcription.md) requests for retired models will fail with a 4xx error. In the [`CreateTranscription`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) REST API request body, update the `model` parameter to use a base model or custom model that hasn't yet retired. Otherwise you can remove the `model` entry from the JSON to always use the latest base model.
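As a sketch (field names follow the Speech-to-text v3.0 transcription examples; the audio URL and model ID are placeholders), a `CreateTranscription` body can pin a specific model, or omit the `model` entry to use the latest base model:

```python
import json

body = {
    "displayName": "My transcription",
    "locale": "en-US",
    "contentUrls": ["https://example.com/audio.wav"],  # placeholder audio URL
    # Pin a base model or custom model that hasn't yet retired...
    "model": {
        "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/e065c68b-21d3-4b28-ae61-eb4c7e797789"
    },
}

# ...or drop the "model" entry to always use the latest base model.
fallback_body = {k: v for k, v in body.items() if k != "model"}
```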
+
+## Find out when a model expires
+You can get the adaptation and transcription expiration dates for a model via the Speech Studio and REST API.
-When either a base model or custom model expires, it will always fall back to the *newest base model version*. So your implementation will never break, but it might become less accurate for *your specific data* if custom models reach expiration. You can see the expiration for a model in the following places in the Custom Speech area of the Speech Studio:
+### Model expiration dates via Speech Studio
+Here's an example adaptation expiration date shown on the train new model dialog:
-* Model training summary
-* Model training detail
-* Deployment summary
-* Deployment detail
-Here is an example form the model training summary:
+Here's an example transcription expiration date shown on the deployment detail page:
-![Model training summary](media/custom-speech/custom-speech-model-training-with-expiry.png)
-And also from the model training detail page:
-![Model training detail](media/custom-speech/custom-speech-model-details-with-expiry.png)
+### Model expiration dates via REST API
+You can also check the expiration dates via the [`GetBaseModel`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) and [`GetModel`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) REST API operations. The `deprecationDates` property in the JSON response includes the adaptation and transcription expiration dates for each model.
-You can also check the expiration dates via the [`GetModel`](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) and [`GetBaseModel`](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) custom speech APIs under the `deprecationDates` property in the JSON response.
+Here's an example base model retrieved via [`GetBaseModel`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel):
-Here is an example of the expiration data from the GetModel API call. The **DEPRECATIONDATES** show when the model expires:
```json
{
- "SELF": "HTTPS://WESTUS2.API.COGNITIVE.MICROSOFT.COM/SPEECHTOTEXT/V3.0/MODELS/{id}",
- "BASEMODEL": {
- "SELF": HTTPS://WESTUS2.API.COGNITIVE.MICROSOFT.COM/SPEECHTOTEXT/V3.0/MODELS/BASE/{id}
- },
- "DATASETS": [
- {
- "SELF": https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/{id}
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/e065c68b-21d3-4b28-ae61-eb4c7e797789",
+ "datasets": [],
+ "links": {
+ "manifest": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/e065c68b-21d3-4b28-ae61-eb4c7e797789/manifest"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-01-15T00:00:00Z"
}
- ],
- "LINKS": {
- "MANIFEST": "HTTPS://WESTUS2.API.COGNITIVE.MICROSOFT.COM/SPEECHTOTEXT/V3.0/MODELS/{id}/MANIFEST",
- "COPYTO": https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{id}/copyto
- },
- "PROJECT": {
- "SELF": https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/{id}
- },
- "PROPERTIES": {
- "DEPRECATIONDATES": {
- "ADAPTATIONDATETIME": "2022-01-15T00:00:00Z", // last date the base model can be used for adaptation
- "TRANSCRIPTIONDATETIME": "2023-03-01T21:27:29Z" // last date this model can be used for decoding
+ },
+ "lastActionDateTime": "2021-10-29T07:19:01Z",
+ "status": "Succeeded",
+ "createdDateTime": "2021-10-29T06:58:14Z",
+ "locale": "en-US",
+ "displayName": "20211012 (CLM public preview)",
+ "description": "en-US base model"
+}
+```
+
+Here's an example custom model retrieved via [`GetModel`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel). The custom model was trained from the previously mentioned base model (`e065c68b-21d3-4b28-ae61-eb4c7e797789`):
+
+```json
+{
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{custom-model-id}",
+ "baseModel": {
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/e065c68b-21d3-4b28-ae61-eb4c7e797789"
+ },
+ "datasets": [
+ {
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/f1a72db2-1e89-496d-859f-f1af7a363bb5"
}
- },
- "LASTACTIONDATETIME": "2021-03-01T21:27:40Z",
- "STATUS": "SUCCEEDED",
- "CREATEDDATETIME": "2021-03-01T21:27:29Z",
- "LOCALE": "EN-US",
- "DISPLAYNAME": "EXAMPLE MODEL",
- "DESCRIPTION": "",
- "CUSTOMPROPERTIES": {
- "PORTALAPIVERSION": "3"
+ ],
+ "links": {
+ "manifest": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{custom-model-id}/manifest",
+ "copyTo": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{custom-model-id}/copyto"
+ },
+ "project": {
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/ee3b1c83-c194-490c-bdb1-b6b1a6be6f59"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-04-15T00:00:00Z"
}
+ },
+ "lastActionDateTime": "2022-02-27T13:03:54Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-02-27T13:03:46Z",
+ "locale": "en-US",
+ "displayName": "Custom model A",
+ "description": "My first custom model",
+ "customProperties": {
+ "PortalAPIVersion": "3"
+ }
}
```
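A minimal sketch of how a client might read the `deprecationDates` fields to decide whether a model can still be used for transcription (the JSON shape follows the examples above; this is illustrative, not an official SDK helper):

```python
import json
from datetime import datetime, timezone

def transcription_expired(model_json: dict, now: datetime = None) -> bool:
    """True if the model's transcriptionDateTime has already passed."""
    now = now or datetime.now(timezone.utc)
    stamp = model_json["properties"]["deprecationDates"]["transcriptionDateTime"]
    # Convert the trailing "Z" so fromisoformat can parse the timestamp.
    expires = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
    return now >= expires

# Trimmed response matching the custom model example above.
model = json.loads("""
{
  "properties": {
    "deprecationDates": {
      "adaptationDateTime": "2023-01-15T00:00:00Z",
      "transcriptionDateTime": "2024-04-15T00:00:00Z"
    }
  }
}
""")
```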
-Note that you can upgrade the model on a custom speech endpoint without downtime by changing the model used by the endpoint in the deployment section of the Speech Studio, or via the custom speech API.
-
-## What happens when models expire and how to update them
-What happens when a model expires and how to update the model depends on how it is being used.
-### Batch transcription
-If a model expires that is used with [batch transcription](batch-transcription.md) transcription requests will fail with a 4xx error. To prevent this update the `model` parameter in the JSON sent in the **Create Transcription** request body to either point to a more recent base model or more recent custom model. You can also remove the `model` entry from the JSON to always use the latest base model.
-### Custom speech endpoint
-If a model expires that is used by a [custom speech endpoint](how-to-custom-speech-train-model.md), then the service will automatically fall back to using the latest base model for the language you are using. To update a model you are using, you can select **Deployment** in the **Custom Speech** menu at the top of the page and then click on the endpoint name to see its details. At the top of the details page, you will see an **Update Model** button that lets you seamlessly update the model used by this endpoint without downtime. You can also make this change programmatically by using the [**Update Model**](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) REST API.
## Next steps
-* [Train and deploy a model](how-to-custom-speech-train-model.md)
-
-## Additional resources
-
-* [Prepare and test your data](./how-to-custom-speech-test-and-train.md)
-* [Inspect your data](how-to-custom-speech-inspect-data.md)
+- [Train a model](how-to-custom-speech-train-model.md)
+- [CI/CD for Custom Speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Title: "Prepare data for Custom Speech - Speech service"
+ Title: "Training and testing datasets - Speech service"
-description: Learn about types of data for a Custom Speech model, along with how to use and manage that data.
+description: Learn about types of training and testing data for a Custom Speech project, along with how to use and manage that data.
Previously updated : 02/02/2022 Last updated : 05/08/2022
-# Prepare data for Custom Speech
+# Training and testing datasets
-When you're testing the accuracy of Microsoft speech recognition or training your custom models, you need audio and text data. This article covers the types of data that a Custom Speech model needs.
+In a Custom Speech project, you can upload datasets for training, qualitative inspection, and quantitative measurement. This article covers the types of training and testing data that you can use for Custom Speech.
-## Data diversity
-
-Text and audio that you use to test and train a custom model need to include samples from a diverse set of speakers and scenarios that you want your model to recognize. Consider these factors when you're gathering data for custom model testing and training:
+Text and audio that you use to test and train a custom model should include samples from a diverse set of speakers and scenarios that you want your model to recognize. Consider these factors when you're gathering data for custom model testing and training:
* Include text and audio data to cover the kinds of verbal statements that your users will make when they're interacting with your model. For example, a model that raises and lowers the temperature needs training on statements that people might make to request such changes.
* Include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, language-mixing, age, gender, voice pitch, stress level, and time of day.
* Include samples from different environments, for example, indoor, outdoor, and road noise, where your model will be used.
-* Record audio with hardware devices that the production system will use. If your model needs to identify speech recorded on devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
-* Keep the dataset diverse and representative of your project needs. You can add more data to your model later.
-* Only include data that your model needs to transcribe. Including data that isn't within your custom model's recognition needs can harm recognition quality overall.
-
-A model that's trained on a subset of scenarios can perform well in only those scenarios. Carefully choose data that represents the full scope of scenarios that you need your custom model to recognize.
-
-> [!TIP]
-> Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training.
->
-> To quickly get started, consider using sample data. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
+* Record audio with hardware devices that the production system will use. If your model must identify speech recorded on devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
+* Keep the dataset diverse and representative of your project requirements. You can add more data to your model later.
+* Only include data that your model needs to transcribe. Including data that isn't within your custom model's recognition requirements can harm recognition quality overall.
## Data types
The following table lists accepted data types, when each data type should be use
| [Audio only](#audio-data-for-testing) | Yes (visual inspection) | 5+ audio files | No | Not applicable |
| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
| [Plain text](#plain-text-data-for-training) | No | Not applicable | Yes | 1-200 MB of related text |
-| [Structured text](#structured-text-data-for-training-public-preview) (public preview) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
+| [Structured text](#structured-text-data-for-training) (public preview) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
| [Pronunciation](#pronunciation-data-for-training) | No | Not applicable | Yes | 1 KB to 1 MB of pronunciation text |
-Files should be grouped by type into a dataset and uploaded as a .zip file. Each dataset can contain only a single data type.
+Training with plain text or structured text usually finishes within a few minutes.
> [!TIP]
-> When you train a new model, start with plain-text data or structured-text data. This data will improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes versus days).
+> Start with plain-text data or structured-text data. This data will improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes versus days).
+>
+> Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
-### Training with audio data
+If you plan to train a custom model with audio data, choose a Speech resource [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware available for training audio data. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) REST API.
-Not all base models support [training with audio data](language-support.md#speech-to-text). If a base model doesn't support it, the Speech service will use only the text from the transcripts and ignore the audio. For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text).
+## Consider datasets by scenario
-Even if a base model supports training with audio data, the service might use only part of the audio. But it will use all the transcripts.
+A model that's trained on a subset of scenarios can perform well in only those scenarios. Carefully choose data that represents the full scope of scenarios that you need your custom model to recognize. The following table shows datasets to consider for some speech recognition scenarios:
-If you change the base model that's used for training, and you have audio in the training dataset, *always* check whether the new selected base model supports training with audio data. If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will drastically increase. It could easily go from several hours to several days and more. This is especially true if your Speech service subscription is *not* in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+| Scenario | Plain text data and structured text data | Audio + human-labeled transcripts | New words with pronunciation |
+| | | | |
+| Call center | Marketing documents, website, product reviews related to call center activity | Call center calls transcribed by humans | Terms that have ambiguous pronunciations (see the *Xbox* example in the preceding section) |
+| Voice assistant | Lists of sentences that use various combinations of commands and entities | Recorded voices speaking commands into device, transcribed into text | Names (movies, songs, products) that have unique pronunciations |
+| Dictation | Written input, such as instant messages or emails | Similar to preceding examples | Similar to preceding examples |
+| Video closed captioning | TV show scripts, movies, marketing content, video summaries | Exact transcripts of videos | Similar to preceding examples |
-If you face the problem described in the previous paragraph, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text.
+To help determine which dataset to use to address your problems, refer to the following table:
-In regions with dedicated hardware for training, the Speech service will use up to 20 hours of audio for training. In other regions, it will only use up to 8 hours of audio.
+| Use case | Data type |
+| -- | |
+| Improve recognition accuracy on industry-specific vocabulary and grammar, such as medical terminology or IT jargon. | Plain text or structured text data |
+| Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, such as product names or acronyms. | Pronunciation data or phonetic pronunciation in structured text |
+| Improve recognition accuracy on speaking styles, accents, or specific background noises. | Audio + human-labeled transcripts |
## Audio + human-labeled transcript data for training or testing
-You can use audio + human-labeled transcript data for both training and testing purposes. You must provide human-labeled transcriptions (word by word) for comparison:
+You can use audio + human-labeled transcript data for both [training](how-to-custom-speech-train-model.md) and [testing](how-to-custom-speech-evaluate-data.md) purposes. You must provide human-labeled transcriptions (word by word) for comparison:
- To improve the acoustic aspects like slight accents, speaking styles, and background noises.
- To measure the accuracy of Microsoft's speech-to-text when it's processing your audio files.
-Although human-labeled transcription is often time consuming, it's necessary to evaluate accuracy and to train the model for your use cases. Keep in mind that the improvements in recognition will only be as good as the data that you provide. For that reason, it's important to upload only high-quality transcripts.
+For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
+
+> [!IMPORTANT]
+> If a base model doesn't support customization with audio data, only the transcription text will be used for training. If you switch to a base model that supports customization with audio data, the training time may increase from several hours to several days. The change in training time would be most noticeable when you switch to a base model in a [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) without dedicated hardware for training. If the audio data is not required, you should remove it to decrease the training time.
+
+Audio with human-labeled transcripts offers the greatest accuracy improvements if the audio comes from the target use case. Samples must cover the full scope of speech. For example, a call center for a retail store would get the most calls about swimwear and sunglasses during summer months. Ensure that your sample includes the full scope of speech that you want to detect.
+
+Consider these details:
+
+* Training with audio will bring the most benefits if the audio is also hard for humans to understand. In most cases, you should start training by using only related text.
+* If you use one of the most heavily used languages, such as US English, it's unlikely that you would need to train with audio data. For such languages, the base models already offer very good recognition results in most scenarios, so it's probably enough to train with related text.
+* Custom Speech can capture word context only to reduce substitution errors, not insertion or deletion errors.
+* Avoid samples that include transcription errors, but do include a diversity of audio quality.
+* Avoid sentences that are unrelated to your problem domain. Unrelated sentences can harm your model.
+* When the transcript quality varies, you can duplicate exceptionally good sentences, such as excellent transcriptions that include key phrases, to increase their weight.
+* The Speech service automatically uses the transcripts to improve the recognition of domain-specific words and phrases, as though they were added as related text.
+* It can take several days for a training operation to finish. To improve the speed of training, be sure to create your Speech service subscription in a region that has dedicated hardware for training.
-Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. Although audio with low recording volume or disruptive background noise is not helpful, it shouldn't hurt your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
+A large training dataset is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help improve recognition results. Although creating human-labeled transcription can take time, improvements in recognition will only be as good as the data that you provide. You should upload only high-quality transcripts.
+
+Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. Although audio with low recording volume or disruptive background noise is not helpful, it shouldn't limit or degrade your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
+
+Custom Speech projects require audio files with these properties:
| Property | Value |
|--|-|
Audio files can have silence at the beginning and end of the recording. If possi
| Archive format | .zip |
| Maximum zip size | 2 GB or 10,000 files |
-
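As an illustrative pre-upload check only (the authoritative requirements are in the table above), Python's standard `wave` module can read the basic properties of a candidate file. This sketch assumes 16 kHz, 16-bit, mono PCM WAV audio:

```python
import io
import wave

def wav_properties(data: bytes) -> dict:
    """Read channel count, sample rate, and bit depth from WAV bytes."""
    with wave.open(io.BytesIO(data)) as w:
        return {
            "channels": w.getnchannels(),
            "sample_rate": w.getframerate(),
            "bit_depth": w.getsampwidth() * 8,
        }

# Build a one-second silent 16 kHz, 16-bit mono PCM sample in memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)       # 2 bytes per sample = 16-bit
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 16000)

props = wav_properties(buf.getvalue())
```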
-> [!NOTE]
-> When you're uploading training and testing data, the .zip file size can't exceed 2 GB. You can test from only a *single* dataset, so be sure to keep it within the appropriate file size. Additionally, each training file can't exceed 60 seconds, or it will error out.
-
-To address problems like word deletion or substitution, a significant amount of data is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help improve recognition results.
-
-The transcriptions for all WAV files are contained in a single plain-text file. Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`).
-
-For example:
-
-```tsv
-speech01.wav speech recognition is awesome
-speech02.wav the quick brown fox jumped all over the place
-speech03.wav the lazy dog was not amused
-```
-
-> [!IMPORTANT]
-> Transcription should be encoded as UTF-8 byte order mark (BOM).
-
-The transcriptions are text-normalized so the system can process them. However, you must do some important normalizations before you upload the data to <a href="https://speech.microsoft.com/customspeech" target="_blank">Speech Studio</a>. For the appropriate language to use when you prepare your transcriptions, see [How to create human-labeled transcriptions](how-to-custom-speech-human-labeled-transcriptions.md).
-
-After you've gathered your audio files and corresponding transcriptions, package them as a single .zip file before uploading to Speech Studio. The following example dataset has three audio files and a human-labeled transcription file:
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot that shows audio files and a transcription file in Speech Studio.](./media/custom-speech/custom-speech-audio-transcript-pairs.png)
-
-For a list of recommended regions for your Speech service subscriptions, see [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-account). Setting up the Speech subscriptions in one of these regions will reduce the time it takes to train the model. In these regions, training can process about 10 hours of audio per day, compared to just 1 hour per day in other regions. If model training can't be completed within a week, the model will be marked as failed.
-
-Not all base models support training with audio data. If the base model doesn't support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text. For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text).
- ## Plain-text data for training
-You can use domain-related sentences to improve accuracy in recognizing product names or industry-specific jargon. Provide sentences in a single text file. To improve accuracy, use text data that's closer to the expected spoken utterances. Training with plain text usually finishes within a few minutes.
+You can add plain text sentences of related text to improve the recognition of domain-specific words and phrases. Related text sentences can reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
-To create a custom model by using sentences, you'll need to provide a list of sample utterances. Utterances _do not_ need to be complete or grammatically correct, but they must accurately reflect the spoken input that you expect in production. If you want certain terms to have increased weight, add several sentences that include these specific terms.
+Provide domain-related sentences in a single text file. Use text data that's close to the expected spoken utterances. Utterances don't need to be complete or grammatically correct, but they must accurately reflect the spoken input that you expect the model to recognize. When possible, put each sentence or keyword on a separate line. To increase the weight of a term such as a product name, add several sentences that include the term. But don't copy a term too many times, because that could skew the overall recognition rate.
-As general guidance, model adaptation is most effective when the training text is as close as possible to the real text expected in production. Domain-specific jargon and phrases that you're targeting to enhance should be included in training text. When possible, try to have one sentence or keyword controlled on a separate line. For keywords and phrases that are important to you (for example, product names), you can copy them a few times. But don't copy too much - it could affect the overall recognition rate.
+> [!NOTE]
+> Avoid related text sentences that include noise such as unrecognizable characters or words.
-Use this table to ensure that your related data file for utterances is formatted correctly:
+Use this table to ensure that your plain text dataset file is formatted correctly:
| Property | Value |
|-|-|
Use this table to ensure that your related data file for utterances is formatted
| Number of utterances per line | 1 |
| Maximum file size | 200 MB |
-You'll also want to account for the following restrictions:
+You must also adhere to the following restrictions:
* Avoid repeating characters, words, or groups of words more than three times, as in "aaaa," "yeah yeah yeah yeah," or "that's it that's it that's it that's it." The Speech service might drop lines with too many repetitions.
* Don't use special characters or UTF-8 characters above `U+00A1`.
* URIs will be rejected.
-* For some languages (for example, Japanese or Korean), importing large amounts of text data can take a long time or can time out. Consider dividing the uploaded data into text files of up to 20,000 lines each.
+* For some languages such as Japanese or Korean, importing large amounts of text data can take a long time or can time out. Consider dividing the dataset into multiple text files with up to 20,000 lines in each.
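As a quick sanity check before uploading, the documented restrictions can be approximated in a few lines of Python. This is an illustrative sketch, not the Speech service's actual validation logic (for example, the repetition check below handles single-word repeats only, not repeated groups of words):

```python
import re

def validate_plain_text_line(line: str) -> list[str]:
    """Check one utterance line against the plain-text dataset restrictions.

    Returns a list of problems found (an empty list means the line looks OK).
    These checks mirror the documented rules but are simplified approximations.
    """
    problems = []
    # Characters above U+00A1 aren't allowed.
    if any(ord(ch) > 0x00A1 for ch in line):
        problems.append("contains characters above U+00A1")
    # URIs are rejected.
    if re.search(r"https?://", line):
        problems.append("contains a URI")
    # Avoid repeating a word more than three times in a row
    # (simplified: single-word repeats only).
    if re.search(r"\b(\w+)(\s+\1){3,}\b", line):
        problems.append("repeats a word more than three times")
    return problems
```

For example, `validate_plain_text_line("yeah yeah yeah yeah")` flags the excessive repetition, while a normal utterance returns an empty list.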
+
+## Structured-text data for training
-## Structured-text data for training (public preview)
+> [!NOTE]
+> Structured-text data for training is in public preview.
+
+Use structured-text data when your expected utterances follow a particular pattern, such as utterances that differ only by words or phrases from a list. To simplify the creation of training data and to enable better modeling inside the custom language model, you can use structured text in Markdown format to define lists of items and the phonetic pronunciation of words. You can then reference these lists inside your training utterances.
Expected utterances often follow a certain pattern. One common pattern is that utterances differ only by words or phrases from a list. Examples of this pattern could be:

* "I have a question about `product`," where `product` is a list of possible products.
* "Make that `object` `color`," where `object` is a list of geometric shapes and `color` is a list of colors.
-For a list of supported base models and locales for training with structured text, see [Language support](language-support.md#speech-to-text). You must use the latest base model for these locales. For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
+For a list of supported base models and locales for training with structured text, see [Language support](language-support.md#speech-to-text). You must use the latest base model for these locales. For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
+
+The structured-text file should have an .md extension. The maximum file size is 200 MB, and the text encoding must be UTF-8 BOM. The syntax of the Markdown is the same as that from the Language Understanding models, in particular list entities and example utterances. For more information about the complete Markdown syntax, see the <a href="/azure/bot-service/file-format/bot-builder-lu-file-format" target="_blank"> Language Understanding Markdown</a>.
-To simplify the creation of training data and to enable better modeling inside the Custom Language model, you can use a structured text in Markdown format to define lists of items. You can then reference these lists inside your training utterances. The Markdown format also supports specifying the phonetic pronunciation of words.
+Here are key details about the supported Markdown format:
-The Markdown file should have an .md extension. The syntax of the Markdown is the same as that from the Language Understanding models, in particular list entities and example utterances. For more information about the complete Markdown syntax, see the <a href="/azure/bot-service/file-format/bot-builder-lu-file-format" target="_blank"> Language Understanding Markdown</a>.
+| Property | Description | Limits |
+|-|-|--|
+|`@list`|A list of items that can be referenced in an example sentence.|Maximum of 10 lists. Maximum of 4,000 items per list.|
+|`speech:phoneticlexicon`|A list of phonetic pronunciations according to the [Universal Phone Set](customize-pronunciation.md). Pronunciation is adjusted for each instance where the word appears in a list or training sentence. For example, if you have a word that sounds like "cat" and you want to adjust the pronunciation to "k ae t", you would add `- cat/k ae t` to the `speech:phoneticlexicon` list.|Maximum of 15,000 entries. Maximum of 2 pronunciations per word.|
+|`#ExampleSentences`|A pound symbol (`#`) delimits a section of example sentences. The section heading can only contain letters, digits, and underscores. Example sentences should reflect the range of speech that your model should expect. A training sentence can refer to items under a `@list` by using surrounding left and right curly braces (`{@list name}`). You can refer to multiple lists in the same training sentence, or none at all.|Maximum of 50,000 example sentences|
+|`//`|Comments follow a double slash (`//`).|Not applicable|
-Here's an example of the Markdown format:
+Here's an example structured text file:
```markdown
-// This is a comment
+// This is a comment because it follows a double slash (`//`).
// Here are three separate lists of items that can be referenced in an example sentence.
// You can have up to 10 of these.
@ list food =
Here's an example of the Markdown format:
@ list pet =
- cat
- dog
+- fish
@ list sports =
- soccer
Here's an example of the Markdown format:
- baseball
- football
-// This is a list of phonetic pronunciations.
-// This adjusts the pronunciation of every instance of these words in a list or example training sentences.
+// List of phonetic pronunciations
@ speech:phoneticlexicon
- cat/k ae t
- cat/f i l ai n
+- fish/f ih sh
-// Here are example training sentences. They are grouped into two sections to help organize them.
-// You can refer to one of the lists we declared earlier by using {@listname}. You can refer to multiple lists in the same training sentence.
-// A training sentence does not have to refer to a list.
-# SomeTrainingSentence
+// Here are two sections of training sentences.
+#TrainingSentences_Section1
- you can include sentences without a class reference
- what {@pet} do you have
- I like eating {@food} and playing {@sports}
- my {@pet} likes {@food}
-# SomeMoreSentence
+#TrainingSentences_Section2
- you can include more sentences without a class reference
- or more sentences that have a class reference like {@pet}
-```
-
-Like plain text, training with structured text typically takes a few minutes. Also, your example sentences and lists should reflect the type of spoken input that you expect in production. For pronunciation entries, see the description of the [Universal Phone Set](phone-sets.md).
-
-The following table specifies the limits and other properties for the Markdown format:
-
-| Property | Value |
-|-|-|
-| Text encoding | UTF-8 BOM |
-| Maximum file size | 200 MB |
-| Maximum number of example sentences | 50,000 |
-| Maximum number of list classes | 10 |
-| Maximum number of items in a list class | 4,000 |
-| Maximum number of `speech:phoneticlexicon` entries | 15,000 |
-| Maximum number of pronunciations per word | 2 |
-
+```
## Pronunciation data for training
-If there are uncommon terms without standard pronunciations that your users will encounter or use, you can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see **Pronunciation** in the **Customizations** column in [the Speech-to-text table](language-support.md#speech-to-text).
+Specialized or made-up words might have unique pronunciations. These words can be recognized if they can be broken down into smaller words to pronounce them. For example, to recognize "Xbox", pronounce it as "X box". This approach won't increase overall accuracy, but it can improve recognition of these keywords.
+
+You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md#speech-to-text).
> [!NOTE]
-> You can't combine this type of pronunciation file with structured-text training data. For structured-text data, use the phonetic pronunciation capability that's included in the structured-text Markdown format.
+> You can either use a pronunciation data file on its own, or you can add pronunciation within a structured text data file. The Speech service doesn't support training a model where you select both of those datasets as input.
The spoken form is the phonetic sequence spelled out. It can be composed of letters, words, syllables, or a combination of all three. This table includes some examples:
CNTK c n t k
IEEE i triple e
```
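The examples above can be parsed programmatically, for instance to deduplicate entries before upload. The sketch below assumes the tab-delimited `display form<TAB>spoken form` layout; verify the exact delimiter and per-line limits against the format table for your dataset:

```python
def parse_pronunciation_file(text: str) -> dict[str, list[str]]:
    """Parse pronunciation entries of the assumed form 'display<TAB>spoken form'.

    A display form can map to more than one spoken form across lines,
    so values are collected into lists.
    """
    entries: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        display, spoken = line.split("\t", 1)
        entries.setdefault(display, []).append(spoken.strip())
    return entries
```

For example, parsing `"CNTK\tc n t k\nIEEE\ti triple e"` yields one spoken form for each display form.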
-Refer to the following table to ensure that your related data file for pronunciations is correctly formatted. The size of pronunciation files should be limited to a few kilobytes.
+Refer to the following table to ensure that your pronunciation dataset files are valid and correctly formatted.
| Property | Value |
|-|-|
Refer to the following table to ensure that your related data file for pronuncia
Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind that audio data is used to inspect the accuracy of speech with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
-Custom Speech requires audio files with these properties:
+Custom Speech projects require audio files with these properties:
| Property | Value |
|--|--|
Custom Speech requires audio files with these properties:
| Archive format | .zip |
| Maximum archive size | 2 GB or 10,000 files |

> [!NOTE]
> When you're uploading training and testing data, the .zip file size can't exceed 2 GB. If you require more data for training, divide it into several .zip files and upload them separately. Later, you can choose to train from *multiple* datasets. However, you can test from only a *single* dataset.
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
| Check the audio file format. | `sox --i <filename>` |
| Convert the audio file to single channel, 16-bit, 16 KHz. | `sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav` |
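To script the same conversion over many files, you can build the SoX command line from the table above. The helper below is a sketch; it only constructs the arguments, and actually running the conversion assumes SoX is installed and on your PATH:

```python
import subprocess

def sox_convert_args(input_path: str, output_path: str) -> list[str]:
    """Build the SoX arguments for the conversion shown in the table:
    single channel, 16-bit signed PCM, 16 kHz WAV."""
    return [
        "sox", input_path,
        "-b", "16",             # 16-bit samples
        "-e", "signed-integer", # signed PCM encoding
        "-c", "1",              # mono (single channel)
        "-r", "16k",            # 16 kHz sample rate
        "-t", "wav", output_path,
    ]

# To run the conversion (requires SoX on PATH):
# subprocess.run(sox_convert_args("meeting.flac", "meeting.wav"), check=True)
```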
+### Audio data for training
+
+Not all base models support [training with audio data](language-support.md#speech-to-text). For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text).
+
+Even if a base model supports training with audio data, the service might use only part of the audio. In [regions](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware available for training audio data, the Speech service will use up to 20 hours of your audio training data. In other regions, the Speech service uses up to 8 hours of your audio data.
+ ## Next steps
-* [Upload your data](how-to-custom-speech-upload-data.md)
-* [Inspect your data](how-to-custom-speech-inspect-data.md)
-* [Train a custom model](how-to-custom-speech-train-model.md)
+- [Upload your data](how-to-custom-speech-upload-data.md)
+- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+- [Train a custom model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Title: Train and deploy a Custom Speech model - Speech service
+ Title: Train a Custom Speech model - Speech service
-description: Learn how to train and deploy Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft baseline model or a custom model.
+description: Learn how to train Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft base model or a custom model.
Previously updated : 01/23/2022 Last updated : 05/08/2022
-# Train and deploy a Custom Speech model
+# Train a Custom Speech model
-In this article, you'll learn how to train and deploy Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft baseline model. You use human-labeled transcriptions and related text to train a model. And you use these datasets, along with previously uploaded audio data, to refine and train the speech-to-text model.
+In this article, you'll learn how to train a Custom Speech model to improve recognition accuracy beyond the Microsoft base model. Training a model is typically an iterative process: you first select a base model as the starting point for a new model, train the model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then test and refine the model with more data.
-> [!NOTE]
-> You pay to use Custom Speech models, but you are not charged for training a model.
-
-## Use training to resolve accuracy problems
-
-If you're encountering recognition problems with a base model, you can use human-labeled transcripts and related data to train a custom model and help improve accuracy. To determine which dataset to use to address your problems, refer to the following table:
-
-| Use case | Data type |
-| -- | |
-| Improve recognition accuracy on industry-specific vocabulary and grammar, such as medical terminology or IT jargon. | Plain text or structured text data |
-| Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, such as product names or acronyms. | Pronunciation data or phonetic pronunciation in structured text |
-| Improve recognition accuracy on speaking styles, accents, or specific background noises. | Audio + human-labeled transcripts |
-| | |
-
-## Train and evaluate a model
-
-The first step in training a model is to upload training data. For step-by-step instructions for preparing human-labeled transcriptions and related text (utterances and pronunciations), see [Prepare and test your data](./how-to-custom-speech-test-and-train.md). After you upload training data, follow these instructions to start training your model:
-
-1. Sign in to the [Custom Speech portal](https://speech.microsoft.com/customspeech). If you plan to train a model with audio + human-labeled transcription datasets, pick a Speech subscription in a [region with dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
-
-1. Go to **Speech-to-text** > **Custom Speech** > **[name of project]** > **Training**.
-
-1. Select **Train model**.
-
-1. Give your training a **Name** and **Description**.
-
-1. In the **Scenario and Baseline model** list, select the scenario that best fits your domain. If you're not sure which scenario to choose, select **General**. The baseline model is the starting point for training. The most recent model is usually the best choice.
-
-1. On the **Select training data** page, choose one or more related text datasets or audio + human-labeled transcription datasets that you want to use for training.
-
- > [!NOTE]
- > When you train a new model, start with related text. Training with audio + human-labeled transcription might take much longer (up to [several days](how-to-custom-speech-evaluate-data.md#add-audio-with-human-labeled-transcripts)).
-
- > [!NOTE]
- > Not all base models support training with audio. If a base model doesn't support it, the Speech service will use only the text from the transcripts and ignore the audio. For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text).
-
- > [!NOTE]
- > In cases when you change the base model used for training, and you have audio in the training dataset, *always* check to see whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model didn't support training with audio data, and the training dataset contains audio, training time with the new base model will *drastically* increase, and might easily go from several hours to several days and more. This is especially true if your Speech service subscription is *not* in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
- >
- > If you face the issue described in the preceding paragraph, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. We recommend the latter option if your Speech service subscription is *not* in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
-
-1. After training is complete, you can do accuracy testing on the newly trained model. This step is optional.
-1. Select **Create** to build your custom model.
-
-The **Training** table displays a new entry that corresponds to the new model. The table also displays the status: *Processing*, *Succeeded*, or *Failed*.
-
-For more information, see the [how-to article](how-to-custom-speech-evaluate-data.md) about evaluating and improving Custom Speech model accuracy. If you choose to test accuracy, it's important to select an acoustic dataset that's different from the one you used with your model. This approach can provide a more realistic sense of the model's performance.
+You can use a custom model for a limited time after it's trained and [deployed](how-to-custom-speech-deploy-model.md). You must periodically recreate and adapt your custom model from the latest base model to take advantage of the improved accuracy and quality. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
> [!NOTE]
-> Both base models and custom models can be used only up to a certain date (see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). Speech Studio displays this date in the **Expiration** column for each model and endpoint. After that date, a request to an endpoint or to batch transcription might fail or fall back to base model.
->
-> Retrain your model by using the most recent base model to benefit from accuracy improvements and to avoid allowing your model to expire.
-
-## Deploy a custom model
-
-After you upload and inspect data, evaluate accuracy, and train a custom model, you can deploy a custom endpoint to use with your apps, tools, and products.
-
-1. To create a custom endpoint, sign in to the [Custom Speech portal](https://speech.microsoft.com/customspeech).
-
-1. On the **Custom Speech** menu at the top of the page, select **Deployment**.
-
- If this is your first run, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
-
-1. Select **Add endpoint**, and then enter a **Name** and **Description** for your custom endpoint.
-
-1. Select the custom model that you want to associate with the endpoint.
-
- You can also enable logging from this page. Logging allows you to monitor endpoint traffic. If logging is disabled, traffic won't be stored.
-
- ![Screenshot showing the selected `Log content from this endpoint` checkbox on the `New endpoint` page.](./media/custom-speech/custom-speech-deploy-model.png)
-
- > [!NOTE]
- > Remember to accept the terms of use and pricing details.
+> You pay to use Custom Speech models, but you are not charged for training a model.
-1. Select **Create**.
+## Train the model
- This action returns you to the **Deployment** page. The table now includes an entry that corresponds to your custom endpoint. The endpoint's status shows its current state. It can take up to 30 minutes to instantiate a new endpoint that uses your custom models. When the status of the deployment changes to **Complete**, the endpoint is ready to use.
+If you plan to train a model with audio data, use a Speech resource in a [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware for training.
- After your endpoint is deployed, the endpoint name appears as a link.
+After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.md), follow these instructions to start training your model:
-1. Select the endpoint link to view information specific to it, such as the endpoint key, endpoint URL, and sample code.
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Train custom models**.
+1. Select **Train a new model**.
+1. On the **Select a baseline model** page, select a base model, and then select **Next**. If you aren't sure, select the most recent model from the top of the list.
- Note the expiration date, and update the endpoint's model before that date to ensure uninterrupted service.
+ > [!IMPORTANT]
+ > Take note of the **Expiration for adaptation** date. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-## View logging data
+1. On the **Choose data** page, select one or more datasets that you want to use for training. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
+1. Enter a name and description for your custom model, and then select **Next**.
+1. Optionally, check the **Add test in the next step** box. If you skip this step, you can run the same tests later. For more information, see [Test recognition quality](how-to-custom-speech-inspect-data.md) and [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
+1. Select **Save and close** to kick off the build for your custom model.
-Logging data is available for export from the endpoint's page, under **Deployments**.
+On the main **Train custom models** page, details about the new model are displayed in a table, such as name, description, status (*Processing*, *Succeeded*, or *Failed*), and expiration date.
-> [!NOTE]
-> Logging data is available on Microsoft-owned storage for 30 days, after which it will be removed. If a customer-owned storage account is linked to the Cognitive Services subscription, the logging data won't be automatically deleted.
+> [!IMPORTANT]
+> Take note of the date in the **Expiration** column. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-## Additional resources
+## Next steps
-- [Learn how to use your custom model](how-to-specify-source-language.md)
-- [Inspect your data](how-to-custom-speech-inspect-data.md)
-- [Evaluate your data](how-to-custom-speech-evaluate-data.md)
+- [Test recognition quality](how-to-custom-speech-inspect-data.md)
+- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+- [Deploy a model](how-to-custom-speech-deploy-model.md)
cognitive-services How To Custom Speech Transcription Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-transcription-editor.md
+
+ Title: How to use the online transcription editor for Custom Speech - Speech service
+
+description: The online transcription editor allows you to create or edit audio + human-labeled transcriptions for Custom Speech.
+ Last updated : 05/08/2022
+# How to use the online transcription editor
+
+The online transcription editor allows you to create or edit audio + human-labeled transcriptions for Custom Speech. The main use cases of the editor are as follows:
+
+* You only have audio data, but want to build accurate audio + human-labeled datasets from scratch to use in model training.
+* You already have audio + human-labeled datasets, but there are errors or defects in the transcription. The editor allows you to quickly modify the transcriptions to get the best training accuracy.
+
+The only requirement to use the transcription editor is to have audio data uploaded, with or without corresponding transcriptions.
+
+You can find the **Editor** tab next to the **Training and testing dataset** tab on the main **Speech datasets** page.
++
+Datasets in the **Training and testing dataset** tab can't be updated. You can import a copy of a training or testing dataset to the **Editor** tab, add or edit human-labeled transcriptions to match the audio, and then export the edited dataset to the **Training and testing dataset** tab. You can't use a dataset that's in the Editor to train or test a model.
+
+## Import datasets to the Editor
+
+To import a dataset to the Editor, follow these steps:
+
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select **Import data**.
+1. Select datasets. You can select audio data only, audio + human-labeled data, or both. For audio-only data, you can use the default models to automatically generate machine transcription after importing to the editor.
+1. Enter a name and description for the new dataset, and then select **Next**.
+1. Review your settings, and then select **Import and close** to kick off the import process. After data has been successfully imported, you can select datasets and start editing.
+
+> [!NOTE]
+> You can also select a dataset from the main **Speech datasets** page and export it to the Editor. Select the dataset, and then select **Export to Editor**.
+
+## Edit transcription to match audio
+
+Once a dataset has been imported to the Editor, you can start editing the dataset. You can add or edit human-labeled transcriptions to match the audio as you hear it. You do not edit any audio data.
+
+To edit a dataset's transcription in the Editor, follow these steps:
+
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select the link to a dataset by name.
+1. From the **Audio + text files** table, select the link to an audio file by name.
+1. After you've made edits, select **Save**.
+
+If there are multiple files in the dataset, you can select **Previous** and **Next** to move from file to file. Edit and save changes to each file as you go.
+
+The detail page lists all the segments in each audio file, and you can select the desired utterance. For each utterance, you can play and compare the audio with the corresponding transcription. Edit the transcriptions if you find any insertion, deletion, or substitution errors. For more information about word error types, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
+
+## Export datasets from the Editor
+
+Datasets in the Editor can be exported to the **Training and testing dataset** tab, where they can be used to train or test a model.
+
+To export datasets from the Editor, follow these steps:
+
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select the link to a dataset by name.
+1. Select one or more rows from the **Audio + text files** table.
+1. Select **Export** to export all of the selected files as one new dataset.
+
+The files are exported as a new dataset, and will not impact or replace other training or testing datasets.
+
+## Next steps
+
+- [Test recognition quality](how-to-custom-speech-inspect-data.md)
+- [Train your model](how-to-custom-speech-train-model.md)
+- [Deploy your model](how-to-custom-speech-deploy-model.md)
+
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
Title: "Upload data for Custom Speech - Speech service"
+ Title: "Upload training and testing datasets for Custom Speech - Speech service"
description: Learn about how to upload data to test or train a Custom Speech model.
Previously updated : 03/03/2022 Last updated : 05/08/2022
-# Upload data for Custom Speech
+# Upload training and testing datasets for Custom Speech
-You need audio or text data for testing the accuracy of Microsoft speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Prepare data for Custom Speech](how-to-custom-speech-test-and-train.md).
+You need audio or text data for testing the accuracy of Microsoft speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Training and testing datasets](how-to-custom-speech-test-and-train.md).
-## Upload data in Speech Studio
+## Upload datasets in Speech Studio
-To upload your data:
+To upload your own datasets in Speech Studio, follow these steps:
-1. Go to [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. After you create a project, go to the **Speech datasets** tab. Select **Upload data** to start the wizard and create your first dataset.
-1. Select a speech data type for your dataset, and upload your data.
+1. Sign in to the [Speech Studio](https://speech.microsoft.com/customspeech).
+1. Select **Custom Speech** > Your project name > **Speech datasets** > **Upload data**.
+1. Select the **Training data** or **Testing data** tab.
+1. Select a dataset type, and then select **Next**.
+1. Specify the dataset location, and then select **Next**. You can choose a local file or enter a remote location, such as an Azure Blob storage URL with public access.
+1. Enter the dataset name and description, and then select **Next**.
+1. Review your settings, and then select **Save and close**.
-1. Specify whether the dataset will be used for **Training** or **Testing**.
+After your dataset is uploaded, go to the **Train custom models** page to [train a custom model](how-to-custom-speech-train-model.md).
- There are many types of data that can be uploaded and used for **Training** or **Testing**. Each dataset that you upload must be correctly formatted before uploading, and it must meet the requirements for the data type that you choose. Requirements are listed in the following sections.
+### Upload datasets via REST API
-1. After your dataset is uploaded, you can either:
+You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md) to upload a dataset by using the [CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
- * Go to the **Train custom models** tab to train a custom model.
- * Go to the **Test models** tab to visually inspect quality with audio-only data or evaluate accuracy with audio + human-labeled transcription data.
-
-### Upload data by using Speech-to-text REST API v3.0
-
-You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md) to automate any operations related to your custom models. In particular, you can use the REST API to upload a dataset.
-
-To create and upload a dataset, use a [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
-
-A dataset that you create by using the Speech-to-text REST API v3.0 won't be connected to any of the Speech Studio projects, unless you specify a special parameter in the request body (see the code block later in this section). Connection with a Speech Studio project isn't required for any model customization operations, if you perform them by using the REST API.
-
-When you log on to Speech Studio, its user interface will notify you when any unconnected object is found (like datasets uploaded through the REST API without any project reference). The interface will also offer to connect such objects to an existing project.
-
-To connect the new dataset to an existing project in Speech Studio during its upload, use [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) and fill out the request body according to the following format:
+To connect the dataset to an existing project, fill out the request body according to the following format:
```json
{
    "project": {
        "self": "<project URL from the GetProjects request>"
    }
}
```
-You can obtain the project URL that's required for the `project` element by using the [Get Projects](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request.
+You can get a list of existing project URLs that can be used in the `project` element by using the [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request.
+
+> [!NOTE]
+> Connecting a dataset to a Custom Speech project isn't required to train and test a custom model using the REST API or Speech CLI. But if the dataset is not connected to any project, you won't be able to train or test a model in the [Speech Studio](https://aka.ms/speechstudio/customspeech).
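
As a sketch, the `CreateDataset` request body with a project reference can be assembled as follows. This is illustrative only: the dataset URL and project URL below are placeholders, not values from this article, and the helper function is hypothetical rather than part of any SDK.

```python
import json

def build_create_dataset_body(display_name, locale, kind, content_url, project_url):
    """Assemble a CreateDataset request body that connects the dataset to a project."""
    return {
        "displayName": display_name,
        "locale": locale,
        "kind": kind,                # e.g. "Acoustic" for audio + human-labeled transcripts
        "contentUrl": content_url,   # remote location of the dataset file
        "project": {"self": project_url},
    }

body = build_create_dataset_body(
    display_name="My dataset",
    locale="en-US",
    kind="Acoustic",
    content_url="https://example.com/dataset.zip",
    project_url="https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/<project-id>",
)

# This JSON would be POSTed to the datasets endpoint for your region, with your
# Speech resource key in the Ocp-Apim-Subscription-Key header.
print(json.dumps(body, indent=2))
```

Omitting the `project` element creates an unconnected dataset, which is still usable from the REST API and Speech CLI as the note above describes.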
## Next steps
-* [Inspect your data](how-to-custom-speech-inspect-data.md)
-* [Evaluate your data](how-to-custom-speech-evaluate-data.md)
+* [Test recognition quality](how-to-custom-speech-inspect-data.md)
+* [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
* [Train a custom model](how-to-custom-speech-train-model.md)
cognitive-services How To Migrate From Bing Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-from-bing-speech.md
The Speech service uses a time-based pricing model (rather than a transaction-ba
If you or your organization have applications in development or production that use a Bing Speech API, you should update them to use the Speech service as soon as possible. See the [Speech service documentation](index.yml) for available SDKs, code samples, and tutorials.
-The Speech service [REST APIs](./overview.md#reference-docs) are compatible with the Bing Speech APIs. If you're currently using the Bing Speech REST APIs, you need only change the REST endpoint, and switch to a Speech service subscription key.
+The Speech service REST APIs are compatible with the Bing Speech APIs. If you're currently using the Bing Speech REST APIs, you need only change the REST endpoint, and switch to a Speech service subscription key.
If you're using a Bing Speech client library for a specific programming language, migrating to the [Speech SDK](speech-sdk.md) requires changes to your application, because the API is different. The Speech SDK can make your code simpler, while also giving you access to new features. The Speech SDK is available in a wide variety of programming languages. APIs on all platforms are similar, easing multi-platform development.
For Speech service, SDK, and API support, visit the Speech service [support page
## Next steps
-* [Try out Speech service for free](overview.md#try-the-speech-service-for-free)
* [Get started with speech-to-text](get-started-speech-to-text.md)
* [Get started with text-to-speech](get-started-text-to-speech.md)
cognitive-services How To Migrate From Translator Speech Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-from-translator-speech-api.md
If you or your organization have applications in development or production that
## Next steps
-* [Try out Speech service for free](overview.md#try-the-speech-service-for-free)
+* [Learn about the Speech service](overview.md)
* [Quickstart: Recognize speech in a UWP app using the Speech SDK](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=uwp)
-## See also
-
-* [What is the Speech service](overview.md)
cognitive-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md
In JavaScript, the [MediaDevices.enumerateDevices()](https://developer.mozilla.o
## Next steps
-> [!div class="nextstepaction"]
-> [Explore samples on GitHub](https://aka.ms/csspeech/samples)
-
-## See also
+- [Explore samples on GitHub](https://aka.ms/csspeech/samples)
- [Customize acoustic models](./how-to-custom-speech-train-model.md)
- [Customize language models](./how-to-custom-speech-train-model.md)
cognitive-services How To Specify Source Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-specify-source-language.md
SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpe
::: zone-end
-## See also
-
-For a list of supported languages and locales for speech-to-text, see [Language support](language-support.md).
- ## Next steps
-See the [Speech SDK reference documentation](speech-sdk.md).
+- [Language support](language-support.md)
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
The overall workflow of viseme is depicted in the following flowchart:
|--|-|
| Viseme ID | An integer number that specifies a viseme.<br>For English (US), we offer 22 different visemes, each depicting the mouth shape for a specific set of phonemes. There is no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes). |
| Audio offset | The start time of each viseme, in ticks (100 nanoseconds). |
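
Because the audio offset arrives in ticks, a small illustrative helper (not part of the Speech SDK) can convert it to milliseconds:

```python
TICKS_PER_MILLISECOND = 10_000  # 1 tick = 100 nanoseconds

def ticks_to_milliseconds(ticks: int) -> float:
    """Convert a viseme audio offset from ticks (100 ns units) to milliseconds."""
    return ticks / TICKS_PER_MILLISECOND

# A viseme that starts one second into the audio:
print(ticks_to_milliseconds(10_000_000))  # 1000.0
```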
-| | |
+
## Get viseme events with the Speech SDK
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
You can transcribe meetings and other conversations with the ability to add, rem
## Prerequisites
-This article assumes that you have an Azure account and Speech service subscription. If you don't have an account and subscription, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
+This article assumes that you have an Azure Cognitive Services Speech resource key and region. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
> [!NOTE]
> The Speech SDK for C++, Java, Objective-C, and Swift supports Conversation Transcription, but we haven't yet included a guide here.
cognitive-services How To Windows Voice Assistants Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-windows-voice-assistants-get-started.md
For a complete voice assistant experience, the application will need a dialog se
These are the requirements to create a basic dialog service using Direct Line Speech.
-- **Speech Services subscription:** A subscription for Cognitive Speech Services for speech-to-text and text-to-speech conversions. Try Speech Services for free [here](./overview.md#try-the-speech-service-for-free).
+- **Speech resource:** A subscription for Cognitive Speech Services for speech-to-text and text-to-speech conversions. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
- **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above that's subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go [here](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot, then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot".

## Try out the sample app
cognitive-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/intent-recognition.md
Sample code for intent recognition:
## Next steps
-* Complete the intent recognition [quickstart](get-started-intent-recognition.md)
-* [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
+* [Intent recognition quickstart](get-started-intent-recognition.md)
* [Get the Speech SDK](speech-sdk.md)
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult)
### Using Speech-to-text custom models

::: zone pivot="programming-language-csharp"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Train and deploy a Custom Speech model](how-to-custom-speech-train-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
```csharp
var sourceLanguageConfigs = new SourceLanguageConfig[]
var autoDetectSourceLanguageConfig =
::: zone-end

::: zone pivot="programming-language-cpp"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Train and deploy a Custom Speech model](how-to-custom-speech-train-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
```cpp
std::vector<std::shared_ptr<SourceLanguageConfig>> sourceLanguageConfigs;
auto autoDetectSourceLanguageConfig =
::: zone-end

::: zone pivot="programming-language-java"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Train and deploy a Custom Speech model](how-to-custom-speech-train-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
```java
List sourceLanguageConfigs = new ArrayList<SourceLanguageConfig>();
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
::: zone-end

::: zone pivot="programming-language-python"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Train and deploy a Custom Speech model](how-to-custom-speech-train-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
```Python
en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
This sample shows how to use language detection with a custom endpoint. If the d
::: zone-end

::: zone pivot="programming-language-objectivec"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Train and deploy a Custom Speech model](how-to-custom-speech-train-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
```Objective-C
SPXSourceLanguageConfiguration* enLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"en-US"];
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Language support varies by Speech service functionality. The following tables su
The Speech service supports the languages (locales) in the following tables.
-To improve accuracy, customization is available for some languages and baseline model versions by uploading audio + human-labeled transcripts, plain text, structured text, and pronunciation. By default, plain text customization is supported for all available baseline models. To learn more about customization, see [Get started with Custom Speech](./custom-speech-overview.md).
+To improve accuracy, customization is available for some languages and base model versions by uploading audio + human-labeled transcripts, plain text, structured text, and pronunciation. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md).
### [Speech-to-text](#tab/speechtotext)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
Title: What is the Speech service?
-description: The Speech service is the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. Add speech to your applications, tools, and devices with the Speech SDK, Speech Studio, or REST APIs.
+description: The Speech service provides speech-to-text, text-to-speech, and speech translation capabilities with an Azure resource. Add speech to your applications, tools, and devices with the Speech SDK, Speech Studio, or REST APIs.
Previously updated : 03/09/2022 Last updated : 04/21/2022

# What is the Speech service?
-The Speech service is the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. It's easy to speech enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Studio](speech-studio-overview.md), or [REST APIs](#reference-docs).
+The Speech service provides speech-to-text and text-to-speech capabilities with an [Azure Speech resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). You can transcribe speech to text with high accuracy, produce natural-sounding text-to-speech voices, translate spoken audio, and use speaker recognition during conversations.
-The following features are part of the Speech service. Use the links in this table to learn more about common use-cases for each feature. You can also browse the API reference.
-| Service | Feature | Description | SDK | REST |
-|||-|--||
-| [Speech-to-text](speech-to-text.md) | Real-time speech-to-text | Speech-to-text transcribes or translates audio streams or local files to text in real time that your applications, tools, or devices can consume or display. Use speech-to-text with [Language Understanding (LUIS)](../luis/index.yml) to derive user intents from transcribed speech and act on voice commands. | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
-| | [Batch speech-to-text](batch-transcription.md) | Batch speech-to-text enables asynchronous speech-to-text transcription of large volumes of speech audio data stored in Azure Blob Storage. In addition to converting speech audio to text, batch speech-to-text allows for diarization and sentiment analysis. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
-| | [Multidevice conversation](multi-device-conversation.md) | Connect multiple devices or clients in a conversation to send speech- or text-based messages, with easy support for transcription and translation.| Yes | No |
-| | [Conversation transcription](./conversation-transcription.md) | Enables real-time speech recognition, speaker identification, and diarization. It's perfect for transcribing in-person meetings with the ability to distinguish speakers. | Yes | No |
-| | [Create custom speech models](#customize-your-speech-experience) | If you're using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
-| | [Pronunciation assessment](./how-to-pronunciation-assessment.md) | Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. | [Yes](./how-to-pronunciation-assessment.md) | [Yes](./rest-speech-to-text-short.md#pronunciation-assessment-parameters) |
-| [Text-to-speech](text-to-speech.md) | Prebuilt neural voices | Text-to-speech converts input text into humanlike synthesized speech by using the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Use neural voices, which are humanlike voices powered by deep neural networks. See [Language support](language-support.md). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
-| | [Custom neural voices](#customize-your-speech-experience) | Create custom neural voice fonts unique to your brand or product. | No | [Yes](#reference-docs) |
-| [Speech translation](speech-translation.md) | Speech translation | Speech translation enables real-time, multilanguage translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech-to-text translation. | [Yes](./speech-sdk.md) | No |
-| [Language identification](language-identification.md) | Language identification | Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md). Use language identification by itself, with speech-to-text recognition, or with speech translation. | [Yes](./speech-sdk.md) | No |
-| [Voice assistants](voice-assistants.md) | Voice assistants | Voice assistants using the Speech service empower developers to create natural, humanlike conversational interfaces for their applications and experiences. The voice assistant feature provides fast, reliable interaction between a device and an assistant implementation that uses the Bot Framework's Direct Line Speech channel or the integrated custom commands service for task completion. | [Yes](voice-assistants.md) | No |
-| [Speaker recognition](speaker-recognition-overview.md) | Speaker verification and identification | Speaker recognition provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker recognition is used to answer the question, "Who is speaking?". | Yes | [Yes](/rest/api/speakerrecognition/) |
+Create custom voices, add specific words to your base vocabulary, or build your own models. Run Speech anywhere, in the cloud or at the edge in containers. It's easy to speech enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Studio](speech-studio-overview.md), or [REST APIs](./rest-speech-to-text.md).
-## Try the Speech service for free
+Speech is available for many [languages](language-support.md), [regions](regions.md), and [price points](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-For the following steps, you need a Microsoft account and an Azure account. If you don't have a Microsoft account, you can sign up for one free of charge at the [Microsoft account portal](https://account.microsoft.com/account). Select **Sign in with Microsoft**. When you're asked to sign in, select **Create a Microsoft account**. Follow the steps to create and verify your new Microsoft account.
+## Speech scenarios
-After you have a Microsoft account, go to the [Azure sign-up page](https://azure.microsoft.com/free/ai/) and select **Start free**. Create a new Azure account by using a Microsoft account. Here's a video of [how to sign up for an Azure free account](https://www.youtube.com/watch?v=GWT2R1C\_uUU).
+Common scenarios for speech include:
+- [Captioning](./captioning-concepts.md): Learn how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios.
+- [Audio Content Creation](text-to-speech.md#more-about-neural-text-to-speech-features): You can use neural voices to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems.
+- [Call center transcription](call-center-transcription.md): Transcribe calls in real-time or process a batch of calls, redact personally identifying information, and extract insights such as sentiment to help with your call center use case.
+- [Voice assistants](voice-assistants.md): Create natural, humanlike conversational interfaces for your applications and experiences. The voice assistant feature provides fast, reliable interaction between a device and an assistant implementation.
-### Create the Azure resource
+Microsoft uses Speech for many scenarios, such as captioning in Teams, dictation in Office 365, and Read Aloud in the Edge browser.
-To add a Speech service resource to your Azure account by using the free or paid tier:
-1. Sign in to the [Azure portal](https://portal.azure.com/) by using your Microsoft account.
+## Speech capabilities
-1. Select **Create a resource** at the top left of the portal. If you don't see **Create a resource**, you can always find it by selecting the collapsed menu in the upper-left corner of the screen.
+The following sections summarize the Speech features, with links to more information.
-1. In the **New** window, enter **speech** in the search box and select **Enter**.
+### Speech-to-text
-1. In the search results, select **Speech**.
-
- :::image type="content" source="media/index/speech-search.png" alt-text="Screenshot that shows creating a Speech resource in the Azure portal.":::
+Use [speech-to-text](speech-to-text.md) to transcribe audio into text, either in real time or asynchronously.
-1. Select **Create** and then:
+Convert audio to text from a range of sources, including microphones, audio files, and blob storage. Use speaker diarization to determine who said what and when. Get readable transcripts with automatic formatting and punctuation.
- 1. Give a unique name for your new resource. The name helps you distinguish among multiple subscriptions tied to the same service.
- 1. Choose the Azure subscription that the new resource is associated with to determine how the fees are billed. Here's the introduction for [how to create an Azure subscription](../../cost-management-billing/manage/create-subscription.md#create-a-subscription-in-the-azure-portal) in the Azure portal.
- 1. Choose the [region](regions.md) where the resource will be used. Azure is a global cloud platform that's generally available in many regions worldwide. To get the best performance, select a region that's closest to you or where your application runs. The Speech service availabilities vary among different regions. Make sure that you create your resource in a supported region. For more information, see [region support for Speech services](./regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation).
- 1. Choose either a free (F0) or paid (S0) pricing tier. For complete information about pricing and usage quotas for each tier, select **View full pricing details** or see [Speech services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). For limits on resources, see [Azure Cognitive Services limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-cognitive-services-limits).
- 1. Create a new resource group for this Speech subscription or assign the subscription to an existing resource group. Resource groups help you keep your various Azure subscriptions organized.
- 1. Select **Create**. This action takes you to the deployment overview and displays deployment progress messages.
+The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry- and domain-specific jargon. In these cases, you can create and train [custom speech models](custom-speech-overview.md) with acoustic, language, and pronunciation data. Custom speech models are private and can offer a competitive advantage.
-It takes a few moments to deploy your new Speech resource.
+You can try speech to text with [this demo web app](https://azure.microsoft.com/services/cognitive-services/speech-to-text/#features) or in the [Speech Studio](https://aka.ms/speechstudio/speechtotexttool).
-### Find keys and location/region
+### Text-to-speech
-To find the keys and location/region of a completed deployment:
+With [text to speech](text-to-speech.md), you can convert input text into humanlike synthesized speech. Use neural voices, which are humanlike voices powered by deep neural networks. Use the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to fine-tune the pitch, pronunciation, speaking rate, volume, and more.
-1. Sign in to the [Azure portal](https://portal.azure.com/) by using your Microsoft account.
+- Prebuilt neural voice: Highly natural out-of-the-box voices. Check the prebuilt neural voice samples [here](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs.
+- Custom neural voice: Besides the prebuilt neural voices that come out of the box, you can also create a [custom neural voice](custom-neural-voice.md) that's recognizable and unique to your brand or product. Custom neural voices are private and can offer a competitive advantage. Check the custom neural voice samples [here](https://aka.ms/customvoice).
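
For example, a minimal illustrative SSML fragment that tunes speaking rate and pitch with `prosody` (the voice name shown is one of the prebuilt neural voices; substitute the voice and text for your scenario):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <prosody rate="-10%" pitch="+5%">
      Welcome to the Speech service.
    </prosody>
  </voice>
</speak>
```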
-1. Select **All resources**, and select the name of your Cognitive Services resource.
+### Speech translation
-1. On the left pane, under **RESOURCE MANAGEMENT**, select **Keys and Endpoint**.
+[Speech translation](speech-translation.md) enables real-time, multilingual translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech-to-text translation.
- 1. Each subscription has two keys. You can use either key in your application. To copy and paste a key to your code editor or other location, select the copy button next to each key and switch windows to paste the clipboard contents to the desired location.
+### Language identification
- 1. Copy the `LOCATION` value, which is your region ID, for example, `westus` or `westeurope`, for SDK calls.
+[Language identification](language-identification.md) is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md). Use language identification by itself, with speech-to-text recognition, or with speech translation.
-> [!IMPORTANT]
-> These subscription keys are used to access your Cognitive Services API. Don't share your keys. Store them securely. For example, use Azure Key Vault. We also recommend that you regenerate these keys regularly. Only one key is necessary to make an API call. When you regenerate the first key, you can use the second key for continued access to the service.
+### Speaker recognition
+[Speaker recognition](speaker-recognition-overview.md) provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker recognition is used to answer the question, "Who is speaking?".
-## Complete a quickstart
+### Pronunciation assessment
-We offer quickstarts in most popular programming languages. Each quickstart is designed to teach you basic design patterns and have you running code in less than 10 minutes. See the following list for the quickstart for each feature:
+[Pronunciation assessment](./how-to-pronunciation-assessment.md) evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence.
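
As a sketch of how assessment parameters are supplied on a speech-to-text REST request, the JSON configuration is base64-encoded and sent in a `Pronunciation-Assessment` header. The parameter values below are illustrative placeholders; see the pronunciation assessment parameters in the REST reference for the full set.

```python
import base64
import json

# Illustrative assessment configuration (placeholder reference text).
config = {
    "ReferenceText": "Good morning.",
    "GradingSystem": "HundredPoint",
    "Granularity": "Phoneme",
    "Dimension": "Comprehensive",
}

# The JSON is base64-encoded and sent as the Pronunciation-Assessment request header.
header_value = base64.b64encode(json.dumps(config).encode("utf-8")).decode("ascii")
print(header_value)
```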
-* [Speech-to-text quickstart](get-started-speech-to-text.md)
-* [Text-to-speech quickstart](get-started-text-to-speech.md)
-* [Speech translation quickstart](./get-started-speech-translation.md)
-* [Intent recognition quickstart](./get-started-intent-recognition.md)
-* [Speaker recognition quickstart](./get-started-speaker-recognition.md)
+### Intent recognition
-After you've had a chance to get started with the Speech service, try our tutorials that show you how to solve various scenarios:
+[Intent recognition](./intent-recognition.md): Use speech-to-text with [Language Understanding (LUIS)](../luis/index.yml) to derive user intents from transcribed speech and act on voice commands.
- [Tutorial: Recognize intents from speech with the Speech SDK and LUIS, C#](how-to-recognize-intents-from-speech-csharp.md)
-- [Tutorial: Voice enable your bot with the Speech SDK, C#](tutorial-voice-enable-your-bot-speech-sdk.md)
-- [Tutorial: Build a Flask app to translate text, analyze sentiment, and synthesize translated text to speech, REST](/learn/modules/python-flask-build-ai-web-app/)
+## Delivery and presence
-## Get sample code
+You can deploy Azure Cognitive Services Speech features in the cloud or on-premises.
-Sample code is available on GitHub for the Speech service. These samples cover common scenarios like reading audio from a file or stream, continuous and single-shot recognition, and working with custom models. Use these links to view SDK and REST samples:
+With [containers](speech-container-howto.md), you can bring the service closer to your data for compliance, security, or other operational reasons.
- [Speech-to-text, text-to-speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
-- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
-- [Text-to-speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
-- [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples)
+Speech service deployment in sovereign clouds is available for some government entities and their partners. For example, the Azure Government cloud is available to US government entities and their partners. Azure China cloud is available to organizations with a business presence in China. For more information, see [sovereign clouds](sovereign-clouds.md).
++
+## Use Speech in your application
-## Customize your speech experience
+The [Speech Studio](speech-studio-overview.md) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
-The Speech service works well with built-in models. But you might want to further customize and tune the experience for your product or environment. Customization options range from acoustic model tuning to unique voice fonts for your brand.
+The [Speech CLI](spx-overview.md) is a command-line tool for using Speech service without having to write any code. Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI.
-Other products offer speech models tuned for specific purposes, like healthcare or insurance, but are available to everyone equally. Customization in Azure Speech becomes part of *your unique* competitive advantage that's unavailable to any other user or customer. In other words, your models are private and custom-tuned for your use case only.
+The [Speech SDK](./speech-sdk.md) exposes many of the Speech service capabilities you can use to develop speech-enabled applications. The Speech SDK is available in many programming languages and across all platforms.
-| Speech service | Platform | Description |
-| -- | -- | -- |
-| Speech-to-text | [Custom Speech](./custom-speech-overview.md) | Customize speech recognition models to your needs and available data. Overcome speech recognition barriers such as speaking style, vocabulary, and background noise. |
-| Text-to-speech | [Custom Voice](https://aka.ms/customvoice) | Build a recognizable, one-of-a-kind neural voice for your text-to-speech apps with your speaking data available. You can further fine-tune the neural voice outputs by adjusting a set of neural voice parameters. |
+In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [batch transcription](batch-transcription.md) and [speaker recognition](/rest/api/speakerrecognition/) REST APIs.
-## Deploy on-premises by using Docker containers
+## Get started
-[Use Speech service containers](speech-container-howto.md) to deploy API features on-premises. By using these Docker containers, you can bring the service closer to your data for compliance, security, or other operational reasons.
+We offer quickstarts in many popular programming languages. Each quickstart is designed to teach you basic design patterns and have you running code in less than 10 minutes. See the following list for the quickstart for each feature:
-## Reference docs
+* [Speech-to-text quickstart](get-started-speech-to-text.md)
+* [Text-to-speech quickstart](get-started-text-to-speech.md)
+* [Speech translation quickstart](./get-started-speech-translation.md)
+* [Intent recognition quickstart](./get-started-intent-recognition.md)
+* [Speaker recognition quickstart](./get-started-speaker-recognition.md)
+
+## Code samples
- [Speech SDK](./speech-sdk.md)
-- [REST API: Speech-to-text](rest-speech-to-text.md)
-- [REST API: Text-to-speech](rest-text-to-speech.md)
-- [REST API: Batch transcription and customization](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
+Sample code for the Speech service is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and single-shot recognition, and working with custom models. Use these links to view SDK and REST samples:
+
+- [Speech-to-text, text-to-speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
+- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
+- [Text-to-speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
+- [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples)
## Next steps
-* [Get started with speech-to-text](./get-started-speech-to-text.md)
+* [Get started with speech-to-text](get-started-speech-to-text.md)
* [Get started with text-to-speech](get-started-text-to-speech.md)
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
# Speech service supported regions
-The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can perform custom configurations to your speech experience, for all regions, at the [Speech portal](https://speech.microsoft.com).
+The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. For all regions, you can customize your speech experience in [Speech Studio](https://speech.microsoft.com).
Keep in mind the following points:

* If your application uses a [Speech SDK](speech-sdk.md), you provide the region identifier, such as `westus`, when you create a speech configuration. Make sure the region matches the region of your subscription.
-* If your application uses one of the Speech service's [REST APIs](./overview.md#reference-docs), the region is part of the endpoint URI you use when making requests.
+* If your application uses one of the Speech service REST APIs, the region is part of the endpoint URI you use when making requests.
* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you get authentication errors.

> [!NOTE]
The Speech service is available in these regions for speech-to-text, pronunciati
[!INCLUDE [](../../../includes/cognitive-services-speech-service-region-identifier.md)]
-If you plan to train a custom model with audio data, use one of the [regions with dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for faster training. You can use the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy the fully trained model to another region later.
+If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. You can use the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy the fully trained model to another region later.
> [!TIP]
> For the pronunciation assessment feature, `en-US` and `en-GB` are available in all regions listed above, `zh-CN` is available in the East Asia and Southeast Asia regions, `es-ES` and `fr-FR` are available in the West Europe region, and `en-AU` is available in the Australia East region.
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
Custom Speech Service does not support automatic failover. We suggest the follow
1. Create your custom model in one main region (Primary).
2. Run the [Model Copy API](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to replicate the custom model to all prepared regions (Secondary).
-3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Train and deploy a Custom Speech model](./how-to-custom-speech-train-model.md).
+3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a Custom Speech model](./how-to-custom-speech-deploy-model.md).
   - If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
4. Configure your client to fail over on persistent errors as with the default endpoints usage.
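The client-side failover in step 4 can be sketched as a loop over an ordered region list. Everything here is illustrative: the region names are examples, and `recognize` is a placeholder for your actual Speech client call, not part of any SDK.

```python
# Sketch of step 4: try the primary region first, then fall back to the
# prepared secondary regions on persistent errors. The `recognize`
# callable is a hypothetical stand-in for your real client call.
def transcribe_with_failover(audio, recognize, regions):
    last_error = None
    for region in regions:
        try:
            return region, recognize(region, audio)
        except RuntimeError as err:  # substitute your client's error type
            last_error = err
    raise RuntimeError(f"all regions failed: {last_error}")

# Example with a fake client whose primary region is unavailable:
def fake_recognize(region, audio):
    if region == "eastus":
        raise RuntimeError("primary unavailable")
    return f"transcript from {region}"

used_region, text = transcribe_with_failover(
    b"...", fake_recognize, ["eastus", "westus2", "westeurope"]
)
```

Because each copied model is deployed behind its own regional endpoint, the only per-region state the client needs is the region identifier (and its matching key).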
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
These parameters might be included in the query string of the REST request:
| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md#speech-to-text). | Required |
| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
-| `cid` | When you're using the [Custom Speech portal](./custom-speech-overview.md) to create custom models, you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
+| `cid` | When you're using the [Speech Studio](speech-studio-overview.md) to create [custom models](./custom-speech-overview.md), you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
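The query parameters in the table can be composed with standard URL encoding. A minimal sketch, which only builds the request URL and doesn't call the service; the `westus` host shown is illustrative, so substitute the endpoint for your resource's region:

```python
from urllib.parse import urlencode

# Compose a request URL from the query parameters in the table above.
params = {
    "language": "en-US",    # required
    "format": "detailed",   # optional; defaults to "simple"
    "profanity": "masked",  # optional; defaults to "masked"
}
base = "https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"
url = f"{base}?{urlencode(params)}"
```

The same URL works whether you send the request with `curl`, an HTTP library, or generated client code; only the headers (key and audio content type) differ.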
### Request headers
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
If you have multiple phrases to add, call `.addPhrase()` for each phrase to add
# [Custom speech-to-text](#tab/cstt)
-The custom speech-to-text container relies on a Custom Speech model. The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Custom Speech portal](https://speech.microsoft.com/customspeech).
+The custom speech-to-text container relies on a Custom Speech model. The custom model must have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://speech.microsoft.com/customspeech).
-The custom speech **Model ID** is required to run the container. It can be found on the **Training** page of the Custom Speech portal. From the Custom Speech portal, go to the **Training** page and select the model.
-<br>
+The custom speech **Model ID** is required to run the container. For more information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
![Screenshot that shows the Custom Speech training page.](media/custom-speech/custom-speech-model-training.png)

Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command.
-<br>
![Screenshot that shows Custom Speech model details.](media/custom-speech/custom-speech-model-details.png)
The following table represents the various `docker run` parameters and their cor
| Parameter | Description |
| -- | -- |
| `{VOLUME_MOUNT}` | The host computer [volume mount](https://docs.docker.com/storage/volumes/), which Docker uses to persist the custom model. An example is *C:\CustomSpeech* where the C drive is located on the host machine. |
-| `{MODEL_ID}` | The custom speech **Model ID** from the **Training** page of the Custom Speech portal. |
+| `{MODEL_ID}` | The custom speech model ID. For more information, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md). |
| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [Gather required parameters](#gather-required-parameters). |
| `{API_KEY}` | The API key is required. For more information, see [Gather required parameters](#gather-required-parameters). |
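Purely as an illustration of how these placeholders map into an invocation, the sketch below fills them from a dictionary. The container image name, model ID, and key values are hypothetical stand-ins, and the exact argument set for the container should be taken from the article's own `docker run` example:

```python
# Hypothetical values only: shows how the table's placeholders are
# substituted into a docker run command string. Not a real model ID,
# image name, or key.
params = {
    "VOLUME_MOUNT": "C:\\CustomSpeech",
    "MODEL_ID": "00000000-0000-0000-0000-000000000000",
    "ENDPOINT_URI": "https://westus.api.cognitive.microsoft.com",
    "API_KEY": "<your-key>",
}
command = (
    "docker run -v {VOLUME_MOUNT}:/usr/local/models "
    "<container-image> "
    "ModelId={MODEL_ID} "
    "Eula=accept "
    "Billing={ENDPOINT_URI} "
    "ApiKey={API_KEY}"
).format(**params)
```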
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech service delivers great functionality with its default models across s
### Custom speech-to-text
-When you use speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. The creation and management of no-code Custom Speech models is available through the [Custom Speech portal](./custom-speech-overview.md). After the Custom Speech model is published, it can be consumed by the Speech SDK.
+When you use speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. The creation and management of no-code Custom Speech models is available in the [Speech Studio](./custom-speech-overview.md). After the Custom Speech model is published, it can be consumed by the Speech SDK.
### Custom text-to-speech
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
## Prerequisites
-Before you can begin using [Speech Studio](https://speech.microsoft.com), you need to have an Azure account and a Speech resource. If you don't already have an account and a resource, [try Speech service for free](overview.md#try-the-speech-service-for-free).
+Before you can begin using [Speech Studio](https://speech.microsoft.com), you need to have an Azure account and a Speech resource. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Get the keys for your resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md#get-the-keys-for-your-resource).
After you've created an Azure account and a Speech service resource, do the following:
In Speech Studio, the following Speech service features are available as project
* **Real-time speech-to-text**: Quickly test speech-to-text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech-to-text works on your audio samples. To explore the full functionality, see [What is speech-to-text?](speech-to-text.md).
-* **Custom Speech**: Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Prepare data for Custom Speech](how-to-custom-speech-test-and-train.md).
+* **Custom Speech**: Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
* **Pronunciation assessment**: Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
keywords: speech to text, speech to text software
In this overview, you learn about the benefits and capabilities of the speech-to-text feature of the Speech service, which is part of Azure Cognitive Services.
-Speech-to-text, also known as speech recognition, enables real-time transcription of audio streams into text. Your applications, tools, or devices can consume, display, and take action on this text as command input.
-
-This feature uses the same recognition technology that Microsoft uses for Cortana and Office products. It seamlessly works with the <a href="./speech-translation.md">translation </a> and <a href="./text-to-speech.md">text-to-speech </a> offerings of the Speech service. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md#speech-to-text).
-
-The speech-to-text feature defaults to using the Universal Language Model. This model was trained through Microsoft-owned data and is deployed in the cloud. It's optimal for conversational and dictation scenarios.
-
-When you're using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models. Customization is helpful for addressing ambient noise or industry-specific vocabulary.
+Speech-to-text, also known as speech recognition, enables real-time or offline transcription of audio streams into text. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md#speech-to-text).
+> [!NOTE]
+> Microsoft uses the same recognition technology for Cortana and Office products.
## Get started
-To get started with speech-to-text, see the [quickstart](get-started-speech-to-text.md). Speech-to-text is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-speech-to-text-short.md#pronunciation-assessment-parameters), and the [Speech CLI](spx-overview.md).
+To get started with speech-to-text, see the [quickstart](get-started-speech-to-text.md). Speech-to-text is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-speech-to-text.md), and the [Speech CLI](spx-overview.md).
-## Sample code
-
-Sample code for the Speech SDK is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and single-shot recognition, and working with custom models:
+Sample code for the Speech SDK is available on GitHub. These samples cover common scenarios like reading audio from a file or stream for continuous and single-shot recognition, and working with custom models:
- [Speech-to-text samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
-- [Pronunciation assessment samples (REST)](rest-speech-to-text-short.md#pronunciation-assessment-parameters)
-
-## Customization
-
-In addition to the standard Speech service model, you can create custom models. Customization helps to overcome speech recognition barriers such as speaking style, vocabulary, and background noise. For more information, see [Custom Speech](./custom-speech-overview.md).
-
-Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md).
## Batch transcription

Batch transcription is a set of REST API operations that enable you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. For more information on how to use the batch transcription API, see [How to use batch transcription](batch-transcription.md).
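A sketch of what a batch transcription request might look like, pointing the service at SAS-addressed audio. The endpoint path and field names follow the v3.0 REST API but should be verified against the current reference, and the blob URL is a hypothetical placeholder:

```python
import json

# Illustrative v3.0 batch transcription request: the service reads audio
# from SAS-addressed blob URLs and you poll the returned transcription
# resource for results. Field names per the v3.0 REST API; verify
# against the current reference before use.
region = "westus"
endpoint = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
payload = {
    "displayName": "My transcription",
    "locale": "en-US",
    "contentUrls": [
        # hypothetical blob URL with a SAS token appended
        "https://example.blob.core.windows.net/audio/meeting.wav?sv=<sas-token>"
    ],
}
body = json.dumps(payload)
# POST `body` to `endpoint` with your Ocp-Apim-Subscription-Key header,
# then poll the transcription's status until it reports success.
```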
-## Reference docs
+## Custom Speech
-The [Speech SDK](speech-sdk.md) provides most of the functionalities that you need to interact with the Speech service. For scenarios such as model development and batch transcription, you can use the REST API.
+The Azure speech-to-text service analyzes audio in real time or in batch to transcribe the spoken word into text. Out of the box, speech-to-text uses a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This base model is pretrained with dialects and phonetics representing a variety of common domains. The base model works well in most scenarios.
-### Speech SDK reference docs
+The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry- and domain-specific jargon. In these cases, it makes sense to build a custom speech model by training it with additional data associated with that specific domain. You can create and train custom acoustic, language, and pronunciation models. For more information, see [Custom Speech](./custom-speech-overview.md).
-Use the following list to find the appropriate Speech SDK reference docs:
-
-- <a href="/dotnet/api/overview/azure/cognitiveservices/client/speechservice">C# SDK </a>
-- <a href="/cpp/cognitive-services/speech/">C++ SDK </a>
-- <a href="/java/api/com.microsoft.cognitiveservices.speech">Java SDK </a>
-- <a href="/python/api/azure-cognitiveservices-speech/">Python SDK</a>
-- <a href="/javascript/api/microsoft-cognitiveservices-speech-sdk/">JavaScript SDK</a>
-- <a href="/objectivec/cognitive-services/speech/">Objective-C SDK </a>
+Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md).
-> [!TIP]
-> The Speech service SDK is actively maintained and updated. To track changes, updates, and feature additions, see the [Speech SDK release notes](releasenotes.md).
+### REST API
-### REST API references
+In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). For speech-to-text REST APIs, see the following documentation:
-For speech-to-text REST APIs, see the following resources:
+- [Speech-to-text REST API v3.0](rest-speech-to-text.md): You should use the REST API for [batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
+- [Speech-to-text REST API for short audio](rest-speech-to-text-short.md): Use it only in cases where you can't use the [Speech SDK](speech-sdk.md).
- [REST API: Speech-to-text](rest-speech-to-text.md)
-- [REST API: Pronunciation assessment](rest-speech-to-text-short.md#pronunciation-assessment-parameters)
-- <a href="https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0">REST API: Batch transcription and customization </a>

## Next steps

-- [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
-- [Get the Speech SDK](speech-sdk.md)
+- [Get started with speech-to-text](get-started-speech-to-text.md)
+- [Get the Speech SDK](speech-sdk.md)
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md
If your applications, tools, or products are using the [Translator Speech API](.
## Next steps
-* Read the [Get started with speech translation](get-started-speech-translation.md) quickstart article.
-* Get a [Speech service subscription key for free](overview.md#try-the-speech-service-for-free).
-* Get the [Speech SDK](speech-sdk.md).
+* Complete the [speech translation quickstart](get-started-speech-translation.md)
+* Get the [Speech SDK](speech-sdk.md)
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
This article assumes that you have working knowledge of the Command Prompt windo
# [Terminal](#tab/terminal)
-To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). To learn how to get these credentials, see the [Speech service overview](overview.md#find-keys-and-locationregion) documentation.
+To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
To configure your subscription key and region identifier, run the following commands:
spx config @region --clear
# [PowerShell](#tab/powershell)
-To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). To learn how to get these credentials, see the [Speech service overview](overview.md#find-keys-and-locationregion) documentation.
+To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
To configure your subscription key and region identifier, run the following commands in PowerShell:
cognitive-services Swagger Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/swagger-documentation.md
You'll need to set Swagger to the region of your Speech resource. You can confir
You can use the Python library that you generated with the [Speech service samples on GitHub](https://aka.ms/csspeech/samples).
-## Reference documents
-
-* [Speech-to-text REST API v3.0](rest-speech-to-text.md)
-* [Text-to-speech REST API](rest-text-to-speech.md)
-
## Next steps

* [Speech service samples on GitHub](https://aka.ms/csspeech/samples).
-* [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
+* [Speech-to-text REST API v3.0](rest-speech-to-text.md)
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
For detailed information, see [Speech service pricing](https://azure.microsoft.c
## Next steps
-* [Get a free Speech service subscription](overview.md#try-the-speech-service-for-free)
+* [Text to speech quickstart](get-started-text-to-speech.md)
* [Get the Speech SDK](speech-sdk.md)
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
If you're not going to continue using the echo bot deployed in this tutorial, yo
1. Find the **SpeechEchoBotTutorial-ResourceGroup** resource group. Select the three dots (...).
1. Select **Delete resource group**.
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build your own client app by using the Speech SDK](./quickstarts/voice-assistants.md?pivots=programming-language-csharp)
-
-## See also
+## Explore documentation
* [Deploy to an Azure region near you](https://azure.microsoft.com/global-infrastructure/locations/) to see the improvement in bot response time.
* [Deploy to an Azure region that supports high-quality neural text-to-speech voices](./regions.md#prebuilt-neural-voices).
If you're not going to continue using the echo bot deployed in this tutorial, yo
* Build and deploy your own voice-enabled bot:
  * Build a [Bot Framework bot](https://dev.botframework.com/). Then [register it with the Direct Line Speech channel](/azure/bot-service/bot-service-channel-connect-directlinespeech) and [customize your bot for voice](/azure/bot-service/directline-speech-bot).
  * Explore existing [Bot Framework solutions](https://microsoft.github.io/botframework-solutions/index): [Build a virtual assistant](https://microsoft.github.io/botframework-solutions/overview/virtual-assistant-solution/) and [extend it to Direct Line Speech](https://microsoft.github.io/botframework-solutions/clients-and-channels/tutorials/enable-speech/1-intro/).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Build your own client app by using the Speech SDK](./quickstarts/voice-assistants.md?pivots=programming-language-csharp)
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/join-teams-meeting.md
As with Teams anonymous meeting join, your application must have the meeting lin
A Communication Service user won't be admitted to a Teams meeting until there is at least one Teams user present in the meeting. Once a Teams user is present, then the Communication Services user will wait in the lobby until explicitly admitted by a Teams user, unless the "Who can bypass the lobby?" meeting policy/setting is set to "Everyone".
-During a meeting, Communication Services users will be able to use core audio, video, screen sharing, and chat functionality via Azure Communication Services SDKs. Once a Communication Services user leaves the meeting or the meeting ends, they're no longer able to send or receive new chat messages, but they'll have access to messages sent and received during the meeting. Anonymous Communication Services users can't add/remove participants to/from the meeting nor can they start recording or transcription for the meeting.
+During a meeting, Communication Services users will be able to use core audio, video, screen sharing, and chat functionality via Azure Communication Services SDKs. Once a Communication Services user leaves the meeting or the meeting ends, they're no longer able to send or receive new chat messages, and they no longer have access to messages sent and received during the meeting. Anonymous Communication Services users can't add/remove participants to/from the meeting nor can they start recording or transcription for the meeting.
Additional information on required dataflows for joining Teams meetings is available at the [client and server architecture page](client-and-server-architecture.md). The [Group Calling Hero Sample](../samples/calling-hero-sample.md) provides example code for joining a Teams meeting from a web application.
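As an illustrative sketch only (not an SDK API; the function and setting values below are assumptions for illustration), the lobby admission rules described above can be modeled as a small function:

```python
def lobby_state(teams_user_present, lobby_bypass_setting):
    """Model the Teams lobby admission rules for a Communication Services user.

    Illustrative only; names and values here are assumptions, not SDK APIs.
    """
    if not teams_user_present:
        # No Teams user in the meeting yet: the user can't be admitted at all.
        return "waiting"
    if lobby_bypass_setting == "Everyone":
        # "Who can bypass the lobby?" set to "Everyone" skips the lobby.
        return "admitted"
    # Otherwise, wait in the lobby until a Teams user explicitly admits the user.
    return "in_lobby"

print(lobby_state(False, "Everyone"))          # waiting
print(lobby_state(True, "Everyone"))           # admitted
print(lobby_state(True, "People in my org"))   # in_lobby
```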
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
Use the `az containerapp revision list` command to get the revision, replica, an
```azurecli az containerapp revision list \ --name album-api \
- --resource-group album-api-rg \
+ --resource-group album-api-rg
``` # [PowerShell](#tab/powershell)
Show the streaming container logs:
# [Bash](#tab/bash) ```azurecli
-az containerapp logs show --name album-api \
+az containerapp logs show \
+ --name album-api \
--resource-group album-api-rg \ --revision album-api--v2 \ --replica album-api--v2-5fdd5b4ff5-6mblw \
az containerapp logs show --name album-api \
# [PowerShell](#tab/powershell) ```azurecli
-az containerapp logs show --name album-api `
+az containerapp logs show `
+ --name album-api `
--resource-group album-api-rg ` --revision album-api--v2 ` --replica album-api--v2-5fdd5b4ff5-6mblw `
For example, you can connect to a container console in a container app with a si
# [Bash](#tab/bash) ```azurecli
-az containerapp exec
+az containerapp exec \
--name album-api \
- --resource-group album-api-rg \
+ --resource-group album-api-rg
``` # [PowerShell](#tab/powershell) ```azurecli
-az containerapp exec
+az containerapp exec `
--name album-api `
- --resource-group album-api-rg `
+ --resource-group album-api-rg
```
Use the `az containerapp revision list` command to get the revision, replica and
```azurecli az containerapp revision list \ --name album-api \
- --resource-group album-api-rg \
+ --resource-group album-api-rg
``` # [PowerShell](#tab/powershell)
az containerapp revision list \
```azurecli az containerapp revision list ` --name album-api `
- --resource-group album-api-rg `
+ --resource-group album-api-rg
```
Connect to the container console.
# [Bash](#tab/bash) ```azurecli
-az containerapp exec --name album-api \
+az containerapp exec \
+ --name album-api \
--resource-group album-api-rg \ --revision album-api--v2 \ --replica album-api--v2-5fdd5b4ff5-6mblw \
az containerapp exec --name album-api \
# [PowerShell](#tab/powershell) ```azurecli
-az containerapp exec --name album-api `
+az containerapp exec `
+ --name album-api `
--resource-group album-api-rg ` --revision album-api--v2 ` --replica album-api--v2-5fdd5b4ff5-6mblw `
container-instances Container Instances Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-troubleshooting.md
This error indicates that due to heavy load in the region in which you are attem
* Deploy at a later time ## Issues during container group runtime
+### Container had an isolated restart without explicit user input
+
+There are two broad categories of container group restarts that occur without explicit user input. First, containers may restart because an application process crashes. To determine why the application experienced issues, the ACI service recommends using observability solutions such as the Application Insights SDK, container group metrics, and container group logs. Second, the ACI infrastructure may restart containers during maintenance events. To increase the availability of your application, run multiple container groups behind an ingress component such as an Application Gateway or Traffic Manager.
+ ### Container continually exits and restarts (no long-running process) Container groups default to a [restart policy](container-instances-restart-policy.md) of **Always**, so containers in the container group always restart after they run to completion. You may need to change this to **OnFailure** or **Never** if you intend to run task-based containers. If you specify **OnFailure** and still see continual restarts, there might be an issue with the application or script executed in your container.
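As a minimal sketch of the restart-policy semantics described above (an illustrative model, not ACI's implementation):

```python
def should_restart(policy, exit_code):
    """Decide whether a container restarts after exiting, per the documented
    restart policies. Illustrative model, not ACI's implementation."""
    if policy == "Always":
        return True               # always restart after running to completion
    if policy == "OnFailure":
        return exit_code != 0     # restart only when the process failed
    return False                  # "Never": run at most once

# A task-based container that exits cleanly keeps restarting under "Always",
# which is why "OnFailure" or "Never" fits task workloads better.
print(should_restart("Always", 0))     # True
print(should_restart("OnFailure", 0))  # False
print(should_restart("OnFailure", 1))  # True
```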
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
cosmos-db Cassandra Adoption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-adoption.md
Title: From Apache Cassandra to Cassandra API
-description: Learn best practices and ways to successfully use the Azure Cosmos DB Cassandra API to use with Apache Cassandra applications.
+ Title: How to adapt to the Cassandra API from Apache Cassandra
+description: Learn best practices and ways to successfully use the Azure Cosmos DB Cassandra API with Apache Cassandra applications.
Last updated 03/24/2022 +
-# From Apache Cassandra to Cassandra API
+# How to adapt to the Cassandra API if you are coming from Apache Cassandra
[!INCLUDE[appliesto-cassandra-api](../includes/appliesto-cassandra-api.md)] The Azure Cosmos DB Cassandra API provides wire protocol compatibility with existing Cassandra SDKs and tools. You can run applications that are designed to connect to Apache Cassandra by using the Cassandra API with minimal changes.
-When you use the Cassandra API, it's important to be aware of differences between Apache Cassandra and Azure Cosmos DB. This article is a checklist to help users who are familiar with native [Apache Cassandra](https://cassandra.apache.org/) successfully begin to use the Azure Cosmos DB Cassandra API.
+When you use the Cassandra API, it's important to be aware of differences between Apache Cassandra and Azure Cosmos DB. If you're familiar with native [Apache Cassandra](https://cassandra.apache.org/), this article can help you begin to use the Azure Cosmos DB Cassandra API.
## Feature support
-The Cassandra API supports a large number of Apache Cassandra features, but some features aren't supported, or they have limitations. Before you migrate, be sure that the [Azure Cosmos DB Cassandra API features](cassandra-support.md) you need are supported.
+The Cassandra API supports a large number of Apache Cassandra features. Some features aren't supported or they have limitations. Before you migrate, be sure that the [Azure Cosmos DB Cassandra API features](cassandra-support.md) you need are supported.
## Replication When you plan for replication, it's important to look at both migration and consistency.
-### Migration
- Although you can communicate with the Cassandra API through the Cassandra Query Language (CQL) binary protocol v4 wire protocol, Azure Cosmos DB implements its own internal replication protocol. You can't use the Cassandra gossip protocol for live migration or replication. For more information, see [Live-migrate from Apache Cassandra to the Cassandra API by using dual writes](migrate-data-dual-write-proxy.md). For information about offline migration, see [Migrate data from Cassandra to an Azure Cosmos DB Cassandra API account by using Azure Databricks](migrate-data-databricks.md).
-### Consistency
-
-Although the approaches to replication consistency in Apache Cassandra and Azure Cosmos DB are similar, it's important to understand how they are different. A [mapping document](apache-cassandra-consistency-mapping.md) compares Apache Cassandra and Azure Cosmos DB approaches to replication consistency. However, we highly recommend that you specifically review [Azure Cosmos DB consistency settings](../consistency-levels.md) or watch a brief [video guide to understanding consistency settings in the Azure Cosmos DB platform](https://aka.ms/docs.consistency-levels).
+Although the approaches to replication consistency in Apache Cassandra and Azure Cosmos DB are similar, it's important to understand how they're different. A [mapping document](apache-cassandra-consistency-mapping.md) compares Apache Cassandra and Azure Cosmos DB approaches to replication consistency. However, we highly recommend that you specifically review [Azure Cosmos DB consistency settings](../consistency-levels.md) or watch a brief [video guide to understanding consistency settings in the Azure Cosmos DB platform](https://aka.ms/docs.consistency-levels).
## Recommended client configurations
-When you use the Cassandra API, you don't need to make substantial code changes to existing applications that run Apache Cassandra. For the best experience, we recommend some approaches and configuration settings for the Cassandra API in Azure Cosmos DB. For details, review the blog post [Cassandra API recommendations for Java](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/).
+When you use the Cassandra API, you don't need to make substantial code changes to existing applications that run Apache Cassandra. We recommend some approaches and configuration settings for the Cassandra API in Azure Cosmos DB. Review the blog post [Cassandra API recommendations for Java](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/).
## Code samples
-The Cassandra API is designed to work with your existing application code. However, if you encounter any connectivity-related errors, use the [quickstart samples](manage-data-java-v4-sdk.md) as a starting point to discover any minor setup changes you might need to make in your existing code. We also have more in-depth samples for [Java v3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [Java v4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4) drivers. These code samples implement custom [extensions](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.0), which in turn implement recommended client configurations.
+The Cassandra API is designed to work with your existing application code. If you encounter connectivity-related errors, use the [quickstart samples](manage-data-java-v4-sdk.md) as a starting point to discover minor setup changes you might need to make in your existing code.
+
+We also have more in-depth samples for [Java v3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [Java v4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4) drivers. These code samples implement custom [extensions](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.0), which in turn implement recommended client configurations.
You also can use samples for [Java Spring Boot (v3 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v3) and [Java Spring Boot (v4 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v4.git). ## Storage
-The Cassandra API ultimately is backed by Azure Cosmos DB, which is a document-oriented NoSQL database engine. Azure Cosmos DB maintains metadata, which might result in a change in the amount of physical storage required for a specific workload.
+The Cassandra API is backed by Azure Cosmos DB, which is a document-oriented NoSQL database engine. Azure Cosmos DB maintains metadata, which might result in a change in the amount of physical storage required for a specific workload.
-The difference in storage requirements between native Apache Cassandra and Azure Cosmos DB is most noticeable in small row sizes. In some cases, the difference might be offset because Azure Cosmos DB doesn't implement compaction or tombstones. However, this factor depends significantly on the workload. If you're uncertain about storage requirements, we recommend that you first create a proof of concept.
+The difference in storage requirements between native Apache Cassandra and Azure Cosmos DB is most noticeable in small row sizes. In some cases, the difference might be offset because Azure Cosmos DB doesn't implement compaction or tombstones. This factor depends significantly on the workload. If you're uncertain about storage requirements, we recommend that you first create a proof of concept.
## Multi-region deployments
-Native Apache Cassandra is a multi-master system by default. Apache Cassandra doesn't have an option for single-master with multi-region replication for reads only. The concept of application-level failover to another region for writes is redundant in Apache Cassandra. All nodes are independent, and there is no single point of failure. However, Azure Cosmos DB provides the out-of-box ability to configure either single-master or multi-master regions for writes.
+Native Apache Cassandra is a multi-master system by default. Apache Cassandra doesn't have an option for single-master with multi-region replication for reads only. The concept of application-level failover to another region for writes is redundant in Apache Cassandra. All nodes are independent, and there's no single point of failure. However, Azure Cosmos DB provides the out-of-box ability to configure either single-master or multi-master regions for writes.
An advantage of having a single-master region for writes is avoiding cross-region conflict scenarios. It gives you the option to maintain strong consistency across multiple regions while maintaining a level of high availability. > [!NOTE] > Strong consistency across regions and a Recovery Point Objective (RPO) of zero isn't possible for native Apache Cassandra because all nodes are capable of serving writes. You can configure Azure Cosmos DB for strong consistency across regions in a *single write region* configuration. However, like with native Apache Cassandra, you can't configure an Azure Cosmos DB account that's configured with multiple write regions for strong consistency. A distributed system can't provide an RPO of zero *and* a Recovery Time Objective (RTO) of zero.
-For more information, we recommend that you review [Load balancing policy](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/#load-balancing-policy) in our [Cassandra API recommendations for Java blog](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java) and [Failover scenarios](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4#failover-scenarios) in our official [code sample for the Cassandra Java v4 driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
+For more information, see [Load balancing policy](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/#load-balancing-policy) in our [Cassandra API recommendations for Java blog](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java). Also, see [Failover scenarios](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4#failover-scenarios) in our official [code sample for the Cassandra Java v4 driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
## Request units
-One of the major differences between running a native Apache Cassandra cluster and provisioning an Azure Cosmos DB account is how database capacity is provisioned. In traditional databases, capacity is expressed in terms of CPU cores, RAM, and IOPS. However, Azure Cosmos DB is a multi-tenant platform-as-a-service database. Capacity is expressed by using a single normalized metric called [request units](../request-units.md). Every request sent to the database has a request unit cost (RU cost), and each request can be profiled to determine its cost.
+One of the major differences between running a native Apache Cassandra cluster and provisioning an Azure Cosmos DB account is how database capacity is provisioned. In traditional databases, capacity is expressed in terms of CPU cores, RAM, and IOPS. Azure Cosmos DB is a multi-tenant platform-as-a-service database. Capacity is expressed by using a single normalized metric called [request units](../request-units.md). Every request sent to the database has a request unit cost (RU cost). Each request can be profiled to determine its cost.
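To make the capacity math concrete, here's a hedged sketch of turning a profiled request mix into a provisioned RU/s figure. The RU costs and request rates below are hypothetical, not measured values:

```python
# Hypothetical profile: operation -> (RU cost per request, requests per second).
profile = {
    "point_read": (1.0, 500),  # e.g., 1 RU per point read at 500 reads/s
    "write":      (7.0, 100),  # e.g., 7 RU per write at 100 writes/s
    "query":     (15.0,  20),  # e.g., 15 RU per query at 20 queries/s
}

def required_throughput(profile, headroom=1.2):
    """Sum RU/s across the workload and add headroom for bursts."""
    base = sum(ru_cost * rate for ru_cost, rate in profile.values())
    return base * headroom

print(required_throughput(profile))  # 1500 RU/s base; 1800.0 with 20% headroom
```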
-The benefit of using request units as a metric is that database capacity can be provisioned deterministically for highly predictable performance and efficiency. After you profile the cost of each request, you can use request units to directly associate the number of requests sent to the database with the capacity you need to provision. The challenge with this way of provisioning capacity is that, to maximize the benefit, you need to have a solid understanding of the throughput characteristics of your workload, maybe more than you have been used to.
+The benefit of using request units as a metric is that database capacity can be provisioned deterministically for highly predictable performance and efficiency. After you profile the cost of each request, you can use request units to directly associate the number of requests sent to the database with the capacity you need to provision. The challenge with this way of provisioning capacity is that you need to have a solid understanding of the throughput characteristics of your workload.
-We highly recommend that you profile your requests and use the information you gain to help you accurately estimate the number of request units you'll need to provision. Here are some articles that might help you make the estimate:
+We highly recommend that you profile your requests. Use that information to help you estimate the number of request units you'll need to provision. Here are some articles that might help you make the estimate:
- [Request units in Azure Cosmos DB](../request-units.md) - [Find the request unit charge for operations executed in the Azure Cosmos DB Cassandra API](find-request-unit-charge-cassandra.md)
In general, steady-state workloads that have predictable throughput benefit most
Partitioning in Azure Cosmos DB is similar to partitioning in Apache Cassandra. One of the main differences is that Azure Cosmos DB is more optimized for *horizontal scale*. In Azure Cosmos DB, limits are placed on the amount of *vertical throughput* capacity that's available in any physical partition. The effect of this optimization is most noticeable when an existing data model has significant throughput skew.
-Take steps to ensure that your partition key design will result in a relatively uniform distribution of requests. For more information about how logical and physical partitioning work and limits on throughput capacity (request units per second, measured as *RU/s*) per partition, see [Partitioning in the Azure Cosmos DB Cassandra API](cassandra-partitioning.md).
+Take steps to ensure that your partition key design results in a relatively uniform distribution of requests. For more information about how logical and physical partitioning work and limits on throughput capacity per partition, see [Partitioning in the Azure Cosmos DB Cassandra API](cassandra-partitioning.md).
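As an illustrative way to sanity-check a candidate partition key before migrating (the key set is hypothetical, and the hash below mimics hash-based partitioning rather than using Cosmos DB's actual partitioning hash):

```python
import hashlib
from collections import Counter

def bucket(partition_key, buckets=10):
    """Hash a partition key into a bucket, mimicking hash-based partitioning.
    Illustrative; not Cosmos DB's actual partitioning hash."""
    digest = hashlib.sha256(partition_key.encode()).hexdigest()
    return int(digest, 16) % buckets

# Hypothetical workload: one request per user id.
counts = Counter(bucket(f"user-{i}") for i in range(10_000))

# Skew ratio: max bucket load over the average. Near 1.0 means an even
# spread; a large value signals a hot partition that will throttle first.
skew = max(counts.values()) / (sum(counts.values()) / len(counts))
print(f"skew ratio: {skew:.2f}")
```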
## Scaling
In native Apache Cassandra, increasing capacity and scale involves adding new no
## Rate limiting
-A challenge of provisioning [request units](../request-units.md), particularly if you're using [provisioned throughput](../set-throughput.md), is rate limiting. Azure Cosmos DB returns rate-limited (429) errors if clients consume more resources (RU/s) than the amount you provisioned. The Cassandra API in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. For information about how to avoid rate limiting in your application, see [Prevent rate-limiting errors for Azure Cosmos DB API for Cassandra operations](prevent-rate-limiting-errors.md).
+A challenge of provisioning [request units](../request-units.md), particularly if you're using [provisioned throughput](../set-throughput.md), is rate limiting. Azure Cosmos DB returns rate-limited errors if clients consume more resources than the amount you provisioned. The Cassandra API in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. For information about how to avoid rate limiting in your application, see [Prevent rate-limiting errors for Azure Cosmos DB API for Cassandra operations](prevent-rate-limiting-errors.md).
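A minimal sketch of the retry-with-backoff pattern for handling these errors (the exception class and request function here are stand-ins for illustration, not the driver's actual error type):

```python
import time

class OverloadedError(Exception):
    """Stand-in for the driver's overloaded/rate-limited error (illustrative)."""

def execute_with_backoff(operation, max_retries=5, base_delay=0.01):
    """Retry an operation with exponential backoff when the service sheds load."""
    for attempt in range(max_retries):
        try:
            return operation()
        except OverloadedError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** attempt)

# Simulate a request that is rate limited twice before succeeding.
calls = {"count": 0}
def flaky_request():
    calls["count"] += 1
    if calls["count"] < 3:
        raise OverloadedError()
    return "ok"

print(execute_with_backoff(flaky_request))  # ok
```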
## Apache Spark connector
-Many Apache Cassandra users use the Apache Spark Cassandra connector to query their data for analytical and data movement needs. You can connect to the Cassandra API the same way and by using the same connector. Before you connect to the Cassandra API, we recommend that you review [Connect to the Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md). In particular, see the section [Optimize Spark connector throughput configuration](connect-spark-configuration.md#optimizing-spark-connector-throughput-configuration).
+Many Apache Cassandra users use the Apache Spark Cassandra connector to query their data for analytical and data movement needs. You can connect to the Cassandra API the same way and by using the same connector. Before you connect to the Cassandra API, review [Connect to the Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md). In particular, see the section [Optimize Spark connector throughput configuration](connect-spark-configuration.md#optimizing-spark-connector-throughput-configuration).
## Troubleshoot common issues
cosmos-db Glowroot Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/glowroot-cassandra.md
Glowroot is an application performance management tool used to optimize and moni
## Run Glowroot central collector with Cosmos DB endpoint After the endpoint configuration is complete:
-1. [Download Glowroot central collector distribution](https://github.com/glowroot/glowroot/wiki/Central-Collector-Installation#central-collector-installation)
+1. [Download Glowroot central collector distribution](https://github.com/glowroot/glowroot)
2. In the *glowroot-central.properties* file, populate the following properties from your Cosmos DB Cassandra API endpoint: * cassandra.contactPoints * cassandra.username
cosmos-db Manage Data Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-nodejs.md
Title: 'Quickstart: Cassandra API with Node.js - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB Cassandra API to create a profile application with Node.js
+ Title: Create a profile app with Node.js using the Cassandra API
+description: This quickstart shows how to use the Azure Cosmos DB Cassandra API to create a profile application with Node.js.
ms.devlang: javascript Last updated 02/10/2021-+
+- devx-track-js
+- mode-api
+- kr2b-contr-experiment
# Quickstart: Build a Cassandra app with Node.js SDK and Azure Cosmos DB [!INCLUDE[appliesto-cassandra-api](../includes/appliesto-cassandra-api.md)]
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] Alternatively, you can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. In addition, you need:+ * [Node.js](https://nodejs.org/dist/v0.10.29/x64/node-v0.10.29-x64.msi) version v0.10.29 or higher * [Git](https://git-scm.com/)
Before you can create a document database, you need to create a Cassandra accoun
## Clone the sample application
-Now let's clone a Cassandra API app from GitHub, set the connection string, and run it. You see how easy it is to work with data programmatically.
+Clone a Cassandra API app from GitHub, set the connection string, and run it.
-1. Open a command prompt. Create a new folder named `git-samples`. Then, close the command prompt.
+1. Open a Command Prompt window. Create a new folder named `git-samples`. Then, close the window.
- ```bash
+ ```cmd
md "C:\git-samples" ```
-2. Open a git terminal window, such as git bash. Use the `cd` command to change to the new folder to install the sample app.
+1. Open a git terminal window, such as git bash. Use the `cd` command to change to the new folder to install the sample app.
```bash cd "C:\git-samples" ```
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+1. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
```bash git clone https://github.com/Azure-Samples/azure-cosmos-db-cassandra-nodejs-getting-started.git ```
-1. Install the Node.js dependencies with npm.
+1. Install the Node.js dependencies with `npm`.
```bash npm install
Now let's clone a Cassandra API app from GitHub, set the connection string, and
## Review the code
-This step is optional. If you're interested to learn how the code creates the database resources, you can review the following snippets. The snippets are all taken from the `uprofile.js` file in the `C:\git-samples\azure-cosmos-db-cassandra-nodejs-getting-started` folder. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+This step is optional. If you're interested to learn how the code creates the database resources, you can review the following snippets. The snippets are all taken from the `uprofile.js` file in the `C:\git-samples\azure-cosmos-db-cassandra-nodejs-getting-started` folder. Otherwise, skip ahead to [Update your connection string](#update-your-connection-string).
-* The username and password values were set using the connection string page in the Azure portal.
+* The username and password values were set using the connection string page in the Azure portal.
```javascript let authProvider = new cassandra.auth.PlainTextAuthProvider(
This step is optional. If you're interested to learn how the code creates the da
} ```
-* Close connection.
+* Close connection.
```javascript client.shutdown();
This step is optional. If you're interested to learn how the code creates the da
## Update your connection string
-Now go back to the Azure portal to get your connection string information and copy it into the app. The connection string enables your app to communicate with your hosted database.
+Go to the Azure portal to get your connection string information and copy it into the app. The connection string enables your app to communicate with your hosted database.
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Connection String**.
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Connection String**.
-1. Use the :::image type="icon" source="./media/manage-data-nodejs/copy.png"::: button on the right side of the screen to copy the top value, the CONTACT POINT.
+1. Use the :::image type="icon" source="./media/manage-data-nodejs/copy.png"::: button on the right side of the screen to copy the top value, **CONTACT POINT**.
- :::image type="content" source="./media/manage-data-nodejs/keys.png" alt-text="View and copy the CONTACT POINT, USERNAME,and PASSWORD from the Azure portal, connection string page":::
+ :::image type="content" source="./media/manage-data-nodejs/keys.png" alt-text="Screenshot showing how to view and copy the CONTACT POINT, USERNAME, and PASSWORD from the Connection String page.":::
-1. Open the `config.js` file.
+1. Open the *config.js* file.
-1. Paste the CONTACT POINT value from the portal over `'CONTACT-POINT` on line 9.
+1. Paste the **CONTACT POINT** value from the portal over `CONTACT-POINT` on line 9.
- Line 9 should now look similar to
+ Line 9 should now look similar to this value:
`contactPoint: "cosmos-db-quickstarts.cassandra.cosmosdb.azure.com",`
-1. Copy the USERNAME value from the portal and paste it over `<FillMEIN>` on line 2.
+1. Copy the **USERNAME** value from the portal and paste it over `<FillMEIN>` on line 2.
- Line 2 should now look similar to
+ Line 2 should now look similar to this value:
`username: 'cosmos-db-quickstart',`
-1. Copy the PASSWORD value from the portal and paste it over `USERNAME` on line 8.
+1. Copy the **PASSWORD** value from the portal and paste it over `USERNAME` on line 8.
- Line 8 should now look similar to
+ Line 8 should now look similar to this value:
`password: '2Ggkr662ifxz2Mg==',`
-1. Replace REGION with the Azure region you created this resource in.
-
-1. Save the `config.js` file.
+1. Replace **REGION** with the Azure region you created this resource in.
+1. Save the *config.js* file.
## Run the Node.js app
-1. In the terminal window, ensure you are in the sample directory you cloned earlier:
+1. In the bash terminal window, ensure you are in the sample directory you cloned earlier:
```bash cd azure-cosmos-db-cassandra-nodejs-getting-started
Now go back to the Azure portal to get your connection string information and co
npm start ```
-4. Verify the results as expected from the command line.
+1. Verify the results as expected from the command line.
- :::image type="content" source="./media/manage-data-nodejs/output.png" alt-text="View and verify the output":::
+ :::image type="content" source="./media/manage-data-nodejs/output.png" alt-text="Screenshot shows a Command Prompt window where you can view and verify the output.":::
- Press CTRL+C to stop execution of the program and close the console window.
+ Press **Ctrl**+**C** to stop the program and close the console window.
-5. In the Azure portal, open **Data Explorer** to query, modify, and work with this new data.
+1. In the Azure portal, open **Data Explorer** to query, modify, and work with this new data.
- :::image type="content" source="./media/manage-data-nodejs/data-explorer.png" alt-text="View the data in Data Explorer":::
+ :::image type="content" source="./media/manage-data-nodejs/data-explorer.png" alt-text="Screenshot shows the Data Explorer page, where you can view the data.":::
## Review SLAs in the Azure portal
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Node.js app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Node.js app that creates a Cassandra database and container. You can now import more data into your Azure Cosmos DB account.
> [!div class="nextstepaction"] > [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Spark Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-hdinsight.md
Title: Access Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight
-description: This article covers how to work with Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight
+ Title: Access Azure Cosmos DB Cassandra API on YARN with HDInsight
+description: This article covers how to work with Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight.
Last updated 09/24/2018 ms.devlang: scala+ # Access Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight [!INCLUDE[appliesto-cassandra-api](../includes/appliesto-cassandra-api.md)]
-This article covers how to access Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight-Spark from spark-shell. HDInsight is Microsoft's Hortonworks Hadoop PaaS on Azure that leverages object storage for HDFS, and comes in several flavors including [Spark](../../hdinsight/spark/apache-spark-overview.md). While the content in this document references HDInsight-Spark, it is applicable to all Hadoop distributions.
+This article covers how to access Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight-Spark from `spark-shell`. HDInsight is Microsoft's Hortonworks Hadoop PaaS on Azure. It uses object storage for HDFS and comes in several flavors, including [Spark](../../hdinsight/spark/apache-spark-overview.md). While this article refers to HDInsight-Spark, it applies to all Hadoop distributions.
## Prerequisites
-* [Provision Azure Cosmos DB Cassandra API](manage-data-dotnet.md#create-a-database-account)
+Before you begin, [review the basics of connecting to Azure Cosmos DB Cassandra API](connect-spark-configuration.md).
-* [Review the basics of connecting to Azure Cosmos DB Cassandra API](connect-spark-configuration.md)
+You need the following prerequisites:
-* [Provision a HDInsight-Spark cluster](../../hdinsight/spark/apache-spark-jupyter-spark-sql.md)
+* Provision Azure Cosmos DB Cassandra API. See [Create a database account](manage-data-dotnet.md#create-a-database-account).
-* [Review the code samples for working with Cassandra API](connect-spark-configuration.md#next-steps)
+* Provision an HDInsight-Spark cluster. See [Create Apache Spark cluster in Azure HDInsight using ARM template](../../hdinsight/spark/apache-spark-jupyter-spark-sql.md).
-* [Use cqlsh for validation if you so prefer](connect-spark-configuration.md#connecting-to-azure-cosmos-db-cassandra-api-from-spark)
+* Cassandra API configuration in Spark2. The Spark connector for Cassandra requires the Cassandra connection details to be initialized as part of the Spark context. When you launch a Jupyter notebook, the Spark session and context are already initialized. Don't stop and reinitialize the Spark context unless it's complete with every configuration set as part of the HDInsight default Jupyter notebook start-up. One workaround is to add the Cassandra instance details directly to the Ambari Spark2 service configuration. This approach is a one-time activity per cluster that requires a Spark2 service restart.
-* **Cassandra API configuration in Spark2** - The Spark connector for Cassandra requires that the Cassandra connection details to be initialized as part of the Spark context. When you launch a Jupyter notebook, the spark session and context are already initialized and it is not advisable to stop and reinitialize the Spark context unless it's complete with every configuration set as part of the HDInsight default Jupyter notebook start-up. One workaround is to add the Cassandra instance details to Ambari, Spark2 service configuration directly. This is a one-time activity per cluster that requires a Spark2 service restart.
-
- 1. Go to Ambari, Spark2 service and select configs
+ 1. Go to Ambari, Spark2 service and select configs.
- 2. Then go to custom spark2-defaults and add a new property with the following, and restart Spark2 service:
+ 2. Go to custom spark2-defaults, add a new property with the following values, and then restart the Spark2 service:
   ```scala
   spark.cassandra.connection.host=YOUR_COSMOSDB_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
   ```
This article covers how to access Azure Cosmos DB Cassandra API from Spark on YA
   ```scala
   spark.cassandra.auth.password=YOUR_COSMOSDB_KEY
   ```
+You can use `cqlsh` for validation. For more information, see [Connecting to Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md#connecting-to-azure-cosmos-db-cassandra-api-from-spark).
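If you prefer to validate the connection details before touching Spark, `cqlsh` can connect to the account directly. A sketch of the invocation, assuming the standard Cassandra API endpoint and port 10350 (substitute your account name and key for the placeholders):

```shell
cqlsh YOUR_COSMOSDB_ACCOUNT_NAME.cassandra.cosmosdb.azure.com 10350 \
  -u YOUR_COSMOSDB_ACCOUNT_NAME \
  -p YOUR_COSMOSDB_KEY \
  --ssl
```

Depending on your `cqlsh` version, you may also need to export SSL-related environment variables (for example, `SSL_VERSION=TLSv1_2`) before connecting.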
+ ## Access Azure Cosmos DB Cassandra API from Spark shell
-Spark shell is used for testing/exploration purposes.
+Spark shell is used for testing and exploration.
-* Launch spark-shell with the required maven dependencies compatible with your cluster's Spark version.
+* Launch `spark-shell` with the required maven dependencies compatible with your cluster's Spark version.
    ```scala
    spark-shell --packages "com.datastax.spark:spark-cassandra-connector_2.11:2.3.0,com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.0.0"
    ```
Spark shell is used for testing/exploration purposes.
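Once the shell starts with the connector on the classpath, reading a table follows the standard Spark-Cassandra pattern. A minimal sketch, assuming a keyspace `books_ks` containing a table `books` already exists in your account (both names are illustrative):

```scala
import org.apache.spark.sql.cassandra._

// Read a Cassandra API table into a DataFrame and peek at a few rows
val readDF = spark.read.cassandraFormat("books", "books_ks").load()
readDF.show(5)
```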
## Access Azure Cosmos DB Cassandra API from Jupyter notebooks
-HDInsight-Spark comes with Zeppelin and Jupyter notebook services. They are both web-based notebook environments that support Scala and Python. Notebooks are great for interactive exploratory analytics and collaboration, but not meant for operational/productionized processes.
+HDInsight-Spark comes with Zeppelin and Jupyter notebook services. They're both web-based notebook environments that support Scala and Python. Notebooks are great for interactive exploratory analytics and collaboration, but not meant for operational or production processes.
The following Jupyter notebooks can be uploaded into your HDInsight Spark cluster and provide ready samples for working with Azure Cosmos DB Cassandra API. Be sure to read the first notebook, `1.0-ReadMe.ipynb`, which covers the Spark service configuration for connecting to Azure Cosmos DB Cassandra API.
-Download these notebooks under [azure-cosmos-db-cassandra-api-spark-notebooks-jupyter](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-jupyter/blob/main/scala/) to your machine.
+Download the notebooks under [azure-cosmos-db-cassandra-api-spark-notebooks-jupyter](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-jupyter/blob/main/scala/) to your machine.
-### How to upload:
-When you launch Jupyter, navigate to Scala. Create a directory first and then upload the notebooks to the directory. The upload button is on the top, right hand-side.
+### How to upload
-### How to run:
-Run through the notebooks, and each notebook cell sequentially. Click the run button at the top of each notebook to execute all cells, or shift+enter for each cell.
+When you launch Jupyter, navigate to Scala. Create a directory and then upload the notebooks to the directory. The **Upload** button is on the top, right-hand side.
-## Access with Azure Cosmos DB Cassandra API from your Spark Scala program
+### How to run
-For automated processes in production, Spark programs are submitted to the cluster via [spark-submit](https://spark.apache.org/docs/latest/submitting-applications.html).
+Work through the notebooks, running each notebook cell sequentially. Select the **Run** button at the top of each notebook to run all cells, or press **Shift**+**Enter** for each cell.
-## Next steps
+## Access with Azure Cosmos DB Cassandra API from your Spark Scala program
-* [How to build a Spark Scala program in an IDE and submit it to the HDInsight Spark cluster through Livy for execution](../../hdinsight/spark/apache-spark-create-standalone-application.md)
+For automated processes in production, Spark programs are submitted to the cluster by using [spark-submit](https://spark.apache.org/docs/latest/submitting-applications.html).
-* [How to connect to Azure Cosmos DB Cassandra API from a Spark Scala program](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-connector-sample/blob/main/src/main/scala/com/microsoft/azure/cosmosdb/cassandra/SampleCosmosDBApp.scala)
+## Next steps
-* [Complete list of code samples for working with Cassandra API](connect-spark-configuration.md)
+* [Create a Scala Maven application for Apache Spark in HDInsight using IntelliJ](../../hdinsight/spark/apache-spark-create-standalone-application.md)
+* [SampleCosmosDBApp.scala](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-connector-sample/blob/main/src/main/scala/com/microsoft/azure/cosmosdb/cassandra/SampleCosmosDBApp.scala)
+* [Connect to Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md)
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
There are many different ways to provision a dedicated gateway:
- [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-a-dedicated-gateway-cluster) - [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)-- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest#az-cosmosdb-service-create)
+- [Azure CLI](/cli/azure/cosmosdb/service#az-cosmosdb-service-create)
- [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep) - Note: You cannot deprovision a dedicated gateway using ARM templates
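For the Azure CLI path, the invocation is sketched below. The parameter names and values shown are assumptions based on the `az cosmosdb service create` reference (the gateway service kind, node count, and SKU are illustrative), so check `az cosmosdb service create --help` for the authoritative list:

```shell
az cosmosdb service create \
  --resource-group my-resource-group \
  --account-name my-cosmos-account \
  --name "SqlDedicatedGateway" \
  --kind "SqlDedicatedGateway" \
  --count 1 \
  --size "Cosmos.D4s"
```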
cosmos-db Bulk Executor Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/bulk-executor-graph-dotnet.md
Title: Use the graph bulk executor .NET library with Azure Cosmos DB Gremlin API
+ Title: Bulk ingestion in Azure Cosmos DB Gremlin API using BulkExecutor
description: Learn how to use the bulk executor library to massively import graph data into an Azure Cosmos DB Gremlin API container.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
The way you create a `TokenCredential` instance is beyond the scope of this arti
- [In .NET](/dotnet/api/overview/azure/identity-readme#credential-classes) - [In Java](/java/api/overview/azure/identity-readme#credential-classes) - [In JavaScript](/javascript/api/overview/azure/identity-readme#credential-classes)-- [In Python](/python/api/overview/azure/identity-readme?view=azure-python#credential-classes)
+- [In Python](/python/api/overview/azure/identity-readme#credential-classes)
The examples below use a service principal with a `ClientSecretCredential` instance.
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
Title: Azure role-based access control in Azure Cosmos DB
description: Learn how Azure Cosmos DB provides database protection with Active directory integration (Azure RBAC). Previously updated : 04/06/2022 Last updated : 05/11/2022
When this feature is enabled, changes to any resource can only be made from a us
### Check list before enabling
-This setting will prevent any changes to any Cosmos resource from any client connecting using account keys including any Cosmos DB SDK, any tools that connect via account keys, or from the Azure portal. To prevent issues or errors from applications after enabling this feature, check if applications or Azure portal users perform any of the following actions before enabling this feature, including:
--- Any change to the Cosmos account including any properties or adding or removing regions.
+This setting will prevent any changes to any Cosmos resource from any client that connects by using account keys, including any Cosmos DB SDK or other tools that connect via account keys. To prevent issues or errors from applications after enabling this feature, check whether applications perform any of the following actions before you enable it:
- Creating, deleting child resources such as databases and containers. This includes resources for other APIs such as Cassandra, MongoDB, Gremlin, and table resources.
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs.md
var result = await client.GetContainer("database", "container").Scripts.ExecuteS
The following example shows how to register a stored procedure by using the Java SDK: ```java
-String containerLink = String.format("/dbs/%s/colls/%s", "myDatabase", "myContainer");
-StoredProcedure newStoredProcedure = new StoredProcedure(
- "{" +
- " 'id':'spCreateToDoItems'," +
- " 'body':" + new String(Files.readAllBytes(Paths.get("..\\js\\spCreateToDoItems.js"))) +
- "}");
-//toBlocking() blocks the thread until the operation is complete and is used only for demo.
-StoredProcedure createdStoredProcedure = asyncClient.createStoredProcedure(containerLink, newStoredProcedure, null)
- .toBlocking().single().getResource();
+CosmosStoredProcedureProperties definition = new CosmosStoredProcedureProperties(
+ "spCreateToDoItems",
+ Files.readString(Paths.get("createToDoItems.js"))
+);
+
+CosmosStoredProcedureResponse response = container
+ .getScripts()
+ .createStoredProcedure(definition);
``` The following code shows how to call a stored procedure by using the Java SDK: ```java
-String containerLink = String.format("/dbs/%s/colls/%s", "myDatabase", "myContainer");
-String sprocLink = String.format("%s/sprocs/%s", containerLink, "spCreateToDoItems");
-final CountDownLatch successfulCompletionLatch = new CountDownLatch(1);
-
-List<ToDoItem> ToDoItems = new ArrayList<ToDoItem>();
-
-class ToDoItem {
- public String category;
- public String name;
- public String description;
- public boolean isComplete;
-}
-
-ToDoItem newItem = new ToDoItem();
-newItem.category = "Personal";
-newItem.name = "Groceries";
-newItem.description = "Pick up strawberries";
-newItem.isComplete = false;
-
-ToDoItems.add(newItem)
+CosmosStoredProcedure sproc = container
+ .getScripts()
+ .getStoredProcedure("spCreateToDoItems");
-newItem.category = "Personal";
-newItem.name = "Doctor";
-newItem.description = "Make appointment for check up";
-newItem.isComplete = false;
+List<Object> items = new ArrayList<Object>();
-ToDoItems.add(newItem)
+ToDoItem firstItem = new ToDoItem();
+firstItem.category = "Personal";
+firstItem.name = "Groceries";
+firstItem.description = "Pick up strawberries";
+firstItem.isComplete = false;
+items.add(firstItem);
-RequestOptions requestOptions = new RequestOptions();
-requestOptions.setPartitionKey(new PartitionKey("Personal"));
+ToDoItem secondItem = new ToDoItem();
+secondItem.category = "Personal";
+secondItem.name = "Doctor";
+secondItem.description = "Make appointment for check up";
+secondItem.isComplete = true;
+items.add(secondItem);
-Object[] storedProcedureArgs = new Object[] { ToDoItems };
-asyncClient.executeStoredProcedure(sprocLink, requestOptions, storedProcedureArgs)
- .subscribe(storedProcedureResponse -> {
- String storedProcResultAsString = storedProcedureResponse.getResponseAsString();
- successfulCompletionLatch.countDown();
- System.out.println(storedProcedureResponse.getActivityId());
- }, error -> {
- successfulCompletionLatch.countDown();
- System.err.println("an error occurred while executing the stored procedure: actual cause: "
- + error.getMessage());
- });
+CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
+options.setPartitionKey(
+ new PartitionKey("Personal")
+);
-successfulCompletionLatch.await();
+CosmosStoredProcedureResponse response = sproc.execute(
+ items,
+ options
+);
``` ### [JavaScript SDK](#tab/javascript-sdk)
await client.GetContainer("database", "container").CreateItemAsync(newItem, null
The following code shows how to register a pre-trigger using the Java SDK: ```java
-String containerLink = String.format("/dbs/%s/colls/%s", "myDatabase", "myContainer");
-String triggerId = "trgPreValidateToDoItemTimestamp";
-Trigger trigger = new Trigger();
-trigger.setId(triggerId);
-trigger.setBody(new String(Files.readAllBytes(Paths.get(String.format("..\\js\\%s.js", triggerId)));
-trigger.setTriggerOperation(TriggerOperation.Create);
-trigger.setTriggerType(TriggerType.Pre);
-//toBlocking() blocks the thread until the operation is complete and is used only for demo.
-Trigger createdTrigger = asyncClient.createTrigger(containerLink, trigger, new RequestOptions()).toBlocking().single().getResource();
+CosmosTriggerProperties definition = new CosmosTriggerProperties(
+ "preValidateToDoItemTimestamp",
+ Files.readString(Paths.get("validateToDoItemTimestamp.js"))
+);
+definition.setTriggerOperation(TriggerOperation.CREATE);
+definition.setTriggerType(TriggerType.PRE);
+
+CosmosTriggerResponse response = container
+ .getScripts()
+ .createTrigger(definition);
``` The following code shows how to call a pre-trigger using the Java SDK: ```java
-String containerLink = String.format("/dbs/%s/colls/%s", "myDatabase", "myContainer");
- Document item = new Document("{ "
- + "\"category\": \"Personal\", "
- + "\"name\": \"Groceries\", "
- + "\"description\": \"Pick up strawberries\", "
- + "\"isComplete\": false, "
- + "}"
- );
-RequestOptions requestOptions = new RequestOptions();
-requestOptions.setPreTriggerInclude(Arrays.asList("trgPreValidateToDoItemTimestamp"));
-//toBlocking() blocks the thread until the operation is complete and is used only for demo.
-asyncClient.createDocument(containerLink, item, requestOptions, false).toBlocking();
+ToDoItem item = new ToDoItem();
+item.category = "Personal";
+item.name = "Groceries";
+item.description = "Pick up strawberries";
+item.isComplete = false;
+
+CosmosItemRequestOptions options = new CosmosItemRequestOptions();
+options.setPreTriggerInclude(
+ Arrays.asList("preValidateToDoItemTimestamp")
+);
+
+CosmosItemResponse<ToDoItem> response = container.createItem(item, options);
``` ### [JavaScript SDK](#tab/javascript-sdk)
await client.GetContainer("database", "container").CreateItemAsync(newItem, null
The following code shows how to register a post-trigger using the Java SDK: ```java
-String containerLink = String.format("/dbs/%s/colls/%s", "myDatabase", "myContainer");
-String triggerId = "trgPostUpdateMetadata";
-Trigger trigger = new Trigger();
-trigger.setId(triggerId);
-trigger.setBody(new String(Files.readAllBytes(Paths.get(String.format("..\\js\\%s.js", triggerId)))));
-trigger.setTriggerOperation(TriggerOperation.Create);
-trigger.setTriggerType(TriggerType.Post);
-Trigger createdTrigger = asyncClient.createTrigger(containerLink, trigger, new RequestOptions()).toBlocking().single().getResource();
+CosmosTriggerProperties definition = new CosmosTriggerProperties(
+ "postUpdateMetadata",
+ Files.readString(Paths.get("updateMetadata.js"))
+);
+definition.setTriggerOperation(TriggerOperation.CREATE);
+definition.setTriggerType(TriggerType.POST);
+
+CosmosTriggerResponse response = container
+ .getScripts()
+ .createTrigger(definition);
``` The following code shows how to call a post-trigger using the Java SDK: ```java
-String containerLink = String.format("/dbs/%s/colls/%s", "myDatabase", "myContainer");
-Document item = new Document(String.format("{ "
- + "\"name\": \"artist_profile_1023\", "
- + "\"artist\": \"The Band\", "
- + "\"albums\": [\"Hellujah\", \"Rotators\", \"Spinning Top\"]"
- + "}"
-));
-RequestOptions requestOptions = new RequestOptions();
-requestOptions.setPostTriggerInclude(Arrays.asList("trgPostUpdateMetadata"));
-//toBlocking() blocks the thread until the operation is complete, and is used only for demo.
-asyncClient.createDocument(containerLink, item, requestOptions, false).toBlocking();
+ToDoItem item = new ToDoItem();
+item.category = "Personal";
+item.name = "Doctor";
+item.description = "Make appointment for check up";
+item.isComplete = true;
+
+CosmosItemRequestOptions options = new CosmosItemRequestOptions();
+options.setPostTriggerInclude(
+ Arrays.asList("postUpdateMetadata")
+);
+
+CosmosItemResponse<ToDoItem> response = container.createItem(item, options);
``` ### [JavaScript SDK](#tab/javascript-sdk)
while (iterator.HasMoreResults)
The following code shows how to register a user-defined function using the Java SDK: ```java
-String containerLink = String.format("/dbs/%s/colls/%s", "myDatabase", "myContainer");
-String udfId = "Tax";
-UserDefinedFunction udf = new UserDefinedFunction();
-udf.setId(udfId);
-udf.setBody(new String(Files.readAllBytes(Paths.get(String.format("..\\js\\%s.js", udfId)))));
-//toBlocking() blocks the thread until the operation is complete and is used only for demo.
-UserDefinedFunction createdUDF = client.createUserDefinedFunction(containerLink, udf, new RequestOptions()).toBlocking().single().getResource();
+CosmosUserDefinedFunctionProperties definition = new CosmosUserDefinedFunctionProperties(
+ "udfTax",
+ Files.readString(Paths.get("tax.js"))
+);
+
+CosmosUserDefinedFunctionResponse response = container
+ .getScripts()
+ .createUserDefinedFunction(definition);
``` The following code shows how to call a user-defined function using the Java SDK: ```java
-String containerLink = String.format("/dbs/%s/colls/%s", "myDatabase", "myContainer");
-Observable<FeedResponse<Document>> queryObservable = client.queryDocuments(containerLink, "SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000", new FeedOptions());
-final CountDownLatch completionLatch = new CountDownLatch(1);
-queryObservable.subscribe(
- queryResultPage -> {
- System.out.println("Got a page of query result with " +
- queryResultPage.getResults().size());
- },
- // terminal error signal
- e -> {
- e.printStackTrace();
- completionLatch.countDown();
- },
-
- // terminal completion signal
- () -> {
- completionLatch.countDown();
- });
-completionLatch.await();
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+
+CosmosPagedIterable<ToDoItem> iterable = container.queryItems(
+ "SELECT t.cost, udf.udfTax(t.cost) AS costWithTax FROM t",
+ options,
+ ToDoItem.class);
``` ### [JavaScript SDK](#tab/javascript-sdk)
cosmos-db Sql Query Group By https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-group-by.md
Previously updated : 09/01/2021 Last updated : 05/12/2022
The GROUP BY clause divides the query's results according to the values of one o
When a query uses a GROUP BY clause, the SELECT clause can only contain the subset of properties and system functions included in the GROUP BY clause. One exception is [aggregate functions](sql-query-aggregate-functions.md), which can appear in the SELECT clause without being included in the GROUP BY clause. You can also always include literal values in the SELECT clause.
- The GROUP BY clause must be after the SELECT, FROM, and WHERE clause and before the OFFSET LIMIT clause. You currently cannot use GROUP BY with an ORDER BY clause but this is planned.
+ The GROUP BY clause must be after the SELECT, FROM, and WHERE clause and before the OFFSET LIMIT clause. You cannot use GROUP BY with an ORDER BY clause.
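As an illustration of these rules, the following query groups by a single property and pairs it with an aggregate in the SELECT clause (the `c.city` property is a hypothetical example, not from a specific sample dataset):

```sql
SELECT c.city, COUNT(1) AS itemCount
FROM c
GROUP BY c.city
```

Here `c.city` may appear in the SELECT clause because it's included in the GROUP BY clause, and `COUNT(1)` is allowed as an aggregate function without being grouped.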
The GROUP BY clause does not allow any of the following:
cosmos-db Transactional Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/transactional-batch.md
Last updated 10/27/2020
# Transactional batch operations in Azure Cosmos DB using the .NET SDK [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Transactional batch describes a group of point operations that need to either succeed or fail together with the same partition key in a container. In the .NET SDK, the `TransactionalBatch` class is used to define this batch of operations. If all operations succeed in the order they are described within the transactional batch operation, the transaction will be committed. However, if any operation fails, the entire transaction is rolled back.
+Transactional batch describes a group of point operations that need to either succeed or fail together with the same partition key in a container. In the .NET SDK, the `TransactionalBatch` class is used to define this batch of operations; in the Java SDK, the equivalent is the `CosmosBatch` class. If all operations succeed in the order they're described within the transactional batch operation, the transaction will be committed. However, if any operation fails, the entire transaction is rolled back.
## What's a transaction in Azure Cosmos DB
A transaction in a typical database can be defined as a sequence of operations p
* **Durability** ensures that any change that is committed in a database will always be present. Azure Cosmos DB supports [full ACID compliant transactions with snapshot isolation](database-transactions-optimistic-concurrency.md) for operations within the same [logical partition key](../partitioning-overview.md).
-## Transactional batch operations Vs stored procedures
+## Transactional batch operations and stored procedures
Azure Cosmos DB currently supports stored procedures, which also provide the transactional scope on operations. However, transactional batch operations offer the following benefits: * **Language option** ΓÇô Transactional batch is supported on the SDK and language you work with already, while stored procedures need to be written in JavaScript. * **Code versioning** ΓÇô Versioning application code and onboarding it onto your CI/CD pipeline is much more natural than orchestrating the update of a stored procedure and making sure the rollover happens at the right time. It also makes rolling back changes easier. * **Performance** ΓÇô Reduced latency on equivalent operations by up to 30% when compared to the stored procedure execution.
-* **Content serialization** ΓÇô Each operation within a transactional batch can leverage custom serialization options for its payload.
+* **Content serialization** ΓÇô Each operation within a transactional batch can use custom serialization options for its payload.
## How to create a transactional batch operation
-When creating a transactional batch operation, you begin from a container instance and call `CreateTransactionalBatch`:
+### [.NET](#tab/dotnet)
+
+When creating a transactional batch operation, start with a container instance and call [CreateTransactionalBatch](/dotnet/api/microsoft.azure.cosmos.container.createtransactionalbatch):
```csharp
-string partitionKey = "The Family";
-ParentClass parent = new ParentClass(){ Id = "The Parent", PartitionKey = partitionKey, Name = "John", Age = 30 };
-ChildClass child = new ChildClass(){ Id = "The Child", ParentId = parent.Id, PartitionKey = partitionKey };
-TransactionalBatch batch = container.CreateTransactionalBatch(new PartitionKey(parent.PartitionKey))
- .CreateItem<ParentClass>(parent)
- .CreateItem<ChildClass>(child);
+PartitionKey partitionKey = new PartitionKey("road-bikes");
+
+TransactionalBatch batch = container.CreateTransactionalBatch(partitionKey);
```
-Next, you'll need to call `ExecuteAsync` on the batch:
+Next, add multiple operations to the batch:
```csharp
-TransactionalBatchResponse batchResponse = await batch.ExecuteAsync();
+Product bike = new (
+ id: "68719520766",
+ category: "road-bikes",
+ name: "Chropen Road Bike"
+);
+
+batch.CreateItem<Product>(bike);
+
+Part part = new (
+ id: "68719519885",
+ category: "road-bikes",
+ name: "Tronosuros Tire",
+ productId: bike.id
+);
+
+batch.CreateItem<Part>(part);
```
-Once the response is received, examine if it is successful or not, and extract the results:
+Finally, call [ExecuteAsync](/dotnet/api/microsoft.azure.cosmos.transactionalbatch.executeasync) on the batch:
```csharp
-using (batchResponse)
+using TransactionalBatchResponse response = await batch.ExecuteAsync();
+```
+
+Once the response is received, examine if the response is successful. If the response indicates a success, extract the results:
+
+```csharp
+if (response.IsSuccessStatusCode)
{
- if (batchResponse.IsSuccessStatusCode)
- {
- TransactionalBatchOperationResult<ParentClass> parentResult = batchResponse.GetOperationResultAtIndex<ParentClass>(0);
- ParentClass parentClassResult = parentResult.Resource;
- TransactionalBatchOperationResult<ChildClass> childResult = batchResponse.GetOperationResultAtIndex<ChildClass>(1);
- ChildClass childClassResult = childResult.Resource;
- }
+ TransactionalBatchOperationResult<Product> productResponse;
+ productResponse = response.GetOperationResultAtIndex<Product>(0);
+ Product productResult = productResponse.Resource;
+
+ TransactionalBatchOperationResult<Part> partResponse;
+ partResponse = response.GetOperationResultAtIndex<Part>(1);
+ Part partResult = partResponse.Resource;
} ```
-If there is a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). In the example below, the operation fails because it tries to create an item that already exists (409 HttpStatusCode.Conflict). The status code enables one to identify the cause of transaction failure.
+> [!IMPORTANT]
+> If there's a failure, the failed operation will have a status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). In the example below, the operation fails because it tries to create an item that already exists (409 HttpStatusCode.Conflict). The status code enables one to identify the cause of transaction failure.
-```csharp
-// Parent's birthday!
-parent.Age = 31;
-// Naming two children with the same name, should abort the transaction
-ChildClass anotherChild = new ChildClass(){ Id = "The Child", ParentId = parent.Id, PartitionKey = partitionKey };
-TransactionalBatchResponse failedBatchResponse = await container.CreateTransactionalBatch(new PartitionKey(partitionKey))
- .ReplaceItem<ParentClass>(parent.Id, parent)
- .CreateItem<ChildClass>(anotherChild)
- .ExecuteAsync();
-
-using (failedBatchResponse)
+### [Java](#tab/java)
+
+When creating a transactional batch operation, call [CosmosBatch.createCosmosBatch](/java/api/com.azure.cosmos.models.cosmosbatch.createcosmosbatch):
+
+```java
+PartitionKey partitionKey = new PartitionKey("road-bikes");
+
+CosmosBatch batch = CosmosBatch.createCosmosBatch(partitionKey);
+```
+
+Next, add multiple operations to the batch:
+
+```java
+Product bike = new Product();
+bike.setId("68719520766");
+bike.setCategory("road-bikes");
+bike.setName("Chropen Road Bike");
+
+batch.createItemOperation(bike);
+
+Part part = new Part();
+part.setId("68719519885");
+part.setCategory("road-bikes");
+part.setName("Tronosuros Tire");
+part.setProductId(bike.getId());
+
+batch.createItemOperation(part);
+```
+
+Finally, use a container instance to call [executeCosmosBatch](/java/api/com.azure.cosmos.cosmoscontainer.executecosmosbatch) with the batch:
+
+```java
+CosmosBatchResponse response = container.executeCosmosBatch(batch);
+```
+
+Once the response is received, examine whether it succeeded. If the response indicates success, extract the results:
+
+```java
+if (response.isSuccessStatusCode())
{
- if (!failedBatchResponse.IsSuccessStatusCode)
- {
- TransactionalBatchOperationResult<ParentClass> parentResult = failedBatchResponse.GetOperationResultAtIndex<ParentClass>(0);
- // parentResult.StatusCode is 424
- TransactionalBatchOperationResult<ChildClass> childResult = failedBatchResponse.GetOperationResultAtIndex<ChildClass>(1);
- // childResult.StatusCode is 409
- }
+ List<CosmosBatchOperationResult> results = response.getResults();
}
```
+> [!IMPORTANT]
+> If there's a failure, the failed operation will have the status code of its corresponding error. All the other operations will have a 424 status code (failed dependency). In the example below, the operation fails because it tries to create an item that already exists (409 HttpStatusCode.Conflict). The status codes help you identify the cause of the transaction failure.
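The status-code rule in the note above can be illustrated with a small, self-contained sketch. This is a hypothetical helper, not an SDK API: given the per-operation status codes from a failed batch, the operation whose code is neither a success nor 424 (failed dependency) is the one that caused the transaction to abort.

```java
// Hypothetical helper (not part of the Azure Cosmos DB SDK): locate the
// operation that caused a transactional batch to fail. In a failed batch,
// every operation other than the culprit reports 424 (failed dependency).
public class BatchFailureInspector {

    /**
     * Returns the index of the first operation that caused the batch to fail,
     * or -1 if every operation succeeded (2xx) or was skipped (424).
     */
    public static int findFailureCause(int[] statusCodes) {
        for (int i = 0; i < statusCodes.length; i++) {
            int code = statusCodes[i];
            boolean success = code >= 200 && code < 300;
            if (!success && code != 424) {
                return i; // e.g. 409 Conflict: the item already existed
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // A batch of two operations where the second hit a 409 Conflict;
        // the first reports 424 because the transaction was aborted.
        int[] codes = {424, 409};
        System.out.println(findFailureCause(codes)); // prints 1
    }
}
```

In a real application you would feed this the codes returned by each `CosmosBatchOperationResult` rather than a literal array.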
+
## How are transactional batch operations executed

When the `ExecuteAsync` method is called, all operations in the `TransactionalBatch` object are grouped, serialized into a single payload, and sent as a single request to the Azure Cosmos DB service.
The SDK exposes the response for you to verify the result and, optionally, extra
Currently, there are two known limits:

* The Azure Cosmos DB request size limit constrains the size of the `TransactionalBatch` payload to not exceed 2 MB, and the maximum execution time is 5 seconds.
-* There is a current limit of 100 operations per `TransactionalBatch` to ensure the performance is as expected and within SLAs.
+* There's a current limit of 100 operations per `TransactionalBatch` to ensure the performance is as expected and within SLAs.
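Given the 100-operation cap above, one practical pattern is to split a large list of pending operations into batches of at most 100. The sketch below is a hypothetical helper, not an SDK API, and note the trade-off: each chunk executes as its own transaction, so atomicity holds only within a chunk, not across chunks.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: split a large list of pending operations into chunks
// of at most maxPerBatch (e.g. 100, the per-TransactionalBatch limit).
// Each chunk would be submitted as a separate batch, so atomicity applies
// per chunk only, not across the whole list.
public class BatchChunker {

    public static <T> List<List<T>> chunk(List<T> operations, int maxPerBatch) {
        List<List<T>> batches = new ArrayList<>();
        for (int start = 0; start < operations.size(); start += maxPerBatch) {
            int end = Math.min(start + maxPerBatch, operations.size());
            batches.add(new ArrayList<>(operations.subList(start, end)));
        }
        return batches;
    }
}
```

For example, 250 operations chunked at 100 yield three batches of 100, 100, and 50 operations.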
## Next steps
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
description: This article describes how cost alerts help you monitor usage and spending in Cost Management. Previously updated : 10/07/2021 Last updated : 05/12/2022
This article helps you understand and use Cost Management alerts to monitor your Azure usage and spending. Cost alerts are automatically generated when Azure resources are consumed. Alerts show all active cost management and billing alerts together in one place. When your consumption reaches a given threshold, alerts are generated by Cost Management. There are three types of cost alerts: budget alerts, credit alerts, and department spending quota alerts.
+## Required permissions for alerts
+
+The following table shows how Cost Management alerts are used by each role. The behavior below is applicable to all Azure RBAC scopes.
+
+| **Feature/Role** | **Owner** | **Contributor** | **Reader** | **Cost Management Reader** | **Cost Management Contributor** |
+| --- | --- | --- | --- | --- | --- |
+| **Alerts** | Read, Update | Read, Update | Read only | Read only | Read, Update |
+
## Budget alerts

Budget alerts notify you when spending, based on usage or cost, reaches or exceeds the amount defined in the [alert condition of the budget](tutorial-acm-create-budgets.md). Cost Management budgets are created using the Azure portal or the [Azure Consumption](/rest/api/consumption) API.
cost-management-billing Reservation Discount App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-app-service.md
Title: Reservation discounts for Azure App Service
-description: Learn how reservation discounts apply to Azure App Service Premium v3 and Premium v2 instances and Isolated Stamps.
+description: Learn how reservation discounts apply to Azure App Service Premium v3 instances and Isolated Stamps.
Previously updated : 03/22/2022 Last updated : 05/12/2022 # How reservation discounts apply to Azure App Service
-This article helps you understand how discounts apply to Azure App Service Premium v3 and Premium v2 instances and Isolated Stamps.
+This article helps you understand how discounts apply to Azure App Service Premium v3 instances and Isolated Stamps.
## How reservation discounts apply to Premium v3 instances
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 03/08/2022 Last updated : 05/11/2022 # Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
na Previously updated : 03/08/2022 Last updated : 05/11/2022
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
description: Learn about Microsoft Defender for Containers
Previously updated : 04/28/2022 Last updated : 05/08/2022 # Overview of Microsoft Defender for Containers
Defender for Containers protects your clusters whether they're running in:
- **An unmanaged Kubernetes distribution** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS.

> [!NOTE]
-> Defender for Containers' support for Arc-enabled Kubernetes clusters (AWS EKS, and GCP GKE) is a preview feature.
+> Defender for Containers' support for Arc-enabled Kubernetes clusters (AWS EKS and GCP GKE clusters) is a preview feature.
For high-level diagrams of each scenario, see the relevant tabs below.
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Title: Microsoft Defender for Kubernetes - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Kubernetes. Previously updated : 03/10/2022 Last updated : 05/08/2022
# Introduction to Microsoft Defender for Kubernetes (deprecated)

Defender for Cloud provides real-time threat protection for your Azure Kubernetes Service (AKS) containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the analysis of the Kubernetes audit logs.
Examples of security events that Microsoft Defender for Kubernetes monitors incl
- Creation of high privileged roles
- Creation of sensitive mounts.
-For a full list of the cluster level alerts, see alerts with "K8S.NODE_" prefix in the alert type in the [reference table of alerts](alerts-reference.md#alerts-k8scluster).
+For a full list of the cluster level alerts, see alerts with "K8S_" prefix in the alert type in the [reference table of alerts](alerts-reference.md#alerts-k8scluster).
## FAQ - Microsoft Defender for Kubernetes
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022 # Azure Policy built-in definitions for Microsoft Defender for Cloud
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
If you have any existing connectors created with the classic cloud connectors ex
- (Optional) Select **Configure**, to edit the configuration as required.
-1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you have fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-eks#network-requirements) for the Defender for Containers plan.
+1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you have fulfilled the [network requirements](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-eks&source=docs#network-requirements) for the Defender for Containers plan.
> [!Note] > Azure Arc-enabled Kubernetes, the Defender Arc extension, and the Azure Policy Arc extension should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks).
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## November 2021
+
+Our Ignite release includes:
+
+- [Azure Security Center and Azure Defender become Microsoft Defender for Cloud](#azure-security-center-and-azure-defender-become-microsoft-defender-for-cloud)
+- [Native CSPM for AWS and threat protection for Amazon EKS and AWS EC2](#native-cspm-for-aws-and-threat-protection-for-amazon-eks-and-aws-ec2)
+- [Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview)](#prioritize-security-actions-by-data-sensitivity-powered-by-microsoft-purview-in-preview)
+- [Expanded security control assessments with Azure Security Benchmark v3](#expanded-security-control-assessments-with-azure-security-benchmark-v3)
+- [Microsoft Sentinel connector's optional bi-directional alert synchronization released for general availability (GA)](#microsoft-sentinel-connectors-optional-bi-directional-alert-synchronization-released-for-general-availability-ga)
+- [New recommendation to push Azure Kubernetes Service (AKS) logs to Sentinel](#new-recommendation-to-push-azure-kubernetes-service-aks-logs-to-sentinel)
+- [Recommendations mapped to the MITRE ATT&CK® framework - released for general availability (GA)](#recommendations-mapped-to-the-mitre-attck-frameworkreleased-for-general-availability-ga)
+
+Other changes in November include:
+
+- [Microsoft Threat and Vulnerability Management added as vulnerability assessment solution - released for general availability (GA)](#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solutionreleased-for-general-availability-ga)
+- [Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for Servers - released for general availability (GA)](#microsoft-defender-for-endpoint-for-linux-now-supported-by-microsoft-defender-for-serversreleased-for-general-availability-ga)
+- [Snapshot export for recommendations and security findings (in preview)](#snapshot-export-for-recommendations-and-security-findings-in-preview)
+- [Auto provisioning of vulnerability assessment solutions released for general availability (GA)](#auto-provisioning-of-vulnerability-assessment-solutions-released-for-general-availability-ga)
+- [Software inventory filters in asset inventory released for general availability (GA)](#software-inventory-filters-in-asset-inventory-released-for-general-availability-ga)
+- [New AKS security policy added to default initiative – for use by private preview customers only](#new-aks-security-policy-added-to-default-initiative--for-use-by-private-preview-customers-only)
+- [Inventory display of on-premises machines applies different template for resource name](#inventory-display-of-on-premises-machines-applies-different-template-for-resource-name)
++
+### Azure Security Center and Azure Defender become Microsoft Defender for Cloud
+
+According to the [2021 State of the Cloud report](https://info.flexera.com/CM-REPORT-State-of-the-Cloud#download), 92% of organizations now have a multi-cloud strategy. At Microsoft, our goal is to centralize security across these environments and help security teams work more effectively.
+
+**Microsoft Defender for Cloud** (formerly known as Azure Security Center and Azure Defender) is a Cloud Security Posture Management (CSPM) and cloud workload protection (CWP) solution that discovers weaknesses across your cloud configuration, helps strengthen the overall security posture of your environment, and protects workloads across multi-cloud and hybrid environments.
+
+At Ignite 2019, we shared our vision to create the most complete approach for securing your digital estate and integrating XDR technologies under the Microsoft Defender brand. Unifying Azure Security Center and Azure Defender under the new name **Microsoft Defender for Cloud**, reflects the integrated capabilities of our security offering and our ability to support any cloud platform.
++
+### Native CSPM for AWS and threat protection for Amazon EKS and AWS EC2
+
+A new **environment settings** page provides greater visibility and control over your management groups, subscriptions, and AWS accounts. The page is designed to onboard AWS accounts at scale: connect your AWS **management account**, and you'll automatically onboard existing and future accounts.
++
+When you've added your AWS accounts, Defender for Cloud protects your AWS resources with any or all of the following plans:
+
+- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
+- **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**.
+- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
+
+Learn more about [connecting your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md).
++
+### Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview)
+Data resources remain a popular target for threat actors. So it's crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments.
+
+To address this challenge, Microsoft Defender for Cloud now integrates sensitivity information from [Microsoft Purview](../purview/overview.md). Microsoft Purview is a unified data governance service that provides rich insights into the sensitivity of your data within multi-cloud, and on-premises workloads.
+
+The integration with Microsoft Purview extends your security visibility in Defender for Cloud from the infrastructure level down to the data, enabling an entirely new way to prioritize resources and security activities for your security teams.
+
+Learn more in [Prioritize security actions by data sensitivity](information-protection.md).
++
+### Expanded security control assessments with Azure Security Benchmark v3
+Microsoft Defender for Cloud's security recommendations are enabled and supported by the Azure Security Benchmark.
+
+[Azure Security Benchmark](../security/benchmarks/introduction.md) is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
+
+From Ignite 2021, Azure Security Benchmark **v3** is available in [Defender for Cloud's regulatory compliance dashboard](update-regulatory-compliance-packages.md) and enabled as the new default initiative for all Azure subscriptions protected with Microsoft Defender for Cloud.
+
+Enhancements for v3 include:
+
+- Additional mappings to industry frameworks [PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf) and [CIS Controls v8](https://www.cisecurity.org/controls/v8/).
+- More granular and actionable guidance for controls with the introduction of:
+ - **Security Principles** - Providing insight into the overall security objectives that build the foundation for our recommendations.
+  - **Azure Guidance** - The technical "how-to" for meeting these objectives.
+
+- New controls include DevOps security for issues such as threat modeling and software supply chain security, as well as key and certificate management for best practices in Azure.
+
+Learn more in [Introduction to Azure Security Benchmark](/security/benchmark/azure/introduction).
++
+### Microsoft Sentinel connector's optional bi-directional alert synchronization released for general availability (GA)
+
+In July, [we announced](release-notes-archive.md#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview) a preview feature, **bi-directional alert synchronization**, for the built-in connector in [Microsoft Sentinel](../sentinel/index.yml) (Microsoft's cloud-native SIEM and SOAR solution). This feature is now released for general availability (GA).
+
+When you connect Microsoft Defender for Cloud to Microsoft Sentinel, the status of security alerts is synchronized between the two services. So, for example, when an alert is closed in Defender for Cloud, that alert will display as closed in Microsoft Sentinel as well. Changing the status of an alert in Defender for Cloud won't affect the status of any Microsoft Sentinel **incidents** that contain the synchronized Microsoft Sentinel alert, only that of the synchronized alert itself.
+
+When you enable **bi-directional alert synchronization** you'll automatically sync the status of the original Defender for Cloud alerts with Microsoft Sentinel incidents that contain the copies of those Defender for Cloud alerts. So, for example, when a Microsoft Sentinel incident containing a Defender for Cloud alert is closed, Defender for Cloud will automatically close the corresponding original alert.
+
+Learn more in [Connect Azure Defender alerts from Azure Security Center](../sentinel/connect-azure-security-center.md) and [Stream alerts to Azure Sentinel](export-to-siem.md#stream-alerts-to-microsoft-sentinel).
+
+### New recommendation to push Azure Kubernetes Service (AKS) logs to Sentinel
+
+In a further enhancement to the combined value of Defender for Cloud and Microsoft Sentinel, we'll now highlight Azure Kubernetes Service instances that aren't sending log data to Microsoft Sentinel.
+
+SecOps teams can choose the relevant Microsoft Sentinel workspace directly from the recommendation details page and immediately enable the streaming of raw logs. This seamless connection between the two products makes it easy for security teams to ensure complete logging coverage across their workloads to stay on top of their entire environment.
+
+The new recommendation, "Diagnostic logs in Kubernetes services should be enabled" includes the 'Fix' option for faster remediation.
+
+We've also enhanced the "Auditing on SQL server should be enabled" recommendation with the same Sentinel streaming capabilities.
++
+### Recommendations mapped to the MITRE ATT&CK® framework - released for general availability (GA)
+
+We've enhanced Defender for Cloud's security recommendations to show their position on the MITRE ATT&CK® framework. This globally accessible knowledge base of threat actors' tactics and techniques, based on real-world observations, provides more context to help you understand the associated risks of the recommendations for your environment.
+
+You'll find these tactics wherever you access recommendation information:
+
+- **Azure Resource Graph query results** for relevant recommendations include the MITRE ATT&CK® tactics and techniques.
+
+- **Recommendation details pages** show the mapping for all relevant recommendations:
+
+ :::image type="content" source="media/review-security-recommendations/tactics-window.png" alt-text="Screenshot of the MITRE tactics mapping for a recommendation.":::
+
+- **The recommendations page in Defender for Cloud** has a new :::image type="icon" source="media/review-security-recommendations/tactics-filter-recommendations-page.png" border="false"::: filter to select recommendations according to their associated tactic:
+
+Learn more in [Review your security recommendations](review-security-recommendations.md).
+
+### Microsoft Threat and Vulnerability Management added as vulnerability assessment solution - released for general availability (GA)
+
+In October, [we announced](release-notes-archive.md#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solution-in-preview) an extension to the integration between [Microsoft Defender for Servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt). This feature is now released for general availability (GA).
+
+Use **threat and vulnerability management** to discover vulnerabilities and misconfigurations in near real time with the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) enabled, and without the need for additional agents or periodic scans. Threat and vulnerability management prioritizes vulnerabilities based on the threat landscape and detections in your organization.
+
+Use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)" to surface the vulnerabilities detected by threat and vulnerability management for your [supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os?view=o365-worldwide&preserve-view=true).
+
+To automatically surface the vulnerabilities, on existing and new machines, without the need to manually remediate the recommendation, see [Vulnerability assessment solutions can now be auto enabled (in preview)](release-notes-archive.md#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview).
+
+Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+
+### Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for Servers - released for general availability (GA)
+
+In August, [we announced](release-notes-archive.md#microsoft-defender-for-endpoint-for-linux-now-supported-by-azure-defender-for-servers-in-preview) preview support for deploying the [Defender for Endpoint for Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux) sensor to supported Linux machines. This feature is now released for general availability (GA).
+
+[Microsoft Defender for Servers](defender-for-servers-introduction.md) includes an integrated license for [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities.
+
+When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack.
+
+Learn more in [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
++
+### Snapshot export for recommendations and security findings (in preview)
+
+Defender for Cloud generates detailed security alerts and recommendations. You can view them in the portal or through programmatic tools. You might also need to export some or all of this information for tracking with other monitoring tools in your environment.
+
+Defender for Cloud's **continuous export** feature lets you fully customize *what* will be exported, and *where* it will go. Learn more in [Continuously export Microsoft Defender for Cloud data](continuous-export.md).
+
+Even though the feature is called *continuous*, there's also an option to export weekly snapshots. Until now, these weekly snapshots were limited to secure score and regulatory compliance data. We've added the capability to export recommendations and security findings.
+
+### Auto provisioning of vulnerability assessment solutions released for general availability (GA)
+
+In October, [we announced](release-notes-archive.md#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview) the addition of vulnerability assessment solutions to Defender for Cloud's auto provisioning page. This is relevant to Azure virtual machines and Azure Arc machines on subscriptions protected by [Azure Defender for Servers](defender-for-servers-introduction.md). This feature is now released for general availability (GA).
+
+If the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) is enabled, Defender for Cloud presents a choice of vulnerability assessment solutions:
+
+- (**NEW**) The Microsoft threat and vulnerability management module of Microsoft Defender for Endpoint (see [the release note](#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solutionreleased-for-general-availability-ga))
+- The integrated Qualys agent
+
+Your chosen solution will be automatically enabled on supported machines.
+
+Learn more in [Automatically configure vulnerability assessment for your machines](auto-deploy-vulnerability-assessment.md).
+
+### Software inventory filters in asset inventory released for general availability (GA)
+
+In October, [we announced](release-notes-archive.md#software-inventory-filters-added-to-asset-inventory-in-preview) new filters for the [asset inventory](asset-inventory.md) page to select machines running specific software - and even specify the versions of interest. This feature is now released for general availability (GA).
+
+You can query the software inventory data in **Azure Resource Graph Explorer**.
+
+To use these features, you'll need to enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
+
+For full details, including sample Kusto queries for Azure Resource Graph, see [Access a software inventory](asset-inventory.md#access-a-software-inventory).
+
+### New AKS security policy added to default initiative – for use by private preview customers only
+
+To ensure that Kubernetes workloads are secure by default, Defender for Cloud includes Kubernetes level policies and hardening recommendations, including enforcement options with Kubernetes admission control.
+
+As part of this project, we've added a policy and recommendation (disabled by default) for gating deployment on Kubernetes clusters. The policy is in the default initiative but is only relevant for organizations who register for the related private preview.
+
+You can safely ignore the policy and recommendation ("Kubernetes clusters should gate deployment of vulnerable images"), and there will be no impact on your environment.
+
+If you'd like to participate in the private preview, you'll need to be a member of the private preview ring. If you're not already a member, submit a request [here](https://aka.ms/atscale). Members will be notified when the preview begins.
+
+### Inventory display of on-premises machines applies different template for resource name
+
+To improve the presentation of resources in the [Asset inventory](asset-inventory.md), we've removed the "source-computer-IP" element from the template for naming on-premises machines.
+
+- **Previous format:** ``machine-name_source-computer-id_VMUUID``
+- **From this update:** ``machine-name_VMUUID``
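The renaming boils down to dropping the middle element of the underscore-delimited name. A minimal sketch (the machine name and source-computer ID below are hypothetical placeholders):

```python
# Previous format: machine-name_source-computer-id_VMUUID
# New format:      machine-name_VMUUID
old_name = "machine-name_3aa9c1d2_VMUUID"  # hypothetical on-premises machine name

parts = old_name.split("_")
# Keep only the machine-name element and the VMUUID element.
new_name = f"{parts[0]}_{parts[-1]}"
print(new_name)  # machine-name_VMUUID
```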
+
## October 2021

Updates in October include:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/09/2022 Last updated : 05/12/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
## May 2022
-Updates in May Include:
-
-- [Availability of Defender for SQL to protect Amazon Web Services (AWS) and Google Cloud Computing (GCP) environment](#general-availability-ga-of-defender-for-sql-for-aws-and-gcp-environments)
+Updates in May include:
+- [General availability (GA) of Defender for SQL for AWS and GCP environments](#general-availability-ga-of-defender-for-sql-for-aws-and-gcp-environments)
+- [Multi-cloud settings of Servers plan are now available in connector level](#multi-cloud-settings-of-servers-plan-are-now-available-in-connector-level)
### General availability (GA) of Defender for SQL for AWS and GCP environments
Using Defender for SQL, enterprises can now protect their data, whether hosted i
Microsoft Defender for SQL now provides a unified cross-environment experience to view security recommendations, security alerts and vulnerability assessment findings encompassing SQL servers and the underlying Windows OS.

Using the multi-cloud onboarding experience, you can enable and enforce databases protection for VMs in AWS and GCP. After enabling multi-cloud protection, all supported resources covered by your subscription are protected. Future resources created within the same subscription will also be protected.

Learn how to protect and connect your [AWS accounts](quickstart-onboard-aws.md) and your [GCP projects](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+### Multi-cloud settings of Servers plan are now available in connector level
+
+There are now connector-level settings for Defender for Servers in multi-cloud.
+
+The new connector-level settings provide granularity for pricing and auto-provisioning configuration per connector, independently of the subscription.
+
+All auto-provisioning components available in the connector level (Azure Arc, MDE, and vulnerability assessments) are enabled by default, and the new configuration supports both [Plan 1 and Plan 2 pricing tiers](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
+
+The UI has been updated to reflect the selected pricing tier and the configured required components.
+
+
## April 2022
The following alert was removed from our network layer alerts due to inefficienc
-## November 2021
-
-Our Ignite release includes:
-
-- [Azure Security Center and Azure Defender become Microsoft Defender for Cloud](#azure-security-center-and-azure-defender-become-microsoft-defender-for-cloud)
-- [Native CSPM for AWS and threat protection for Amazon EKS, and AWS EC2](#native-cspm-for-aws-and-threat-protection-for-amazon-eks-and-aws-ec2)
-- [Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview)](#prioritize-security-actions-by-data-sensitivity-powered-by-microsoft-purview-in-preview)
-- [Expanded security control assessments with Azure Security Benchmark v3](#expanded-security-control-assessments-with-azure-security-benchmark-v3)
-- [Microsoft Sentinel connector's optional bi-directional alert synchronization released for general availability (GA)](#microsoft-sentinel-connectors-optional-bi-directional-alert-synchronization-released-for-general-availability-ga)
-- [New recommendation to push Azure Kubernetes Service (AKS) logs to Sentinel](#new-recommendation-to-push-azure-kubernetes-service-aks-logs-to-sentinel)
-- [Recommendations mapped to the MITRE ATT&CK® framework - released for general availability (GA)](#recommendations-mapped-to-the-mitre-attck-frameworkreleased-for-general-availability-ga)
-
-Other changes in November include:
-
-- [Microsoft Threat and Vulnerability Management added as vulnerability assessment solution - released for general availability (GA)](#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solutionreleased-for-general-availability-ga)
-- [Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for Servers - released for general availability (GA)](#microsoft-defender-for-endpoint-for-linux-now-supported-by-microsoft-defender-for-serversreleased-for-general-availability-ga)
-- [Snapshot export for recommendations and security findings (in preview)](#snapshot-export-for-recommendations-and-security-findings-in-preview)
-- [Auto provisioning of vulnerability assessment solutions released for general availability (GA)](#auto-provisioning-of-vulnerability-assessment-solutions-released-for-general-availability-ga)
-- [Software inventory filters in asset inventory released for general availability (GA)](#software-inventory-filters-in-asset-inventory-released-for-general-availability-ga)
-- [New AKS security policy added to default initiative – for use by private preview customers only](#new-aks-security-policy-added-to-default-initiative--for-use-by-private-preview-customers-only)
-- [Inventory display of on-premises machines applies different template for resource name](#inventory-display-of-on-premises-machines-applies-different-template-for-resource-name)
-
-### Azure Security Center and Azure Defender become Microsoft Defender for Cloud
-
-According to the [2021 State of the Cloud report](https://info.flexera.com/CM-REPORT-State-of-the-Cloud#download), 92% of organizations now have a multi-cloud strategy. At Microsoft, our goal is to centralize security across these environments and help security teams work more effectively.
-
-**Microsoft Defender for Cloud** (formerly known as Azure Security Center and Azure Defender) is a Cloud Security Posture Management (CSPM) and cloud workload protection (CWP) solution that discovers weaknesses across your cloud configuration, helps strengthen the overall security posture of your environment, and protects workloads across multi-cloud and hybrid environments.
-
-At Ignite 2019, we shared our vision to create the most complete approach for securing your digital estate and integrating XDR technologies under the Microsoft Defender brand. Unifying Azure Security Center and Azure Defender under the new name **Microsoft Defender for Cloud**, reflects the integrated capabilities of our security offering and our ability to support any cloud platform.
--
-### Native CSPM for AWS and threat protection for Amazon EKS, and AWS EC2
-
-A new **environment settings** page provides greater visibility and control over your management groups, subscriptions, and AWS accounts. The page is designed to onboard AWS accounts at scale: connect your AWS **management account**, and you'll automatically onboard existing and future accounts.
--
-When you've added your AWS accounts, Defender for Cloud protects your AWS resources with any or all of the following plans:
-
-- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
-- **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**.
-- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
-
-Learn more about [connecting your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md).
--
-### Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview)
-Data resources remain a popular target for threat actors. So it's crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments.
-
-To address this challenge, Microsoft Defender for Cloud now integrates sensitivity information from [Microsoft Purview](../purview/overview.md). Microsoft Purview is a unified data governance service that provides rich insights into the sensitivity of your data within multi-cloud, and on-premises workloads.
-
-The integration with Microsoft Purview extends your security visibility in Defender for Cloud from the infrastructure level down to the data, enabling an entirely new way to prioritize resources and security activities for your security teams.
-
-Learn more in [Prioritize security actions by data sensitivity](information-protection.md).
--
-### Expanded security control assessments with Azure Security Benchmark v3
-Microsoft Defender for Cloud's security recommendations are enabled and supported by the Azure Security Benchmark.
-
-[Azure Security Benchmark](../security/benchmarks/introduction.md) is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
-
-From Ignite 2021, Azure Security Benchmark **v3** is available in [Defender for Cloud's regulatory compliance dashboard](update-regulatory-compliance-packages.md) and enabled as the new default initiative for all Azure subscriptions protected with Microsoft
-Defender for Cloud.
-
-Enhancements for v3 include:
-
-- Additional mappings to industry frameworks [PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf) and [CIS Controls v8](https://www.cisecurity.org/controls/v8/).
-- More granular and actionable guidance for controls with the introduction of:
- - **Security Principles** - Providing insight into the overall security objectives that build the foundation for our recommendations.
- - **Azure Guidance** - The technical "how-to" for meeting these objectives.
-
-- New controls include DevOps security for issues such as threat modeling and software supply chain security, as well as key and certificate management for best practices in Azure.
-
-Learn more in [Introduction to Azure Security Benchmark](/security/benchmark/azure/introduction).
--
-### Microsoft Sentinel connector's optional bi-directional alert synchronization released for general availability (GA)
-
-In July, [we announced](release-notes-archive.md#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview) a preview feature, **bi-directional alert synchronization**, for the built-in connector in [Microsoft Sentinel](../sentinel/index.yml) (Microsoft's cloud-native SIEM and SOAR solution). This feature is now released for general availability (GA).
-
-When you connect Microsoft Defender for Cloud to Microsoft Sentinel, the status of security alerts is synchronized between the two services. So, for example, when an alert is closed in Defender for Cloud, that alert will display as closed in Microsoft Sentinel as well. Changing the status of an alert in Defender for Cloud won't affect the status of any Microsoft Sentinel **incidents** that contain the synchronized Microsoft Sentinel alert, only that of the synchronized alert itself.
-
-When you enable **bi-directional alert synchronization** you'll automatically sync the status of the original Defender for Cloud alerts with Microsoft Sentinel incidents that contain the copies of those Defender for Cloud alerts. So, for example, when a Microsoft Sentinel incident containing a Defender for Cloud alert is closed, Defender for Cloud will automatically close the corresponding original alert.
-
-Learn more in [Connect Azure Defender alerts from Azure Security Center](../sentinel/connect-azure-security-center.md) and [Stream alerts to Azure Sentinel](export-to-siem.md#stream-alerts-to-microsoft-sentinel).
-
-### New recommendation to push Azure Kubernetes Service (AKS) logs to Sentinel
-
-In a further enhancement to the combined value of Defender for Cloud and Microsoft Sentinel, we'll now highlight Azure Kubernetes Service instances that aren't sending log data to Microsoft Sentinel.
-
-SecOps teams can choose the relevant Microsoft Sentinel workspace directly from the recommendation details page and immediately enable the streaming of raw logs. This seamless connection between the two products makes it easy for security teams to ensure complete logging coverage across their workloads to stay on top of their entire environment.
-
-The new recommendation, "Diagnostic logs in Kubernetes services should be enabled" includes the 'Fix' option for faster remediation.
-
-We've also enhanced the "Auditing on SQL server should be enabled" recommendation with the same Sentinel streaming capabilities.
--
-### Recommendations mapped to the MITRE ATT&CK® framework - released for general availability (GA)
-
-We've enhanced Defender for Cloud's security recommendations to show their position on the MITRE ATT&CK® framework. This globally accessible knowledge base of threat actors' tactics and techniques based on real-world observations, provides more context to help you understand the associated risks of the recommendations for your environment.
-
-You'll find these tactics wherever you access recommendation information:
-
-- **Azure Resource Graph query results** for relevant recommendations include the MITRE ATT&CK® tactics and techniques.
-
-- **Recommendation details pages** show the mapping for all relevant recommendations:
-
- :::image type="content" source="media/review-security-recommendations/tactics-window.png" alt-text="Screenshot of the MITRE tactics mapping for a recommendation.":::
-
-- **The recommendations page in Defender for Cloud** has a new :::image type="icon" source="media/review-security-recommendations/tactics-filter-recommendations-page.png" border="false"::: filter to select recommendations according to their associated tactic:
-
-Learn more in [Review your security recommendations](review-security-recommendations.md).
-
-### Microsoft Threat and Vulnerability Management added as vulnerability assessment solution - released for general availability (GA)
-
-In October, [we announced](release-notes-archive.md#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solution-in-preview) an extension to the integration between [Microsoft Defender for Servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt). This feature is now released for general availability (GA).
-
-Use **threat and vulnerability management** to discover vulnerabilities and misconfigurations in near real time with the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) enabled, and without the need for additional agents or periodic scans. Threat and vulnerability management prioritizes vulnerabilities based on the threat landscape and detections in your organization.
-
-Use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)" to surface the vulnerabilities detected by threat and vulnerability management for your [supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os?view=o365-worldwide&preserve-view=true).
-
-To automatically surface the vulnerabilities, on existing and new machines, without the need to manually remediate the recommendation, see [Vulnerability assessment solutions can now be auto enabled (in preview)](release-notes-archive.md#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview).
-
-Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
-
-### Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for Servers - released for general availability (GA)
-
-In August, [we announced](release-notes-archive.md#microsoft-defender-for-endpoint-for-linux-now-supported-by-azure-defender-for-servers-in-preview) preview support for deploying the [Defender for Endpoint for Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux) sensor to supported Linux machines. This feature is now released for general availability (GA).
-
-[Microsoft Defender for Servers](defender-for-servers-introduction.md) includes an integrated license for [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities.
-
-When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack.
-
-Learn more in [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
--
-### Snapshot export for recommendations and security findings (in preview)
-
-Defender for Cloud generates detailed security alerts and recommendations. You can view them in the portal or through programmatic tools. You might also need to export some or all of this information for tracking with other monitoring tools in your environment.
-
-Defender for Cloud's **continuous export** feature lets you fully customize *what* will be exported, and *where* it will go. Learn more in [Continuously export Microsoft Defender for Cloud data](continuous-export.md).
-
-Even though the feature is called *continuous*, there's also an option to export weekly snapshots. Until now, these weekly snapshots were limited to secure score and regulatory compliance data. We've added the capability to export recommendations and security findings.
-
-### Auto provisioning of vulnerability assessment solutions released for general availability (GA)
-
-In October, [we announced](release-notes-archive.md#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview) the addition of vulnerability assessment solutions to Defender for Cloud's auto provisioning page. This is relevant to Azure virtual machines and Azure Arc machines on subscriptions protected by [Azure Defender for Servers](defender-for-servers-introduction.md). This feature is now released for general availability (GA).
-
-If the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) is enabled, Defender for Cloud presents a choice of vulnerability assessment solutions:
-
-- (**NEW**) The Microsoft threat and vulnerability management module of Microsoft Defender for Endpoint (see [the release note](#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solutionreleased-for-general-availability-ga))
-- The integrated Qualys agent
-
-Your chosen solution will be automatically enabled on supported machines.
-
-Learn more in [Automatically configure vulnerability assessment for your machines](auto-deploy-vulnerability-assessment.md).
-
-### Software inventory filters in asset inventory released for general availability (GA)
-
-In October, [we announced](release-notes-archive.md#software-inventory-filters-added-to-asset-inventory-in-preview) new filters for the [asset inventory](asset-inventory.md) page to select machines running specific software - and even specify the versions of interest. This feature is now released for general availability (GA).
-
-You can query the software inventory data in **Azure Resource Graph Explorer**.
-
-To use these features, you'll need to enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
-
-For full details, including sample Kusto queries for Azure Resource Graph, see [Access a software inventory](asset-inventory.md#access-a-software-inventory).
-
-### New AKS security policy added to default initiative – for use by private preview customers only
-
-To ensure that Kubernetes workloads are secure by default, Defender for Cloud includes Kubernetes level policies and hardening recommendations, including enforcement options with Kubernetes admission control.
-
-As part of this project, we've added a policy and recommendation (disabled by default) for gating deployment on Kubernetes clusters. The policy is in the default initiative but is only relevant for organizations who register for the related private preview.
-
-You can safely ignore the policies and recommendation ("Kubernetes clusters should gate deployment of vulnerable images") and there will be no impact on your environment.
-
-If you'd like to participate in the private preview, you'll need to be a member of the private preview ring. If you're not already a member, submit a request [here](https://aka.ms/atscale). Members will be notified when the preview begins.
-
-### Inventory display of on-premises machines applies different template for resource name
-
-To improve the presentation of resources in the [Asset inventory](asset-inventory.md), we've removed the "source-computer-IP" element from the template for naming on-premises machines.
-
-- **Previous format:** ``machine-name_source-computer-id_VMUUID``
-- **From this update:** ``machine-name_VMUUID``
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
For more information, see:
- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
- [Microsoft Defender for IoT architecture](architecture.md)
-- [Quickstart: Get started with Defender for IoT](getting-started.md)
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md
Known issues and limitations associated with online migrations from PostgreSQL t
## Size limitations

- You can migrate up to 1 TB of data from PostgreSQL to Azure Database for PostgreSQL, using a single DMS service.
-- **pg_dump** command offers the ability to pick a few tables inside a database as a part of its dump operation.
+- DMS lets users pick the tables inside a database that they want to migrate.
+
+Behind the scenes, a **pg_dump** command is used to take the dump of the selected tables, using one of the following options:
+ - **-t** to include the table names picked in the UI
+ - **-T** to exclude the table names not picked by the user
+
+There is a maximum of 7500 characters that can be included in the pg_dump command following the **-t** or **-T** option. The pg_dump command uses the character count of the selected or the unselected tables, whichever is lower. If the character count for both the selected and the unselected tables exceeds 7500, the pg_dump command fails with an error.
+
+For example, if the user doesn't pick the tables **table_1** and **table_2** under the **public** schema, the pg_dump command excludes them:
+ ```
-pg_dump -h hostname.postgres.com -U myusername -d database1 -t "\"public\".\"table_1\"" -t "\"public\".\"table_2\""
+pg_dump -h hostname -U username -d databasename -T "\"public\".\"table_1\"" -T "\"public\".\"table_2\""
```
-In the above example, the user has picked just the tables **table_1, table_2** under the **public** schema as a part the dump.
-A maximum of **8191** characters can be included as a part of the **pg_dump** command. In scenarios where the users wants to include many tables, care should be taken that the length of the **pg_dump** does not exceed this limit. If exceeded, the command will error out.
+In the previous command, the number of characters is 55 (including the double quotes, spaces, the **-T** options, and the backslashes).
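The 55-character figure can be verified by measuring the option string itself. A small sketch (the table names mirror the example above):

```python
# The two -T arguments exactly as pg_dump receives them,
# including the double quotes, spaces, -T options, and backslashes.
opts = '-T "\\"public\\".\\"table_1\\"" -T "\\"public\\".\\"table_2\\""'
print(len(opts))  # 55
```

The same count against the 7500-character limit determines whether the dump command can be built at all.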
## Datatype limitations
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/08/2022 Last updated : 05/11/2022
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 05/10/2022
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
The following FastPath features are in Public preview:
**VNet Peering** - FastPath will send traffic directly to any VM deployed in a virtual network peered to the one connected to ExpressRoute, bypassing the ExpressRoute virtual network gateway. This preview is available for both IPv4 and IPv6 connectivity.
+Available in all regions.
+
**Private Link Connectivity for 10Gbps ExpressRoute Direct Connectivity** - Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path. This preview is available in the following Azure regions.

- Australia East
This preview is available for connections associated to ExpressRoute Direct circ
See [How to enroll in ExpressRoute FastPath features](expressroute-howto-linkvnet-arm.md#enroll-in-expressroute-fastpath-features-preview).
-Available in all regions.
## Next steps
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
In this article, let's discuss how to address challenges you may face when confi
Let's consider the example network illustrated in the following diagram. In the example, geo-redundant ExpressRoute connectivity is established between a Contoso's on-premises location and Contoso's VNet in an Azure region. In the diagram, solid green line indicates preferred path (via ExpressRoute 1) and the dotted one represents stand-by path (via ExpressRoute 2).
-[![1]][1]
When you are designing ExpressRoute connectivity for disaster recovery, you need to consider:
You can influence Azure to prefer one ExpressRoute circuit over another one usin
The following diagram illustrates influencing ExpressRoute path selection using more specific route advertisement. In the illustrated example, Contoso on-premises /24 IP range is advertised as two /25 address ranges via the preferred path (ExpressRoute 1) and as /24 via the stand-by path (ExpressRoute 2).
-[![2]][2]
Because /25 is more specific, compared to /24, Azure would send the traffic destined to 10.1.11.0/24 via ExpressRoute 1 in the normal state. If both the connections of ExpressRoute 1 go down, then the VNet would see the 10.1.11.0/24 route advertisement only via ExpressRoute 2; and therefore the standby circuit is used in this failure state.
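The longest-prefix-match behavior described above can be sketched as follows (the route table and circuit labels are illustrative, not an Azure API):

```python
import ipaddress

# Illustrative VNet route table: the /25 halves advertised via the
# preferred circuit, the aggregate /24 via the standby circuit.
routes = {
    ipaddress.ip_network("10.1.11.0/25"):   "ExpressRoute 1",
    ipaddress.ip_network("10.1.11.128/25"): "ExpressRoute 1",
    ipaddress.ip_network("10.1.11.0/24"):   "ExpressRoute 2",
}

def next_hop(dest, table):
    """Pick the matching route with the longest prefix (most specific)."""
    matches = [net for net in table if dest in net]
    return table[max(matches, key=lambda net: net.prefixlen)]

print(next_hop(ipaddress.ip_address("10.1.11.5"), routes))  # ExpressRoute 1

# If both connections of ExpressRoute 1 go down, only the /24 remains,
# so traffic falls back to the standby circuit.
routes_failed = {ipaddress.ip_network("10.1.11.0/24"): "ExpressRoute 2"}
print(next_hop(ipaddress.ip_address("10.1.11.5"), routes_failed))  # ExpressRoute 2
```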
Because /25 is more specific, compared to /24, Azure would send the traffic dest
The following screenshot illustrates configuring the weight of an ExpressRoute connection via Azure portal.
-[![3]][3]
The following diagram illustrates influencing ExpressRoute path selection using connection weight. The default connection weight is 0. In the example below, the weight of the connection for ExpressRoute 1 is configured as 100. When a VNet receives a route prefix advertised via more than one ExpressRoute circuit, the VNet will prefer the connection with the highest weight.
-[![4]][4]
If both the connections of ExpressRoute 1 go down, then the VNet would see the 10.1.11.0/24 route advertisement only via ExpressRoute 2; and therefore the standby circuit is used in this failure state.
If both the connections of ExpressRoute 1 go down, then the VNet would see the 1
The following diagram illustrates influencing ExpressRoute path selection using AS path prepend. In the diagram, the route advertisement over ExpressRoute 1 indicates the default behavior of eBGP. On the route advertisement over ExpressRoute 2, the on-premises network's ASN is prepended additionally on the route's AS path. When the same route is received through multiple ExpressRoute circuits, per the eBGP route selection process, VNet would prefer the route with the shortest AS path.
-[![5]][5]
If both the connections of ExpressRoute 1 go down, then the VNet would see the 10.1.11.0/24 route advertisement only via ExpressRoute 2. Consequentially, the longer AS path would become irrelevant. Therefore, the standby circuit would be used in this failure state.
The solution is illustrated in the following diagram. As illustrated, you can ar
[![10]][10]

> [!IMPORTANT]
-> When one or multiple ExpressRoute circuits are connected to multiple virtual networks, virtual network to virtual network traffic can route via ExpressRoute. However, this is not recommended. To enable virtual network to virtual network connectivity, [configure virtual network peering](https://docs.microsoft.com/azure/virtual-network/virtual-network-manage-peering).
+> When one or multiple ExpressRoute circuits are connected to multiple virtual networks, virtual network to virtual network traffic can route via ExpressRoute. However, this is not recommended. To enable virtual network to virtual network connectivity, [configure virtual network peering](../virtual-network/virtual-network-manage-peering.md).
>
In this article, we discussed how to design for disaster recovery of an ExpressR
- [SMB disaster recovery with Azure Site Recovery][SMB DR]

<!--Image References-->
-[1]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/one-region.png "small to medium size on-premises network considerations"
-[2]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/specificroute.png "influencing path selection using more specific routes"
-[3]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/configure-weight.png "configuring connection weight via Azure portal"
-[4]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/connectionweight.png "influencing path selection using connection weight"
-[5]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/aspath.png "influencing path selection using AS path prepend"
[6]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region.png "large distributed on-premises network considerations"
[7]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region-arch1.png "scenario 1"
[8]: ./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region-sol1.png "active-active ExpressRoute circuits solution 1"
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
The following table shows the gateway types and the estimated performance scale
|**Ultra Performance/ErGw3Az**|16,000|10,000|1,000,000|11,000|

> [!IMPORTANT]
-> Application performance depends on multiple factors, such as the end-to-end latency, and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment.
+> Application performance depends on multiple factors, such as the end-to-end latency and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment. Additionally, Microsoft performs routine host and OS maintenance on the ExpressRoute Virtual Network Gateway to maintain the reliability of the service. During a maintenance period, the control plane and data path capacity of the gateway is reduced.
>[!NOTE]
> The maximum number of ExpressRoute circuits from the same peering location that can connect to the same virtual network is 4 for all gateways.
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | Supported | Cox Business Cloud Port, CenturyLink Cloud Connect, Megaport, Zayo |
| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| Supported | Tata Communications |
| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada, Equinix, Megaport, Telus |
-| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Transtelco|
+| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Megaport, Transtelco|
| **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | Supported | |
| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Equinix |
| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | Supported | CenturyLink Cloud Connect, Megaport, Zayo |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **LG CNS** |Supported |Supported | Busan, Seoul |
| **[Liquid Telecom](https://www.liquidtelecom.com/products-and-services/cloud.html)** |Supported |Supported | Cape Town, Johannesburg |
| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported | Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2 Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported | Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, Queretaro (Mexico), San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2, Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported | London |
| **MTN Global Connect** |Supported |Supported | Cape Town, Johannesburg|
| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** |Supported |Supported | Bangkok |
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 04/19/2022 Last updated : 05/12/2022
# Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
To learn what's new with Azure Firewall, see [Azure updates](https://azure.micro
Azure Firewall Standard has the following known issues:
+> [!NOTE]
+> Any issue that applies to Standard also applies to Premium.
+
|Issue |Description |Mitigation |
||||
|Network filtering rules for non-TCP/UDP protocols (for example ICMP) don't work for Internet bound traffic|Network filtering rules for non-TCP/UDP protocols don't work with SNAT to your public IP address. Non-TCP/UDP protocols are supported between spoke subnets and VNets.|Azure Firewall uses the Standard Load Balancer, [which doesn't support SNAT for IP protocols today](../load-balancer/outbound-rules.md#limitations). We're exploring options to support this scenario in a future release.|
frontdoor Front Door Wildcard Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-wildcard-domain.md
You can add a wildcard domain following guidance in [add a custom domain](standa
> * Azure DNS supports wildcard records.
> * Cache purge for a wildcard domain isn't supported; you have to specify a subdomain for cache purge.
-You can add as many single-level subdomains of the wildcard. For example, for the wildcard domain *.contoso.com, you can add subdomains in the form of image.contosto.com, cart.contoso.com, etc. Subdomains like www.image.contoso.com aren't a single-level subdomain of *.contoso.com. This functionality might be required for:
+You can add as many single-level subdomains of the wildcard as you would like. For example, for the wildcard domain *.contoso.com, you can add subdomains in the form of image.contoso.com, cart.contoso.com, etc. Subdomains like www.image.contoso.com aren't a single-level subdomain of *.contoso.com. This functionality might be required for:
* Defining a different route for a subdomain than the rest of the domains (from the wildcard domain).
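The single-level matching rule can be sketched in Python (an illustration of the rule described above, not Front Door's actual implementation):

```python
# Sketch: a hostname matches the wildcard *.contoso.com only if exactly one
# label precedes the base domain (a single-level subdomain).
def matches_wildcard(hostname, wildcard):
    base = wildcard[2:]                        # "*.contoso.com" -> "contoso.com"
    suffix = "." + base
    if not hostname.endswith(suffix):
        return False
    prefix = hostname[: -len(suffix)]
    return prefix != "" and "." not in prefix  # exactly one extra label

print(matches_wildcard("image.contoso.com", "*.contoso.com"))      # True
print(matches_wildcard("cart.contoso.com", "*.contoso.com"))       # True
print(matches_wildcard("www.image.contoso.com", "*.contoso.com"))  # False
```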
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance description: Learn about the management groups, how their permissions work, and how to use them. Previously updated : 05/02/2022 Last updated : 05/12/2022
under those subscriptions. This security policy cannot be altered by the resourc
owner allowing for improved governance. > [!NOTE]
-> Management groups aren't currently supported for Microsoft Customer Agreement subscriptions.
+> Management groups aren't currently supported in Cost Management features for Microsoft Customer Agreement (MCA) subscriptions.
Another scenario where you would use management groups is to provide user access to multiple subscriptions. By moving multiple subscriptions under that management group, you can create one
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Previously updated : 04/27/2022 Last updated : 05/12/2022
initiatives. The policy assignment can determine the values of parameters for th
resources at assignment time, making it possible to reuse policy definitions that address the same resource properties with different needs for compliance.
+> [!NOTE]
+> For more information on Azure Policy scope, see
+> [Understand scope in Azure Policy](./scope.md).
+
You use JavaScript Object Notation (JSON) to create a policy assignment. The policy assignment contains elements for:

- display name
In this example, the parameters previously defined in the policy definition are
same policy definition is reusable with a different set of parameters for a different department, reducing the duplication and complexity of policy definitions while providing flexibility.
-## Identity
-For policy assignments with effect set to **deployIfNotExisit** or **modify**, it is required to have an identity property to do remediation on non-compliant resources. When using identity, the user must also specify a location for the assignment.
+## Identity
+For policy assignments with effect set to **deployIfNotExists** or **modify**, an identity property is required to do remediation on non-compliant resources. When using an identity, the user must also specify a location for the assignment.
> [!NOTE]
> A single policy assignment can be associated with only one system- or user-assigned managed identity. However, that identity can be assigned more than one role if necessary.

```json
-# System-assigned identity
+# System-assigned identity
"identity": { "type": "SystemAssigned" }
-# User-assigned identity
+# User-assigned identity
"identity": { "type": "UserAssigned", "userAssignedIdentities": {
For policy assignments with effect set to **deployIfNotExisit** or **modify**, i
},
```

## Next steps

- Learn about the [policy definition structure](./definition-structure.md).
For policy assignments with effect set to **deployIfNotExisit** or **modify**, i
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Guest Configuration Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration-assignments.md
report compliance status. For more information, see
When an Azure Policy assignment is deleted, if a guest configuration assignment was created by the policy, the guest configuration assignment is also deleted.
+When an Azure Policy assignment is deleted from an initiative, if a guest configuration assignment was created by the policy, you must manually delete the corresponding guest configuration assignment. You can do this by navigating to the guest assignments page in the Azure portal and deleting the assignment there.
+
## Manually creating guest configuration assignments

Guest assignment resources in Azure Resource Manager can be created by Azure
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 05/10/2022
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[MFA should be enabled accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-factor authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
-|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-factor authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
-|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-factor authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[MFA should be enabled accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
### User identification - 415
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but don't have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but don't have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but don't have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but don't have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but don't have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but don't have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but don't have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but don't have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but don't have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but don't have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but don't have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
|[Windows machines should meet requirements for 'Security Settings - Account Policies'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff2143251-70de-4e81-87a8-36cee5a2f29d) |Windows machines should have the specified Group Policy settings in the category 'Security Settings - Account Policies' for password history, age, length, complexity, and storing passwords using reversible encryption. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecuritySettingsAccountPolicies_AINE.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but don't have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but don't have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
-|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines that allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
-|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines allow remote connections from accounts without passwords. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
+|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines have accounts without passwords. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
+|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
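Each portal link in the tables above follows one pattern: the full resource ID of the built-in definition (`/providers/Microsoft.Authorization/policyDefinitions/<GUID>`) is percent-encoded and appended to the `PolicyDetailBlade` deep link. A minimal sketch of that pattern, assuming only the URL structure visible in these tables (the helper name is illustrative, not part of the docs):

```python
from urllib.parse import quote

PORTAL_BLADE = (
    "https://portal.azure.com/#blade/Microsoft_Azure_Policy/"
    "PolicyDetailBlade/definitionId/"
)

def policy_portal_link(definition_guid: str) -> str:
    """Build the portal deep link for a built-in policy definition GUID."""
    resource_id = (
        f"/providers/Microsoft.Authorization/policyDefinitions/{definition_guid}"
    )
    # safe="" forces '/' to be encoded as %2F, matching the links in the tables.
    return PORTAL_BLADE + quote(resource_id, safe="")

# GUID taken from the "no identities" prerequisite rows above:
link = policy_portal_link("3cf2ab00-13f1-4d0c-8971-2ac904541a7e")
```

This reproduces the `...definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F<GUID>` form used by every row in these tables.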
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
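The "Secure transfer" definition above keys off the storage account's `supportsHttpsTrafficOnly` property. As a rough illustration (this is not the policy engine, just a sketch of how the Audit/Deny/Disabled effects relate to that property):

```python
# Illustrative sketch only: how the "secure transfer" policy's
# Audit/Deny/Disabled effects map onto the storage account property
# Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly.

def evaluate_secure_transfer(account: dict, effect: str) -> str:
    """Return 'Compliant', 'NonCompliant', 'Denied', or 'Skipped'."""
    effect = effect.lower()
    if effect == "disabled":
        return "Skipped"  # assignment exists but is turned off
    https_only = account.get("supportsHttpsTrafficOnly", False)
    if https_only:
        return "Compliant"
    # Deny blocks the request at deployment time; Audit only flags it
    return "Denied" if effect == "deny" else "NonCompliant"

# An account still allowing plain HTTP is flagged by the Audit effect
result = evaluate_secure_transfer({"supportsHttpsTrafficOnly": False}, "Audit")
# -> "NonCompliant"
```

The same three-way effect pattern (audit, block, or skip) recurs across most definitions in these tables.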
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
## Guidelines for Database Systems - Database management system software
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
## Guidelines for Cryptography - Transport Layer Security
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
|[Latest TLS version should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8cb6aa8b-9e41-4f4e-aa25-089a7ac2581e) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_ApiApp_Audit.json) |
|[Latest TLS version should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
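The two "Latest TLS version" definitions above audit an App Service app's minimum TLS setting. A minimal sketch of that comparison, assuming "1.2" as the newest accepted value (as these App Service policies expected at the time), with hypothetical app names:

```python
# Hypothetical helper: flag apps whose minimum TLS version lags behind
# the newest supported value (assumed to be "1.2" here).
LATEST_TLS = "1.2"

def _as_tuple(version: str) -> tuple:
    # Compare dotted versions numerically, so "1.10" > "1.2" would hold
    return tuple(int(part) for part in version.split("."))

def tls_is_latest(min_tls_version: str, latest: str = LATEST_TLS) -> bool:
    return _as_tuple(min_tls_version) >= _as_tuple(latest)

# Example inventory (names and values are illustrative)
apps = {"api-app": "1.2", "fn-app": "1.0"}
out_of_date = [name for name, v in apps.items() if not tls_is_latest(v)]
# -> ["fn-app"]
```

Note that a plain string comparison would misorder versions such as "1.10" versus "1.2", which is why the sketch compares numeric tuples.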
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 05/10/2022
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
+|[\[Deprecated\]: Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |The policy is deprecated. Please use /providers/Microsoft.Authorization/policyDefinitions/2393d2cf-a342-44cd-a2e2-fe0188fd1234 instead. |Audit, Deny, Disabled |[1.0.1-deprecated](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_deprecated.json) |
|[\[Preview\]: Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](../../../key-vault/general/private-link-service.md). |Audit, Deny, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
-|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, deny, disabled |[3.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) |
|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](../../../azure-app-configuration/concept-private-endpoint.md). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
-|[Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d092e0a-7acd-40d2-a975-dca21cae48c4) |Azure Virtual Network deployment provides enhanced security and isolation for your Azure Cache for Redis, as well as subnets, access control policies, and other features to further restrict access.When an Azure Cache for Redis instance is configured with a virtual network, it is not publicly addressable and can only be accessed from virtual machines and applications within the virtual network. |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_CacheInVnet_Audit.json) |
+|[Azure Cache for Redis should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7803067c-7d34-46e3-8c79-0ca68fc4036d) |Private endpoints lets you connect your virtual network to Azure services without a public IP address at the source or destination. By mapping private endpoints to your Azure Cache for Redis instances, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link](../../../azure-cache-for-redis/cache-private-link.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_PrivateEndpoint_AuditIfNotExists.json) |
|[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) |
|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](../../../event-grid/configure-private-endpoints.md). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](../../../event-grid/configure-private-endpoints.md). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
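Many rows in this table follow the same pattern: audit or deny resources whose public-network-access setting leaves them reachable from the internet. As an illustrative sketch of the rule shape such definitions use (field names follow the Azure Policy rule grammar, but this exact rule is hypothetical, modeled loosely on the Cognitive Services definition above):

```python
import json

# Hypothetical policy rule in the Azure Policy definition shape:
# if a Cognitive Services account still allows public network access,
# apply the parameterized effect (Audit, Deny, or Disabled).
policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.CognitiveServices/accounts"},
            {
                "field": "Microsoft.CognitiveServices/accounts/publicNetworkAccess",
                "notEquals": "Disabled",
            },
        ]
    },
    "then": {"effect": "[parameters('effect')]"},
}

# Policy definitions are JSON documents; serialize for inspection
encoded = json.dumps(policy_rule, indent=2)
```

Parameterizing the effect is what produces the "Audit, Deny, Disabled" lists in the Effect(s) column: one definition, three assignable behaviors.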
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
+|[Cosmos DB database accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5450f5bd-9c72-4390-a9c4-a7aba4edfdd2) |Disabling local authentication methods improves security by ensuring that Cosmos DB database accounts exclusively require Azure Active Directory identities for authentication. Learn more at: [https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac#disable-local-auth](../../../cosmos-db/how-to-setup-rbac.md#disable-local-auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_DisableLocalAuth_AuditDeny.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |

### Manage application identities securely and automatically
initiative definition.
|[FTPS only should be required in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
|[FTPS should be required in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) |
|[Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
-|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for AKS Engine and Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md) |audit, deny, disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/IngressHttpsOnly.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for AKS Engine and Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md) |audit, Audit, deny, Deny, disabled, Disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/IngressHttpsOnly.json) |
|[Latest TLS version should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8cb6aa8b-9e41-4f4e-aa25-089a7ac2581e) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_ApiApp_Audit.json) |
|[Latest TLS version should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
|[Latest TLS version should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
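The Effect(s) column lists the values each definition accepts. As a rough sketch (assuming the common built-in pattern where the effect is exposed as an assignment parameter literally named `effect`; the function name is illustrative), an ARM policy-assignment request body pinning one of the listed effects could be built like this. The GUID is the "Latest TLS version should be used in your Web App" definition from the table above:

```python
# Allowed values come straight from the Effect(s) column for this definition.
ALLOWED_EFFECTS = {"AuditIfNotExists", "Disabled"}

def assignment_body(definition_guid: str, effect: str) -> dict:
    # Hypothetical helper: builds the 'properties' payload of a policy assignment.
    if effect not in ALLOWED_EFFECTS:
        raise ValueError(f"effect must be one of {sorted(ALLOWED_EFFECTS)}")
    return {
        "properties": {
            "policyDefinitionId": (
                "/providers/Microsoft.Authorization/policyDefinitions/"
                + definition_guid
            ),
            "parameters": {"effect": {"value": effect}},
        }
    }

# 'Latest TLS version should be used in your Web App' from the table above.
body = assignment_body("f0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b", "AuditIfNotExists")
print(body["properties"]["parameters"]["effect"]["value"])  # → AuditIfNotExists
```

Validating the effect up front mirrors what the service does: a value outside the definition's allowed list is rejected at assignment time.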
initiative definition.
|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
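The Version column uses three-part numbers with an optional suffix such as `-preview` (compare `2.0.3` above with `2.1.0-preview` below). A minimal sketch (the helper name is an assumption, not an Azure API) for ordering these version strings by their numeric core:

```python
def version_key(version: str) -> tuple:
    # Numeric core of a policy version string, e.g. '2.1.0-preview' -> (2, 1, 0).
    core, _, _suffix = version.partition("-")
    return tuple(int(part) for part in core.split("."))

# Version values taken from the tables in this article.
versions = ["2.0.3", "1.1.0", "2.1.0-preview", "1.0.2"]
print(sorted(versions, key=version_key))
# → ['1.0.2', '1.1.0', '2.0.3', '2.1.0-preview']
```

Comparing tuples of integers avoids the lexicographic trap where the string `"10.0.0"` would sort before `"2.0.0"`.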
### Use customer-managed key option in data at rest encryption when required
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](../../../cosmos-db/how-to-setup-cmk.md). |audit, deny, disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
+|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](../../../cosmos-db/how-to-setup-cmk.md). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](/azure/machine-learning/how-to-create-workspace-template#deploy-an-encrypted-workspace). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) |
|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/container-registry-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, deny, disabled |[2.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
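The portal links throughout these tables follow one pattern: the `PolicyDetailBlade` path followed by the URL-encoded full resource ID of the definition, which is why `%2Fproviders%2FMicrosoft.Authorization%2F...` appears in every link. A small sketch that reconstructs such a link from a definition GUID (the function name is an assumption for illustration):

```python
from urllib.parse import quote

def portal_policy_link(definition_guid: str) -> str:
    # Portal deep links embed the URL-encoded resource ID of the definition.
    definition_id = ("/providers/Microsoft.Authorization/policyDefinitions/"
                     + definition_guid)
    return ("https://portal.azure.com/#blade/Microsoft_Azure_Policy/"
            "PolicyDetailBlade/definitionId/" + quote(definition_id, safe=""))

# GUID of '[Preview]: Certificates should have the specified maximum validity period'.
print(portal_policy_link("0a075868-4c26-42ef-914c-5bc007359560"))
```

`quote(..., safe="")` encodes the slashes as `%2F` while leaving the dot in `Microsoft.Authorization` intact, matching the links shown in the tables.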
### Ensure security of key and certificate repository
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Azure Defender's extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Azure Defender's extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/security-center/defender-for-kubernetes-azure-arc](/azure/security-center/defender-for-kubernetes-azure-arc). |AuditIfNotExists, Disabled |[4.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
+|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](../../../defender-for-cloud/defender-for-containers-enable.md?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[5.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
|[\[Preview\]: Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Azure Defender's extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Azure Defender's extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/security-center/defender-for-kubernetes-azure-arc](/azure/security-center/defender-for-kubernetes-azure-arc). |AuditIfNotExists, Disabled |[4.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
+|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](../../../defender-for-cloud/defender-for-containers-enable.md?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[5.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
|[\[Preview\]: Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have the Azure Policy extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b2122c1-8120-4ff5-801b-17625a355590) |The Azure Policy extension for Azure Arc provides at-scale enforcements and safeguards on your Arc enabled Kubernetes clusters in a centralized, consistent manner. Learn more at [https://aka.ms/akspolicydoc](../concepts/policy-for-kubernetes.md). |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ArcPolicyExtension_Audit.json) |
-|[\[Preview\]: Kubernetes clusters should gate deployment of vulnerable images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13cd7ae3-5bc0-4ac4-a62d-4f7c120b9759) |Protect your Kubernetes clusters and container workloads from potential threats by restricting deployment of container images with vulnerable software components. Use Azure Defender CI/CD scanning (https://aka.ms/AzureDefenderCICDscanning) and Azure defender for container registries (https://aka.ms/AzureDefenderForContainerRegistries) to identify and patch vulnerabilities prior to deployment. Evaluation prerequisite: Policy Addon and Azure Defender Profile. Only applicable for private preview customers. |Audit, Deny, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockVulnerableImages.json) |
+|[\[Preview\]: Kubernetes clusters should gate deployment of vulnerable images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13cd7ae3-5bc0-4ac4-a62d-4f7c120b9759) |Protect your Kubernetes clusters and container workloads from potential threats by restricting deployment of container images with vulnerable software components. Use Azure Defender CI/CD scanning (https://aka.ms/AzureDefenderCICDscanning) and Azure defender for container registries (https://aka.ms/AzureDefenderForContainerRegistries) to identify and patch vulnerabilities prior to deployment. Evaluation prerequisite: Policy Addon and Azure Defender Profile. Only applicable for private preview customers. |Audit, Deny, Disabled |[1.0.3-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockVulnerableImages.json) |
|[Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a15ec92-a229-4763-bb14-0ea34a568f8d) |Azure Policy Add-on for Kubernetes service (AKS) extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_AzurePolicyAddOn_Audit.json) |
|[CORS should not allow every resource to access your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F358c20a6-3f9e-4f0e-97ff-c6ce485e2aac) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API app. Allow only required domains to interact with your API app. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_ApiApp_Audit.json) |
|[CORS should not allow every resource to access your Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
initiative definition.
|[Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0c192fe8-9cbb-4516-85b3-0ade8bd03886) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ApiApp_Audit_ClientCert.json) |
|[Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) |
|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
-|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerResourceLimits.json) |
-|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockHostNamespace.json) |
-|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[4.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/EnforceAppArmorProfile.json) |
-|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[4.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
-|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[4.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ReadOnlyRootFileSystem.json) |
-|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[4.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedHostPaths.json) |
-|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[4.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedUsersGroups.json) |
-|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[4.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/HostNetworkPorts.json) |
-|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) |
-|[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) |
-|[Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F423dd1ba-798e-40e4-9c4d-b6902674b423) |Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockAutomountToken.json) |
-|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |
-|[Kubernetes clusters should not grant CAP_SYS_ADMIN security capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd2e7ea85-6b44-4317-a0be-1b951587f626) |To reduce the attack surface of your containers, restrict CAP_SYS_ADMIN Linux capabilities. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerDisallowedSysAdminCapability.json) |
-|[Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockDefaultNamespace.json) |
+|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[7.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerResourceLimits.json) |
+|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[3.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockHostNamespace.json) |
+|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[4.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/EnforceAppArmorProfile.json) |
+|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[4.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedCapabilities.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[7.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[4.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ReadOnlyRootFileSystem.json) |
+|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[4.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedHostPaths.json) |
+|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[4.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedUsersGroups.json) |
+|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[4.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/HostNetworkPorts.json) |
+|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[6.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) |
+|[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[7.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) |
+|[Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F423dd1ba-798e-40e4-9c4d-b6902674b423) |Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockAutomountToken.json) |
+|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[4.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |
+|[Kubernetes clusters should not grant CAP_SYS_ADMIN security capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd2e7ea85-6b44-4317-a0be-1b951587f626) |To reduce the attack surface of your containers, restrict CAP_SYS_ADMIN Linux capabilities. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[3.3.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerDisallowedSysAdminCapability.json) |
+|[Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockDefaultNamespace.json) |
|[Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) |
|[Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
|[Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
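The Kubernetes rows above audit or deny pods that exceed resource limits, run privileged, automount API credentials, or use the default namespace. A pod spec that would pass those checks might look like the following sketch (the image name, namespace, and limit values are illustrative — the actual allowed registries, paths, and limits are parameters you set on the policy assignment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
  namespace: workloads                    # not "default", per the default-namespace policy
spec:
  automountServiceAccountToken: false     # per the automount-API-credentials policy
  containers:
  - name: app
    image: myregistry.azurecr.io/app:1.0  # hypothetical allowed registry
    securityContext:
      privileged: false                   # per the privileged-containers policy
      allowPrivilegeEscalation: false     # per the privilege-escalation policy
      readOnlyRootFilesystem: true        # per the read-only-root-file-system policy
    resources:
      limits:                             # must not exceed the assignment's configured limits
        cpu: "500m"
        memory: "256Mi"
```

A pod missing any of these settings is flagged (audit) or rejected at admission (deny), depending on the effect chosen when the policy is assigned.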
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [Endpoint protection assessment and recommendations in Microsoft Defender for Cloud](../../../defender-for-cloud/endpoint-protection-recommendations-technical.md). Endpoint protection assessment is documented here - [Endpoint protection assessment and recommendations in Microsoft Defender for Cloud](/azure/defender-for-cloud/endpoint-protection-recommendations-technical). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [Endpoint protection assessment and recommendations in Microsoft Defender for Cloud](../../../defender-for-cloud/endpoint-protection-recommendations-technical.md). Endpoint protection assessment is documented here - [Endpoint protection assessment and recommendations in Microsoft Defender for Cloud](/azure/defender-for-cloud/endpoint-protection-recommendations-technical). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
## Backup and Recovery
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 05/10/2022
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
|[Windows machines should meet requirements for 'Administrative Templates - Network'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67e010c1-640d-438e-a3a5-feaccb533a98) |Windows machines should have the specified Group Policy settings in the category 'Administrative Templates - Network' for guest logons, simultaneous connections, network bridge, ICS, and multicast name resolution. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministrativeTemplatesNetwork_AINE.json) |
|[Windows machines should meet requirements for 'Security Options - Microsoft Network Server'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcaf2d518-f029-4f6b-833b-d7081702f253) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Microsoft Network Server' for disabling SMB v1 server. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsMicrosoftNetworkServer_AINE.json) |
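Most rows in these tables carry the `AuditIfNotExists, Disabled` effect pair. Conceptually, AuditIfNotExists marks a matched resource non-compliant only when a related resource (for example, a required extension) is absent. The following is a minimal illustrative sketch of that evaluation model — a hypothetical evaluator, not the actual Azure Policy engine:

```python
# Illustrative sketch of AuditIfNotExists semantics (hypothetical evaluator,
# not the Azure Policy engine). A resource matching the policy's "if"
# condition is compliant only when the related resource described by the
# "then.details" block exists.
def audit_if_not_exists(resource, condition, related_exists):
    """Return the compliance state a policy evaluation would record."""
    if not condition(resource):
        return "NotApplicable"  # the policy's "if" block didn't match
    return "Compliant" if related_exists(resource) else "NonCompliant"

# Example: audit VMs that lack a guest-configuration extension.
vm = {"type": "Microsoft.Compute/virtualMachines", "extensions": []}
state = audit_if_not_exists(
    vm,
    condition=lambda r: r["type"] == "Microsoft.Compute/virtualMachines",
    related_exists=lambda r: "GuestConfiguration" in r["extensions"],
)
# state is "NonCompliant": the VM matched, but the extension is missing.
```

Setting the effect to `Disabled` instead skips evaluation entirely, which is why both values appear as assignable options.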
initiative definition.
|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
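The first condition under which the disk-encryption recommendation above can be disregarded is encryption at host, which is a property of the VM itself rather than a policy setting. As a sketch of what that looks like in a resource definition (field names follow the `Microsoft.Compute/virtualMachines` schema; the resource name is a placeholder):

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "name": "example-vm",
  "properties": {
    "securityProfile": {
      "encryptionAtHost": true
    }
  }
}
```

With `encryptionAtHost` enabled, temp disks and disk caches are encrypted at rest and data flowing to the Storage service is encrypted in transit, which is why the policy treats it as satisfying the intent of the audit.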
### Log and alert on changes to critical Azure resources
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 04/19/2022 Last updated : 05/11/2022
# Azure Policy built-in initiative definitions
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policysets-regulatory-compliance](../../../../includes/policy/reference/bycat/policysets-regulatory-compliance.md)]
+## SDN
+
## Security Center
[!INCLUDE [azure-policy-reference-policysets-security-center](../../../../includes/policy/reference/bycat/policysets-security-center.md)]
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 04/25/2022 Last updated : 05/11/2022
# Azure Policy built-in policy definitions
The name of each built-in links to the policy definition in the Azure portal. Us
**Source** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). The built-ins are grouped by the **category** property in **metadata**. To jump to a specific **category**, use the menu on the right
-side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> (Windows) or <kbd>Cmd</kbd>-<kbd>F</kbd> (macOS) to use your browser's search feature.
+side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature.
## API for FHIR
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> (Windows) or <kbd>
[!INCLUDE [azure-policy-reference-policies-azure-edge-hardware-center](../../../../includes/policy/reference/bycat/policies-azure-edge-hardware-center.md)]
-## Microsoft Purview
+## Azure Purview
[!INCLUDE [azure-policy-reference-policies-azure-purview](../../../../includes/policy/reference/bycat/policies-azure-purview.md)]
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> (Windows) or <kbd>
[!INCLUDE [azure-policy-reference-policies-cache](../../../../includes/policy/reference/bycat/policies-cache.md)]
+## CDN
+
## Cognitive Services
[!INCLUDE [azure-policy-reference-policies-cognitive-services](../../../../includes/policy/reference/bycat/policies-cognitive-services.md)]
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> (Windows) or <kbd>
[!INCLUDE [azure-policy-reference-policies-kubernetes](../../../../includes/policy/reference/bycat/policies-kubernetes.md)]
+## Lab Services
+
## Lighthouse
[!INCLUDE [azure-policy-reference-policies-lighthouse](../../../../includes/policy/reference/bycat/policies-lighthouse.md)]
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> (Windows) or <kbd>
[!INCLUDE [azure-policy-reference-policies-managed-application](../../../../includes/policy/reference/bycat/policies-managed-application.md)]
+## Managed Labs
+
## Media Services
[!INCLUDE [azure-policy-reference-policies-media-services](../../../../includes/policy/reference/bycat/policies-media-services.md)]
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 05/10/2022
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Windows machines missing any of specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F30f71ea1-ac77-4f26-9fc5-2d926bbd4ba7) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group does not contain one or more members that are listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToInclude_AINE.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
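The two "Add system-assigned managed identity" prerequisites in these tables use the `modify` effect to set the identity on the VM during a compliant remediation. Structurally, a modify operation that does this looks roughly like the following simplified sketch (not the published definition; the role-definition ID is a placeholder that would normally be a built-in role GUID):

```json
{
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/<role-definition-guid>"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "identity.type",
          "value": "SystemAssigned"
        }
      ]
    }
  }
}
```

The `modify` effect requires the assignment's managed identity to hold the roles listed in `roleDefinitionIds`, which is why these prerequisite policies are deployed before any Guest Configuration audit definitions.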
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Windows machines missing any of specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F30f71ea1-ac77-4f26-9fc5-2d926bbd4ba7) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group does not contain one or more members that are listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToInclude_AINE.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
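The two "Audit Windows machines" definitions above apply complementary membership checks: `MembersToInclude` flags a machine whose local Administrators group is missing any required member, while `MembersToExclude` flags one that contains any forbidden member. The logic can be sketched as follows (illustrative only, not the guest-configuration implementation):

```python
# Illustrative sketch of the two Administrators-group audits
# (not the actual guest-configuration implementation).

def missing_required(local_admins, members_to_include):
    """Members the policy requires but the group lacks -> non-compliant if non-empty."""
    return [m for m in members_to_include if m not in local_admins]

def forbidden_present(local_admins, members_to_exclude):
    """Members the policy forbids but the group contains -> non-compliant if non-empty."""
    return [m for m in members_to_exclude if m in local_admins]

admins = {"CONTOSO\\alice", "CONTOSO\\svc-backup"}
# "Include" audit: 'bob' is required but absent, so the machine is flagged.
include_findings = missing_required(admins, ["CONTOSO\\alice", "CONTOSO\\bob"])
# "Exclude" audit: the service account is present, so the machine is flagged.
exclude_findings = forbidden_present(admins, ["CONTOSO\\svc-backup"])
```

Both audits evaluate against policy parameters, so the same machine can pass one check and fail the other depending on the lists supplied at assignment time.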
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
-|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines that allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
-|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines allow remote connections from accounts without passwords. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
+|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) |
|[Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
|[Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
-|[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines that do not have the passwd file permissions set to 0644 |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) |
-|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines do not have the passwd file permissions set to 0644. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) |
+|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Linux machines have accounts without passwords. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
+|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
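The passwd-permissions audit above comes down to comparing a file's permission bits against 0644. A minimal Python sketch of that condition (illustrative only — the Guest Configuration agent performs the real check inside the VM, and the function name here is an assumption):

```python
import os
import stat

def passwd_mode_is_0644(path: str = "/etc/passwd") -> bool:
    """Return True when the file's permission bits are exactly 0644,
    mirroring the condition audited by the policy definition above."""
    mode_bits = stat.S_IMODE(os.stat(path).st_mode)  # strip file-type bits
    return mode_bits == 0o644
```

A machine would be reported non-compliant when this check is False for its /etc/passwd.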
### Authenticator Management | Password-Based Authentication
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
-|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
+|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Windows machines that allow re-use of the previous 24 passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b054a0d-39e2-4d53-bea3-9734cad2c69b) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines allow re-use of the previous 24 passwords. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEnforce_AINE.json) |
|[Audit Windows machines that do not have a maximum password age of 70 days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ceb8dc2-559c-478b-a15b-733fbf1e3738) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines do not have a maximum password age of 70 days. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsMaximumPassword_AINE.json) |
|[Audit Windows machines that do not have a minimum password age of 1 day](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F237b38db-ca4d-4259-9e47-7882441ca2c0) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../concepts/guest-configuration.md). Machines are non-compliant if Windows machines do not have a minimum password age of 1 day. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsMinimumPassword_AINE.json) |
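The three Windows password audits above reduce to simple threshold comparisons. A hedged Python sketch of the combined logic (field names are illustrative readings of the policy descriptions, not the Guest Configuration agent's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class WindowsPasswordSettings:
    enforce_history: int  # number of previous passwords remembered
    max_age_days: int
    min_age_days: int

def audit_findings(settings: WindowsPasswordSettings) -> list:
    """Return a description of each failing check, one per policy row above."""
    findings = []
    if settings.enforce_history < 24:
        findings.append("allows re-use of the previous 24 passwords")
    if settings.max_age_days > 70:
        findings.append("maximum password age exceeds 70 days")
    if settings.min_age_days < 1:
        findings.append("minimum password age below 1 day")
    return findings
```

An empty result corresponds to a compliant machine; each entry maps to one AuditIfNotExists finding.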
This built-in initiative is deployed as part of the
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
## System and Information Integrity
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/14/2022 Last updated : 05/10/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **CIS Microsoft Azure Foundations Benchmark v1.1.0** Regulatory Compliance built-in initiative definition.
+This built-in initiative is deployed as part of the
+[CIS Microsoft Azure Foundations Benchmark 1.1.0 blueprint sample](../../blueprints/samples/cis-azure-1-1-0.md).
+
+> [!IMPORTANT]
+> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
+> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
### Ensure ASC Default policy setting "Monitor Network Security Groups" is not "Disabled"
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, deny, disabled |[3.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
### Ensure default network access rule for Storage Accounts is set to deny
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, deny, disabled |[3.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
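The updated effect list above now carries both lower- and mixed-case values ("audit, Audit, deny, Deny, disabled, Disabled"), so tooling that inspects a policy's effect parameter should compare case-insensitively. A small illustrative Python helper (the function name and canonical casing are assumptions, not part of the policy schema):

```python
ALLOWED_EFFECTS = {"audit", "deny", "disabled"}

def canonical_effect(value: str) -> str:
    """Validate an 'effect' parameter case-insensitively and return the
    mixed-case form (Audit/Deny/Disabled) used by newer definition versions."""
    v = value.strip().lower()
    if v not in ALLOWED_EFFECTS:
        raise ValueError(f"unsupported effect: {value!r}")
    return v.capitalize()
```

This mirrors how the 3.1.0-preview definition accepts either casing of the same three effects.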
### Ensure the storage account containing the container with activity logs is encrypted with BYOK (Use Your Own Key)
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
### Ensure that 'Data disks' are encrypted
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
### Ensure that only approved extensions are installed
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Authentication should be enabled on your API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ebc54a-46e1-481a-bee2-d4411e95d828) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the API app, or authenticate those that have tokens before they reach the API app |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_ApiApp_Audit.json) |
-|[Authentication should be enabled on your Function app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc75248c1-ea1d-4a9c-8fc9-29a6aabd5da8) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_functionapp_Audit.json) |
-|[Authentication should be enabled on your web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95bccee9-a7f8-4bec-9ee9-62c3473701fc) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the web app, or authenticate those that have tokens before they reach the web app |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_WebApp_Audit.json) |
+|[Authentication should be enabled on your Function app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc75248c1-ea1d-4a9c-8fc9-29a6aabd5da8) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_functionapp_Audit.json) |
+|[Authentication should be enabled on your web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95bccee9-a7f8-4bec-9ee9-62c3473701fc) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the web app, or authenticate those that have tokens before they reach the web app |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_WebApp_Audit.json) |
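The `AuditIfNotExists` effect used by these App Service authentication policies audits a resource when a related resource or setting is missing. A minimal sketch of that pattern (field alias and structure are an assumed simplification, not the published rule; see the linked `AppService_Authentication_*` JSON for the exact definition):

```json
{
  "policyRule": {
    "if": { "field": "type", "equals": "Microsoft.Web/sites" },
    "then": {
      "effect": "AuditIfNotExists",
      "details": {
        "type": "Microsoft.Web/sites/config",
        "existenceCondition": {
          "field": "Microsoft.Web/sites/config/siteAuthEnabled",
          "equals": "true"
        }
      }
    }
  }
}
```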
### Ensure that 'HTTP Version' is the latest, if used to run the web app
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/10/2022 Last updated : 05/10/2022
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, deny, disabled |[3.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
### Ensure default network access rule for Storage Accounts is set to deny
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
+|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
### Ensure 'Enforce SSL connection' is set to 'ENABLED' for MySQL Database Server
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
+|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
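Both SSL-enforcement policies follow the same simple audit pattern. As a hedged sketch for the MySQL variant (field alias and parameter name are assumptions; consult the linked `MySQL_EnableSSL_Audit.json` for the authoritative rule), a server is flagged when its `sslEnforcement` setting is not `Enabled`:

```json
{
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.DBforMySQL/servers" },
        { "field": "Microsoft.DBforMySQL/servers/sslEnforcement", "notEquals": "Enabled" }
      ]
    },
    "then": { "effect": "[parameters('effect')]" }
  }
}
```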
### Ensure server parameter 'log_checkpoints' is set to 'ON' for PostgreSQL Database Server
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, deny, disabled |[3.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
### Ensure the storage account containing the container with activity logs is encrypted with BYOK (Use Your Own Key)
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](../../../virtual-machines/disk-encryption-overview.md#comparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
### Ensure that only approved extensions are installed
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Authentication should be enabled on your API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ebc54a-46e1-481a-bee2-d4411e95d828) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the API app, or authenticate those that have tokens before they reach the API app |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_ApiApp_Audit.json) |
-|[Authentication should be enabled on your Function app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc75248c1-ea1d-4a9c-8fc9-29a6aabd5da8) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_functionapp_Audit.json) |
-|[Authentication should be enabled on your web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95bccee9-a7f8-4bec-9ee9-62c3473701fc) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the web app, or authenticate those that have tokens before they reach the web app |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_WebApp_Audit.json) |
+|[Authentication should be enabled on your Function app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc75248c1-ea1d-4a9c-8fc9-29a6aabd5da8) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_functionapp_Audit.json) |
+|[Authentication should be enabled on your web app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95bccee9-a7f8-4bec-9ee9-62c3473701fc) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the web app, or authenticate those that have tokens before they reach the web app |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_WebApp_Audit.json) |
### Ensure FTP deployments are disabled
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/01/2022 Last updated : 05/10/2022 # Details of the CMMC Level 3 Regulatory Compliance built-in initiative The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in Cybersecurity Maturity Model Certification (CMMC) Level 3.
+definition maps to **compliance domains** and **controls** in CMMC Level 3.
For more information about this compliance standard, see [CMMC Level 3](https://www.acq.osd.mil/cmmc/documentation.html). To understand _Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **CMMC Level 3** Regulatory Compliance built-in initiative definition.
+This built-in initiative is deployed as pa