Updates from: 06/14/2021 03:04:30
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
In Azure Active Directory (Azure AD), the term **app provisioning** refers to automatically creating user identities and roles for applications.
-![architecture](./media/user-provisioning/arch-1.png)
+![provisioning scenarios](../governance/media/what-is-provisioning/provisioning.png)
Azure AD to SaaS application provisioning refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more.
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Claims mapping policies do not apply to guest users. If a guest user tries to ac
## Next steps -- To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant (Preview)](active-directory-claims-mapping.md)
+- To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md) - To learn more about extension attributes, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
attestation Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/faq.md
- Title: Frequently asked questions
-description: Answers to frequently asked questions about Microsoft Azure Attestation
- Previously updated : 07/20/2020
-# Frequently asked questions for Microsoft Azure Attestation
-
-This article provides answers to some of the most common questions about [Azure Attestation](overview.md).
-
-If your Azure issue is not addressed in this article, you can also submit an Azure support request on the [Azure support page](https://azure.microsoft.com/support/options/).
-
-## What is Azure PCK caching service and its role in enclave attestation
-
-Azure PCK caching service defines the Azure security baseline for the [Azure Confidential computing (ACC)](../confidential-computing/overview.md) nodes from Intel and caches the data. The cached information will be further used by Azure Attestation in validating Trusted Execution Environments (TEEs).
-
-Azure PCK caching service:
- - Offers high availability
- - Reduces dependencies on externally hosted services and internet connectivity.
- - Fetches the latest versions of Intel certificates, CRLs, Trusted Computing Base (TCB) information and Quoting Enclave identity of the ACC nodes from Intel. The service hence confirms the Azure security baseline to be referred by Azure Attestation while validating the TEEs, greatly reducing attestation failures due to invalidation or revocation of Intel certificates
-
-## Is SGX attestation supported by Azure Attestation in non-Azure environments
-
-No. Azure Attestation depends on the security baseline stated by Azure PCK caching service to validate the TEEs. Azure PCK caching service is currently designed to support only Azure Confidential computing nodes.
-
-## What validations does Azure Attestation perform for attesting SGX enclaves
-
-Azure Attestation is a unified framework for remotely attesting different types of TEEs. Azure Attestation:
-
- - Validates if the trusted root of a signed enclave quote belongs to Intel.
- - Validates if the enclave quote meets the Azure security baseline as defined by Azure PCK caching service.
- - Validates if the SHA256 hash of Enclave Held Data (EHD) in the attestation request object matches the first 32 bytes of reportData field in the enclave quote.
- - Allows customers to create an attestation provider and configure a custom policy. In addition to the above validations, Azure Attestation evaluates the enclave quote against the policy. Policies define authorization rules for the enclave and also dictate issuance rules for generating the attestation token. To confirm if intended software is running in an enclave, customers can add authorization rules to verify if **mrsigner** and **mrenclave** fields in the enclave quote matches the values of customer binaries.
-
-## How can a verifier obtain the collateral for SGX attestation supported by Azure Attestation
-
-In general, for the attestation models with Intel as the root of trust, attestation client talks to enclave APIs to fetch the enclave evidence. Enclave APIs internally call Intel PCK caching service to fetch Intel certificates of the node to be attested. The certificates are used to sign the enclave evidence thereby generating a remotely attestable collateral.
-
-The same process can be implemented for Azure Attestation. However to leverage the benefits offered by Azure PCK caching service, after installing ACC virtual machine, it is recommended to install [Azure DCAP library](https://www.nuget.org/packages/Microsoft.Azure.DCAP). Based on the agreement with Intel, when Azure DCAP library is installed, the requests for generating enclave evidence are redirected from Intel PCK caching service to Azure PCK caching service. Azure DCAP library is supported in Windows and Linux-based environments.
-
-## How to shift to Azure Attestation from other attestation models
-
-- After installing Azure Confidential computing virtual machine, install Azure DCAP library ([Windows](https://www.nuget.org/packages/Microsoft.Azure.DCAP/) / [Linux](https://packages.microsoft.com/ubuntu/18.04/prod/pool/main/a/az-dcap-client/)) to leverage the benefits offered by Azure PCK caching service.
-- Remote attestation client needs to be authored which can retrieve the enclave evidence and send requests to Azure Attestation. See [code samples](/samples/browse/?expanded=azure&terms=attestation) for reference.
-- Attestation requests can be sent to the REST API endpoint of default providers or custom attestation providers.
-- Azure Attestation APIs are protected by Azure AD authentication. Hence the client that invokes attest APIs must be able to obtain and pass a valid Azure AD access token in the attestation request.
-
-## How can the relying party verify the integrity of attestation token
-
-Attestation token generated by Azure Attestation is signed using a self-signed certificate. The certificates are exposed via an [OpenID metadata endpoint](/rest/api/attestation/metadataconfiguration/get). Relying party can retrieve the signing certificates from this endpoint and perform signature verification of the attestation token. Validity time of the attestation token is 8 hours.
-
-## How to identify the certificate to be used for signature verification from the OpenID metadata endpoint
-
-Multiple certificates exposed in the OpenID metadata endpoint correspond to different use cases (e.g. SGX attestation) supported by Azure Attestation. As per the standards specified by [RFC 7515](https://tools.ietf.org/html/rfc7515), the certificate with key ID (kid) matching the *kid* parameter in the attestation token header is to be used for signature verification. If no matching **kid** is found, then it is expected to try all the certificates exposed by OpenID metadata endpoint.
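For illustration only (not part of the original article), here is a minimal Python sketch of selecting the signing key whose `kid` matches the attestation token header, falling back to all exposed keys when there is no match. The metadata URL and field names follow standard OpenID Connect/JWKS conventions and are assumptions here, not values taken from the article.

```python
import base64
import json

import requests

# Hypothetical attestation provider URL; substitute your own provider's metadata endpoint.
metadata_url = "https://<provider>.attest.azure.net/.well-known/openid-configuration"

metadata = requests.get(metadata_url).json()
jwks = requests.get(metadata["jwks_uri"]).json()  # keys used to sign attestation tokens

# An attestation token is a JWT: base64url(header).base64url(payload).base64url(signature)
token = "<attestation-token>"
header_b64 = token.split(".")[0]
header = json.loads(base64.urlsafe_b64decode(header_b64 + "=" * (-len(header_b64) % 4)))

# Per RFC 7515, prefer the key whose 'kid' matches the token header's 'kid';
# if there is no match, fall back to trying every exposed key.
matching = [k for k in jwks["keys"] if k.get("kid") == header.get("kid")]
candidate_keys = matching or jwks["keys"]
```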
-
-## Is it possible for the relying party to share secrets with the validated Trusted Execution Environments (TEEs)
-
-Public key generated within an enclave can be expressed in the Enclave Held Data (EHD) property of the attestation request object sent by the client to Azure Attestation. After confirming if SHA256 hash of EHD is expressed in reportData field of the quote, Azure Attestation includes EHD in the attestation token. Relying party can use the EHD from the verified attestation response to encrypt the secrets and share with the enclave. See [Azure Attestation basic concepts](basic-concepts.md) for more information.
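As a rough sketch of the check described above (an illustration, not code from the article), the relying party's validation of Enclave Held Data against the quote could look like this in Python:

```python
import hashlib

def ehd_matches_quote(enclave_held_data: bytes, report_data: bytes) -> bool:
    """Return True when SHA-256(EHD) equals the first 32 bytes of the quote's reportData field."""
    return hashlib.sha256(enclave_held_data).digest() == report_data[:32]

# A relying party would only trust the EHD (for example, a public key generated
# inside the enclave) after this check and the token signature verification both pass.
```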
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
Learn more about how to develop and configure Azure Functions.
<!-- LINKS -->
[Function app on Consumption plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json
-[Function app on Azure App Service plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/azuredeploy.json
+[Function app on Azure App Service plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-linux/azuredeploy.json
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/azure-secure-isolation-guidance.md
Azure Disk encryption relies on two encryption keys for implementation, as descr
The DEK, encrypted with the KEK, is stored separately and only an entity with access to the KEK can decrypt the DEK. Access to the KEK is guarded by Azure Key Vault where customers can choose to store their keys in [FIPS 140-2 validated hardware security modules](../key-vault/keys/hsm-protected-keys-byok.md).
-For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.md), Azure Disk encryption selects the encryption method in BitLocker based on the version of Windows, e.g., XTS-AES 256 bit for Windows Server 2012 or greater. These crypto modules are FIPS 140-2 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). For [Linux VMs](../virtual-machines/linux/disk-encryption-faq.md), Azure Disk encryption uses the decrypt default of aes-xts-plain64 with a 256-bit volume master key that is FIPS 140-2 validated as part of DM-Crypt validation obtained by suppliers of Linux IaaS VM images in Microsoft Azure Marketplace.
+For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.yml), Azure Disk encryption selects the encryption method in BitLocker based on the version of Windows, e.g., XTS-AES 256 bit for Windows Server 2012 or greater. These crypto modules are FIPS 140-2 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). For [Linux VMs](../virtual-machines/linux/disk-encryption-faq.yml), Azure Disk encryption uses the decrypt default of aes-xts-plain64 with a 256-bit volume master key that is FIPS 140-2 validated as part of DM-Crypt validation obtained by suppliers of Linux IaaS VM images in Microsoft Azure Marketplace.
##### *Server-side encryption for managed disks*
[Azure managed disks](../virtual-machines/managed-disks-overview.md) are block-level storage volumes that are managed by Azure and used with Azure Windows and Linux virtual machines. They simplify disk management for Azure IaaS VMs by handling storage account management transparently for customers. Azure managed disks automatically encrypt customer data by default using [256-bit AES encryption](../virtual-machines/disk-encryption.md) that is FIPS 140-2 validated. For encryption key management, customers have the following choices:
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
Pricing for supported countries/regions is listed in the [Azure Monitor pricing
| 351 | Portugal |
| 1 | Puerto Rico |
| 40 | Romania |
+| 7 | Russia |
| 65 | Singapore |
| 27 | South Africa |
| 82 | South Korea |
| 34 | Spain |
| 41 | Switzerland |
| 886 | Taiwan |
+| 971 | UAE |
| 44 | United Kingdom |
| 1 | United States |
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-custom-events-metrics.md
try
} catch (ex) {
- appInsights.trackException(ex);
+ appInsights.trackException({exception: ex});
}
```
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Last updated 05/21/2020
## Some of my telemetry is missing
*In Application Insights, I only see a fraction of the events that are being generated by my app.*
-* If you are consistently seeing the same fraction, it's probably due to adaptive [sampling](./sampling.md). To confirm this, open Search (from the overview blade) and look at an instance of a Request or other event. At the bottom of the properties section click "..." to get full property details. If Request Count > 1, then sampling is in operation.
-* Otherwise, it's possible that you're hitting a [data rate limit](./pricing.md#limits-summary) for your pricing plan. These limits are applied per minute.
+* If you're consistently seeing the same fraction, it's probably because of adaptive [sampling](../../azure-monitor/app/sampling.md). To confirm this, open Search (from the overview blade) and look at an instance of a Request or other event. To see the full property details, select the ellipsis (**...**) at the bottom of the **Properties** section. If Request Count > 1, sampling is in operation.
+* It's possible that you're hitting a [data rate limit](../../azure-monitor/app/pricing.md#limits-summary) for your pricing plan. These limits are applied per minute.
-*I'm experiencing data loss randomly.*
+*I'm randomly experiencing data loss.*
-* Check if you are experiencing data loss at [Telemetry Channel](telemetry-channels.md#does-the-application-insights-channel-guarantee-telemetry-delivery-if-not-what-are-the-scenarios-in-which-telemetry-can-be-lost)
+* Check whether you're experiencing data loss at [Telemetry Channel](telemetry-channels.md#does-the-application-insights-channel-guarantee-telemetry-delivery-if-not-what-are-the-scenarios-in-which-telemetry-can-be-lost).
-* Check for any known issues in Telemetry Channel [GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/issues)
+* Check for any known issues in Telemetry Channel [GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/issues).
*I'm experiencing data loss in Console App or on Web App when app is about to stop.*
-* SDK channel keeps telemetry in buffer, and sends them in batches. If the application is shutting down, you may need to explicitly call [Flush()](api-custom-events-metrics.md#flushing-data). Behavior of `Flush()` depends on the actual [channel](telemetry-channels.md#built-in-telemetry-channels) used.
+* SDK channel keeps telemetry in buffer, and sends them in batches. If the application is shutting down, you might need to explicitly call [Flush()](api-custom-events-metrics.md#flushing-data). Behavior of `Flush()` depends on the actual [channel](telemetry-channels.md#built-in-telemetry-channels) used.
## Request count collected by Application Insights SDK does not match the IIS log count for my application
Internet Information Services (IIS) logs count all requests that reach IIS, which can inherently differ from the total requests that reach an application. Because of this, it isn't guaranteed that the request count collected by the SDKs will match the total IIS log count.
## No data from my server
-*I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.*
-
-* Probably a firewall issue. [Set firewall exceptions for Application Insights to send data](./ip-addresses.md).
-* IIS Server might be missing some prerequisites: .NET Extensibility 4.5, and ASP.NET 4.5.
+*I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.*
+* This is probably a firewall issue. [Set firewall exceptions for Application Insights to send data](../../azure-monitor/app/ip-addresses.md).
+* IIS Server might be missing some prerequisites, like .NET Extensibility 4.5 or ASP.NET 4.5.
*I [installed Status Monitor](./monitor-performance-live-website-now.md) on my web server to monitor existing apps. I don't see any results.*
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Here are some use cases for telemetry processors:
* Conditionally add custom dimensions.
* Update the span name, which is used to aggregate similar telemetry in the Azure portal.
* Drop specific span attribute(s) to control ingestion costs.
+ * Filter out some metrics to control ingestion costs.
> [!NOTE] > If you are looking to drop specific (whole) spans for controlling ingestion cost,
The trace message or body is the primary display for logs in the Azure portal. L
## Telemetry processor types
-Currently, the three types of telemetry processors are attribute processors, span processors and log processors.
+Currently, the four types of telemetry processors are attribute processors, span processors, log processors, and metric filters.
An attribute processor can insert, update, delete, or hash attributes of a telemetry item (`span` or `log`). It can also use a regular expression to extract one or more new attributes from an existing attribute.
It can also use a regular expression to extract one or more new attributes from
A log processor can update the telemetry name of logs. It can also use a regular expression to extract one or more new attributes from the log name.
+A metric filter can filter out metrics to help control ingestion cost.
+ > [!NOTE] > Currently, telemetry processors process only attributes of type string. They don't process attributes of type Boolean or number.
To begin, create a configuration file named *applicationinsights.json*. Save it
{ "type": "log", ...
+ },
+ {
+ "type": "metric-filter",
+ ...
} ] }
All specified conditions must evaluate to true to result in a match.
]
```
For more information, see [Telemetry processor examples](./java-standalone-telemetry-processors-examples.md).
+
+## Metric filter
+
+Metric filters are used to exclude some metrics in order to help control ingestion cost.
+
+Metric filters only support `exclude` criteria. Metrics that match the `exclude` criteria won't be exported.
+
+To configure this option, under `exclude`, specify the `matchType` and one or more `metricNames`.
+
+* **Required fields**:
+ * `matchType` controls how items in `metricNames` are matched. Possible values are `regexp` and `strict`.
+ * `metricNames` must match at least one of the items.
+
+### Sample usage
+
+```json
+"processors": [
+ {
+ "type": "metric-filter",
+ "exclude": {
+ "matchType": "strict",
+ "metricNames": [
+ "metricA",
+ "metricB"
+ ]
+ }
+ }
+]
+```
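As a rough mental model only (the exact matching semantics are an assumption, not taken from the agent's documentation), `strict` behaves like exact string comparison and `regexp` like a regular-expression match over the metric name:

```python
import re

def metric_excluded(metric_name, match_type, metric_names):
    """Approximate the exclude decision: 'strict' = exact match, 'regexp' = regex match."""
    if match_type == "strict":
        return metric_name in metric_names
    if match_type == "regexp":
        return any(re.fullmatch(pattern, metric_name) for pattern in metric_names)
    raise ValueError(f"unknown matchType: {match_type}")

print(metric_excluded("metricA", "strict", ["metricA", "metricB"]))  # True: dropped
print(metric_excluded("metricC", "strict", ["metricA", "metricB"]))  # False: kept
```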
azure-monitor Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/data-explorer.md
The **Usage** tab allows users to deep dive into the performance of the cluster'
The **tables** tab shows the latest and historical properties of tables in the cluster. You can see which tables are consuming the most space, track growth history by table size, hot data, and the number of rows over time.
-The **cache** tab allows users to analyze their actual queries' look back patterns and compare them to the configured cache policy (for each table). You can identify tables used by the most queries and tables that are not queried at all, and adapt the cache policy accordingly. You may get particular cache policy recommendations on specific tables in Azure Advisor (currently, cache recommendations are available only from the [main Azure Advisor dashboard](/azure/data-explorer/azure-advisor#use-the-azure-advisor-recommendations)), based on actual queries' look back in the past 30 days and an un-optimized cache policy for at least 95% of the queries. Cache reduction recommendations in Azure Advisor are available for clusters that are "bounded by data" (meaning the cluster has low CPU and low ingestion utilization, but because of high data capacity, the cluster could not scale-in or scale-down).
+The **cache** tab allows users to analyze their actual queries' lookback window patterns and compare them to the configured cache policy (for each table). You can identify tables used by the most queries and tables that are not queried at all, and adapt the cache policy accordingly. You may get particular cache policy recommendations on specific tables in Azure Advisor (currently, cache recommendations are available only from the [main Azure Advisor dashboard](https://docs.microsoft.com/azure/data-explorer/azure-advisor#use-the-azure-advisor-recommendations)), based on actual queries' lookback window in the past 30 days and an un-optimized cache policy for at least 95% of the queries. Cache reduction recommendations in Azure Advisor are available for clusters that are "bounded by data" (meaning the cluster has low CPU and low ingestion utilization, but because of high data capacity, the cluster could not scale-in or scale-down).
[![Screenshot of cache details](./media/data-explorer/cache-tab.png)](./media/data-explorer/cache-tab.png#lightbox)
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
Follow the procedure illustrated in [Dedicated Clusters article](./logs-dedicate
> [!IMPORTANT]
> - The recommended way to revoke access to your data is by disabling your key, or deleting access policy in your Key Vault.
-> - Setting the cluster's `identity` `type` to "None" also revokes access to your data, but this approach isn't recommended since you can't revert the revocation when restating the `identity` in the cluster without opening support request.
+> - Setting the cluster's `identity` `type` to `None` also revokes access to your data, but this approach isn't recommended since you can't revert it without contacting support.
The cluster storage will always respect changes in key permissions within an hour or sooner and storage will become unavailable. Any new data ingested to workspaces linked with your cluster gets dropped and won't be recoverable, data becomes inaccessible and queries on these workspaces fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and will be purged when retention is reached. Ingested data in last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This gets deleted on key revocation operation and becomes inaccessible.
Customer-Managed key is provided on dedicated cluster and these operations are r
 - If you create a cluster and get an error "<region-name> doesn't support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
 - Double encryption setting cannot be changed after the cluster has been created.
 - If your cluster is set with User-assigned managed identity, setting `UserAssignedIdentities` with `None` suspends the cluster and prevents access to your data, but you can't revert the revocation and activate the cluster without opening a support request. This limitation isn't applied to System-assigned managed identity.
 - Setting the cluster's `identity` `type` to `None` also revokes access to your data, but this approach isn't recommended since you can't revert it without contacting support. The recommended way to revoke access to your data is [key revocation](#key-revocation).
- - You can't use Customer-managed key with User-assigned managed identity if your Key Vault is in Private-Link (vNet). You can use System-assigned managed identity in this scenario.
+ - You can't use Customer-managed key with User-assigned managed identity if your Key Vault is in Private-Link (vNet). You can use System-assigned managed identity in this scenario.
## Troubleshooting
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
For some resource types, you need to contact support to have the 800 instance li
## microsoft.insights
* metricalerts
+* scheduledQueryRules
## Microsoft.Logic
azure-sql Audit Write Storage Account Behind Vnet Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/audit-write-storage-account-behind-vnet-firewall.md
You can configure auditing to write database events on a storage account behind
> [!IMPORTANT]
> In order to use a storage account behind a virtual network and firewall, you need to set the **isStorageBehindVnet** parameter to true.
-- [Deploy an Azure SQL Server with Auditing enabled to write audit logs to a blob storage](https://azure.microsoft.com/resources/templates/sql-auditing-server-policy-to-blob-storage)
+- [Deploy an Azure SQL Server with Auditing enabled to write audit logs to a blob storage](https://azure.microsoft.com/resources/templates/sql-auditing-server-policy-to-blob-storage/)
> [!NOTE]
> The linked sample is on an external public repository and is provided 'as is', without warranty, and isn't supported under any Microsoft support program/service.
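A minimal sketch (not from the article) of deploying the linked quickstart template with the Python management SDK while setting `isStorageBehindVnet`. The resource group, template URL, and the parameter's value type are placeholders/assumptions to verify against the template you actually deploy.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deploy the auditing quickstart template, enabling support for a storage
# account that sits behind a virtual network and firewall.
poller = client.deployments.begin_create_or_update(
    "<resource-group>",
    "sql-auditing-deployment",
    {
        "properties": {
            "mode": "Incremental",
            "templateLink": {"uri": "<raw-URL-to-azuredeploy.json>"},
            "parameters": {
                # Assumed boolean; check the parameter definition in the template.
                "isStorageBehindVnet": {"value": True},
            },
        }
    },
)
print(poller.result().properties.provisioning_state)
```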
backup About Azure Vm Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/about-azure-vm-restore.md
This article describes how the [Azure Backup service](./backup-overview.md) rest
| [Restore to create a new virtual machine](./backup-azure-arm-restore-vms.md) | Restores the entire VM to OLR (if the source VM still exists) or ALR | <li> If the source VM is lost or corrupt, then you can restore entire VM <li> You can create a copy of the VM <li> You can perform a restore drill for audit or compliance <li> This option won't work for Azure VMs created from Marketplace images (that is, if they aren't available because the license expired). |
| [Restore disks of the VM](./backup-azure-arm-restore-vms.md#restore-disks) | Restore disks attached to the VM | All disks: This option creates the template and restores the disk. You can edit this template with special configurations (for example, availability sets) to meet your requirements and then use both the template and restore the disk to recreate the VM. |
| [Restore specific files within the VM](./backup-azure-restore-files-from-vm.md) | Choose restore point, browse, select files, and restore them to the same (or compatible) OS as the backed-up VM. | If you know which specific files to restore, then use this option instead of restoring the entire VM. |
-| [Restore an encrypted VM](./backup-azure-vms-encryption.md) | From the portal, restore the disks and then use PowerShell to create the VM | <li> [Encrypted VM with Azure Active Directory](../virtual-machines/windows/disk-encryption-windows-aad.md) <li> [Encrypted VM without Azure AD](../virtual-machines/windows/disk-encryption-windows.md) <li> [Encrypted VM *with Azure AD* migrated to *without Azure AD*](../virtual-machines/windows/disk-encryption-faq.md#can-i-migrate-vms-that-were-encrypted-with-an-azure-ad-app-to-encryption-without-an-azure-ad-app) |
+| [Restore an encrypted VM](./backup-azure-vms-encryption.md) | From the portal, restore the disks and then use PowerShell to create the VM | <li> [Encrypted VM with Azure Active Directory](../virtual-machines/windows/disk-encryption-windows-aad.md) <li> [Encrypted VM without Azure AD](../virtual-machines/windows/disk-encryption-windows.md) <li> [Encrypted VM *with Azure AD* migrated to *without Azure AD*](/azure/virtual-machines/windows/disk-encryption-faq#can-i-migrate-vms-that-were-encrypted-with-an-azure-ad-app-to-encryption-without-an-azure-ad-app) |
| [Cross Region Restore](./backup-azure-arm-restore-vms.md#cross-region-restore) | Create a new VM or restore disks to a secondary region (Azure paired region) | <li> **Full outage**: With the cross region restore feature, there's no wait time to recover data in the secondary region. You can initiate restores in the secondary region even before Azure declares an outage. <li> **Partial outage**: Downtime can occur in specific storage clusters where Azure Backup stores your backed-up data or even in-network, connecting Azure Backup and storage clusters associated with your backed-up data. With Cross Region Restore, you can perform a restore in the secondary region using a replica of backed up data in the secondary region. <li> **No outage**: You can conduct business continuity and disaster recovery (BCDR) drills for audit or compliance purposes with the secondary region data. This allows you to perform a restore of backed up data in the secondary region even if there isn't a full or partial outage in the primary region for business continuity and disaster recovery drills. |
## Next steps
container-registry Monitor Service Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/monitor-service-reference.md
The following schemas are in use by Azure Container Registry's resource logs.
## Next steps
- See [Monitor Azure Container Registry](monitor-service.md) for a description of monitoring an Azure container registry.
-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/overview) for details on monitoring Azure resources.
container-registry Monitor Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/monitor-service.md
You can use the Azure Monitor REST API to get information programmatically about
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/diagnostic-logs-schema#top-level-resource-logs-schema). The schema for Azure Container Registry resource logs is found in the [Azure Container Registry Data Reference](monitor-service-reference.md#schemas).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema#top-level-resource-logs-schema). The schema for Azure Container Registry resource logs is found in the [Azure Container Registry Data Reference](monitor-service-reference.md#schemas).
The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
The following table lists common and recommended alert rules for Azure Container
## Next steps
- See [Monitoring Azure Container Registry data reference](monitor-service-reference.md) for a reference of the metrics, logs, and other important values created by Azure Container Registry.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-templates.md
Previously updated : 05/13/2021
Last updated : 06/13/2021
# Manage Azure Cosmos DB Core (SQL) API resources with Azure Resource Manager templates
+
[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
This article only shows Azure Resource Manager template examples for Core (SQL)
> * To change the throughput values, redeploy the template with updated RU/s.
> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
> * Azure Cosmos DB resources cannot be renamed as this violates how Azure Resource Manager works with resource URIs.
+> * To provision throughput at the database level and share across all containers, apply the throughput values to the database options property.
To create any of the Azure Cosmos DB resources below, copy the following example template into a new json file. You can optionally create a parameters json file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates including, [Azure portal](../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md) and [GitHub](../azure-resource-manager/templates/deploy-to-azure-button.md).
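To make the database-level throughput note above concrete, here is an illustrative fragment of such an ARM resource expressed as a Python dict; the parameter names and the apiVersion are placeholders rather than values taken from the article.

```python
# Database-level (shared) throughput goes on the database resource's "options",
# not on each individual container.
shared_throughput_database = {
    "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases",
    "apiVersion": "2021-04-15",  # placeholder; use a current API version
    "name": "[concat(parameters('accountName'), '/', parameters('databaseName'))]",
    "properties": {
        "resource": {"id": "[parameters('databaseName')]"},
        "options": {"throughput": "[parameters('throughput')]"},  # shared by all containers
    },
}
```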
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
If you have **Include in ARM template** selected for deploying global parameters
#### Resolution
Unselect **Include in ARM template** and deploy global parameters with PowerShell as described in Global parameters in CI/CD.
+
+### Extra left "[" displayed in published JSON file
+
+#### Issue
+When you publish ADF with DevOps, an extra left "[" is displayed. ADF automatically adds an extra left "[" to the ARM template in DevOps.
+
+#### Cause
+Because "[" is a reserved character in ARM templates, an extra "[" is added automatically to escape it.
+
+#### Resolution
+This is normal behavior in the ADF publishing process for CI/CD.
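For illustration (a sketch of the ARM escaping rule described above, not ADF's actual implementation): a literal string value that starts with `[` must be escaped as `[[` so Resource Manager doesn't treat it as a template expression.

```python
def escape_arm_literal(value: str) -> str:
    """Prefix an extra '[' when a literal string starts with '[',
    matching how ARM templates escape would-be expressions."""
    return "[" + value if value.startswith("[") else value

print(escape_arm_literal("[1,2,3]"))  # -> "[[1,2,3]"
print(escape_arm_literal("plain"))    # -> "plain"
```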
## Next steps
data-factory Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/frequently-asked-questions.md
- Title: 'Azure Data Factory: Frequently asked questions '
-description: Get answers to frequently asked questions about Azure Data Factory.
- Previously updated : 05/11/2021
-# Azure Data Factory FAQ
--
-This article provides answers to frequently asked questions about Azure Data Factory.
-
-## What is Azure Data Factory?
-
-Data Factory is a fully managed, cloud-based, data-integration ETL service that automates the movement and transformation of data. Like a factory that runs equipment to transform raw materials into finished goods, Azure Data Factory orchestrates existing services that collect raw data and transform it into ready-to-use information.
-
-By using Azure Data Factory, you can create data-driven workflows to move data between on-premises and cloud data stores. And you can process and transform data with Data Flows. ADF also supports external compute engines for hand-coded transformations by using compute services such as Azure HDInsight, Azure Databricks, and the SQL Server Integration Services (SSIS) integration runtime.
-
-With Data Factory, you can execute your data processing either on an Azure-based cloud service or in your own self-hosted compute environment, such as SSIS, SQL Server, or Oracle. After you create a pipeline that performs the action you need, you can schedule it to run periodically (hourly, daily, or weekly, for example), time window scheduling, or trigger the pipeline from an event occurrence. For more information, see [Introduction to Azure Data Factory](introduction.md).
-
-## Compliance and Security Considerations
-
-Azure Data Factory is certified for a range of compliance certifications, including _SOC 1, 2, 3_, _HIPAA BAA_, and _HITRUST_. Full and growing list of certifications can be found [here](data-movement-security-considerations.md). Digital copies for audit reports and compliance certifications can be found in [Service Trust Center](https://servicetrust.microsoft.com/)
-
-### Control flows and scale
-
-To support the diverse integration flows and patterns in the modern data warehouse, Data Factory enables flexible data pipeline modeling. This entails full control flow programming paradigms, which include conditional execution, branching in data pipelines, and the ability to explicitly pass parameters within and across these flows. Control flow also encompasses transforming data through activity dispatch to external execution engines and data flow capabilities, including data movement at scale, via the Copy activity.
-
-Data Factory provides freedom to model any flow style that's required for data integration and that can be dispatched on demand or repeatedly on a schedule. A few common flows that this model enables are:
--- Control flows:
- - Activities can be chained together in a sequence within a pipeline.
- - Activities can be branched within a pipeline.
- - Parameters:
- * Parameters can be defined at the pipeline level and arguments can be passed while you invoke the pipeline on demand or from a trigger.
- * Activities can consume the arguments that are passed to the pipeline.
- - Custom state passing:
- * Activity outputs, including state, can be consumed by a subsequent activity in the pipeline.
- - Looping containers:
- * The foreach activity will iterate over a specified collection of activities in a loop.
-- Trigger-based flows:
 - Pipelines can be triggered on demand, by wall-clock time, or in response to Event Grid topics
-- Delta flows:
- - Parameters can be used to define your high-water mark for delta copy while moving dimension or reference tables from a relational store, either on-premises or in the cloud, to load the data into the lake.
-
-For more information, see [Tutorial: Control flows](tutorial-control-flow.md).
-
-### Data transformed at scale with code-free pipelines
-
-The new browser-based tooling experience provides code-free pipeline authoring and deployment with a modern, interactive web-based experience.
-
-For visual data developers and data engineers, the Data Factory web UI is the code-free design environment that you will use to build pipelines. It's fully integrated with Visual Studio Online Git and provides integration for CI/CD and iterative development with debugging options.
-
-### Rich cross-platform SDKs for advanced users
-
-Data Factory V2 provides a rich set of SDKs that can be used to author, manage, and monitor pipelines by using your favorite IDE, including:
-
-* Python SDK
-* PowerShell CLI
-* C# SDK
-
-Users can also use the documented REST APIs to interface with Data Factory V2.
-
-### Iterative development and debugging by using visual tools
-
-Azure Data Factory visual tools enable iterative development and debugging. You can create your pipelines and do test runs by using the **Debug** capability in the pipeline canvas without writing a single line of code. You can view the results of your test runs in the **Output** window of your pipeline canvas. After your test run succeeds, you can add more activities to your pipeline and continue debugging in an iterative manner. You can also cancel your test runs after they are in progress.
-
-You are not required to publish your changes to the data factory service before selecting **Debug**. This is helpful in scenarios where you want to make sure that the new additions or changes will work as expected before you update your data factory workflows in development, test, or production environments.
-
-### Ability to deploy SSIS packages to Azure
-
-If you want to move your SSIS workloads, you can create a Data Factory and provision an Azure-SSIS integration runtime. An Azure-SSIS integration runtime is a fully managed cluster of Azure VMs (nodes) that are dedicated to run your SSIS packages in the cloud. For step-by-step instructions, see the [Deploy SSIS packages to Azure](./tutorial-deploy-ssis-packages-azure.md) tutorial.
-
-### SDKs
-
-If you are an advanced user and looking for a programmatic interface, Data Factory provides a rich set of SDKs that you can use to author, manage, or monitor pipelines by using your favorite IDE. Language support includes .NET, PowerShell, Python, and REST.
-
-### Monitoring
-
-You can monitor your Data Factories via PowerShell, SDK, or the Visual Monitoring Tools in the browser user interface. You can monitor and manage on-demand, trigger-based, and clock-driven custom flows in an efficient and effective manner. Cancel existing tasks, see failures at a glance, drill down to get detailed error messages, and debug the issues, all from a single pane of glass without context switching or navigating back and forth between screens.
-
-### New features for SSIS in Data Factory
-
-Since the initial public preview release in 2017, Data Factory has added the following features for SSIS:
-- Support for three more configurations/variants of Azure SQL Database to host the SSIS database (SSISDB) of projects/packages:
  - SQL Database with virtual network service endpoints
  - SQL Managed Instance
  - Elastic pool
-- Support for an Azure Resource Manager virtual network on top of a classic virtual network to be deprecated in the future, which lets you inject/join your Azure-SSIS integration runtime to a virtual network configured for SQL Database with virtual network service endpoints/MI/on-premises data access. For more information, see also [Join an Azure-SSIS integration runtime to a virtual network](join-azure-ssis-integration-runtime-virtual-network.md).
-- Support for Azure Active Directory (Azure AD) authentication and SQL authentication to connect to the SSISDB, allowing Azure AD authentication with your Data Factory managed identity for Azure resources
-- Support for bringing your existing SQL Server license to earn substantial cost savings from the Azure Hybrid Benefit option
-- Support for Enterprise Edition of the Azure-SSIS integration runtime that lets you use advanced/premium features, a custom setup interface to install additional components/extensions, and a partner ecosystem. For more information, see also [Enterprise Edition, Custom Setup, and 3rd Party Extensibility for SSIS in ADF](https://blogs.msdn.microsoft.com/ssis/2018/04/27/enterprise-edition-custom-setup-and-3rd-party-extensibility-for-ssis-in-adf/).
-- Deeper integration of SSIS in Data Factory that lets you invoke/trigger first-class Execute SSIS Package activities in Data Factory pipelines and schedule them via SSMS. For more information, see also [Modernize and extend your ETL/ELT workflows with SSIS activities in ADF pipelines](https://blogs.msdn.microsoft.com/ssis/2018/05/23/modernize-and-extend-your-etlelt-workflows-with-ssis-activities-in-adf-pipelines/).
-
-## What is the integration runtime?
-
-The integration runtime is the compute infrastructure that Azure Data Factory uses to provide the following data integration capabilities across various network environments:
-- **Data movement**: For data movement, the integration runtime moves the data between the source and destination data stores, while providing support for built-in connectors, format conversion, column mapping, and performant and scalable data transfer.
-- **Dispatch activities**: For transformation, the integration runtime provides capability to natively execute SSIS packages.
-- **Execute SSIS packages**: The integration runtime natively executes SSIS packages in a managed Azure compute environment. The integration runtime also supports dispatching and monitoring transformation activities running on a variety of compute services, such as Azure HDInsight, Azure Machine Learning, SQL Database, and SQL Server.
-
-You can deploy one or many instances of the integration runtime as required to move and transform data. The integration runtime can run on an Azure public network or on a private network (on-premises, Azure Virtual Network, or Amazon Web Services virtual private cloud [VPC]).
-
-For more information, see [Integration runtime in Azure Data Factory](concepts-integration-runtime.md).
-
-## What is the limit on the number of integration runtimes?
-
-There is no hard limit on the number of integration runtime instances you can have in a data factory. There is, however, a limit on the number of VM cores that the integration runtime can use per subscription for SSIS package execution. For more information, see [Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits).
-
-## What are the top-level concepts of Azure Data Factory?
-
-An Azure subscription can have one or more Azure Data Factory instances (or data factories). Azure Data Factory contains four key components that work together as a platform on which you can compose data-driven workflows with steps to move and transform data.
-
-### Pipelines
-
-A data factory can have one or more pipelines. A pipeline is a logical grouping of activities to perform a unit of work. Together, the activities in a pipeline perform a task. For example, a pipeline can contain a group of activities that ingest data from an Azure blob and then run a Hive query on an HDInsight cluster to partition the data. The benefit is that you can use a pipeline to manage the activities as a set instead of having to manage each activity individually. You can chain together the activities in a pipeline to operate them sequentially, or you can operate them independently, in parallel.
-
-### Data flows
-
-Data flows are objects that you build visually in Data Factory which transform data at scale on backend Spark services. You do not need to understand programming or Spark internals. Just design your data transformation intent using graphs (Mapping) or spreadsheets (Wrangling).
-
-### Activities
-
-Activities represent a processing step in a pipeline. For example, you can use a Copy activity to copy data from one data store to another data store. Similarly, you can use a Hive activity, which runs a Hive query on an Azure HDInsight cluster to transform or analyze your data. Data Factory supports three types of activities: data movement activities, data transformation activities, and control activities.
-
-### Datasets
-
-Datasets represent data structures within the data stores, which simply point to or reference the data you want to use in your activities as inputs or outputs.
-
-### Linked services
-
-Linked services are much like connection strings, which define the connection information needed for Data Factory to connect to external resources. Think of it this way: A linked service defines the connection to the data source, and a dataset represents the structure of the data. For example, an Azure Storage linked service specifies the connection string to connect to the Azure Storage account. And an Azure blob dataset specifies the blob container and the folder that contains the data.
-
-Linked services have two purposes in Data Factory:
-- To represent a *data store* that includes, but is not limited to, a SQL Server instance, an Oracle database instance, a file share, or an Azure Blob storage account. For a list of supported data stores, see [Copy Activity in Azure Data Factory](copy-activity-overview.md).
-- To represent a *compute resource* that can host the execution of an activity. For example, the HDInsight Hive activity runs on an HDInsight Hadoop cluster. For a list of transformation activities and supported compute environments, see [Transform data in Azure Data Factory](transform-data.md).
-
-### Triggers
-
-Triggers represent units of processing that determine when a pipeline execution is kicked off. There are different types of triggers for different types of events.
-
-### Pipeline runs
-
-A pipeline run is an instance of a pipeline execution. You usually instantiate a pipeline run by passing arguments to the parameters that are defined in the pipeline. You can pass the arguments manually or within the trigger definition.
-
-### Parameters
-
-Parameters are key-value pairs in a read-only configuration. You define parameters in a pipeline, and you pass the arguments for the defined parameters during execution from a run context. The run context is created by a trigger or from a pipeline that you execute manually. Activities within the pipeline consume the parameter values.
-
-A dataset is a strongly typed parameter and an entity that you can reuse or reference. An activity can reference datasets, and it can consume the properties that are defined in the dataset definition.
-
-A linked service is also a strongly typed parameter that contains connection information to either a data store or a compute environment. It's also an entity that you can reuse or reference.
-
-### Control flows
-
-Control flows orchestrate pipeline activities that include chaining activities in a sequence, branching, parameters that you define at the pipeline level, and arguments that you pass as you invoke the pipeline on demand or from a trigger. Control flows also include custom state passing and looping containers (that is, foreach iterators).
--
-For more information about Data Factory concepts, see the following articles:
-- [Dataset and linked services](concepts-datasets-linked-services.md)
-- [Pipelines and activities](concepts-pipelines-activities.md)
-- [Integration runtime](concepts-integration-runtime.md)
-
-## What is the pricing model for Data Factory?
-
-For Azure Data Factory pricing details, see [Data Factory pricing details](https://azure.microsoft.com/pricing/details/data-factory/).
-
-## How can I stay up-to-date with information about Data Factory?
-
-For the most up-to-date information about Azure Data Factory, go to the following sites:
-- [Blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-- [Documentation home page](./index.yml)
-- [Product home page](https://azure.microsoft.com/services/data-factory/)
-
-## Technical deep dive
-
-### How can I schedule a pipeline?
-
-You can use the scheduler trigger or time window trigger to schedule a pipeline. The trigger uses a wall-clock calendar schedule, which can schedule pipelines periodically or in calendar-based recurrent patterns (for example, on Mondays at 6:00 PM and Thursdays at 9:00 PM). For more information, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md).
-
-### Can I pass parameters to a pipeline run?
-
-Yes, parameters are a first-class, top-level concept in Data Factory. You can define parameters at the pipeline level and pass arguments as you execute the pipeline run on demand or by using a trigger.
-
-### Can I define default values for the pipeline parameters?
-
-Yes. You can define default values for the parameters in the pipelines.
-
-### Can an activity in a pipeline consume arguments that are passed to a pipeline run?
-
-Yes. Each activity within the pipeline can consume the parameter value that's passed to the pipeline and run with the `@parameter` construct.
-
-### Can an activity output property be consumed in another activity?
-
-Yes. An activity output can be consumed in a subsequent activity with the `@activity` construct.
-
-### How do I gracefully handle null values in an activity output?
-
-You can use the `@coalesce` construct in the expressions to handle null values gracefully.
-
-## Mapping data flows
-
-### I need help troubleshooting my data flow logic. What info do I need to provide to get help?
-
-When Microsoft provides help or troubleshooting with data flows, please provide the Data Flow Script. This is the code-behind script from your data flow graph. From the ADF UI, open your data flow, then click the "Script" button at the top-right corner. Copy and paste this script or save it in a text file.
-
-### How do I access data by using the other 90 dataset types in Data Factory?
-
-The mapping data flow feature currently allows Azure SQL Database, Azure Synapse Analytics, delimited text files from Azure Blob storage or Azure Data Lake Storage Gen2, and Parquet files from Blob storage or Data Lake Storage Gen2 natively for source and sink.
-
-Use the Copy activity to stage data from any of the other connectors, and then execute a Data Flow activity to transform data after it's been staged. For example, your pipeline will first copy into Blob storage, and then a Data Flow activity will use a dataset in source to transform that data.
-
-### Is the self-hosted integration runtime available for data flows?
-
-Self-hosted IR is an ADF pipeline construct that you can use with the Copy Activity to acquire or move data to and from on-prem or VM-based data sources and sinks. The virtual machines that you use for a self-hosted IR can also be placed inside of the same VNET as your protected data stores for access to those data stores from ADF. With data flows, you'll achieve these same end-results using the Azure IR with managed VNET instead.
-
-### Does the data flow compute engine serve multiple tenants?
-
-Clusters are never shared. We guarantee isolation for each job run in production. In a debug scenario, each user gets their own cluster, and all debug sessions initiated by that user go to that cluster.
-
-### Is there a way to write attributes in cosmos db in the same order as specified in the sink in ADF data flow?
-
-For Cosmos DB, the underlying format of each document is a JSON object, which is an unordered set of name/value pairs, so the order cannot be preserved.
-
-### Why is a user unable to use data preview in data flows?
-
-You should check permissions for custom role. There are multiple actions involved in the dataflow data preview. You start by checking network traffic while debugging on your browser. Please follow all of the actions, for details, please refer to [Resource provider.](../role-based-access-control/resource-provider-operations.md#microsoftdatafactory)
-
-### In ADF, can I calculate value for a new column from existing column from mapping?
-
-You can use derive transformation in mapping data flow to create a new column on the logic you want. When creating a derived column, you can either generate a new column or update an existing one. In the Column textbox, enter in the column you are creating. To override an existing column in your schema, you can use the column dropdown. To build the derived column's expression, click on the Enter expression textbox. You can either start typing your expression or open up the expression builder to construct your logic.
-
-### Why is mapping data flow preview failing with a gateway timeout?
-
-Try using a larger cluster, and set the row limits in debug settings to a smaller value to reduce the size of the debug output.
-
-### How to parameterize column name in dataflow?
-
-Column names can be parameterized similarly to other properties. For example, in a derived column you can use **$ColumnNameParam = toString(byName($myColumnNameParamInData))**. These parameters can be passed from pipeline execution down to data flows.
-
-### The data flow advisory about TTL and costs
-
-This troubleshoot document may help to resolve your issues: [Mapping data flows performance and tuning guide-Time to live](./concepts-data-flow-performance.md#time-to-live).
--
-## Wrangling data flow (Data flow power query)
-
-### What are the supported regions for wrangling data flow?
-
-Data factory is available in following [regions.](https://azure.microsoft.com/global-infrastructure/services/?products=data-factory)
-Power query feature is being rolled out to all regions. If the feature is not available in your region, please check with support.
-
-### What are the limitations and constraints with wrangling data flow?
-
-Dataset names can only contain alpha-numeric characters. The following data stores are supported:
-
-* DelimitedText dataset in Azure Blob Storage using account key authentication
-* DelimitedText dataset in Azure Data Lake Storage gen2 using account key or service principal authentication
-* DelimitedText dataset in Azure Data Lake Storage gen1 using service principal authentication
-* Azure SQL Database and Data Warehouse using SQL authentication. See the supported SQL types below. There is no PolyBase or staging support for data warehouse.
-
-At this time, linked service Key Vault integration is not supported in wrangling data flows.
-
-### What is the difference between mapping and wrangling data flows?
-
-Mapping data flows provide a way to transform data at scale without any coding required. You can design a data transformation job in the data flow canvas by constructing a series of transformations. Start with any number of source transformations followed by data transformation steps. Complete your data flow with a sink to land your results in a destination. Mapping data flow is great at mapping and transforming data with both known and unknown schemas in the sinks and sources.
-
-Wrangling data flows allow you to do agile data preparation and exploration using the Power Query Online mashup editor at scale via Spark execution. With the rise of data lakes, sometimes you just need to explore a data set or create a dataset in the lake. You aren't mapping to a known target. Wrangling data flows are used for less formal and model-based analytics scenarios.
-
-### What is the difference between Power Platform Dataflows and wrangling data flows?
-
-Power Platform Dataflows allow users to import and transform data from a wide range of data sources into the Common Data Service and Azure Data Lake to build PowerApps applications, Power BI reports or Flow automations. Power Platform Dataflows use the established Power Query data preparation experiences, similar to Power BI and Excel. Power Platform Dataflows also enable easy reuse within an organization and automatically handle orchestration (e.g. automatically refreshing dataflows that depend on another dataflow when the former one is refreshed).
-
-Azure Data Factory (ADF) is a managed data integration service that allows data engineers and citizen data integrators to create complex hybrid extract-transform-load (ETL) and extract-load-transform (ELT) workflows. Wrangling data flow in ADF empowers users with a code-free, serverless environment that simplifies data preparation in the cloud and scales to any data size with no infrastructure management required. It uses the Power Query data preparation technology (also used in Power Platform dataflows, Excel, Power BI) to prepare and shape the data. Built to handle all the complexities and scale challenges of big data integration, wrangling data flows allow users to quickly prepare data at scale via Spark execution. Users can build resilient data pipelines in an accessible visual environment with our browser-based interface and let ADF handle the complexities of Spark execution. Build schedules for your pipelines and monitor your data flow executions from the ADF monitoring portal. Easily manage data availability SLAs with ADF's rich availability monitoring and alerts, and use the built-in continuous integration and deployment capabilities to save and manage your flows in a managed environment. Establish alerts and view execution plans to validate that your logic is performing as planned as you tune your data flows.
-
-### Supported SQL Types
-
-Wrangling data flow supports the following data types in SQL. You will get a validation error for using a data type that isn't supported.
-
-* short
-* double
-* real
-* float
-* char
-* nchar
-* varchar
-* nvarchar
-* integer
-* int
-* bit
-* boolean
-* smallint
-* tinyint
-* bigint
-* long
-* text
-* date
-* datetime
-* datetime2
-* smalldatetime
-* timestamp
-* uniqueidentifier
-* xml
--
-## Next steps
-
-For step-by-step instructions to create a data factory, see the following tutorials:
--- [Quick-start: Create a data factory](quickstart-create-data-factory-dot-net.md)-- [Tutorial: Copy data in the cloud](tutorial-copy-data-dot-net.md)
data-factory Data Factory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-faq.md
- Title: Azure Data Factory - Frequently Asked Questions
-description: Frequently asked questions about Azure Data Factory.
----- Previously updated : 01/10/2018--
-# Azure Data Factory - Frequently Asked Questions
-> [!NOTE]
-> This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [frequently asked question - Data Factory](../frequently-asked-questions.md).
--
-## General questions
-### What is Azure Data Factory?
-Data Factory is a cloud-based data integration service that **automates the movement and transformation of data**. Just like a factory that runs equipment to take raw materials and transform them into finished goods, Data Factory orchestrates existing services that collect raw data and transform it into ready-to-use information.
-
-Data Factory allows you to create data-driven workflows to move data between both on-premises and cloud data stores as well as process/transform data using compute services such as Azure HDInsight and Azure Data Lake Analytics. After you create a pipeline that performs the action that you need, you can schedule it to run periodically (hourly, daily, weekly etc.).
-
-For more information, see [Overview & Key Concepts](data-factory-introduction.md).
-
-### Where can I find pricing details for Azure Data Factory?
-See [Data Factory Pricing Details page][adf-pricing-details] for the pricing details for the Azure Data Factory.
-
-### How do I get started with Azure Data Factory?
-* For an overview of Azure Data Factory, see [Introduction to Azure Data Factory](data-factory-introduction.md).
-* For a tutorial on how to **copy/move data** using Copy Activity, see [Copy data from Azure Blob Storage to Azure SQL Database](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md).
-* For a tutorial on how to **transform data** using HDInsight Hive Activity, see [Process data by running Hive script on Hadoop cluster](data-factory-build-your-first-pipeline.md).
-
-### What is the Data Factory's region availability?
-Data Factory is available in **US West** and **North Europe**. The compute and storage services used by data factories can be in other regions. See [Supported regions](data-factory-introduction.md#supported-regions).
-
-### What are the limits on number of data factories/pipelines/activities/datasets?
-See **Azure Data Factory Limits** section of the [Azure Subscription and Service Limits, Quotas, and Constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits) article.
-
-### What is the authoring/developer experience with Azure Data Factory service?
-You can author/create data factories using one of the following tools/SDKs:
-
-* **Visual Studio**
- You can use Visual Studio to create an Azure data factory. See [Build your first data pipeline using Visual Studio](data-factory-build-your-first-pipeline-using-vs.md) for details.
-* **Azure PowerShell**
- See [Create and monitor Azure Data Factory using Azure PowerShell](data-factory-build-your-first-pipeline-using-powershell.md) for a tutorial/walkthrough for creating a data factory using PowerShell. See [Data Factory Cmdlet Reference][adf-powershell-reference] content on MSDN Library for a comprehensive documentation of Data Factory cmdlets.
-* **.NET Class Library**
- You can programmatically create data factories by using Data Factory .NET SDK. See [Create, monitor, and manage data factories using .NET SDK](data-factory-create-data-factories-programmatically.md) for a walkthrough of creating a data factory using .NET SDK. See [Data Factory Class Library Reference][msdn-class-library-reference] for a comprehensive documentation of Data Factory .NET SDK.
-* **REST API**
- You can also use the REST API exposed by the Azure Data Factory service to create and deploy data factories. See [Data Factory REST API Reference][msdn-rest-api-reference] for a comprehensive documentation of Data Factory REST API.
-* **Azure Resource Manager Template**
-  See [Tutorial: Build your first Azure data factory using Azure Resource Manager template](data-factory-build-your-first-pipeline-using-arm.md) for details.
-
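For example, with the Azure PowerShell option above, a version 1 data factory can be created with a single cmdlet. This is a minimal sketch; the resource group must already exist and the names here are placeholders:

```powershell
# Create a version 1 data factory in the West US region (placeholder names)
New-AzDataFactory -ResourceGroupName "ADFTutorialResourceGroup" -Name "MyFirstDataFactory" -Location "West US"
```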
-### Can I rename a data factory?
-No. Like other Azure resources, the name of an Azure data factory cannot be changed.
-
-### Can I move a data factory from one Azure subscription to another?
-Yes. Use the **Move** button on your data factory blade as shown in the following diagram:
-
-![Move data factory](media/data-factory-faq/move-data-factory.png)
-
-### What are the compute environments supported by Data Factory?
-The following table provides a list of compute environments supported by Data Factory and the activities that can run on them.
-
-| Compute environment | activities |
-| | |
-| [On-demand HDInsight cluster](data-factory-compute-linked-services.md#azure-hdinsight-on-demand-linked-service) or [your own HDInsight cluster](data-factory-compute-linked-services.md#azure-hdinsight-linked-service) |[DotNet](data-factory-use-custom-activities.md), [Hive](data-factory-hive-activity.md), [Pig](data-factory-pig-activity.md), [MapReduce](data-factory-map-reduce.md), [Hadoop Streaming](data-factory-hadoop-streaming-activity.md) |
-| [Azure Batch](data-factory-compute-linked-services.md#azure-batch-linked-service) |[DotNet](data-factory-use-custom-activities.md) |
-| [Azure Machine Learning Studio (classic)](data-factory-compute-linked-services.md#azure-machine-learning-studio-classic-linked-service) |[Studio (classic) activities: Batch Execution and Update Resource](data-factory-azure-ml-batch-execution-activity.md) |
-| [Azure Data Lake Analytics](data-factory-compute-linked-services.md#azure-data-lake-analytics-linked-service) |[Data Lake Analytics U-SQL](data-factory-usql-activity.md) |
-| [Azure SQL](data-factory-compute-linked-services.md#azure-sql-linked-service), [Azure Synapse Analytics](data-factory-compute-linked-services.md#azure-synapse-analytics-linked-service), [SQL Server](data-factory-compute-linked-services.md#sql-server-linked-service) |[Stored Procedure](data-factory-stored-proc-activity.md) |
-
-### How does Azure Data Factory compare with SQL Server Integration Services (SSIS)?
-See the [Azure Data Factory vs. SSIS](https://www.sqlbits.com/Sessions/Event15/Azure_Data_Factory_vs_SSIS) presentation from one of our MVPs (Most Valuable Professionals): Reza Rad. Some of the recent changes in Data Factory may not be listed in the slide deck. We are continuously adding more capabilities to Azure Data Factory. We will incorporate these updates into the comparison of data integration technologies from Microsoft sometime later this year.
-
-## Activities - FAQ
-### What are the different types of activities you can use in a Data Factory pipeline?
-* [Data Movement Activities](data-factory-data-movement-activities.md) to move data.
-* [Data Transformation Activities](data-factory-data-transformation-activities.md) to process/transform data.
-
-### When does an activity run?
-The **availability** configuration setting in the output data table determines when the activity is run. If input datasets are specified, the activity checks whether all the input data dependencies are satisfied (that is, **Ready** state) before it starts running.
-
-## Copy Activity - FAQ
-### Is it better to have a pipeline with multiple activities or a separate pipeline for each activity?
-Pipelines are supposed to bundle related activities. If the datasets that connect them are not consumed by any other activity outside the pipeline, you can keep the activities in one pipeline. This way, you would not need to chain pipeline active periods so that they align with each other. Also, the data integrity in the tables internal to the pipeline is better preserved when updating the pipeline. Pipeline update essentially stops all the activities within the pipeline, removes them, and creates them again. From an authoring perspective, it might also be easier to see the flow of data within the related activities in one JSON file for the pipeline.
-
-### What are the supported data stores?
-Copy Activity in Data Factory copies data from a source data store to a sink data store. Data Factory supports the following data stores. Data from any source can be written to any sink. Click a data store to learn how to copy data to and from that store.
--
-> [!NOTE]
-> Data stores with * can be on-premises or on Azure IaaS, and require you to install [Data Management Gateway](data-factory-data-management-gateway.md) on an on-premises/Azure IaaS machine.
-
-### What are the supported file formats?
-Azure Data Factory supports the following file format types:
-
-* [Text format](data-factory-supported-file-and-compression-formats.md#text-format)
-* [JSON format](data-factory-supported-file-and-compression-formats.md#json-format)
-* [Avro format](data-factory-supported-file-and-compression-formats.md#avro-format)
-* [ORC format](data-factory-supported-file-and-compression-formats.md#orc-format)
-* [Parquet format](data-factory-supported-file-and-compression-formats.md#parquet-format)
-
-### Where is the copy operation performed?
-See [Globally available data movement](data-factory-data-movement-activities.md#global) section for details. In short, when an on-premises data store is involved, the copy operation is performed by the Data Management Gateway in your on-premises environment. And, when the data movement is between two cloud stores, the copy operation is performed in the region closest to the sink location in the same geography.
-
-## HDInsight Activity - FAQ
-### What regions are supported by HDInsight?
-See the Geographic Availability section of the [HDInsight Pricing Details][hdinsight-supported-regions] page.
-
-### What region is used by an on-demand HDInsight cluster?
-The on-demand HDInsight cluster is created in the same region where the storage you specified to be used with the cluster exists.
-
-### How to associate additional storage accounts to your HDInsight cluster?
-If you are using your own HDInsight Cluster (BYOC - Bring Your Own Cluster), see the following topics:
-
-* [Using an HDInsight Cluster with Alternate Storage Accounts and Metastores][hdinsight-alternate-storage]
-* [Use Additional Storage Accounts with HDInsight Hive][hdinsight-alternate-storage-2]
-
-If you are using an on-demand cluster that is created by the Data Factory service, specify additional storage accounts for the HDInsight linked service so that the Data Factory service can register them on your behalf. In the JSON definition for the on-demand linked service, use **additionalLinkedServiceNames** property to specify alternate storage accounts as shown in the following JSON snippet:
-
-```JSON
-{
- "name": "MyHDInsightOnDemandLinkedService",
- "properties":
- {
- "type": "HDInsightOnDemandLinkedService",
- "typeProperties": {
- "version": "3.5",
- "clusterSize": 1,
- "timeToLive": "00:05:00",
- "osType": "Linux",
- "linkedServiceName": "LinkedService-SampleData",
- "additionalLinkedServiceNames": [ "otherLinkedServiceName1", "otherLinkedServiceName2" ]
- }
- }
-}
-```
-In the example above, otherLinkedServiceName1 and otherLinkedServiceName2 represent linked services whose definitions contain credentials that the HDInsight cluster needs to access alternate storage accounts.
-
-## Slices - FAQ
-### Why are my input slices not in Ready state?
-A common mistake is not setting **external** property to **true** on the input dataset when the input data is external to the data factory (not produced by the data factory).
-
-In the following example, you only need to set **external** to true on **dataset1**.
-
-**DataFactory1**
-Pipeline 1: dataset1 -> activity1 -> dataset2 -> activity2 -> dataset3
-Pipeline 2: dataset3-> activity3 -> dataset4
-
-If you have another data factory with a pipeline that takes dataset4 (produced by pipeline 2 in data factory 1), mark dataset4 as an external dataset because the dataset is produced by a different data factory (DataFactory1, not DataFactory2).
-
-**DataFactory2**
-Pipeline 1: dataset4->activity4->dataset5
-
-If the external property is properly set, verify whether the input data exists in the location specified in the input dataset definition.
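As a minimal sketch (the dataset name, linked service, and folder path are placeholders), the **external** property is set at the top level of the dataset's **properties** in its JSON definition:

```json
{
    "name": "dataset1",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": "StorageLinkedService",
        "external": true,
        "typeProperties": {
            "folderPath": "adfgetstarted/inputdata"
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        }
    }
}
```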
-
-### How to run a slice at a time other than midnight when the slice is being produced daily?
-Use the **offset** property to specify the time at which you want the slice to be produced. See [Dataset availability](data-factory-create-datasets.md#dataset-availability) section for details about this property. Here is a quick example:
-
-```json
-"availability":
-{
- "frequency": "Day",
- "interval": 1,
- "offset": "06:00:00"
-}
-```
-Daily slices start at **6 AM** instead of the default midnight.
-
-### How can I rerun a slice?
-You can rerun a slice in one of the following ways:
-
-* Use Monitor and Manage App to rerun an activity window or slice. See [Rerun selected activity windows](data-factory-monitor-manage-app.md#perform-batch-actions) for instructions.
-* Click **Run** in the command bar on the **DATA SLICE** blade for the slice in the Azure portal.
-* Run **Set-AzDataFactorySliceStatus** cmdlet with Status set to **Waiting** for the slice.
-
- ```powershell
- Set-AzDataFactorySliceStatus -Status Waiting -ResourceGroupName $ResourceGroup -DataFactoryName $df -TableName $table -StartDateTime "02/26/2015 19:00:00" -EndDateTime "02/26/2015 20:00:00"
- ```
- See [Set-AzDataFactorySliceStatus][set-azure-datafactory-slice-status] for details about the cmdlet.
-
-### How long did it take to process a slice?
-Use Activity Window Explorer in Monitor & Manage App to know how long it took to process a data slice. See [Activity Window Explorer](data-factory-monitor-manage-app.md#activity-window-explorer) for details.
-
-You can also do the following in the Azure portal:
-
-1. Click **Datasets** tile on the **DATA FACTORY** blade for your data factory.
-2. Click the specific dataset on the **Datasets** blade.
-3. Select the slice that you are interested in from the **Recent slices** list on the **TABLE** blade.
-4. Click the activity run from the **Activity Runs** list on the **DATA SLICE** blade.
-5. Click **Properties** tile on the **ACTIVITY RUN DETAILS** blade.
-6. You should see the **DURATION** field with a value. This value is the time taken to process the slice.
-
-### How to stop a running slice?
-If you need to stop the pipeline from executing, you can use [Suspend-AzDataFactoryPipeline](/powershell/module/az.datafactory/suspend-azdatafactorypipeline) cmdlet. Currently, suspending the pipeline does not stop the slice executions that are in progress. Once the in-progress executions finish, no extra slice is picked up.
-
-If you really want to stop all the executions immediately, the only way would be to delete the pipeline and create it again. If you choose to delete the pipeline, you do NOT need to delete tables and linked services used by the pipeline.
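For example (placeholder names), suspending a pipeline from PowerShell looks like this:

```powershell
# Suspend the pipeline; slice executions already in progress still run to completion
Suspend-AzDataFactoryPipeline -ResourceGroupName $ResourceGroup -DataFactoryName $df -Name "MyPipeline"
```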
-
-[create-factory-using-dotnet-sdk]: data-factory-create-data-factories-programmatically.md
-[msdn-class-library-reference]: /dotnet/api/microsoft.azure.management.datafactories.models
-[msdn-rest-api-reference]: /rest/api/datafactory/
-
-[adf-powershell-reference]: /powershell/module/az.datafactory/
-[azure-portal]: https://portal.azure.com
-[set-azure-datafactory-slice-status]: /powershell/module/az.datafactory/set-Azdatafactoryslicestatus
-
-[adf-pricing-details]: https://go.microsoft.com/fwlink/?LinkId=517777
-[hdinsight-supported-regions]: https://azure.microsoft.com/pricing/details/hdinsight/
-[hdinsight-alternate-storage]: https://social.technet.microsoft.com/wiki/contents/articles/23256.using-an-hdinsight-cluster-with-alternate-storage-accounts-and-metastores.aspx
-[hdinsight-alternate-storage-2]: /archive/blogs/cindygross/use-additional-storage-accounts-with-hdinsight-hive
dedicated-hsm Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/faq.md
- Title: Frequently asked questions - Azure Dedicated HSM | Microsoft Docs
-description: Find answers to common questions about Azure Dedicated Hardware Security Module, such as basic information, interoperability, high availability, and support.
---
-tags: azure-resource-manager
---- Previously updated : 03/25/2021-
-#Customer intent: As an IT Pro, Decision maker I am looking for key storage capability within Azure Cloud that meets FIPS 140-2 Level 3 certification and that gives me exclusive access to the hardware.
--
-# Frequently asked questions (FAQ)
-
-Find answers to common questions about Microsoft Azure Dedicated HSM.
-
-## The Basics
-
-### Q: What is a hardware security module (HSM)?
-
-A Hardware Security Module (HSM) is a physical computing device used to safeguard and manage cryptographic keys. Keys stored in HSMs can be used for cryptographic operations. The key material stays safely in tamper-resistant, tamper-evident hardware modules. The HSM only allows authenticated and authorized applications to use the keys. The key material never leaves the HSM protection boundary.
-
-### Q: What is the Azure Dedicated HSM offering?
-
-Azure Dedicated HSM is a cloud-based service that provides HSMs hosted in Azure datacenters that are directly connected to a customer's virtual network. These HSMs are dedicated [Thales Luna 7 HSM](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) network appliances. They are deployed directly into a customer's private IP address space, and Microsoft does not have any access to the cryptographic functionality of the HSMs. Only the customer has full administrative and cryptographic control over these devices. Customers are responsible for the management of the device, and they can get full activity logs directly from their devices. Dedicated HSMs help customers meet compliance/regulatory requirements such as FIPS 140-2 Level 3, HIPAA, PCI-DSS, eIDAS, and many others.
-
-### Q: What hardware is used for Dedicated HSM?
-
-Microsoft has partnered with Thales to deliver the Azure Dedicated HSM service. The specific device used is the [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms). This device not only provides [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated firmware, but also offers low-latency, high performance, and high capacity via 10 partitions.
-
-### Q: What is an HSM used for?
-
-HSMs are used for storing cryptographic keys that are used for cryptographic functionality such as TLS (transport layer security), encrypting data, PKI (public key infrastructure), DRM (digital rights management), and signing documents.
-
-### Q: How does Dedicated HSM work?
-
-Customers can provision HSMs in specific regions using PowerShell or the command-line interface. The customer specifies which virtual network the HSMs will be connected to, and once provisioned the HSMs will be available in the designated subnet at assigned IP addresses in the customer's private IP address space. Then customers can connect to the HSMs using SSH for appliance management and administration, set up HSM client connections, initialize HSMs, create partitions, and define and assign roles such as partition officer, crypto officer, and crypto user. The customer then uses Thales-provided HSM client tools/SDK/software to perform cryptographic operations from their applications.
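As a hedged sketch of the provisioning step, assuming an Azure Resource Manager template that defines a `Microsoft.HardwareSecurityModules/dedicatedHSMs` resource wired to your virtual network (the file name, resource group, and parameter names here are placeholders), the deployment could be run from the Azure CLI:

```azurecli
# Deploy a Dedicated HSM ARM template into an existing resource group
az deployment group create \
    --resource-group MyHsmResourceGroup \
    --template-file dedicated-hsm-template.json \
    --parameters hsmName=MyDedicatedHsm
```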
-
-### Q: What software is provided with the Dedicated HSM service?
-
-Thales supplies all software for the HSM device once provisioned by Microsoft. The software is available at the [Thales customer support portal](https://supportportal.thalesgroup.com/csm). Customers using the Dedicated HSM service are required to be registered for Thales support and have a Customer ID that enables access and download of relevant software. The supported client software is version 7.2, which is compatible with the FIPS 140-2 Level 3 validated firmware version 7.0.3.
-
-### Q: What extra costs may be incurred with Dedicated HSM service?
-
-The following items will incur extra cost when using the Dedicated HSM service.
-* Use of a dedicated on-premises backup device with the Dedicated HSM service is feasible; however, this will incur an extra cost, and the device should be sourced directly from Thales.
-* Dedicated HSM is provided with a 10 partition license. If a customer requires more partitions, this will incur an extra cost for additional licenses directly sourced from Thales.
-* Dedicated HSM requires networking infrastructure (VNET, VPN Gateway, Etc.) and resources such as virtual machines for device configuration. These additional resources will incur extra costs and are not included in the Dedicated HSM service pricing.
-
-### Q: Does Azure Dedicated HSM offer Password-based and PED-based authentication?
-
-At this time, Azure Dedicated HSM only provides HSMs with password-based authentication.
-
-### Q: Will Azure Dedicated HSM host my HSMs for me?
-
-Microsoft only offers the Thales Luna 7 HSM model A790 via the Dedicated HSM service and cannot host any customer-provided devices.
-
-### Q: Does Azure Dedicated HSM support payment (PIN/EFT) features?
-
-The Azure Dedicated HSM service uses Thales Luna 7 HSMs. These devices do not support payment HSM specific functionality (such as PIN or EFT) or certifications. If you would like Azure Dedicated HSM service to support Payment HSMs in future, pass on the feedback to your Microsoft Account Representative.
-
-### Q: Which Azure regions is Dedicated HSM available in?
-
-As of late March 2019, Dedicated HSM is available in the regions listed below. Further regions are planned and can be discussed via your Microsoft Account Representative.
-
-* East US
-* East US 2
-* West US
-* West US 2
-* South Central US
-* Southeast Asia
-* East Asia
-* India Central
-* India South
-* Japan East
-* Japan West
-* North Europe
-* West Europe
-* UK South
-* UK West
-* Canada Central
-* Canada East
-* Australia East
-* Australia Southeast
-* Switzerland North
-* Switzerland West
-* US Gov Virginia
-* US Gov Texas
-
-## Interoperability
-
-### Q: How does my application connect to a Dedicated HSM?
-
-You use Thales provided HSM client tools/SDK/software to perform cryptographic operations from your applications. The software is available at the [Thales customer support portal](https://supportportal.thalesgroup.com/csm). Customers using the Dedicated HSM service are required to be registered for Thales support and have a Customer ID that enables access and download of relevant software.
-
-### Q: Can an application connect to Dedicated HSM from a different VNET in or across regions?
-
-Yes, you will need to use [VNET peering](../virtual-network/virtual-network-peering-overview.md) within a region to establish connectivity across virtual networks. For cross-region connectivity, you must use [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-
-### Q: Can I synchronize Dedicated HSM with on-premises HSMs?
-
-Yes, you can sync on-premises HSMs with Dedicated HSM. [Point-to-point VPN or point-to-site](../vpn-gateway/vpn-gateway-about-vpngateways.md) connectivity can be used to establish connectivity with your on-premises network.
-
-### Q: Can I encrypt data used by other Azure services using keys stored in Dedicated HSM?
-
-No. Azure Dedicated HSMs are only accessible from inside your virtual network.
-
-### Q: Can I import keys from an existing On-premises HSM to Dedicated HSM?
-
-Yes, if you have on-premises Thales Luna 7 HSMs. There are multiple methods. Refer to the [Thales HSM documentation](https://thalesdocs.com/gphsm/luna/7.2/docs/network/Content/Home_network.htm).
-
-### Q: What operating systems are supported by Dedicated HSM client software?
-
-* Windows, Linux, Solaris, AIX, HP-UX, FreeBSD
-* Virtual: VMware, Hyper-V, Xen, KVM
-
-### Q: How do I configure my client application to create a high availability configuration with multiple partitions from multiple HSMs?
-
-To have high availability, you need to set up your HSM client application configuration to use partitions from each HSM. Refer to the Thales HSM client software documentation.
-
-### Q: What authentication mechanisms are supported by Dedicated HSM?
-
-Azure Dedicated HSM uses [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) devices and they support password-based authentication.
-
-### Q: What SDKs, APIs, client software is available to use with Dedicated HSM?
-
-PKCS#11, Java (JCA/JCE), Microsoft CAPI, and CNG, OpenSSL
-
-### Q: Can I import/migrate keys from Luna 5/6 HSMs to Azure Dedicated HSMs?
-
-Yes. Contact your Thales representative for the appropriate Thales migration guide.
-
-## Using your HSM
-
-### Q: How do I decide whether to use Azure Key Vault or Azure Dedicated HSM?
-
-Azure Dedicated HSM is the appropriate choice for enterprises migrating on-premises applications that use HSMs to Azure. Dedicated HSMs present an option to migrate an application with minimal changes. If cryptographic operations are performed in the application's code running in an Azure VM or Web App, they can use Dedicated HSM. In general, shrink-wrapped software running in IaaS (infrastructure as a service) models that supports HSMs as a key store can use Dedicated HSM, such as traffic manager for keyless TLS, ADCS (Active Directory Certificate Services), or similar PKI tools, tools/applications used for document signing, code signing, or a SQL Server (IaaS) configured with TDE (transparent database encryption) with master key in an HSM using an EKM (extensible key management) provider. Azure Key Vault is suitable for "born-in-cloud" applications or for encryption at rest scenarios where customer data is processed by PaaS (platform as a service) or SaaS (Software as a service) scenarios such as Office 365 Customer Key, Azure Information Protection, Azure Disk Encryption, Azure Data Lake Store encryption with customer-managed key, Azure Storage encryption with customer-managed key, and Azure SQL with customer-managed key.
-
-### Q: What usage scenarios best suit Azure Dedicated HSM?
-
-Azure Dedicated HSM is most suitable for migration scenarios, that is, when you are migrating on-premises applications to Azure that are already using HSMs. This provides a low-friction option to migrate to Azure with minimal changes to the application. If cryptographic operations are performed in the application's code running in an Azure VM or Web App, Dedicated HSM may be used. In general, shrink-wrapped software running in IaaS (infrastructure as a service) models that supports HSMs as a key store can use Dedicated HSM, such as:
-
-* Traffic Manager for Keyless TLS
-* ADCS (Active Directory Certificate Services)
-* Similar PKI tools
-* Tools/applications used for document signing
-* Code signing
-* SQL Server (IaaS) configured with TDE (transparent database encryption) with master key in an HSM using an EKM (extensible key management) provider
-
-### Q: Can Dedicated HSM be used with Office 365 Customer Key, Azure Information Protection, Azure Data Lake Store, Disk Encryption, Azure Storage encryption, Azure SQL TDE?
-
-No. Dedicated HSM is provisioned directly into a customer's private IP Address space so it is not accessible by other Azure or Microsoft services.
-
-## Administration, access, and control
-
-### Q: Does the customer get full exclusive control over the HSMs with Dedicated HSM?
-
-Yes. Each HSM appliance is fully dedicated to a single customer, and no one else has administrative control once it is provisioned and the administrator password has been changed.
-
-### Q: What level of access does Microsoft have to my HSM?
-
-Microsoft does not have any administrative or cryptographic control over the HSM. Microsoft does have monitor level access via serial port connection to retrieve basic telemetry such as temperature and component health. This allows Microsoft to provide proactive notification of health issues. If necessary, the customer can disable this account.
-
-### Q: What is the "tenant admin" account Microsoft uses, I am used to the admin user being "admin" on Thales Luna HSMs?
-
-The HSM device ships with a default user of admin with its usual default password. Microsoft did not want to have default passwords in use while any device is in a pool waiting to be provisioned by customers. This would not meet our strict security requirements. For this reason, we set a strong password, which is discarded at provisioning time. Also, at provisioning time we create a new user in the admin role called "tenant admin". This user has the default password and customers change this as the first action when first logging into the newly provisioned device. This process ensures high degrees of security and maintains our promise of sole administrative control for our customers. It should be noted that the "tenant admin" user can be used to reset the admin user password if a customer prefers to use that account.
-
-### Q: Can Microsoft or anyone at Microsoft access keys in my Dedicated HSM?
-
-No. Microsoft does not have any access to the keys stored in customer allocated Dedicated HSM.
-
-### Q: Can I upgrade software/firmware on HSMs allocated to me?
-
-The customer has full administrative control, including upgrading software/firmware if specific features are required from different firmware versions. Before making changes, please consult with Microsoft about your upgrade by contacting HSMRequest@microsoft.com.
-
-### Q: How do I manage Dedicated HSM?
-
-You can manage Dedicated HSMs by accessing them using SSH.
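For example (a sketch; the IP address is a placeholder, and the account name assumes the "tenant admin" user described later in this FAQ), you would connect from a jump-box VM in the same virtual network:

```
ssh tenantadmin@10.0.2.4
```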
-
-### Q: How do I manage partitions on the Dedicated HSM?
-
-The Thales HSM client software is used to manage the HSMs and partitions.
-
-### Q: How do I monitor my HSM?
-
-A customer has full access to HSM activity logs via syslog and SNMP. A customer will need to set up a syslog server or SNMP server to receive the logs or events from the HSMs.
-
-### Q: Can I get full access log of all HSM operations from Dedicated HSM?
-
-Yes. You can send logs from the HSM appliance to a syslog server.
-
-## High availability
-
-### Q: Is it possible to configure high availability in the same region or across multiple regions?
-
-Yes. High availability configuration and setup are performed in the HSM client software provided by Thales. HSMs from the same VNET or other VNETs in the same region or across regions, or on premises HSMs connected to a VNET using site-to-site or point-to-point VPN can be added to same high availability configuration. It should be noted that this synchronizes key material only and not specific configuration items such as roles.
-
-### Q: Can I add HSMs from my on-premises network to a high availability group with Azure Dedicated HSM?
-
-Yes. They must meet the high availability requirements for [Thales Luna 7 HSMs](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms)
-
-### Q: Can I add Luna 5/6 HSMs from on-premises networks to a high availability group with Azure Dedicated HSM?
-
-No.
-
-### Q: How many HSMs can I add to the same high availability configuration from one single application?
-
-An HA group with 16 members has undergone full-throttle testing with excellent results.
-
-## Support
-
-### Q: What is the SLA for Dedicated HSM service?
-
-There is no specific uptime guarantee provided for the Dedicated HSM service. Microsoft will ensure network level access to the device, and hence standard Azure networking SLAs apply.
-
-### Q: How are the HSMs used in Azure Dedicated HSM protected?
-
-Azure datacenters have extensive physical and procedural security controls. In addition to that Dedicated HSMs are hosted in a further restricted access area of the datacenter. These areas have additional physical access controls and video camera surveillance for added security.
-
-### Q: What happens if there is a security breach or hardware tampering event?
-
-Dedicated HSM service uses [Thales Luna 7 HSM](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) appliances. These devices support physical and logical tamper detection. If there is ever a tamper event the HSMs are automatically zeroized.
-
-### Q: How do I ensure that keys in my Dedicated HSMs are not lost due to error or a malicious insider attack?
-
-It is highly recommended to use an on-premises HSM backup device to perform regular periodic backup of the HSMs for disaster recovery. You will need to use a peer-to-peer or site-to-site VPN connection to an on-premises workstation connected to an HSM backup device.
-
-### Q: How do I get support for Dedicated HSM?
-
-Support is provided by both Microsoft and Thales. If you have an issue with the hardware or network access, raise a support request with Microsoft and if you have an issue with HSM configuration, software, and application development raise a support request with Thales. If you have an undetermined issue, raise a support request with Microsoft and then Thales can be engaged as required.
-
-### Q: How do I get the client software, documentation and access to integration guidance for the Thales Luna 7 HSM?
-
-After registering for the service, a Thales Customer ID will be provided that allows for registration in the Thales customer support portal. This will enable access to all software and documentation as well as enabling support requests directly with Thales.
-
-### Q: If there is a security vulnerability found and a patch is released by Thales, who is responsible for upgrading/patching OS/Firmware?
-
-Microsoft does not have the ability to connect to HSMs allocated to customers. Customers must upgrade and patch their HSMs.
-
-### Q: What if I need to reboot my HSM?
-
-The HSM has a command-line reboot option, however, we are experiencing issues where the reboot stops responding intermittently and for this reason it is recommended for the safest reboot that you raise a support request with Microsoft to have the device physically rebooted.
-
-## Cryptography and standards
-
-### Q: Is it safe to store encryption keys for my most important data in Dedicated HSM?
-
-Yes, Dedicated HSM provisions Thales Luna 7 HSMs that are [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated.
-
-### Q: What cryptographic keys and algorithms are supported by Dedicated HSM?
-
-Dedicated HSM service provisions Thales Luna 7 HSM appliances. They support a wide range of cryptographic key types and algorithms including:
-Full Suite B support
-
-* Asymmetric:
- * RSA
- * DSA
- * Diffie-Hellman
-  * Elliptic Curve Cryptography (ECDSA, ECDH, Ed25519, ECIES) with named, user-defined, and Brainpool curves
-  * KCDSA
-* Symmetric:
- * AES-GCM
- * Triple DES
- * DES
- * ARIA, SEED
- * RC2
- * RC4
- * RC5
- * CAST
- * Hash/Message Digest/HMAC: SHA-1, SHA-2, SM3
- * Key Derivation: SP 800-108 Counter Mode
- * Key Wrapping: SP 800-38F
- * Random Number Generation: FIPS 140-2 approved DRBG (SP 800-90 CTR mode), complying with BSI DRG.4
-
-### Q: Is Dedicated HSM FIPS 140-2 Level 3 validated?
-
-Yes. Dedicated HSM service provisions [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) appliances that are [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated.
-
-### Q: What do I need to do to make sure I operate Dedicated HSM in FIPS 140-2 Level 3 validated mode?
-
-The Dedicated HSM service provisions Thales Luna 7 HSM appliances. These devices are FIPS 140-2 Level 3 validated HSMs. The default deployed configuration, operating system, and firmware are also FIPS validated. You do not need to take any action for FIPS 140-2 Level 3 compliance.
-
-### Q: How does a customer ensure that when an HSM is deprovisioned all the key material is wiped out?
-
-Before requesting deprovisioning, a customer must have zeroized the HSM using Thales provided HSM client tools.
-
-## Performance and scale
-
-### Q: How many cryptographic operations are supported per second with Dedicated HSM?
-
-Dedicated HSM provisions Thales Luna 7 HSMs. Here's a summary of maximum performance for some operations:
-
-* RSA-2048: 10,000 transactions per second
-* ECC P256: 20,000 transactions per second
-* AES-GCM: 17,000 transactions per second
-
-### Q: How many partitions can be created in Dedicated HSM?
-
-The [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) used includes a license for 10 partitions in the cost of the service. The device has a limit of 100 partitions and adding partitions up to this limit would incur extra licensing costs and require installation of a new license file on the device.
-
-### Q: How many keys can be supported in Dedicated HSM?
-
-The maximum number of keys is a function of the memory available. The Thales Luna 7 model A790 in use has 32MB of memory. The following numbers are also applicable to key pairs if using asymmetric keys.
-
-* RSA-2048 - 19,000
-* ECC-P256 - 91,000
-
-Capacity will vary depending on specific key attributes set in the key generation template and number of partitions.
dedicated-hsm Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/networking.md
There are a couple of architectures you can use as an alternative to Global VNet
## Next steps -- [Frequently asked questions](faq.md)
+- [Frequently asked questions](faq.yml)
- [Supportability](supportability.md) - [High availability](high-availability.md) - [Physical Security](physical-security.md)
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies.md
No matter which strategy you choose for integrating an ontology into Azure Digit
1. Proceed with your chosen ontology integration strategy from above: [Adopt](concepts-ontologies-adopt.md), [Convert](concepts-ontologies-convert.md), or [Author](concepts-models.md) your models based on your ontology. 1. If necessary, [extend](concepts-ontologies-extend.md) your ontology to customize it to your needs. 1. [Validate](how-to-parse-models.md) your models to verify they are working DTDL documents.
-1. Upload your finished models to Azure Digital Twins, using the [APIs](how-to-manage-model.md#upload-models) or a sample like the [Azure Digital Twins model uploader](https://github.com/Azure/opendigitaltwins-building-tools/tree/master/ModelUploader).
+1. Upload your finished models to Azure Digital Twins, using the [APIs](how-to-manage-model.md#upload-models) or a sample like the [Azure Digital Twins model uploader](https://github.com/Azure/opendigitaltwins-tools/tree/master/ADTTools#uploadmodels).
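   As an additional sketch using the Azure CLI (this assumes the `azure-iot` CLI extension is installed; the instance and file names are placeholders), a model file can also be uploaded with `az dt model create`:

   ```azurecli
   az dt model create --dt-name myDigitalTwinsInstance --models ./MyModel.json
   ```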
After this, you should be able to use your models in your Azure Digital Twins instance.
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
You'll complete the following tasks:
* [Git](https://git-scm.com/downloads) for cloning the repository * Azure CLI. You have two options for running Azure CLI commands in this quickstart: * Use the Azure Cloud Shell, an interactive shell that runs CLI commands in your browser. This option is recommended because you don't need to install anything. If you're using Cloud Shell for the first time, sign in to the [Azure portal](https://portal.azure.com). Follow the steps in [Cloud Shell quickstart](/azure/cloud-shell/quickstart) to **Start Cloud Shell** and **Select the Bash environment**.
- * Optionally, run Azure CLI on your local machine. The quickstart requires Azure CLI version 2.0.76 or later. Run `az --version` to check the version. Follow the steps in [Install Azure CLI]( /cli/azure/install-azure-cli) to install or upgrade Azure CLI, run it, and sign in. If you're prompted, install the Azure CLI extensions on first use.
+ * Optionally, run Azure CLI on your local machine. If Azure CLI is already installed, run `az upgrade` to upgrade the CLI and extensions to the current version. To install Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
+ * [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases): Cross-platform utility to monitor and manage Azure IoT * Hardware
You can use Azure CLI to create an IoT hub that handles events and messaging for
To create an IoT hub:
-1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter.
+1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter.
- If you prefer to use Cloud Shell, right-click the link for [Cloud Shell](https://shell.azure.com/bash) and select the option to open in a new tab. - If you're using Azure CLI locally, start your CLI console app and sign in to Azure CLI.
-1. From your CLI app, run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region.
+1. Run [az extension add](/cli/azure/extension?view=azure-cli-latest#az_extension_add&preserve-view=true) to install or upgrade the *azure-iot* extension to the current version.
+
+ ```azurecli-interactive
+ az extension add --upgrade --name azure-iot
+ ```
+
+1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region.
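   For instance, that command (a minimal form using the values from this quickstart) is:

   ```azurecli-interactive
   az group create --name MyResourceGroup --location centralus
   ```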
> [!NOTE] > You can optionally set an alternate `location`. To see available locations, run [az account list-locations](/cli/azure/account#az-account-list-locations).
lighthouse Create Eligible Authorizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/create-eligible-authorizations.md
Title: Create eligible authorizations description: When onboarding customers to Azure Lighthouse, you can let users in your managing tenant elevate their role on a just-in-time basis. Previously updated : 05/26/2021 Last updated : 06/11/2021
The `justInTimeAccessPolicy` specifies two elements:
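The two elements are the multi-factor authentication provider and the maximum activation duration. As a hedged sketch (the GUID values are placeholders), an eligible authorization carrying this policy looks roughly like this in the onboarding template:

```json
"eligibleAuthorizations": [
    {
        "principalId": "00000000-0000-0000-0000-000000000000",
        "principalIdDisplayName": "Tier 2 Support",
        "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
        "justInTimeAccessPolicy": {
            "multiFactorAuthProvider": "Azure",
            "maximumActivationDuration": "PT8H"
        }
    }
]
```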
After you onboard a customer to Azure Lighthouse, any eligible roles you included will be available to the specified user (or to users in any specified groups).
-Each user can elevate their access at any time by visiting the **My customers** page in the Azure portal, selecting a delegation, and then selecting the **Manage eligible roles** button. After that, they can follow the [steps to activate the role](../../active-directory/privileged-identity-management/pim-how-to-activate-role.md) in Azure AD Privileged Identity Management.
+Each user can elevate their access at any time by visiting the **My customers** page in the Azure portal, selecting a delegation, and then selecting **Manage eligible roles**. After that, they can follow the [steps to activate the role](../../active-directory/privileged-identity-management/pim-how-to-activate-role.md) in Azure AD Privileged Identity Management.
+ Once the eligible role has been activated, the user will have that role for the full duration specified in the eligible authorization. After that time period, they will no longer be able to use that role, unless they repeat the elevation process and elevate their access again.
marketplace Create Managed Service Offer Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-preview.md
Add at least one Azure subscription ID, either individually (up to 10) or by upl
## Add email addresses using a CSV file
-1. On the Preview audience**Preview audience** page, select the **Export Audience (csv)** link.
+1. On the **Preview audience** page, select the **Export Audience (csv)** link.
2. Open the CSV file. In the **Id** column, enter the Azure subscription IDs you want to add to the preview audience. 3. In the **Description** column, you have the option to add a description for each entry. 4. In the **Type** column, add **SubscriptionId** to each row that has an ID.
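For example (a sketch with placeholder values), a completed row in the CSV might look like this:

```csv
Id,Description,Type
aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb,Test team subscription,SubscriptionId
```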
marketplace Marketplace Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-power-bi.md
When you're publishing an offer to the commercial marketplace with Partner Cente
## Legal contracts
-<strike>To simplify the procurement process for customers and reduce legal complexity for software vendors, Microsoft offers a standard contract you can use for your offers in the commercial marketplace. When you offer your software under the standard contract, customers only need to read and accept it one time, and you don't have to create custom terms and conditions.</strike>
- You'll need terms and conditions customers must accept before they can try your offer, or a link to where they can be found. ## Offer listing details
You can choose to opt into Microsoft-supported marketing and sales channels. Whe
## Next steps -- [Create a Power BI app offer](power-bi-app-offer-setup.md)
+- [Create a Power BI app offer](power-bi-app-offer-setup.md)
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link.md
Previously updated : 05/10/2021 Last updated : 06/11/2021 # Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access.
The DNS resource records for PurviewA, when resolved in the VNet hosting the pri
| `PurviewA.privatelink.purview.azure.com` | A | \<private endpoint IP address\> | | `Web.purview.azure.com` | CNAME | \<private endpoint IP address\> |
+ > [!important]
+ > If you do not use DNS Forwarders and instead you manage A records directly in your on-premises DNS servers to resolve the endpoints through their private IP addresses, you may need to create additional A records in your DNS Servers:
+
+| Name | Type | Value |
+| - | -- | |
+| `PurviewA.purview.azure.com.scan.Purview.azure.com` | A | \<private endpoint IP address\> |
+| `PurviewA.purview.azure.com.catalog.Purview.azure.com` | A | \<private endpoint IP address\> |
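One way to confirm that name resolution goes to the private endpoint (a sketch; `PurviewA` is the example account name used in these tables) is to query the account endpoint from a machine that uses these DNS servers:

```
nslookup PurviewA.purview.azure.com
```

The response should return the private endpoint IP address rather than a public IP.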
+ _Example for Azure Purview DNS name resolution from outside the VNet or when Azure Private Endpoint is not configured:_ :::image type="content" source="media/catalog-private-link/purview-name-resolution-external.png" alt-text="Purview Name Resolution from outside CorpNet":::
purview Create Catalog Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-catalog-portal.md
If necessary, follow these steps to configure your subscription to enable Azure
1. Select a **Resource group**. 1. Enter a **Purview account name** for your catalog. Spaces and symbols aren't allowed. 1. Choose a **Location**, and then select **Next: Configuration**.
-1. On the **Configuration** tab, select the desired **Platform size** - the allowed values are 4 capacity units (CU) and 16 CU. Select **Next: Tags**.
-1. On the **Tags** tab, you can optionally add one or more tags. These tags are for use only in the Azure portal, not Azure Purview.
+1. On the **Configuration** tab, select the desired **Platform size** - the allowed values are 4 capacity units (CU) and 16 CU. Optionally, provide a different name for the Azure Purview managed Resource Group. Select **Next: Tags**.
+
+ > [!Note]
+ > The [managed Resource Group](create-catalog-portal.md#azure-purview-managed-resources) will contain a managed Storage account and an EventHub namespace dedicated to and used by the Azure Purview account.
+
+3. On the **Tags** tab, you can optionally add one or more tags. These tags are for use only in the Azure portal, not Azure Purview.
> [!Note] > If you have **Azure Policy** and need to add exception as in **Prerequisites**, you need to add the correct tag. For example, you can add `resourceBypass` tag:
If upon clicking Add you see two choices showing both marked (disabled) then thi
1. Click on **Save**.
+## Azure Purview managed resources
+During the deployment of an Azure Purview account, a new managed Resource Group containing a new Azure Storage account and a new EventHub namespace is also deployed along with the Azure Purview account inside your Azure subscription. You can optionally choose a different naming convention for the managed Resource Group during the deployment.
+
+These resources are essential for the operation of the Azure Purview account and are used to hold temporary data until the information is ingested into the Azure Purview Data Catalog.
+
+A deny assignment is automatically added to the managed Resource Group for all principals, with the Azure Purview managed identity as the only exclusion, so that Azure Purview can manage the resources (storage account, EventHub namespace) inside the Resource Group. As a result, you cannot remove or modify the managed Resource Group, the managed resources, or their content in the data plane. The managed Resource Group and its content are deleted automatically when the Purview account is deleted.
+ ## Clean up resources If you no longer need this Azure Purview account, delete it with the following steps:
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-synapse-workspace.md
Previously updated : 05/21/2021 Last updated : 06/11/2021 # Register and scan Azure Synapse workspaces
Azure Synapse Workspace scans support capturing metadata and schema for dedicate
> [!NOTE] > These steps **must** be followed in the exact order specified along with applying the exact permissions specified in each step where applicable, to successfully scan your workspace.
-### **STEP 1**: Register your source (Only a contributor on the Synapse workspace who is also a data source admin in Purview can carry out this step)
+### **STEP 1**: Register your source (A user with at least Reader role on the Synapse workspace who is also a data source admin in Purview can carry out this step)
To register a new Azure Synapse Source in your data catalog, do the following:
On the **Register sources (Azure Synapse Analytics)** screen, do the following:
#### Setting up authentication for enumerating dedicated SQL database resources under a Synapse Workspace
-1. Navigate to the **Resource group** or **Subscription** that the Synapse workspace is in, in the Azure portal.
+1. Navigate to the Azure Synapse workspace resource, in the Azure portal.
1. Select **Access Control (IAM)** from the left navigation menu
-1. You must be owner or user access administrator to add a role on the **Resource group** or **Subscription**. Select *+Add* button.
+1. You must be an owner or user access administrator to add a role on the resource. Select the *+Add* button.
1. Set the **Reader** Role and enter your Azure Purview account name (which represents its MSI) under Select input box. Click *Save* to finish the role assignment.
+> [!NOTE]
+> Role can be also assigned from a higher level such as **Resource group** or **Subscription**, if you are planning to register and scan multiple Azure Synapse workspaces in your Azure Purview account.
+ #### Setting up authentication for enumerating serverless SQL database resources under a Synapse Workspace > [!NOTE]
On the **Register sources (Azure Synapse Analytics)** screen, do the following:
CREATE USER [PurviewAccountName] FOR LOGIN [PurviewAccountName]; ALTER ROLE db_datareader ADD MEMBER [PurviewAccountName]; ```
+> [!NOTE]
+> Repeat the previous step for all serverless SQL databases in your Synapse workspace.
+ 1. Navigate to the **Resource group** or **Subscription** that the Synapse workspace is in, in the Azure portal. 1. Select **Access Control (IAM)** from the left navigation menu
There are two ways to set up authentication for an Azure Synapse source:
EXEC sp_addrolemember 'db_datareader', [PurviewAccountName] GO ```
+> [!NOTE]
+> Repeat the previous step for all dedicated SQL databases in your Synapse workspace.
+ #### Using Managed identity for Serverless SQL databases 1. Navigate to your **Synapse workspace**
There are two ways to set up authentication for an Azure Synapse source:
CREATE USER [PurviewAccountName] FOR LOGIN [PurviewAccountName]; ALTER ROLE db_datareader ADD MEMBER [PurviewAccountName]; ```
+> [!NOTE]
+> Repeat the previous step for all serverless SQL databases in your Synapse workspace.
#### Using Service Principal for Dedicated SQL databases
There are two ways to set up authentication for an Azure Synapse source:
EXEC sp_addrolemember 'db_datareader', [ServicePrincipalID] GO ```
+> [!NOTE]
+> Repeat the previous step for all dedicated SQL databases in your Synapse workspace.
#### Using Service Principal for Serverless SQL databases
There are two ways to set up authentication for an Azure Synapse source:
CREATE USER [PurviewAccountName] FOR LOGIN [PurviewAccountName]; ALTER ROLE db_datareader ADD MEMBER [PurviewAccountName]; ```
+> [!NOTE]
+> Repeat the previous step for all serverless SQL databases in your Synapse workspace.
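If you are granting access to many SQL databases, you may prefer to script the repeated step rather than run it by hand in each database. The following is a minimal sketch only (Python with `pyodbc`), not part of the official guidance: it assumes the Azure Purview account's login has already been created at the server level per the steps above, that you sign in interactively with an account allowed to create users, and that the endpoint, database names, and driver version shown are placeholders you replace with your own values.

```python
import pyodbc

# Placeholders - substitute your own serverless SQL endpoint, databases, and Purview account name.
endpoint = "<workspace-name>-ondemand.sql.azuresynapse.net"
databases = ["<sql-database-1>", "<sql-database-2>"]
purview_account = "<PurviewAccountName>"

for db in databases:
    # Interactive Azure AD sign-in; assumes a recent Microsoft ODBC Driver for SQL Server is installed.
    conn = pyodbc.connect(
        f"Driver={{ODBC Driver 17 for SQL Server}};Server={endpoint};"
        f"Database={db};Authentication=ActiveDirectoryInteractive;",
        autocommit=True,
    )
    cursor = conn.cursor()
    # The same statements shown in the steps above, run once per database.
    cursor.execute(f"CREATE USER [{purview_account}] FOR LOGIN [{purview_account}];")
    cursor.execute(f"ALTER ROLE db_datareader ADD MEMBER [{purview_account}];")
    conn.close()
```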
### **STEP 4**: Setting up Synapse workspace firewall access
search Cognitive Search Skill Pii Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-pii-detection.md
Parameters are case-sensitive and all are optional.
|--|-| | `defaultLanguageCode` | (Optional) The language code to apply to documents that don't specify language explicitly. If the default language code is not specified, English (en) will be used as the default language code. <br/> See [Full list of supported languages](../cognitive-services/text-analytics/language-support.md). | | `minimumPrecision` | A value between 0.0 and 1.0. If the confidence score (in the `piiEntities` output) is lower than the set `minimumPrecision` value, the entity is not returned or masked. The default is 0.0. |
-| `maskingMode` | A parameter that provides various ways to mask the personal information detected in the input text. The following options are supported: <ul><li>`none` (default): No masking occurs and the `maskedText` output will not be returned. </li><li> `redact`: Removes the detected entities from the input text and does not replace the deleted values. In this case, the offset in the `piiEntities` output will be in relation to the original text, and not the masked text. </li><li> `replace`: Replaces the detected entities with the character given in the `maskingCharacter` parameter. The character will be repeated to the length of the detected entity so that the offsets will correctly correspond to both the input text as well as the output `maskedText`.</li></ul> |
-| `maskingCharacter` | The character used to mask the text if the `maskingMode` parameter is set to `replace`. The following options are supported: `*` (default), `#`, `X`. This parameter can only be `null` if `maskingMode` is not set to `replace`. |
+| `maskingMode` | A parameter that provides various ways to mask the personal information detected in the input text. The following options are supported: <ul><li>`none` (default): No masking occurs and the `maskedText` output will not be returned. </li><li> `replace`: Replaces the detected entities with the character given in the `maskingCharacter` parameter. The character will be repeated to the length of the detected entity so that the offsets will correctly correspond to both the input text as well as the output `maskedText`.</li></ul> <br/> During the PIIDetectionSkill preview, the `maskingMode` option `redact` was also supported, which allowed removing the detected entities entirely without replacement. The `redact` option has since been deprecated and will no longer be supported in the skill going forward. |
+| `maskingCharacter` | The character used to mask the text if the `maskingMode` parameter is set to `replace`. The following option is supported: `*` (default). This parameter can only be `null` if `maskingMode` is not set to `replace`. <br/><br/> During the PIIDetectionSkill preview, there was support for additional `maskingCharacter` options `X` and `#`. The `X` and `#` options have since been deprecated and will no longer be supported in the skill going forward. |
| `modelVersion` | (Optional) The version of the model to use when calling the Text Analytics service. It will default to the most recent version when not specified. We recommend you do not specify this value unless absolutely necessary. See [Model versioning in the Text Analytics API](../cognitive-services/text-analytics/concepts/model-versioning.md) for more details. | ## Skill inputs
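To make the parameters above concrete, here is a minimal, hypothetical sketch (Python) of a PII Detection skill definition that uses them. The input and output names follow this article; the source paths, masking settings, and the surrounding skillset you would attach this to are illustrative assumptions rather than required values.

```python
import json

# Hypothetical skill definition built from the parameters described above.
pii_skill = {
    "@odata.type": "#Microsoft.Skills.Text.PIIDetectionSkill",
    "description": "Detect and mask personal information",
    "context": "/document",
    "defaultLanguageCode": "en",   # optional; English is used when omitted
    "minimumPrecision": 0.5,       # entities below this confidence are not returned or masked
    "maskingMode": "replace",      # 'none' (default) or 'replace'
    "maskingCharacter": "*",       # only '*' is supported going forward
    "inputs": [
        {"name": "text", "source": "/document/content"}
    ],
    "outputs": [
        {"name": "piiEntities", "targetName": "piiEntities"},
        {"name": "maskedText", "targetName": "maskedText"}
    ],
}

print(json.dumps(pii_skill, indent=2))
```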
security-center Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/continuous-export.md
Previously updated : 05/05/2021 Last updated : 06/13/2021
This article describes how to configure continuous export to Log Analytics works
Continuous export can export the following data types whenever they change: -- Security alerts-- Security recommendations -- Security findings which can be thought of as 'sub' recommendations like findings from vulnerability assessment scanners or specific system updates. You can select to include them with their 'parent' recommendations such as "System updates should be installed on your machines".-- Secure score (per subscription or per control)-- Regulatory compliance data
+- Security alerts.
+- Security recommendations.
+- Security findings. These can be thought of as 'sub' recommendations and belong to a 'parent' recommendation. For example:
+ - The recommendation "System updates should be installed on your machines" will have a 'sub' recommendation for every outstanding system update.
+ - The recommendation "Vulnerabilities in your virtual machines should be remediated" will have a 'sub' recommendation for every vulnerability identified by the vulnerability scanner.
+ > [!NOTE]
+ > If you're configuring a continuous export with the REST API, always include the parent with the findings.
+- (Preview feature) Secure score per subscription or per control.
+- (Preview feature) Regulatory compliance data.
-> [!NOTE]
-> The exporting of secure score and regulatory compliance data is a preview feature.
## Set up a continuous export
security-center Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/custom-dashboards-azure-workbooks.md
Previously updated : 03/04/2021 Last updated : 06/13/2021 # Create rich, interactive reports of Security Center data
Within Azure Security Center, you can access the built-in reports to track your
| Aspect | Details | ||:|
-| Release state: | Preview<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)] |
+| Release state: | General Availability (GA) |
| Pricing: | Free | | Required roles and permissions: | To save workbooks, you must have at least Workbook Contributor permissions on the target resource group | | Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![Yes](./media/icons/yes-icon.png) National/Sovereign (US Gov, China Gov, Other Gov) |
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
Previously updated : 06/09/2021 Last updated : 06/13/2021
To learn about *planned* changes that are coming soon to Security Center, see [I
## June 2021
+Updates in June include:
+
+- [Recommendations to encrypt with customer-managed keys (CMKs) disabled by default](#recommendations-to-encrypt-with-customer-managed-keys-cmks-disabled-by-default)
+- [Prefix for Kubernetes alerts changed from "AKS_" to "K8S_"](#prefix-for-kubernetes-alerts-changed-from-aks_-to-k8s_)
+- [Deprecated two recommendations from "Apply system updates" security control](#deprecated-two-recommendations-from-apply-system-updates-security-control)
+++
+### Recommendations to encrypt with customer-managed keys (CMKs) disabled by default
+
+Security Center includes multiple recommendations to encrypt data at rest with customer-managed keys, such as:
+
+- Container registries should be encrypted with a customer-managed key (CMK)
+- Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest
+- Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)
+
+Data in Azure is encrypted automatically using platform-managed keys, so customer-managed keys should be used only when required for compliance with a specific policy your organization chooses to enforce.
+
+With this change, the recommendations to use CMKs are now **disabled by default**. When relevant for your organization, you can enable them by changing the *Effect* parameter for the corresponding security policy to **AuditIfNotExists** or **Enforce**. Learn more in [Enable a security policy](tutorial-security-policy.md#enable-a-security-policy).
+
+This change is reflected in the names of the recommendation with a new prefix, **[Enable if required]**, as shown in the following examples:
+
+- [Enable if required] Storage accounts should use customer-managed key to encrypt data at rest
+- [Enable if required] Container registries should be encrypted with a customer-managed key (CMK)
+- [Enable if required] Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest
+++ ### Prefix for Kubernetes alerts changed from "AKS_" to "K8S_" Azure Defender for Kubernetes recently expanded to protect Kubernetes clusters hosted on-premises and in multi cloud environments. Learn more in [Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (in preview)](release-notes.md#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multi-cloud-kubernetes-deployments-in-preview).
Any suppression rules that refer to alerts beginning "AKS_" were automatically c
For a full list of the Kubernetes alerts, see [Alerts for Kubernetes clusters](alerts-reference.md#alerts-k8scluster).
+### Deprecated two recommendations from "Apply system updates" security control
+
+The following two recommendations were deprecated:
+
+- **OS version should be updated for your cloud service roles** - By default, Azure periodically updates your guest OS to the latest supported image within the OS family that you've specified in your service configuration (.cscfg), such as Windows Server 2016.
+- **Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version** - This recommendation's evaluations aren't as wide-ranging as we'd like them to be. We plan to replace the recommendation with an enhanced version that's better aligned with your security needs.
++ ## May 2021 Updates in May include:
security-center Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/secure-score-security-controls.md
We recommend every organization carefully review their assigned Azure Policy ini
> [!TIP] > For details of reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md).
-Even though Security Center's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. Consequently, it'll sometimes be necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies. industry standards, regulatory standards, and benchmarks you're obligated to meet.<br><br>
+Even though Security Center's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. Consequently, it'll sometimes be necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks you're obligated to meet.<br><br>
<div class="foo"> <style type="text/css">
security-center Security Center Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-alerts-overview.md
Security alerts are triggered by advanced detections and are available only with
## What are security alerts and security incidents?
-**Alerts** are the notifications that Security Center generates when it detects threats on your resources. Security Center prioritizes and lists the alerts, along with the information needed for you to quickly investigate the problem. Security Center also provides recommendations for how you can remediate an attack.
+**Alerts** are the notifications that Security Center generates when it detects threats on your resources. Security Center prioritizes and lists the alerts, along with the information needed for you to quickly investigate the problem. Security Center also provides detailed steps to help you remediate attacks. Alerts data is retained for 90 days.
**A security incident** is a collection of related alerts, instead of listing each alert individually. Security Center uses [Cloud smart alert correlation](#cloud-smart-alert-correlation-in-azure-security-center-incidents) to correlate different alerts and low fidelity signals into security incidents.
The severity is based on how confident Security Center is in the finding or the
> [!NOTE] > Alert severity is displayed differently in the portal and versions of the REST API that predate 01-01-2019. If you're using an older version of the API, upgrade for the consistent experience described below.
-| Severity | Recommended response |
-|||
-| **High** | There is a high probability that your resource is compromised. You should look into it right away. Security Center has high confidence in both the malicious intent and in the findings used to issue the alert. For example, an alert that detects the execution of a known malicious tool such as Mimikatz, a common tool used for credential theft. |
-| **Medium** | This is probably a suspicious activity might indicate that a resource is compromised. Security Center's confidence in the analytic or finding is medium and the confidence of the malicious intent is medium to high. These would usually be machine learning or anomaly-based detections. For example, a sign-in attempt from an anomalous location. |
+| Severity | Recommended response |
+|-||
+| **High** | There is a high probability that your resource is compromised. You should look into it right away. Security Center has high confidence in both the malicious intent and in the findings used to issue the alert. For example, an alert that detects the execution of a known malicious tool such as Mimikatz, a common tool used for credential theft. |
| **Medium** | This is probably a suspicious activity that might indicate a resource is compromised. Security Center's confidence in the analytic or finding is medium and the confidence of the malicious intent is medium to high. These would usually be machine learning or anomaly-based detections. For example, a sign-in attempt from an anomalous location. |
| **Low** | This might be a benign positive or a blocked attack. Security Center isn't confident enough that the intent is malicious and the activity might be innocent. For example, log clear is an action that might happen when an attacker tries to hide their tracks, but in many cases is a routine operation performed by admins. Security Center doesn't usually tell you when attacks were blocked, unless it's an interesting case that we suggest you look into. |
-| **Informational** | An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look. |
+| **Informational** | An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look. |
+| | |
## Export alerts You have a range of options for viewing your alerts outside of Security Center, including: - **Download CSV report** on the alerts dashboard provides a one-time export to CSV.-- **Continuous export** from pricing & settings allows you to configure streams of security alerts and recommendations to Log Analytics workspaces and Event Hubs. [Learn more about continuous export](continuous-export.md)-- **Azure Sentinel connector** streams security alerts from Azure Security Center into Azure Sentinel. [Learn more about connecting Azure Security Center with Azure Sentinel](../sentinel/connect-azure-security-center.md)
+- **Continuous export** from pricing & settings allows you to configure streams of security alerts and recommendations to Log Analytics workspaces and Event Hubs. [Learn more about continuous export](continuous-export.md).
+- **Azure Sentinel connector** streams security alerts from Azure Security Center into Azure Sentinel. [Learn more about connecting Azure Security Center with Azure Sentinel](../sentinel/connect-azure-security-center.md).
+
+Learn about all of the export options in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md) and [Continuously export Security Center data](continuous-export.md).
## Cloud smart alert correlation in Azure Security Center (incidents)
security-center Security Center Managing And Responding Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-managing-and-responding-alerts.md
This topic shows you how to view and process Security Center's alerts and protec
Advanced detections that trigger security alerts are only available with Azure Defender. A free trial is available. To upgrade, see [Enable Azure Defender](enable-azure-defender.md). ## What are security alerts?
-Security Center automatically collects, analyzes, and integrates log data from your Azure resources, the network, and connected partner solutions, like firewall and endpoint protection solutions, to detect real threats and reduce false positives. A list of prioritized security alerts is shown in Security Center along with the information you need to quickly investigate the problem and recommendations for how to remediate an attack.
+Security Center automatically collects, analyzes, and integrates log data from your Azure resources, the network, and connected partner solutions - like firewall and endpoint protection solutions - to detect real threats and reduce false positives. A list of prioritized security alerts is shown in Security Center along with the information you need to quickly investigate the problem and steps to take to remediate an attack.
To learn about the different types of alerts, see [Security alerts - a reference guide](alerts-reference.md).
security-center Security Center Wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security
| Aspect | Details | ||:--|
-| Release state: | Generally available (GA) |
+| Release state: | General Availability (GA) |
| Pricing: | Requires [Azure Defender for servers](defender-for-servers-introduction.md) | | Supported platforms: | • Azure machines running Windows<br> • Azure Arc machines running Windows| | Supported versions of Windows for detection: | • Windows Server 2019, 2016, 2012 R2, and 2008 R2 SP1<br> • [Windows Virtual Desktop (WVD)](../virtual-desktop/overview.md)<br> • [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops (EVD))|
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 06/09/2021 Last updated : 06/13/2021
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--||
-| [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated) | June 2021 |
| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | June 2021 |
-| [Recommendations to encrypt with customer-managed keys (CMKs) being disabled](#recommendations-to-encrypt-with-customer-managed-keys-cmks-being-disabled) | June 2021 |
| [Enhancements to SQL data classification recommendation](#enhancements-to-sql-data-classification-recommendation) | Q3 2021 | | [Enable Azure Defender security control to be included in secure score](#enable-azure-defender-security-control-to-be-included-in-secure-score) | Q3 2021 | | | |
-### Two recommendations from "Apply system updates" security control being deprecated
-
-**Estimated date for change:** June 2021
-
-The following two recommendations are being deprecated:
--- **OS version should be updated for your cloud service roles** - By default, Azure periodically updates your guest OS to the latest supported image within the OS family that you've specified in your service configuration (.cscfg), such as Windows Server 2016.-- **Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version** - This recommendation's evaluations aren't as wide-ranging as we'd like them to be. The current version of this recommendation will eventually be replaced with an enhanced version that's better aligned with our customer's security needs.-- ### Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013 The legacy implementation of ISO 27001 will be removed from Security Center's regulatory compliance dashboard. If you're tracking your ISO 27001 compliance with Security Center, onboard the new ISO 27001:2013 standard for all relevant management groups or subscriptions, and the current legacy ISO 27001 will soon be removed from the dashboard. :::image type="content" source="media/upcoming-changes/removing-iso-27001-legacy-implementation.png" alt-text="Security Center's regulatory compliance dashboard showing the message about the removal of the legacy implementation of ISO 27001." lightbox="media/upcoming-changes/removing-iso-27001-legacy-implementation.png":::
-### Recommendations to encrypt with customer-managed keys (CMKs) being disabled
-
-**Estimated date for change:** June 2021
-
-Security Center includes multiple recommendations to encrypt data at rest with customer-managed keys, such as:
--- Container registries should be encrypted with a customer-managed key (CMK)-- Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest-- Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)-
-Data in Azure is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when required for compliance with a specific policy your organization is choosing to enforce.
-
-With this change, the recommendations to use CMKs will be **disabled by default**. When relevant for your organization, you can enable them by changing the *Effect* parameter for the corresponding security policy to **AuditIfNotExists** or **Enforce**. Learn more in [Enable a security policy](tutorial-security-policy.md#enable-a-security-policy).
-
-Initially, this change will be reflected in the names of the recommendation with a new prefix, **[Enable if required]**, as shown in the following examples:
--- [Enable if required] Storage accounts should use customer-managed key to encrypt data at rest-- [Enable if required] Container registries should be encrypted with a customer-managed key (CMK)-- [Enable if required] Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest--- ### Enhancements to SQL data classification recommendation **Estimated date for change:** Q3 2021
security Azure Disk Encryption Vms Vmss https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/azure-disk-encryption-vms-vmss.md
The following articles provide guidance for encrypting Linux virtual machines.
- [Creating and configuring a key vault for Azure Disk Encryption](../../virtual-machines/linux/disk-encryption-key-vault.md) - [Azure Disk Encryption sample scripts](../../virtual-machines/linux/disk-encryption-sample-scripts.md) - [Azure Disk Encryption troubleshooting](../../virtual-machines/linux/disk-encryption-troubleshooting.md)-- [Azure Disk Encryption frequently asked questions](../../virtual-machines/linux/disk-encryption-faq.md)
+- [Azure Disk Encryption frequently asked questions](../../virtual-machines/linux/disk-encryption-faq.yml)
### Azure disk encryption with Azure AD (previous version)
The following articles provide guidance for encrypting Windows virtual machines.
- [Creating and configuring a key vault for Azure Disk Encryption](../../virtual-machines/windows/disk-encryption-key-vault.md) - [Azure Disk Encryption sample scripts](../../virtual-machines/windows/disk-encryption-sample-scripts.md) - [Azure Disk Encryption troubleshooting](../../virtual-machines/windows/disk-encryption-troubleshooting.md)-- [Azure Disk Encryption frequently asked questions](../../virtual-machines/windows/disk-encryption-faq.md)
+- [Azure Disk Encryption frequently asked questions](../../virtual-machines/windows/disk-encryption-faq.yml)
### Azure disk encryption with Azure AD (previous version)
sentinel Customize Alert Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/customize-alert-details.md
+
+ Title: Customize alert details in Azure Sentinel | Microsoft Docs
+description: Customize how alerts are named and described, along with their severity and assigned tactics, based on the alerts' content.
+
+documentationcenter: na
++
+editor: ''
+++
+ms.devlang: na
+
+ na
+ Last updated : 02/10/2021+++
+# Customize alert details in Azure Sentinel
+
+> [!IMPORTANT]
+>
+> - The alert details feature is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Introduction
+
+When you define a name and description for your scheduled analytics rules, and you assign them severities and MITRE ATT&CK tactics, all alerts generated by a particular rule - and all incidents created as a result - will be displayed with the same name, description, and so on, without regard to the particular content of a specific instance of the alert.
+
+With the **alert details** feature, you can tailor an alert's appearance to its content. Here you can select parameters in your alert that can be represented in the name or description of each instance of the alert, or that can contain the tactics and severity assigned to that instance of the alert. If the selected parameter has no value (or an invalid value in the case of tactics and severity), the alert details will revert to the defaults specified in the first page of the wizard.
+
+The procedure detailed below is part of the analytics rule creation wizard. It's treated here independently to address the scenario of adding or changing alert details in an existing analytics rule.
+
+## How to customize alert details
+
+1. From the Azure Sentinel navigation menu, select **Analytics**.
+
+1. Select a scheduled query rule and click **Edit**. Or create a new rule by clicking **Create > Scheduled query rule** at the top of the screen.
+
+1. Click the **Set rule logic** tab.
+
+1. In the **Alert enrichment (Preview)** section, expand **Alert details**.
+
+ :::image type="content" source="media/customize-alert-details/alert-enrichment.png" alt-text="Customize alert details":::
+
+1. In the now-expanded **Alert details** section, add free text that includes parameters corresponding to the details you want to display in the alert:
+
+ 1. In the **Alert Name Format** field, enter the text you want to appear as the name of the alert (the alert text), and include, in double curly brackets, any parameters you want to be part of the alert text.
+
+ Example: `Alert from {{ProviderName}}: {{AccountName}} failed to log on to computer {{ComputerName}} with IP address {{IPAddress}}.`
+
+ 1. Do the same with the **Alert Description Format** field.
+
+ 1. Use the **Tactic Column** and **Severity Column** fields only if your query results contain columns with this information in them. For each one, choose the column that contains the corresponding information.
+
+ If you change your mind, or if you made a mistake, you can remove an alert detail by clicking the trash can icon next to the **Tactic/Severity Column** fields or delete the free text from the **Alert Name/Description Format** fields.
+
+1. When you have finished customizing your alert details, continue to the next tab in the wizard. If you're editing an existing rule, click the **Review and create** tab. Once the rule validation is successful, click **Save**.
+
+## Next steps
+In this document, you learned how to customize alert details in Azure Sentinel analytics rules. To learn more about Azure Sentinel, see the following articles:
+- Get the complete picture on [scheduled query analytics rules](tutorial-detect-threats-custom.md).
+- Learn more about [entities in Azure Sentinel](entities-in-azure-sentinel.md).
sentinel Map Data Fields To Entities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/map-data-fields-to-entities.md
The procedure detailed below is part of the analytics rule creation wizard. It's
1. From the Azure Sentinel navigation menu, select **Analytics**.
-1. Select a scheduled query rule and click **Edit**. Or create a new rule by clicking **Create &#10132; Scheduled query rule** at the top of the screen.
+1. Select a scheduled query rule and click **Edit**. Or create a new rule by clicking **Create > Scheduled query rule** at the top of the screen.
-1. Click the **Set rule logic** tab.
+1. Click the **Set rule logic** tab.
- :::image type="content" source="media/map-data-fields-to-entities/map-entities.png" alt-text="Map fields to entities":::
+1. In the **Alert enrichment (Preview)** section, expand **Entity mapping**.
+
+ :::image type="content" source="media/map-data-fields-to-entities/alert-enrichment.png" alt-text="Expand entity mapping":::
+
+1. In the now-expanded **Entity mapping** section, select an entity type from the **Entity type** drop-down list.
-1. In the **Alert enhancement** section, under **Entity mapping**, select an entity type from the **Entity type** drop-down list.
+ :::image type="content" source="media/map-data-fields-to-entities/choose-entity-type.png" alt-text="Choose an entity type":::
1. Select an **identifier** for the entity. Identifiers are attributes of an entity that can sufficiently identify it. Choose one from the **Identifier** drop-down list, and then choose a data field from the **Value** drop-down list that will correspond to the identifier. With some exceptions, the **Value** list is populated by the data fields in the table defined as the subject of the rule query. You can define **up to three identifiers** for a given entity. Some identifiers are required, others are optional. You must choose at least one required identifier. If you don't, a warning message will instruct you which identifiers are required. For best results - for maximum unique identification - you should use **strong identifiers** whenever possible, and using multiple strong identifiers will enable greater correlation between data sources. See the full list of available [entities and identifiers](entities-reference.md).
+ :::image type="content" source="media/map-data-fields-to-entities/map-entities.png" alt-text="Map fields to entities":::
+ 1. Click **Add new entity** to map more entities. You can map **up to five entities** in a single analytics rule. You can also map more than one of the same type. For example, you can map two **IP** entities, one from a *source IP address* field and one from a *destination IP address* field. This way you can track them both. If you change your mind, or if you made a mistake, you can remove an entity mapping by clicking the trash can icon next to the entity drop-down list.
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/microsoft-365-defender-sentinel-integration.md
Once the Microsoft 365 Defender integration is connected, all the component aler
- Using both mechanisms together is completely supported, and can be used to facilitate the transition to the new Microsoft 365 Defender incident creation logic. Doing so will, however, create **duplicate incidents** for the same alerts. -- To avoid creating duplicate incidents for the same alerts, we recommend that customers turn off all **Microsoft incident creation rules** for Microsoft 365 products (Defender for Endpoint, Defender for Identity, and Defender for Office 365 - see Cloud App Security below) when connecting Microsoft 365 Defender. This can be done by disabling incident creation in the connector page. Keep in mind that if you do this, any filters that were applied by the incident creation rules will not be applied to Microsoft 365 Defender incident integration.
+- To avoid creating duplicate incidents for the same alerts, we recommend that customers turn off all **Microsoft incident creation rules** for Microsoft 365 products (Defender for Endpoint, Defender for Identity, Defender for Office 365, and Cloud App Security) when connecting Microsoft 365 Defender. This can be done by disabling incident creation in the connector page. Keep in mind that if you do this, any filters that were applied by the incident creation rules will not be applied to Microsoft 365 Defender incident integration.
-- For Microsoft Cloud App Security alerts, not all alert types are currently onboarded to Microsoft 365 Defender. To make sure you are still getting incidents for all Cloud App Security alerts, you must keep or create **Microsoft incident creation rules** for the [alert types *not onboarded* to Microsoft 365 Defender](microsoft-cloud-app-security-alerts-not-imported-microsoft-365-defender.md).
+ > [!NOTE]
+ > All Microsoft Cloud App Security alert types are now being onboarded to Microsoft 365 Defender.
### Working with Microsoft 365 Defender incidents in Azure Sentinel and bi-directional sync
sentinel Microsoft Cloud App Security Alerts Not Imported Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/microsoft-cloud-app-security-alerts-not-imported-microsoft-365-defender.md
- Title: Microsoft Cloud App Security alerts not imported into Azure Sentinel through Microsoft 365 Defender integration | Microsoft Docs
-description: This article displays the alerts from Microsoft Cloud App Security that must be ingested directly into Azure Sentinel, since they are not collected by Microsoft 365 Defender.
-
-cloud: na
------- Previously updated : 04/21/2021----
-# Microsoft Cloud App Security alerts not imported into Azure Sentinel through Microsoft 365 Defender integration
-
-Like the other Microsoft Defender components (Defender for Endpoint, Defender for Identity, and Defender for Office 365), Microsoft Cloud App Security generates alerts that are collected by Microsoft 365 Defender. Microsoft 365 Defender in turn produces incidents that are ingested by and [synchronized with Azure Sentinel](microsoft-365-defender-sentinel-integration.md#microsoft-365-defender-incidents-and-microsoft-incident-creation-rules) when the Microsoft 365 Defender connector is enabled.
-
-Unlike with the other three components, **not all types of** Cloud App Security alerts are onboarded to Microsoft 365 Defender, so that if you want the incidents from all Cloud App Security alerts in Azure Sentinel, you will have to adjust your Microsoft incident creation analytics rules accordingly, so that those alerts that are ingested directly to Sentinel continue to generate incidents, while those that are onboarded to Microsoft 365 Defender don't (so there won't be duplicates).
-
-## Cloud App Security alerts not onboarded to Microsoft 365 Defender
-
-The following alerts are not onboarded to Microsoft 365 Defender, and Microsoft incident creation rules resulting in these alerts should continue to be configured to generate incidents.
-
-| Cloud App Security alert display name | Cloud App Security alert name |
-|-|-|
-| **Access policy alert** | `ALERT_CABINET_INLINE_EVENT_MATCH` |
-| **Activity creation from Discovered Traffic log exceeded daily limit** | `ALERT_DISCOVERY_TRAFFIC_LOG_EXCEEDED_LIMIT` |
-| **Activity policy alert** | `ALERT_CABINET_EVENT_MATCH_AUDIT` |
-| **Anomalous exfiltration alert** | `ALERT_EXFILTRATION_DISCOVERY_ANOMALY_DETECTION` |
-| **Compromised account** | `ALERT_COMPROMISED_ACCOUNT` |
-| **Discovered app security breach alert** | `ALERT_MANAGEMENT_DISCOVERY_BREACHED_APP` |
-| **Inactive account** | `ALERT_ZOMBIE_USER` |
-| **Investigation Priority Score Increased** | `ALERT_UEBA_INVESTIGATION_PRIORITY_INCREASE` |
-| **Malicious OAuth app consent** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_MALICIOUS_OAUTH_APP_CONSENT` |
-| **Misleading OAuth app name** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_MISLEADING_APP_NAME` |
-| **Misleading publisher name for an OAuth app** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_MISLEADING_PUBLISHER_NAME` |
-| **New app discovered** | `ALERT_CABINET_DISCOVERY_NEW_SERVICE` |
-| **Non-secure redirect URL is used by an OAuth app** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_NON_SECURE_REDIRECT_URL` |
-| **OAuth app policy alert** | `ALERT_CABINET_APP_PERMISSION` |
-| **Suspicious activity alert** | `ALERT_SUSPICIOUS_ACTIVITY` |
-| **Suspicious cloud use alert** | `ALERT_DISCOVERY_ANOMALY_DETECTION` |
-| **Suspicious OAuth app name** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_SUSPICIOUS_APP_NAME` |
-| **System alert app connector error** | `ALERT_MANAGEMENT_DISCONNECTED_API` |
-| **System alert Cloud Discovery automatic log upload error** | `ALERT_MANAGEMENT_LOG_COLLECTOR_LOW_RATE` |
-| **System alert Cloud Discovery log-processing error** | `ALERT_MANAGEMENT_LOG_COLLECTOR_CONSTANTLY_FAILED_PARSING` |
-| **System alert ICAP connector error** | `ALERT_MANAGEMENT_DLP_CONNECTOR_ERROR` |
-| **System alert SIEM agent error** | `ALERT_MANAGEMENT_DISCONNECTED_SIEM` |
-| **System alert SIEM agent notifications** | `ALERT_MANAGEMENT_NOTIFICATIONS_SIEM` |
-| **Unusual region for cloud resource** | `MCAS_ALERT_ANUBIS_DETECTION_UNCOMMON_CLOUD_REGION` |
-|
-
-## Next steps
--- [Connect Microsoft 365 Defender](connect-microsoft-365-defender.md) to Azure Sentinel.-- Learn more about [Azure Sentinel](overview.md), [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender), and [Cloud App Security](/cloud-app-security/what-is-cloud-app-security).
sentinel Surface Custom Details In Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/surface-custom-details-in-alerts.md
The procedure detailed below is part of the analytics rule creation wizard. It's
1. From the Azure Sentinel navigation menu, select **Analytics**.
-1. Select a scheduled query rule and click **Edit**. Or create a new rule by clicking **Create &#10132; Scheduled query rule** at the top of the screen.
+1. Select a scheduled query rule and click **Edit**. Or create a new rule by clicking **Create > Scheduled query rule** at the top of the screen.
1. Click the **Set rule logic** tab.
-1. In the **Alert enhancement** section, select **Custom details**.
+1. In the **Alert enrichment (Preview)** section, expand **Custom details**.
- :::image type="content" source="media/surface-custom-details-in-alerts/alert-enhancement.png" alt-text="Find and select custom details":::
+ :::image type="content" source="media/surface-custom-details-in-alerts/alert-enrichment.png" alt-text="Find and select custom details":::
1. In the now-expanded **Custom details** section, add key-value pairs corresponding to the details you want to surface:
The procedure detailed below is part of the analytics rule creation wizard. It's
> [!NOTE] >
- >**Service limits**
+ > **Service limits**
> - You can define **up to 20 custom details** in a single analytics rule. > > - The size limit for all custom details, collectively, is **2 KB**.
sentinel Tutorial Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-custom.md
In the **Set rule logic** tab, you can either write a query directly in the **Ru
Learn more about surfacing custom details in alerts, and see the [complete instructions](surface-custom-details-in-alerts.md).
+- Use the **Alert details** configuration section to tailor the alert's presentation details to its actual content. Alert details allow you to display, for example, an attacker's IP address or account name in the title of the alert itself, so it will appear in your incidents queue, giving you a much richer and clearer picture of your threat landscape.
+
+ See complete instructions on [customizing your alert details](customize-alert-details.md).
+ ### Query scheduling and alert threshold - In the **Query scheduling** section, set the following parameters:
If you see that your query would trigger too many or too frequent alerts, you ca
Currently the number of alerts a rule can generate is capped at 20. If in a particular rule, **Event grouping** is set to **Trigger an alert for each event**, and the rule's query returns more than 20 events, each of the first 19 events will generate a unique alert, and the 20th alert will summarize the entire set of returned events. In other words, the 20th alert is what would have been generated under the **Group all events into a single alert** option.
+ If you choose this option, Azure Sentinel will add a new field, **OriginalQuery**, to the results of the query. Here is a comparison of the existing **Query** field and the new field:
+
+ | Field name | Contains | Running the query in this field<br>results in... |
+ | - | :-: | :-: |
+ | **Query** | The compressed record of the event that generated this instance of the alert | The event that generated this instance of the alert |
+ | **OriginalQuery** | The original query as written in the analytics&nbsp;rule | The most recent event in the timeframe in which the query runs, that fits the parameters defined by the query |
+ |
+
+ In other words, the **OriginalQuery** field behaves like the **Query** field usually behaves. The result of this extra field is that the problem described by the first item in the [Troubleshooting](#troubleshooting) section below has been solved.
+
> [!NOTE] > What's the difference between **events** and **alerts**? >
In the **Alert grouping** section, if you want a single incident to be generated
- **Group alerts triggered by this analytics rule into a single incident by**: Choose the basis on which alerts will be grouped together:
- |Option |Description |
- |||
- |**Group alerts into a single incident if all the entities match** | Alerts are grouped together if they share identical values for each of the mapped entities (defined in the [Set rule logic](#define-the-rule-query-logic-and-configure-settings) tab above). This is the recommended setting. |
- |**Group all alerts triggered by this rule into a single incident** | All the alerts generated by this rule are grouped together even if they share no identical values. |
- |**Group alerts into a single incident if the selected entities match** |Alerts are grouped together if they share identical values for some of the mapped entities, which you can select from the drop-down list). <br><br>You might want to use this setting if, for example, you want to create separate incidents based on the source or target IP addresses, or if you want to group alerts that match a specific entity and severity. <br><br>**Note**: When you select this option, you must have at least one entity type or field selected for the rule. Otherwise, the rule validation will fail and the the rule won't be created. |
- | | |
+ | Option | Description |
+ | - | - |
+ | **Group alerts into a single incident if all the entities match** | Alerts are grouped together if they share identical values for each of the mapped entities (defined in the [Set rule logic](#define-the-rule-query-logic-and-configure-settings) tab above). This is the recommended setting. |
+ | **Group all alerts triggered by this rule into a single incident** | All the alerts generated by this rule are grouped together even if they share no identical values. |
+ | **Group alerts into a single incident if the selected entities and details match** | Alerts are grouped together if they share identical values for all of the mapped entities, alert details, and custom details selected from the respective drop-down lists.<br><br>You might want to use this setting if, for example, you want to create separate incidents based on the source or target IP addresses, or if you want to group alerts that match a specific entity and severity.<br><br>**Note**: When you select this option, you must have at least one entity type or field selected for the rule. Otherwise, the rule validation will fail and the rule won't be created. |
+ |
- **Re-open closed matching incidents**: If an incident has been resolved and closed, and later on another alert is generated that should belong to that incident, set this setting to **Enabled** if you want the closed incident re-opened, and leave as **Disabled** if you want the alert to create a new incident.
If **event grouping** is set to **trigger an alert for each event**, then in cer
To see the events, manually remove the line with the hash from the rule's query, and run the query.
+> [!NOTE]
+> This issue has been solved by the addition of a new field, **OriginalQuery**, to the results when this event grouping option is selected. See the [description](#event-grouping-and-rule-suppression) above.
+ ### Issue: A scheduled rule failed to execute, or appears with AUTO DISABLED added to the name It's a rare occurrence that a scheduled query rule fails to run, but it can happen. Azure Sentinel classifies failures up front as either transient or permanent, based on the specific type of the failure and the circumstances that led to it.
sentinel Work With Anomaly Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/work-with-anomaly-rules.md
You can see how well an anomaly rule is performing by reviewing a sample of the
1. Set the **Time range** filter to **Last 24 hours**.
-1. Enter the following in the query window (in place of "Type your query here..."):
+1. Copy the Kusto query below and paste it in the query window (where it says "Type your query here or..."):
```kusto Anomalies
synapse-analytics Quickstart Gallery Sample Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/quickstart-gallery-sample-notebook.md
+
+ Title: 'Quickstart: Use a sample notebook from the Synapse Analytics gallery'
+description: Learn how to use a sample notebook from the Synapse Analytics gallery to explore data and build a machine learning model.
++++++ Last updated : 06/11/2021++++
+# Quickstart: Use a sample notebook from the Synapse Analytics gallery
+
+In this quickstart, you'll learn how to copy a sample machine learning notebook from the Synapse Analytics gallery into your workspace, modify it, and run it.
+The sample notebook ingests an Open Dataset of NYC Taxi trips and uses visualization to help you prepare the data. It then trains a model to predict whether there will be a tip on a given trip.
+
+This notebook demonstrates the basic steps used in creating a model: **data import**, **data prep**, **model training**, and **evaluation**. You can use this sample as a starting point for creating a model with your own data.
+
+## Prerequisites
+
+* [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with.
+* [Spark pool](../get-started-analyze-spark.md) in your Azure Synapse Analytics workspace.
+
+## Copy the notebook to your workspace
+
+1. Open your workspace and select **Learn** from the home page.
+1. In the **Knowledge center**, select **Browse gallery**.
+1. In the gallery, select **Notebooks**.
+1. Find and select the notebook "Data Exploration and ML Modeling - NYC taxi predict using Spark MLib".
+
+ :::image type="content" source="media\quickstart-gallery-sample-notebook\gallery-select-ml-notebook.png" alt-text="Select the machine learning sample notebook in the gallery.":::
+
+1. Select **Continue**.
+1. On the notebook preview page, select **Open notebook**. The sample notebook is copied into your workspace and opened.
+
+ :::image type="content" source="media\quickstart-gallery-sample-notebook\gallery-open-ml-notebook.png" alt-text="Open the machine learning sample notebook into your workspace.":::
+
+1. In the **Attach to** menu in the open notebook, select your Apache Spark pool.
+
+## Run the notebook
+
+The notebook is divided into multiple cells that each perform a specific function.
+You can run each cell manually, run the cells in sequence, or select **Run all** to run all the cells.
+
+Here are descriptions for each of the cells in the notebook:
+
+1. Import PySpark functions that the notebook uses.
+1. **Ingest Data** - Ingest data from the Azure Open Dataset **NycTlcYellow** into a local dataframe for processing. The code extracts data within a specific time period - you can modify the start and end dates to get different data.
+1. Downsample the dataset to make development faster. You can modify this step to change the sample size or the sampling seed.
+1. **Exploratory Data Analysis** - Display charts to view the data. This can give you an idea of what data preparation might be needed before creating the model.
+1. **Data Prep and Featurization** - Filter out outlier data discovered through visualization and create some useful derived variables.
+1. **Data Prep and Featurization Part 2** - Drop unneeded columns and create some additional features.
+1. **Encoding** - Convert string variables to numbers that the Logistic Regression model is expecting.
+1. **Generation of Testing and Training Data Sets** - Split the data into separate testing and training data sets. You can modify the fraction and randomizing seed used to split the data.
+1. **Train the Model** - Train a Logistic Regression model and display its "Area under ROC" metric to see how well the model is working. This step also saves the trained model in case you want to use it elsewhere. (A minimal stand-alone sketch of this training and evaluation pattern appears after this list.)
+1. **Evaluate and Visualize** - Plot the model's ROC curve to further evaluate the model.
+
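+The training and evaluation cells boil down to a few Spark MLlib calls. The following stand-alone sketch, shown only for orientation, reproduces the same pattern on a tiny made-up dataset; the column names (`tripDistance`, `fareAmount`, `tipped`) are illustrative stand-ins for the features and label the notebook actually derives from the NYC taxi data, and the notebook itself evaluates on a held-out test set rather than the training data.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.getOrCreate()

# Tiny illustrative dataset: trip distance and fare as features, 'tipped' as the label.
rows = [(1.0, 5.5, 1), (3.2, 12.0, 1), (0.5, 4.0, 0),
        (8.7, 30.0, 1), (2.1, 9.0, 0), (4.4, 15.5, 1)]
df = spark.createDataFrame(rows, ["tripDistance", "fareAmount", "tipped"])

# Assemble the feature columns into the single vector column MLlib expects.
assembler = VectorAssembler(inputCols=["tripDistance", "fareAmount"], outputCol="features")
data = assembler.transform(df)

# Train a logistic regression model to predict whether a trip is tipped.
lr = LogisticRegression(featuresCol="features", labelCol="tipped", maxIter=10)
model = lr.fit(data)

# Report area under the ROC curve for the trained model.
evaluator = BinaryClassificationEvaluator(labelCol="tipped", metricName="areaUnderROC")
print("Area under ROC:", evaluator.evaluate(model.transform(data)))
```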
+## Save the notebook
+
+To save your notebook, select **Publish** on the workspace command bar.
+
+## Copying the sample notebook
+
+To make a copy of this notebook, click the ellipsis in the top command bar and select **Clone** to create a copy in your workspace or **Export** to download a copy of the notebook (`.ipynb`) file.
++
+## Clean up resources
+
+To ensure the Spark instance is shut down when you're finished, end any connected sessions (notebooks). The pool shuts down when the **idle time** specified in the Apache Spark pool is reached. You can also select **stop session** from the status bar at the upper right of the notebook.
++
+## Next steps
+
+* [Check out more Synapse sample notebooks in GitHub](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning)
+* [Machine learning with Apache Spark](../spark/apache-spark-machine-learning-concept.md)
time-series-insights Concepts Streaming Ingress Throughput Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-streaming-ingress-throughput-limits.md
Title: 'Streaming ingestion throughput limitations- Azure Time Series Insights Gen2 | Microsoft Docs' description: Learn about ingress throughput limits in Azure Time Series Insights Gen2.---++++
time-series-insights Concepts Ux Panels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-ux-panels.md
Title: 'Visualize data in the Time Series Insights Explorer - Azure Time Series Insights Gen2| Microsoft Docs' description: Learn about features and options available in the Azure Time Series Insights Explorer.---++++
time-series-insights How To Edit Your Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-edit-your-model.md
Title: 'Data modeling in Gen2 environments - Azure Time Series Insights | Microsoft Docs' description: Learn about data modeling in Azure Time Series Insights Gen2.---++++
time-series-insights Quickstart Explore Tsi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/quickstart-explore-tsi.md
Title: 'Quickstart: Explore the Gen2 demo environment - Azure Time Series Insights Gen2 | Microsoft Docs' description: Explore key features of the Azure Time Series Insights Gen2 demo environment.-+ ---++++
In this quickstart, you learn how to use Azure Time Series Insights Gen2 to find
The Azure Time Series Insights Gen2 Explorer demonstrates historical data and root cause analysis. To get started:
-1. Go to theΓÇ»[Contoso Wind Farm demo](https://insights.timeseries.azure.com/preview/samples) environment.
+1. Go to the [Contoso Wind Farm demo](https://insights.timeseries.azure.com/preview/samples) environment.
1. If you're prompted, sign in to the Azure Time Series Insights Gen2 Explorer by using your Azure account credentials.
By using Azure Time Series Insights Gen2 and sensor telemetry, we've discovered
Two of the voltage sensors are operating comparably and within normal parameters. It looks like the **GridVoltagePhase3** sensor is the culprit.
-1. With highly contextual data added, the phase 3 drop-off appears even more to be the problem. Now, we have a good lead on the cause of the warning. We're ready to refer the issue to our maintenance team.
+1. With highly contextual data added, the phase 3 drop-off appears even more to be the problem. Now, we have a good lead on the cause of the warning. We're ready to refer the issue to our maintenance team.
* Change the display to overlay all **Generator System** sensors on the same chart scale.
time-series-insights Time Series Insights Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-customer-data-requests.md
Title: 'Customer data request featuresΓÇï - Azure Time Series Insights | Microsoft Docs' description: Learn about customer data request features in Azure Time Series Insights.---++++
time-series-insights Time Series Insights Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-explorer.md
Title: 'Explore data using the Explorer - Azure Time Series Insights | Microsoft
description: Learn how to use the Azure Time Series Insights Explorer to view your IoT data. ----++++ ms.devlang: csharp
time-series-insights Time Series Insights How To Configure Retention https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-how-to-configure-retention.md
Title: 'How to configure retention in your environment - Azure Time Series Insights | Microsoft Docs'
-description: Learn how to configure retention in your Azure Time Series Insights environment.
+description: Learn how to configure retention in your Azure Time Series Insights environment.
---++++ Last updated 09/29/2020
Each Azure Time Series Insights environment has an additional setting **Storage
- **Purge old data** (default) - **Pause ingress**
-For detailed information to better understand these settings, review [Understanding retention in Azure Time Series Insights](time-series-insights-concepts-retention.md).
+For detailed information to better understand these settings, review [Understanding retention in Azure Time Series Insights](time-series-insights-concepts-retention.md).
## Configure data retention
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disk-encryption-overview.md
+
+ Title: Overview of managed disk encryption options
+description: Overview of managed disk encryption options
+ Last updated : 06/05/2021+++++++
+# Overview of managed disk encryption options
+
+There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE) and encryption at host.
+
+- **Azure Disk Encryption** helps protect and safeguard your data to meet your organizational security and compliance commitments. ADE provides volume encryption for the OS and data disks of Azure virtual machines (VMs) through the use of the DM-Crypt feature of Linux or the [BitLocker](https://en.wikipedia.org/wiki/BitLocker) feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets. For full details, see [Azure Disk Encryption for Linux VMs](./linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md).
+
+- **Server-Side Encryption** (also referred to as encryption-at-rest or Azure Storage encryption) automatically encrypts data stored on Azure managed disks (OS and data disks) when persisting it to the cloud. For full details, see [Server-side encryption of Azure Disk Storage](./disk-encryption.md).
+
+- **Encryption at host** ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled are not encrypted with SSE; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For full details, see [Encryption at host - End-to-end encryption for your VM data](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). A CLI sketch follows this list.
+
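For illustration only, here is a minimal Azure CLI sketch of creating a VM with encryption at host enabled. The resource group, VM name, and image are placeholders, and this assumes the `EncryptionAtHost` feature has already been registered for the subscription and that a supported VM size is used.

```azurecli
# One-time feature registration per subscription (may take several minutes to complete).
az feature register --namespace Microsoft.Compute --name EncryptionAtHost

# Create a VM with host-based encryption enabled for its disk caches and temp disk.
az vm create \
  --resource-group MyResourceGroup \
  --name MyVm \
  --image UbuntuLTS \
  --encryption-at-host true
```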
+## Comparison
+
+Here is a comparison of SSE, ADE, and encryption at host.
+
+| | Encryption at rest (OS and data disks) | Temp disk encryption | Encryption of caches | Data flows encrypted between Compute and Storage | Customer control of keys | Azure Security Center disk encryption status |
+|--|--|--|--|--|--|--|
+| **Encryption at rest with platform-managed key (SSE+PMK)** | &#x2705; | &#10060; | &#10060; | &#10060; | &#10060; | Unhealthy, not applicable if exempt |
+| **Encryption at rest with customer-managed key (SSE+CMK)** | &#x2705; | &#10060; | &#10060; | &#10060; | &#x2705; | Unhealthy, not applicable if exempt |
+| **Azure Disk Encryption** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | Healthy |
+| **Encryption at Host** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | Unhealthy, not applicable if exempt |
+
+> [!Important]
+> For Encryption at Host, Azure Security Center does not detect the encryption state.
+
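As a quick way to see which of these options is in effect on an existing VM or disk, the following Azure CLI sketch uses placeholder resource names and only reads the relevant status fields.

```azurecli
# Azure Disk Encryption status for the VM's OS and data volumes.
az vm encryption show --resource-group MyResourceGroup --name MyVm

# Server-side encryption settings (platform-managed or customer-managed key) on a managed disk.
az disk show --resource-group MyResourceGroup --name MyOsDisk --query encryption

# Whether encryption at host is enabled on the VM.
az vm show --resource-group MyResourceGroup --name MyVm --query securityProfile.encryptionAtHost
```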
+## Next steps
+
+- [Azure Disk Encryption for Linux VMs](./linux/disk-encryption-overview.md)
+- [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md)
+- [Server-side encryption of Azure Disk Storage](./disk-encryption.md)
+- [Encryption at host](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data)
+- [Azure Security Fundamentals - Azure encryption overview](../security/fundamentals/encryption-overview.md)
virtual-machines Disk Encryption Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disk-encryption-faq.md
- Title: FAQ - Azure Disk Encryption for Linux VMs
-description: This article provides answers to frequently asked questions about Microsoft Azure Disk Encryption for Linux IaaS VMs.
- Previously updated : 06/05/2019
-# Azure Disk Encryption for Linux virtual machines FAQ
-
-This article provides answers to frequently asked questions (FAQ) about Azure Disk Encryption for Linux virtual machines (VMs). For more information about this service, see [Azure Disk Encryption overview](disk-encryption-overview.md).
-
-## What is Azure Disk Encryption for Linux VMs?
-
-Azure Disk Encryption for Linux VMs uses the dm-crypt feature of Linux to provide full disk encryption of the OS disk* and data disks. Additionally, it provides encryption of the temporary disk when using the [EncryptFormatAll feature](disk-encryption-linux.md#use-encryptformatall-feature-for-data-disks-on-linux-vms). The content flows encrypted from the VM to the Storage backend, thereby providing end-to-end encryption with a customer-managed key.
-
-See [Supported VMs and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems).
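As a minimal sketch of enabling it, assuming placeholder resource names and a key vault that is already enabled for disk encryption in the same region and subscription as the VM:

```azurecli
# Encrypt both the OS and data volumes of a Linux VM with Azure Disk Encryption.
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyLinuxVm \
  --disk-encryption-keyvault MyKeyVault \
  --volume-type ALL

# Check progress and final status.
az vm encryption show --resource-group MyResourceGroup --name MyLinuxVm
```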
-
-## Where is Azure Disk Encryption in general availability (GA)?
-
-Azure Disk Encryption for Linux VMs is in general availability in all Azure public regions.
-
-## What user experiences are available with Azure Disk Encryption?
-
-Azure Disk Encryption GA supports Azure Resource Manager templates, Azure PowerShell, and Azure CLI. The different user experiences give you flexibility. You have three different options for enabling disk encryption for your VMs. For more information on the user experience and step-by-step guidance available in Azure Disk Encryption, see [Azure Disk Encryption scenarios for Linux](disk-encryption-linux.md).
-
-## How much does Azure Disk Encryption cost?
-
-There's no charge for encrypting VM disks with Azure Disk Encryption, but there are charges associated with the use of Azure Key Vault. For more information on Azure Key Vault costs, see the [Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/) page.
-
-## How can I start using Azure Disk Encryption?
-
-To get started, read the [Azure Disk Encryption overview](disk-encryption-overview.md).
-
-## What VM sizes and operating systems support Azure Disk Encryption?
-
-The [Azure Disk Encryption overview](disk-encryption-overview.md) article lists the [VM sizes](disk-encryption-overview.md#supported-vms) and [VM operating systems](disk-encryption-overview.md#supported-operating-systems) that support Azure Disk Encryption.
-
-## Can I encrypt both boot and data volumes with Azure Disk Encryption?
-
-Yes, you can encrypt both boot and data volumes, or you can encrypt the data volume without having to encrypt the OS volume first.
-
-After you've encrypted the OS volume, disabling encryption on the OS volume isn't supported. For Linux VMs in a scale set, only the data volume can be encrypted.
-
-## Can I encrypt an unmounted volume with Azure Disk Encryption?
-
-No, Azure Disk Encryption only encrypts mounted volumes.
-
-## What is Storage server-side encryption?
-
-Storage server-side encryption encrypts Azure managed disks in Azure Storage. Managed disks are encrypted by default with Server-side encryption with a platform-managed key (as of June 10, 2017). You can manage encryption of managed disks with your own keys by specifying a customer-managed key. For more information see: [Server-side encryption of Azure managed disks](../disk-encryption.md).
-
-## How is Azure Disk Encryption different from Storage server-side encryption with customer-managed key and when should I use each solution?
-
-Azure Disk Encryption provides end-to-end encryption for the OS disk, data disks, and the temporary disk, using a customer-managed key.
-- If your requirements include encrypting all of the above and end-to-end encryption, use Azure Disk Encryption.
-- If your requirements include encrypting only data at rest with customer-managed key, then use [Server-side encryption with customer-managed keys](../disk-encryption.md). You cannot encrypt a disk with both Azure Disk Encryption and Storage server-side encryption with customer-managed keys.
-- If your Linux distro is not listed under [supported operating systems for Azure Disk Encryption](disk-encryption-overview.md#supported-operating-systems) or you are using a scenario called out in the [unsupported scenarios for Linux](disk-encryption-linux.md#unsupported-scenarios), consider [Server-side encryption with customer-managed keys](../disk-encryption.md).
-- If your organization's policy allows you to encrypt content at rest with an Azure-managed key, then no action is needed - the content is encrypted by default. For managed disks, the content inside storage is encrypted by default with Server-side encryption with platform-managed key. The key is managed by the Azure Storage service.
-## How do I rotate secrets or encryption keys?
-
-To rotate secrets, just call the same command you used originally to enable disk encryption, specifying a different Key Vault. To rotate the key encryption key, call the same command you used originally to enable disk encryption, specifying the new key encryption.
-
->[!WARNING]
-> - If you have previously used [Azure Disk Encryption with Azure AD app](disk-encryption-linux-aad.md) by specifying Azure AD credentials to encrypt this VM, you will have to continue to use this option to encrypt your VM. You can't use the newer Azure Disk Encryption on this encrypted VM, because switching away from the Azure AD application for an already encrypted VM isn't supported yet.
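To illustrate the rotation call described above, here is a hedged Azure CLI sketch that re-runs the enable command with a new key encryption key; the resource, vault, and key names are placeholders.

```azurecli
# Re-running enable with a different KEK (or a different key vault) rotates the wrapping key.
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyVm \
  --disk-encryption-keyvault MyKeyVault \
  --key-encryption-key MyNewKek \
  --volume-type ALL
```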
-
-## How do I add or remove a key encryption key if I didn't originally use one?
-
-To add a key encryption key, call the enable command again passing the key encryption key parameter. To remove a key encryption key, call the enable command again without the key encryption key parameter.
-
-## Does Azure Disk Encryption allow you to bring your own key (BYOK)?
-
-Yes, you can supply your own key encryption keys. These keys are safeguarded in Azure Key Vault, which is the key store for Azure Disk Encryption. For more information on the key encryption keys support scenarios, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).
-
-## Can I use an Azure-created key encryption key?
-
-Yes, you can use Azure Key Vault to generate a key encryption key for Azure disk encryption use. These keys are safeguarded in Azure Key Vault, which is the key store for Azure Disk Encryption. For more information on the key encryption key, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).
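For example, a key encryption key can be generated in Key Vault with the Azure CLI; this sketch assumes a vault named MyKeyVault already exists.

```azurecli
# Create an RSA key in Key Vault to use as the key encryption key (KEK).
az keyvault key create --vault-name MyKeyVault --name MyKek --kty RSA --size 2048
```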
-
-## Can I use an on-premises key management service or HSM to safeguard the encryption keys?
-
-You can't use the on-premises key management service or HSM to safeguard the encryption keys with Azure Disk Encryption. You can only use the Azure Key Vault service to safeguard the encryption keys. For more information on the key encryption key support scenarios, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).
-
-## What are the prerequisites to configure Azure Disk Encryption?
-
-There are prerequisites for Azure Disk Encryption. See the [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md) article to create a new key vault, or set up an existing key vault for disk encryption access to enable encryption, and safeguard secrets and keys. For more information on the key encryption key support scenarios, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).
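As a sketch of that prerequisite (placeholder names and region), the vault can be created, or an existing one updated, with the disk encryption access policy enabled:

```azurecli
# Create a key vault that Azure Disk Encryption may read secrets and keys from.
az keyvault create \
  --name MyKeyVault \
  --resource-group MyResourceGroup \
  --location eastus \
  --enabled-for-disk-encryption

# Or enable the flag on an existing vault.
az keyvault update --name MyKeyVault --enabled-for-disk-encryption true
```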
-
-## What are the prerequisites to configure Azure Disk Encryption with an Azure AD app (previous release)?
-
-There are prerequisites for Azure Disk Encryption. See the [Azure Disk Encryption with Azure AD](disk-encryption-linux-aad.md) content to create an Azure Active Directory application, create a new key vault, or set up an existing key vault for disk encryption access to enable encryption, and safeguard secrets and keys. For more information on the key encryption key support scenarios, see [Creating and configuring a key vault for Azure Disk Encryption with Azure AD](disk-encryption-key-vault-aad.md).
-
-## Is Azure Disk Encryption using an Azure AD app (previous release) still supported?
-Yes. Disk encryption using an Azure AD app is still supported. However, when encrypting new VMs it's recommended that you use the new method rather than encrypting with an Azure AD app.
-
-## Can I migrate VMs that were encrypted with an Azure AD app to encryption without an Azure AD app?
 Currently, there isn't a direct migration path from machines that were encrypted with an Azure AD app to encryption without an Azure AD app. Additionally, there isn't a direct path from encryption without an Azure AD app to encryption with an Azure AD app.
-
-## What version of Azure PowerShell does Azure Disk Encryption support?
-
-Use the latest version of the Azure PowerShell SDK to configure Azure Disk Encryption. Download the latest version of [Azure PowerShell](https://github.com/Azure/azure-powershell/releases). Azure Disk Encryption is *not* supported by Azure SDK version 1.1.0.
-
-> [!NOTE]
-> The Linux Azure disk encryption preview extension "Microsoft.OSTCExtension.AzureDiskEncryptionForLinux" is deprecated. This extension was published for Azure disk encryption preview release. You should not use the preview version of the extension in your testing or production deployment.
-
-> For deployment scenarios like Azure Resource Manager (ARM), where you need to deploy the Azure Disk Encryption extension for a Linux VM to enable encryption on your Linux IaaS VM, you must use the production-supported extension "Microsoft.Azure.Security.AzureDiskEncryptionForLinux".
-
-## Can I apply Azure Disk Encryption on my custom Linux image?
-
-You can't apply Azure Disk Encryption on your custom Linux image. Only the gallery Linux images for the supported distributions called out previously are supported. Custom Linux images aren't currently supported.
-
-## Can I apply updates to a Linux Red Hat VM that uses the yum update?
-
-Yes, you can perform a yum update on a Red Hat Linux VM. For more information, see [Azure Disk Encryption on an isolated network](disk-encryption-isolated-network.md).
-
-## What is the recommended Azure disk encryption workflow for Linux?
-
-The following workflow is recommended to have the best results on Linux:
-* Start from the unmodified stock gallery image corresponding to the needed OS distro and version
-* Back up any mounted drives that will be encrypted. This back up allows for recovery if there's a failure, for example if the VM is rebooted before encryption has completed.
-* Encrypt (can take several hours or even days depending on VM characteristics and size of any attached data disks)
-* Customize, and add software to the image as needed.
-
-If this workflow isn't possible, relying on [Storage Service Encryption](../../storage/common/storage-service-encryption.md) (SSE) at the platform storage account layer may be an alternative to full disk encryption using dm-crypt.
-
-## What is the disk "Bek Volume" or "/mnt/azure_bek_disk"?
-
-The "Bek volume" is a local data volume that securely stores the encryption keys for Encrypted Azure VMs.
-> [!NOTE]
-> Do not delete or edit any contents in this disk. Do not unmount the disk since the encryption key presence is needed for any encryption operations on the IaaS VM.
--
-## What encryption method does Azure Disk Encryption use?
-
-Azure Disk Encryption uses the dm-crypt default of aes-xts-plain64 with a 256-bit volume master key.
-
-## If I use EncryptFormatAll and specify all volume types, will it erase the data on the data drives that we already encrypted?
-No, data won't be erased from data drives that are already encrypted using Azure Disk Encryption. Similar to how EncryptFormatAll didn't re-encrypt the OS drive, it won't re-encrypt the already encrypted data drive. For more information, see the [EncryptFormatAll criteria](disk-encryption-linux.md#use-encryptformatall-feature-for-data-disks-on-linux-vms).
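To make the parameter concrete, here is a hedged Azure CLI sketch that targets data volumes only; the names are placeholders, and the flag formats (and therefore erases) any unencrypted data disks that meet the EncryptFormatAll criteria before encrypting them.

```azurecli
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyLinuxVm \
  --disk-encryption-keyvault MyKeyVault \
  --volume-type DATA \
  --encrypt-format-all
```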
-
-## Is XFS filesystem supported?
-Encryption of XFS OS disks is supported.
-
-Encryption of XFS data disks is supported only when the EncryptFormatAll parameter is used. This will reformat the volume, erasing any data previously there. For more information, see the [EncryptFormatAll criteria](disk-encryption-linux.md#use-encryptformatall-feature-for-data-disks-on-linux-vms).
-
-## Can I back up and restore an encrypted VM?
-
-Azure Backup provides a mechanism to back up and restore encrypted VMs within the same subscription and region. For instructions, please see [Back up and restore encrypted virtual machines with Azure Backup](../../backup/backup-azure-vms-encryption.md). Restoring an encrypted VM to a different region is not currently supported.
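As an illustration with placeholder vault, policy, and VM names, enabling Azure Backup protection for such a VM might look like this in the Azure CLI (the Recovery Services vault and backup policy are assumed to exist already).

```azurecli
az backup protection enable-for-vm \
  --resource-group MyResourceGroup \
  --vault-name MyRecoveryVault \
  --vm MyEncryptedVm \
  --policy-name DefaultPolicy
```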
-
-## Where can I go to ask questions or provide feedback?
-
-You can ask questions or provide feedback on the [Microsoft Q&A question page for Azure Disk Encryption](/answers/topics/azure-disk-encryption.html).
-
-## Next steps
-In this document, you learned more about the most frequent questions related to Azure Disk Encryption. For more information about this service, see the following articles:
-- [Azure Disk Encryption Overview](disk-encryption-overview.md)
-- [Apply disk encryption in Azure Security Center](../../security-center/asset-inventory.md)
-- [Azure data encryption at rest](../../security/fundamentals/encryption-atrest.md)
virtual-machines Disk Encryption Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disk-encryption-faq.md
- Title: FAQ - Azure Disk Encryption for Windows VMs
-description: This article provides answers to frequently asked questions about Microsoft Azure Disk Encryption for Windows IaaS VMs.
- Previously updated : 11/01/2019
-# Azure Disk Encryption for Windows virtual machines FAQ
-
-This article provides answers to frequently asked questions (FAQ) about Azure Disk Encryption for Windows VMs. For more information about this service, see [Azure Disk Encryption overview](disk-encryption-overview.md).
-
-## What is Azure Disk Encryption for Windows VMs?
-
-Azure Disk Encryption for Windows VMs uses the BitLocker feature of Windows to provide full disk encryption of the OS disk and data disks. Additionally, it provides encryption of the temporary disk when the [VolumeType parameter is All](disk-encryption-windows.md#enable-encryption-on-a-newly-added-data-disk). The content flows encrypted from the VM to the Storage backend, thereby providing end-to-end encryption with a customer-managed key.
-
-See [Supported VMs and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems).
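As a minimal sketch with placeholder names, and a key vault already enabled for disk encryption, enabling it on a Windows VM for all volume types might look like this:

```azurecli
# Encrypt the OS, data, and temporary disk of a Windows VM.
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyWindowsVm \
  --disk-encryption-keyvault MyKeyVault \
  --volume-type ALL
```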
-
-## Where is Azure Disk Encryption in general availability (GA)?
-
-Azure Disk Encryption is in general availability in all Azure public regions.
-
-## What user experiences are available with Azure Disk Encryption?
-
-Azure Disk Encryption GA supports Azure Resource Manager templates, Azure PowerShell, and Azure CLI. The different user experiences give you flexibility. You have three different options for enabling disk encryption for your VMs. For more information on the user experience and step-by-step guidance available in Azure Disk Encryption, see [Azure Disk Encryption scenarios for Windows](disk-encryption-windows.md).
-
-## How much does Azure Disk Encryption cost?
-
-There's no charge for encrypting VM disks with Azure Disk Encryption, but there are charges associated with the use of Azure Key Vault. For more information on Azure Key Vault costs, see the [Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/) page.
-
-## How can I start using Azure Disk Encryption?
-
-To get started, read the [Azure Disk Encryption overview](disk-encryption-overview.md).
-
-## What VM sizes and operating systems support Azure Disk Encryption?
-
-The [Azure Disk Encryption overview](disk-encryption-overview.md) article lists the [VM sizes](disk-encryption-overview.md#supported-vms) and [VM operating systems](disk-encryption-overview.md#supported-operating-systems) that support Azure Disk Encryption.
-
-## Can I encrypt both boot and data volumes with Azure Disk Encryption?
-
-You can encrypt both boot and data volumes, but you can't encrypt the data without first encrypting the OS volume.
-
-## Can I encrypt an unmounted volume with Azure Disk Encryption?
-
-No, Azure Disk Encryption only encrypts mounted volumes.
-
-## What is Storage server-side encryption?
-
-Storage server-side encryption encrypts Azure managed disks in Azure Storage. Managed disks are encrypted by default with Server-side encryption with a platform-managed key (as of June 10, 2017). You can manage encryption of managed disks with your own keys by specifying a customer-managed key. For more information, see [Server-side encryption of Azure managed disks](../disk-encryption.md).
-
-## How is Azure Disk Encryption different from Storage server-side encryption with customer-managed key and when should I use each solution?
-
-Azure Disk Encryption provides end-to-end encryption for the OS disk, data disks, and the temporary disk with a customer-managed key.
-- If your requirements include encrypting all of the above and end-to-end encryption, use Azure Disk Encryption.
-- If your requirements include encrypting only data at rest with customer-managed key, then use [Server-side encryption with customer-managed keys](../disk-encryption.md). You cannot encrypt a disk with both Azure Disk Encryption and Storage server-side encryption with customer-managed keys.
-- If you are using a scenario called out in [unsupported scenarios for Windows](disk-encryption-windows.md#unsupported-scenarios), consider [Server-side encryption with customer-managed keys](../disk-encryption.md).
-- If your organization's policy allows you to encrypt content at rest with an Azure-managed key, then no action is needed - the content is encrypted by default. For managed disks, the content inside storage is encrypted by default with Server-side encryption with platform-managed key. The key is managed by the Azure Storage service.
-## How do I rotate secrets or encryption keys?
-
-To rotate secrets, just call the same command you used originally to enable disk encryption, specifying a different Key Vault. To rotate the key encryption key, call the same command you used originally to enable disk encryption, specifying the new key encryption.
-
->[!WARNING]
-> - If you have previously used [Azure Disk Encryption with Azure AD app](disk-encryption-windows-aad.md) by specifying Azure AD credentials to encrypt this VM, you will have to continue to use this option to encrypt your VM. You can't use the newer Azure Disk Encryption on this encrypted VM, because switching away from the Azure AD application for an already encrypted VM isn't supported yet.
-
-## How do I add or remove a key encryption key if I didn't originally use one?
-
-To add a key encryption key, call the enable command again passing the key encryption key parameter. To remove a key encryption key, call the enable command again without the key encryption key parameter.
-
-## Does Azure Disk Encryption allow you to bring your own key (BYOK)?
-
-Yes, you can supply your own key encryption keys. These keys are safeguarded in Azure Key Vault, which is the key store for Azure Disk Encryption. For more information on the key encryption keys support scenarios, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).
-
-## Can I use an Azure-created key encryption key?
-
-Yes, you can use Azure Key Vault to generate a key encryption key for Azure disk encryption use. These keys are safeguarded in Azure Key Vault, which is the key store for Azure Disk Encryption. For more information on the key encryption key, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).
-
-## Can I use an on-premises key management service or HSM to safeguard the encryption keys?
-
-You can't use the on-premises key management service or HSM to safeguard the encryption keys with Azure Disk Encryption. You can only use the Azure Key Vault service to safeguard the encryption keys. For more information on the key encryption key support scenarios, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).
-
-## What are the prerequisites to configure Azure Disk Encryption?
-
-There are prerequisites for Azure Disk Encryption. See the [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md) article to create a new key vault, or set up an existing key vault for disk encryption access to enable encryption, and safeguard secrets and keys. For more information on the key encryption key support scenarios, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).
-
-## What are the prerequisites to configure Azure Disk Encryption with an Azure AD app (previous release)?
-
-There are prerequisites for Azure Disk Encryption. See the [Azure Disk Encryption with Azure AD](disk-encryption-windows-aad.md) content to create an Azure Active Directory application, create a new key vault, or set up an existing key vault for disk encryption access to enable encryption, and safeguard secrets and keys. For more information on the key encryption key support scenarios, see [Creating and configuring a key vault for Azure Disk Encryption with Azure AD](disk-encryption-key-vault-aad.md).
-
-## Is Azure Disk Encryption using an Azure AD app (previous release) still supported?
-Yes. Disk encryption using an Azure AD app is still supported. However, when encrypting new VMs it's recommended that you use the new method rather than encrypting with an Azure AD app.
-
-## Can I migrate VMs that were encrypted with an Azure AD app to encryption without an Azure AD app?
 Currently, there isn't a direct migration path from machines that were encrypted with an Azure AD app to encryption without an Azure AD app. Additionally, there isn't a direct path from encryption without an Azure AD app to encryption with an Azure AD app.
-
-## What version of Azure PowerShell does Azure Disk Encryption support?
-
-Use the latest version of the Azure PowerShell SDK to configure Azure Disk Encryption. Download the latest version of [Azure PowerShell](https://github.com/Azure/azure-powershell/releases). Azure Disk Encryption is *not* supported by Azure SDK version 1.1.0.
-
-## What is the disk "Bek Volume" or "/mnt/azure_bek_disk"?
-
-The "Bek volume" is a local data volume that securely stores the encryption keys for Encrypted Azure VMs.
-
-> [!NOTE]
-> Do not delete or edit any contents in this disk. Do not unmount the disk since the encryption key presence is needed for any encryption operations on the IaaS VM.
-
-## What encryption method does Azure Disk Encryption use?
-
-Azure Disk Encryption selects the encryption method in BitLocker based on the version of Windows as follows:
-
-| Windows Versions | Version | Encryption Method |
-|-|--|--|
-| Windows Server 2012, Windows 10, or greater | >=1511 |XTS-AES 256 bit |
-| Windows Server 2012, Windows 8, 8.1, 10 | < 1511 |AES 256 bit * |
-| Windows Server 2008R2 | |AES 256 bit with Diffuser |
-
-\* AES 256 bit with Diffuser isn't supported in Windows 2012 and later.
-
-To determine Windows OS version, run the 'winver' tool in your virtual machine.
-
-## Can I back up and restore an encrypted VM?
-
-Azure Backup provides a mechanism to back up and restore encrypted VMs within the same subscription and region. For instructions, please see [Back up and restore encrypted virtual machines with Azure Backup](../../backup/backup-azure-vms-encryption.md). Restoring an encrypted VM to a different region is not currently supported.
-
-## Where can I go to ask questions or provide feedback?
-
-You can ask questions or provide feedback on the [Microsoft Q&A question page for Azure Disk Encryption](/answers/topics/azure-disk-encryption.html).
-
-## Next steps
-In this document, you learned more about the most frequent questions related to Azure Disk Encryption. For more information about this service, see the following articles:
-- [Azure Disk Encryption Overview](disk-encryption-overview.md)
-- [Apply disk encryption in Azure Security Center](../../security-center/asset-inventory.md)
-- [Azure data encryption at rest](../../security/fundamentals/encryption-atrest.md)
virtual-network Manage Subnet Delegation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/manage-subnet-delegation.md
The built-in [Network Contributor](../role-based-access-control/built-in-roles.m
In this section, you delegate the subnet that you created in the preceding section to an Azure service.

1. In the portal's search bar, enter *myVirtualNetwork*. When **myVirtualNetwork** appears in the search results, select it.
-2. In the search results, select *myVirtualNetwork*.
-3. Select **Subnets**, under **SETTINGS**, and then select **mySubnet**.
-4. On the *mySubnet* page, for the **Subnet delegation** list, select from the services listed under **Delegate subnet to a service** (for example, **Microsoft.DBforPostgreSQL/serversv2**).
+2. Select **Subnets**, under **SETTINGS**, and then select **mySubnet**.
+3. On the *mySubnet* page, for the **Subnet delegation** list, select from the services listed under **Delegate subnet to a service** (for example, **Microsoft.DBforPostgreSQL/serversv2**).
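The same delegation shown in the preceding steps can also be applied from the command line; here is a hedged Azure CLI sketch, assuming the resources were created in a resource group named myResourceGroup.

```azurecli
# Delegate mySubnet to the selected service.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVirtualNetwork \
  --name mySubnet \
  --delegations Microsoft.DBforPostgreSQL/serversv2
```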
### Remove subnet delegation from an Azure service

1. In the portal's search bar, enter *myVirtualNetwork*. When **myVirtualNetwork** appears in the search results, select it.
-2. In the search results, select *myVirtualNetwork*.
-3. Select **Subnets**, under **SETTINGS**, and then select **mySubnet**.
-4. In *mySubnet* page, for the **Subnet delegation** list, select **None** from the services listed under **Delegate subnet to a service**.
+2. Select **Subnets**, under **SETTINGS**, and then select **mySubnet**.
+3. On the *mySubnet* page, for the **Subnet delegation** list, select **None** from the services listed under **Delegate subnet to a service**.
## Azure CLI