Updates from: 02/16/2021 04:04:11
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/custom-policy-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-get-started.md
@@ -107,7 +107,7 @@ Next, expose the API by adding a scope:
Next, specify that the application should be treated as a public client: 1. In the left menu, under **Manage**, select **Authentication**.
-1. Under **Advanced settings**, enable **Treat application as a public client** (select **Yes**). Ensure that **"allowPublicClient": true** is set in the application manifest.
+1. Under **Advanced settings**, in the **Allow public client flows** section, set **Enable the following mobile and desktop flows** to **Yes**. Ensure that **"allowPublicClient": true** is set in the application manifest.
1. Select **Save**. Now, grant permissions to the API scope you exposed earlier in the *IdentityExperienceFramework* registration:
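As a quick check of the public client setting described above, the app registration can be inspected from the Azure CLI. This is a hedged sketch, not part of the original article; the application (client) ID is a placeholder, and the property name *allowPublicClient* refers to what the portal's **Manifest** blade displays.

```bash
# Show the app registration and confirm the public client setting in the returned
# manifest (displayed as "allowPublicClient" in the portal's Manifest blade).
az ad app show --id <application-client-id>
```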
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-passwordless-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
@@ -148,7 +148,7 @@ There are three types of passwordless sign-in deployments available with securit
- Azure Active Directory web apps on a supported browser - Azure Active Directory Joined Windows 10 devices - Hybrid Azure Active Directory Joined Windows 10 devices (preview)
- - Provides access to both cloud-based and on premises resources. For more information about access to on-premises resources, see [SSO to on-premises resources using FIDOP2 keys](./howto-authentication-passwordless-security-key-on-premises.md)
+ - Provides access to both cloud-based and on-premises resources. For more information about access to on-premises resources, see [SSO to on-premises resources using FIDO2 keys](./howto-authentication-passwordless-security-key-on-premises.md)
You must enable **Compatible FIDO2 security keys**. Microsoft announced [key partnerships with FIDO2 key vendors](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Microsoft-passwordless-partnership-leads-to-innovation-and-great/ba-p/566493).
@@ -328,4 +328,4 @@ Follow the steps in the article, [Enable passwordless security key sign in for A
- [Enable passwordless security keys for sign in for Azure AD](howto-authentication-passwordless-security-key.md) - [Enable passwordless sign-in with the Microsoft Authenticator app](howto-authentication-passwordless-phone.md)-- [Learn more about Authentication methods usage & insights](howto-authentication-methods-usage-insights.md)
+- [Learn more about Authentication methods usage & insights](howto-authentication-methods-usage-insights.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/cybsafe-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cybsafe-provisioning-tutorial.md
@@ -117,6 +117,10 @@ This section guides you through the steps to configure the Azure AD provisioning
|locale|String| |timezone|String| |userType|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
+
10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to CybSafe**.
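For reference, the sketch below shows how the newly supported enterprise extension attributes appear in a SCIM 2.0 user payload. This is illustrative only and is not taken from the article: the endpoint URL, token, and attribute values are placeholders, and CybSafe's actual endpoint paths may differ.

```bash
# Illustrative SCIM 2.0 request showing the enterprise extension attributes
# (department, division, organization) that provisioning can now map.
curl -X POST "https://<cybsafe-scim-endpoint>/Users" \
  -H "Authorization: Bearer <scim-token>" \
  -H "Content-Type: application/scim+json" \
  -d '{
    "schemas": [
      "urn:ietf:params:scim:schemas:core:2.0:User",
      "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
    ],
    "userName": "user@contoso.com",
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
      "department": "Finance",
      "division": "EMEA",
      "organization": "Contoso"
    }
  }'
```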
@@ -150,6 +154,10 @@ Once you've configured provisioning, use the following resources to monitor your
2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion 3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change log
+
+* 02/15/2021 - The user enterprise extension attributes **department**, **division**, and **organization** have been added.
+ ## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/hoxhunt-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/hoxhunt-provisioning-tutorial.md
@@ -37,18 +37,14 @@ The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant) * A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * A Hoxhunt tenant.
-* A user account in Hoxhunt with Admin permissions.
-
+* SCIM API key and SCIM endpoint URL for your organization (configured by Hoxhunt support).
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning). 2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts). 3. Determine what data to [map between Azure AD and Hoxhunt](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes). ## Step 2. Configure Hoxhunt to support provisioning with Azure AD-
-To configure Hoxhunt to support provisioning with Azure AD - please write an email to Hoxhunt Support (support@hoxhunt.com).
-They will provide the **Authentication Token** and **SCIM Endpoint URL**.
-
+Contact [Hoxhunt support](mailto:support@hoxhunt.com) to receive the SCIM API key and SCIM endpoint URL needed to configure Hoxhunt to support provisioning with Azure AD.
## Step 3. Add Hoxhunt from the Azure AD application gallery Add Hoxhunt from the Azure AD application gallery to start managing provisioning to Hoxhunt. If you have previously set up Hoxhunt for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/parsable-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/parsable-provisioning-tutorial.md
@@ -28,6 +28,7 @@ This tutorial describes the steps you need to perform in both Parsable and Azure
> * Create users in Parsable > * Remove users in Parsable when they do not require access anymore > * Keep user attributes synchronized between Azure AD and Parsable
+> * Provision groups and group memberships in Parsable
## Prerequisites
@@ -103,17 +104,25 @@ This section guides you through the steps to configure the Azure AD provisioning
|userName|String|✓| |displayName|String|
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Parsable**.
-11. To enable the Azure AD provisioning service for Parsable, change the **Provisioning Status** to **On** in the **Settings** section.
+11. Review the group attributes that are synchronized from Azure AD to Parsable in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Parsable for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |displayName|String|✓|
+ |members|Reference|
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Parsable, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-12. Define the users and/or groups that you would like to provision to Parsable by choosing the desired values in **Scope** in the **Settings** section.
+14. Define the users and/or groups that you would like to provision to Parsable by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-13. When you are ready to provision, click **Save**.
+15. When you are ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
@@ -126,6 +135,10 @@ Once you've configured provisioning, use the following resources to monitor your
2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion 3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change log
+
+* 02/15/2021 - Group provisioning has been enabled.
+ ## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/tutorial-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tutorial-list.md
@@ -100,6 +100,7 @@ To find more tutorials, use the table of contents on the left.
| ![logo-OneDesk](./medi)| | ![logo-OpsGenie](./medi)| | ![logo-People](./medi)|
+| ![logo-Perimeter 81](./medi)|
| ![logo-productboard](./medi)| | ![logo-PurelyHR](./medi)| | ![logo-RingCentral](./medi)|
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-run-local.md
@@ -36,8 +36,8 @@ You can only install one version of Core Tools on a given computer. Unless other
## Prerequisites
-Azure Functions Core Tools currently depends on the Azure CLI for authenticating with your Azure account.
-This means that you must [install the Azure CLI locally](/cli/azure/install-azure-cli) to be able to [publish to Azure](#publish) from Azure Functions Core Tools.
+Azure Functions Core Tools currently depends on either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) for authenticating with your Azure account.
+This means that you must install one of these tools to be able to [publish to Azure](#publish) from Azure Functions Core Tools.
## Install the Azure Functions Core Tools
@@ -501,7 +501,7 @@ func run MyHttpTrigger -c '{\"name\": \"Azure\"}'
The Azure Functions Core Tools supports two types of deployment: deploying function project files directly to your function app via [Zip Deploy](functions-deployment-technologies.md#zip-deploy) and [deploying a custom Docker container](functions-deployment-technologies.md#docker-container). You must have already [created a function app in your Azure subscription](functions-cli-samples.md#create), to which you'll deploy your code. Projects that require compilation should be built so that the binaries can be deployed. >[!IMPORTANT]
->You must have the [Azure CLI](/cli/azure/install-azure-cli) installed locally to be able to publish to Azure from Core Tools.
+>You must have the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) installed locally to be able to publish to Azure from Core Tools.
A project folder may contain language-specific files and directories that shouldn't be published. Excluded items are listed in a .funcignore file in the root project folder.
@@ -516,7 +516,7 @@ func azure functionapp publish <FunctionAppName>
>[!IMPORTANT] > Java uses Maven to publish your local project to Azure. Use the following command to publish to Azure: `mvn azure-functions:deploy`. Azure resources are created during initial deployment.
-This command publishes to an existing function app in Azure. You'll get an error if you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription. To learn how to create a function app from the command prompt or terminal window using the Azure CLI, see [Create a Function App for serverless execution](./scripts/functions-cli-create-serverless.md). By default, this command uses [remote build](functions-deployment-technologies.md#remote-build) and deploys your app to [run from the deployment package](run-functions-from-deployment-package.md). To disable this recommended deployment mode, use the `--nozip` option.
+This command publishes to an existing function app in Azure. You'll get an error if you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription. To learn how to create a function app from the command prompt or terminal window using the Azure CLI or Azure PowerShell, see [Create a Function App for serverless execution](./scripts/functions-cli-create-serverless.md). By default, this command uses [remote build](functions-deployment-technologies.md#remote-build) and deploys your app to [run from the deployment package](run-functions-from-deployment-package.md). To disable this recommended deployment mode, use the `--nozip` option.
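A minimal sketch of the publish flow described above, assuming the Azure CLI is the tool you installed for authentication; `<FunctionAppName>` is a placeholder for an existing function app in your subscription.

```bash
# Sign in first (Core Tools uses the Azure CLI or Azure PowerShell for authentication),
# then publish the current project folder to an existing function app.
az login
func azure functionapp publish <FunctionAppName>

# To opt out of the default run-from-package deployment mode mentioned above:
# func azure functionapp publish <FunctionAppName> --nozip
```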
>[!IMPORTANT] > When you create a function app in the Azure portal, it uses version 3.x of the Function runtime by default. To make the function app use version 1.x of the runtime, follow the instructions in [Run on version 1.x](functions-versions.md#creating-1x-apps).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/ip-collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-collection.md
@@ -236,7 +236,7 @@ requests
Newly collected IP addresses will appear in the `customDimensions_client-ip` column. The default `client-ip` column will still have all four octets zeroed out.
-If testing from localhost, and the value for `customDimensions_client-ip` is `::1`, this value is expected behavior. `::1` represents the loopback address in IPv6. It's equivalent to `127.0.01` in IPv4.
+If you test from localhost and the value for `customDimensions_client-ip` is `::1`, this is expected behavior. `::1` represents the loopback address in IPv6. It's equivalent to `127.0.0.1` in IPv4.
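A hedged sketch of checking the newly collected addresses from the command line; it assumes the Application Insights CLI extension is available, and the resource name and resource group are placeholders.

```bash
# Return a few recent requests where the collected client IP is the IPv6 loopback (::1),
# which is expected when testing from localhost.
az monitor app-insights query \
  --app <application-insights-resource-name> \
  --resource-group <resource-group> \
  --analytics-query "requests | where tostring(customDimensions['client-ip']) == '::1' | take 10"
```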
## Next Steps
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-azure-redhat4-setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/container-insights-azure-redhat4-setup.md
@@ -116,7 +116,7 @@ If you don't have a workspace to specify, you can skip to the [Integrate with th
export kubeContext="<kubeContext name of your ARO v4 cluster>" ```
- Example:
+ Here is the command to run once you have populated the three variables with the export commands (a complete sketch follows):
`bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --kube-context $kubeContext --workspace-id $logAnalyticsWorkspaceResourceId`
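Putting the pieces together, a sketch of the full sequence, assuming you have already downloaded `enable-monitoring.sh` as described earlier in the article; the resource IDs and kube context name are placeholders.

```bash
# Populate the three variables the script expects, then run it.
export azureAroV4ClusterResourceId="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.RedHatOpenShift/openShiftClusters/<cluster-name>"
export logAnalyticsWorkspaceResourceId="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
export kubeContext="<kubeContext name of your ARO v4 cluster>"

bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --kube-context $kubeContext --workspace-id $logAnalyticsWorkspaceResourceId
```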
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor.md
@@ -120,7 +120,7 @@ The script creates registry keys required by the solution. It also creates Windo
### Configure the solution
-1. Add the Network Performance Monitor solution to your workspace from the [Azure marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.NetworkMonitoringOMS?tab=Overview). You also can use the process described in [Add Azure Monitor solutions from the Solutions Gallery](./solutions.md).
+1. Add the Network Performance Monitor solution to your workspace from the [Azure marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). You also can use the process described in [Add Azure Monitor solutions from the Solutions Gallery](./solutions.md).
2. Open your Log Analytics workspace, and select the **Overview** tile. 3. Select the **Network Performance Monitor** tile with the message *Solution requires additional configuration*.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-connector-deletion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-connector-deletion.md
@@ -0,0 +1,46 @@
+
+ Title: Deletion of ITSM connector and the actions that are associated with it
+description: This article explains how to delete an ITSM connector and the action groups that are associated with it.
+ Last updated: 12/29/2020
+# Deletion of unused ITSM connectors
+
+The process of deleting an unused connector has two phases:
+
+1. Deletion of the associated actions: delete all the actions that are associated with the ITSM connector, so that no actions are left without a connector, which might cause errors in your subscription.
+
+2. Deletion of the unused ITSM connector.
+
+## Deletion of the associated actions
+
+1. To find the action group, go to "Monitor".
+ ![Screenshot of monitor selection.](media/itsmc-connector-deletion/itsmc-monitor-selection.png)
+
+2. Select "Alerts"
+ ![Screenshot of alerts selection.](media/itsmc-connector-deletion/itsmc-alert-selection.png)
+3. Select "Manage Actions"
+ ![Screenshot of manage actions selection.](media/itsmc-connector-deletion/itsmc-actions-selection.png)
+4. Select all the ITSM connectors that are connected to Cherwell
+ ![Screenshot of ITSM connectors that are connected to Cherwell.](media/itsmc-connector-deletion/itsmc-actions-screen.png)
+5. Delete the action group
+ ![Screenshot of action group deletion.](media/itsmc-connector-deletion/itsmc-action-deletion.png)
+
+## Deletion of the unused ITSM connector
+
+1. Search for and select "ServiceDesk" LA in the top search bar
+ ![Screenshot of search and select "ServiceDesk" LA.](media/itsmc-connector-deletion/itsmc-connector-selection.png)
+2. Select "ITSM Connections", and then select the Cherwell connector
+ ![Screenshot of Cherwell ITSM connectors.](media/itsmc-connector-deletion/itsmc-cherwell-connector.png)
+3. Select "Delete"
+ ![Screenshot of ITSM connector deletion.](media/itsmc-connector-deletion/itsmc-connector-deletion.png)
+
+## Next steps
+
+* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/logicapp-flow-connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/logicapp-flow-connector.md
@@ -21,6 +21,7 @@ The Azure Monitor Logs connector has these limits:
* Max query response size 100 MB * Max number of records: 500,000 * Max query timeout 110 seconds.
+* Chart visualizations may be available in the Logs page but missing in the connector, because the connector and the Logs page currently don't use the same charting libraries.
Depending on the size of your data and the query you use, the connector may hit its limits and fail. You can work around such cases by adjusting the trigger recurrence to run more frequently and query less data. You can use queries that aggregate your data to return fewer records and columns.
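As one way to keep results small, the sketch below shows an aggregating query of the kind mentioned above. It is a hedged example that assumes the Azure CLI Log Analytics command is available; the workspace GUID is a placeholder, and the same query text can be reused in the connector action.

```bash
# Summarizing by operation and hour returns far fewer records and columns
# than selecting raw rows, which helps stay under the connector limits.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "AzureActivity | summarize count() by OperationName, bin(TimeGenerated, 1h)" \
  --timespan P1D
```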
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/partners.md
@@ -272,6 +272,24 @@ SIGNL4 - the mobile alerting app for operations teams - is the fastest way to ro
[SIGNL4 documentation](https://www.signl4.com/blog/mobile-alert-notifications-azure-monitor/)
+## Site24x7
+
+![Site24x7 Logo](./media/partners/site24-7.png)
+
+Site24x7 provides an advanced, full-stack Azure monitoring solution that delivers visibility and insight into your applications, allowing application owners to detect performance bottlenecks rapidly, automate fault resolution, and optimize performance.
+With Site24x7 Azure Monitoring, you will be able to:
+
+* Monitor more than 100 Azure IaaS and PaaS services.
+* Get in-depth monitoring for Windows and Linux VMs with exclusive Azure extensions, right from the Azure Marketplace.
+* Troubleshoot applications with insight on logs from Azure. Send logs to Site24x7, save search queries, set query-based alerts, and manage Azure logs from a single dashboard.
+* Detect any service health issues and ensure reliable deployments via the Azure Deployment Manager (ADM) Health Check.
+* Automate fault resolution with a set of IT Automation tools.
+* Monitor your complete Microsoft ecosystem, including SQL, Exchange, Active Directory, Office 365, IIS, and Hyper-V applications.
+* Integrate seamlessly with third party services like Microsoft Teams, PagerDuty, Zapier, and more.
+
+[Site24x7 documentation](https://www.site24x7.com/)
++ ## SolarWinds [SolarWinds documentation](https://www.solarwinds.com/topics/azure-monitoring)
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
@@ -437,17 +437,18 @@ This tutorial explains how to create a Cloud Service (extended support) deployme
] } }
- }
+ }
+ ]
} ```
-8. Deploy the template and create the Cloud Service (extended support) deployment.
+8. Deploy the template and parameter file (which supplies values for the parameters defined in the template file) to create the Cloud Service (extended support) deployment. Refer to these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support) as needed.
```powershell
- New-AzResourceGroupDeployment -ResourceGroupName “ContosOrg -TemplateFile "file path to your template file” 
+ New-AzResourceGroupDeployment -ResourceGroupName "ContosOrg" -TemplateFile "file path to your template file" -TemplateParameterFile "file path to your parameter file"
``` ## Next steps - Review [frequently asked questions](faq.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Content-Moderator/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/encrypt-data-at-rest.md
@@ -0,0 +1,40 @@
+
+ Title: Content Moderator encryption of data at rest
+
+description: Content Moderator encryption of data at rest.
+ Last updated: 03/13/2020
+#Customer intent: As a user of the Content Moderator service, I want to learn how encryption at rest works.
++
+# Content Moderator encryption of data at rest
+
+Content Moderator automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Content Moderator Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Content Moderator service, you will need to create a new Content Moderator resource and select E0 as the Pricing Tier. Once your Content Moderator resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
+++
+## Enable data encryption for your Content Moderator Team
+
+To enable data encryption for your Content Moderator Review Team, see the [Quickstart: Try Content Moderator on the web](quick-start.md#create-a-review-team).
+
+> [!NOTE]
+> You'll need to provide a _Resource ID_ with the Content Moderator E0 pricing tier.
+
+## Next steps
+
+* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/encrypt-data-at-rest.md
@@ -0,0 +1,40 @@
+
+ Title: Custom Vision encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Custom Vision, and how to enable and manage CMK.
+ Last updated: 08/28/2020
+#Customer intent: As a user of the Custom Vision service, I want to learn how encryption at rest works.
++
+# Custom Vision encryption of data at rest
+
+Azure Custom Vision automatically encrypts your data when it is persisted to the cloud. Custom Vision encryption protects your data and helps you meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available for resources created after May 11, 2020. To use CMK with Custom Vision, you will need to create a new Custom Vision resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
+
+## Regional availability
+
+Customer-managed keys are currently available in these regions:
+
+* US South Central
+* West US 2
+* East US
+* US Gov Virginia
++
+## Next steps
+
+* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Encryption/cognitive-services-encryption-keys-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Encryption/cognitive-services-encryption-keys-portal.md
@@ -17,20 +17,20 @@ The process to enable Customer-Managed Keys with Azure Key Vault for Cognitive S
## Vision
-* [Custom Vision encryption of data at rest](../Custom-Vision-Service/custom-vision-encryption-of-data-at-rest.md)
-* [Face Services encryption of data at rest](../Face/face-encryption-of-data-at-rest.md)
-* [Form Recognizer encryption of data at rest](../form-recognizer/form-recognizer-encryption-of-data-at-rest.md)
+* [Custom Vision encryption of data at rest](../Custom-Vision-Service/encrypt-data-at-rest.md)
+* [Face Services encryption of data at rest](../Face/encrypt-data-at-rest.md)
+* [Form Recognizer encryption of data at rest](../form-recognizer/encrypt-data-at-rest.md)
## Language
-* [Language Understanding service encryption of data at rest](../LUIS/luis-encryption-of-data-at-rest.md)
-* [QnA Maker encryption of data at rest](../QnAMaker/qna-maker-encryption-of-data-at-rest.md)
-* [Translator encryption of data at rest](../translator/translator-encryption-of-data-at-rest.md)
+* [Language Understanding service encryption of data at rest](../LUIS/encrypt-data-at-rest.md)
+* [QnA Maker encryption of data at rest](../QnAMaker/encrypt-data-at-rest.md)
+* [Translator encryption of data at rest](../translator/encrypt-data-at-rest.md)
## Decision
-* [Content Moderator encryption of data at rest](../Content-Moderator/content-moderator-encryption-of-data-at-rest.md)
-* [Personalizer encryption of data at rest](../personalizer/personalizer-encryption-of-data-at-rest.md)
+* [Content Moderator encryption of data at rest](../Content-Moderator/encrypt-data-at-rest.md)
+* [Personalizer encryption of data at rest](../personalizer/encrypt-data-at-rest.md)
## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/encrypt-data-at-rest.md
@@ -0,0 +1,31 @@
+
+ Title: Face service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Face, and how to enable and manage CMK.
+ Last updated: 08/28/2020
+#Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
++
+# Face service encryption of data at rest
+
+The Face service automatically encrypts your data when it is persisted to the cloud. Face service encryption protects your data and helps you meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Face Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Face service, you will need to create a new Face resource and select E0 as the Pricing Tier. Once your Face resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/encrypt-data-at-rest.md
@@ -0,0 +1,86 @@
+
+ Title: Language Understanding service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Language Understanding (LUIS), and how to enable and manage CMK.
+ Last updated: 08/28/2020
+#Customer intent: As a user of the Language Understanding (LUIS) service, I want to learn how encryption at rest works.
++
+# Language Understanding service encryption of data at rest
+
+The Language Understanding service automatically encrypts your data when it is persisted to the cloud. The Language Understanding service encryption protects your data and helps you meet your organizational security and compliance commitments.
+
+## About Cognitive Services encryption
+
+Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys, called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+## Customer-managed keys with Azure Key Vault
+
+There is also an option to manage your subscription with your own keys. Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
+
+### Customer-managed keys for Language Understanding
+
+To request the ability to use customer-managed keys, fill out and submit the [LUIS Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with LUIS, you'll need to create a new Language Understanding resource from the Azure portal and select E0 as the Pricing Tier. The new SKU will function the same as the F0 SKU that is already available except for CMK. Users won't be able to upgrade from the F0 to the new E0 SKU.
+
+![LUIS subscription image](../media/cognitive-services-encryption/luis-subscription.png)
+
+### Limitations
+
+There are some limitations when using the E0 tier with existing/previously created applications:
+
+* Migration to an E0 resource will be blocked. Users will only be able to migrate their apps to F0 resources. After you've migrated an existing resource to F0, you can create a new resource in the E0 tier. Learn more about [migration here](./luis-migration-authoring.md).
+* Moving applications to or from an E0 resource will be blocked. A workaround for this limitation is to export your existing application and import it into an E0 resource.
+* The Bing Spell check feature isn't supported.
+* Logging end-user traffic is disabled if your application is E0.
+* The Speech priming capability from the Azure Bot service isn't supported for applications in the E0 tier. This feature is available via the Azure Bot Service, which doesn't support CMK.
+* The speech priming capability from the portal requires Azure Blob Storage. For more information, see [bring your own storage](../Speech-Service/speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos-for-customization-and-logging).
+
+### Enable customer-managed keys
+
+A new Cognitive Services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Cognitive Services resource. The managed identity is available only after the resource is created using the Pricing Tier for CMK.
+
+To learn how to use customer-managed keys with Azure Key Vault for Cognitive Services encryption, see:
+
+- [Configure customer-managed keys with Key Vault for Cognitive Services encryption from the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md)
+
+Enabling customer managed keys will also enable a system assigned managed identity, a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+
+### Store customer-managed keys in Azure Key Vault
+
+To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+
+Only RSA keys of size 2048 are supported with Cognitive Services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
+
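A hedged sketch of preparing such a key vault and key with the Azure CLI; the names and region are placeholders, and soft delete is enabled by default on newer key vaults.

```bash
# Create a key vault with purge protection (soft delete is also required),
# then add a 2048-bit RSA key to use as the customer-managed key.
az keyvault create --name <key-vault-name> --resource-group <resource-group> \
    --location <region> --enable-purge-protection true
az keyvault key create --vault-name <key-vault-name> --name <cmk-key-name> \
    --kty RSA --size 2048
```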
+### Rotate customer-managed keys
+
+You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Cognitive Services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see the section titled **Update the key version** in [Configure customer-managed keys for Cognitive Services by using the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md).
+
+Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
+
+### Revoke access to customer-managed keys
+
+To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Cognitive Services resource, as the encryption key is inaccessible by Cognitive Services.
+
+## Next steps
+
+* [LUIS Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/role-based-access-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/role-based-access-control.md
@@ -20,7 +20,7 @@ This Azure RBAC feature includes:
* Quickly add authors and editors to all knowledge bases in the resource because control is at the resource level, not at the knowledge base level. > [!NOTE]
-> When you ar Make sure to add a custom subdomain for the resource. [Custom Subdomain](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-custom-subdomains) should be present by default, but if not, please add it
+> Make sure to add a custom subdomain for the resource. [Custom Subdomain](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-custom-subdomains) should be present by default, but if not, please add it.
## Access is provided by a defined role
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/encrypt-data-at-rest.md
@@ -0,0 +1,88 @@
+
+ Title: QnA Maker encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for QnA Maker, and how to enable and manage CMK.
+ Last updated: 11/09/2020
+#Customer intent: As a user of the QnA Maker service, I want to learn how encryption at rest works.
++
+# QnA Maker encryption of data at rest
+
+QnA Maker automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
+
+# [QnA Maker GA (stable release)](#tab/v1)
+
+QnA Maker uses CMK support from Azure Cognitive Search. Configure [CMK in Azure Search using Azure Key Vault](../../search/search-security-manage-encryption-keys.md). This Azure Search instance should be associated with the QnA Maker service to make it CMK-enabled.
+
+# [QnA Maker managed (preview release)](#tab/v2)
+
+QnA Maker uses [CMK support from Azure search](../../search/search-security-manage-encryption-keys.md), and automatically associates the provided CMK to encrypt the data stored in Azure search index.
+++
+> [!IMPORTANT]
+> Your Azure Search service resource must have been created after January 2019 and cannot be in the free (shared) tier. There is no support to configure customer-managed keys in the Azure portal.
+
+## Enable customer-managed keys
+
+The QnA Maker service uses CMK from the Azure Search service. Follow these steps to enable CMKs:
+
+# [QnA Maker GA (stable release)](#tab/v1)
+
+1. Create a new Azure Search instance and enable the prerequisites mentioned in the [customer-managed key prerequisites for Azure Cognitive Search](../../search/search-security-manage-encryption-keys.md#prerequisites).
+
+ ![View Encryption settings 1](../media/cognitive-services-encryption/qna-encryption-1.png)
+
+2. When you create a QnA Maker resource, it's automatically associated with an Azure Search instance. This instance cannot be used with CMK. To use CMK, you'll need to associate your newly created instance of Azure Search that was created in step 1. Specifically, you'll need to update the `AzureSearchAdminKey` and `AzureSearchName` in your QnA Maker resource.
+
+ ![View Encryption settings 2](../media/cognitive-services-encryption/qna-encryption-2.png)
+
+3. Next, create a new application setting (a CLI sketch of steps 2 and 3 follows this list):
+ * **Name**: Set to `CustomerManagedEncryptionKeyUrl`
+ * **Value**: Use the value that you got in Step 1 when creating your Azure Search instance.
+
+ ![View Encryption settings 3](../media/cognitive-services-encryption/qna-encryption-3.png)
+
+4. When finished, restart the runtime. Now your QnA Maker service is CMK-enabled.
+
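A hedged CLI sketch of steps 2 and 3 above, assuming the QnA Maker runtime is the App Service that was created alongside your QnA Maker resource; all names and values are placeholders.

```bash
# Point the QnA Maker runtime at the new CMK-enabled Azure Search instance,
# set the customer-managed key URL, then restart the runtime.
az webapp config appsettings set \
  --name <qnamaker-app-service-name> \
  --resource-group <resource-group> \
  --settings AzureSearchName=<new-search-service-name> \
             AzureSearchAdminKey=<new-search-admin-key> \
             CustomerManagedEncryptionKeyUrl=<key-vault-key-url>

az webapp restart --name <qnamaker-app-service-name> --resource-group <resource-group>
```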
+# [QnA Maker managed (preview release)](#tab/v2)
+
+1. Go to the **Encryption** tab of your QnA Maker managed (Preview) service.
+2. Select the **Customer Managed Keys** option. Provide the details of your [customer-managed keys](../../storage/common/customer-managed-keys-configure-key-vault.md?tabs=portal) and click on **Save**.
+
+ :::image type="content" source="../media/cognitive-services-encryption/qnamaker-v2-encryption-cmk.png" alt-text="QnA Maker managed (Preview) CMK setting" lightbox="../media/cognitive-services-encryption/qnamaker-v2-encryption-cmk.png":::
+
+3. On a successful save, the CMK will be used to encrypt the data stored in the Azure Search Index.
+
+> [!IMPORTANT]
+> It is recommended to set your CMK in a fresh Azure Cognitive Search service before any knowledge bases are created. If you set CMK in a QnA Maker service with existing knowledge bases, you might lose access to them. Read more about [working with encrypted content](../../search/search-security-manage-encryption-keys.md#work-with-encrypted-content) in Azure Cognitive search.
+
+> [!NOTE]
+> To request the ability to use customer-managed keys, fill out and submit the [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk).
+++
+## Regional availability
+
+Customer-managed keys are available in all Azure Search regions.
+
+## Encryption of data in transit
+
+The QnA Maker portal runs in the user's browser. Every action triggers a direct call to the respective Cognitive Services API, so QnA Maker is compliant for data in transit.
+However, because the QnA Maker portal service is hosted in the West US region, it may still not be ideal for non-US customers.
+
+## Next steps
+
+* [Encryption in Azure Search using CMKs in Azure Key Vault](../../search/search-security-manage-encryption-keys.md)
+* [Data encryption at rest](../../security/fundamentals/encryption-atrest.md)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/encrypt-data-at-rest.md
@@ -0,0 +1,79 @@
+
+ Title: Translator encryption of data at rest
+
+description: Microsoft lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Translator, and how to enable and manage CMK.
+ Last updated: 08/28/2020
+#Customer intent: As a user of the Translator service, I want to learn how encryption at rest works.
++
+# Translator encryption of data at rest
+
+Translator automatically encrypts the data that you upload to build custom translation models when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
+
+## About Cognitive Services encryption
+
+Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. If you are using a pricing tier that supports Customer-managed keys, you can see the encryption settings for your resource in the **Encryption** section of the [Azure portal](https://portal.azure.com), as shown in the following image.
+
+![View Encryption settings](../media/cognitive-services-encryption/encryptionblade.png)
+
+For subscriptions that only support Microsoft-managed encryption keys, you will not have an **Encryption** section.
+
+## Customer-managed keys with Azure Key Vault
+
+By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys, called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
+
+> [!IMPORTANT]
+> Customer-managed keys are available for all pricing tiers for the Translator service. To request the ability to use customer-managed keys, fill out and submit the [Translator Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Translator service, you will need to create a new Translator resource. Once your Translator resource is created, you can use Azure Key Vault to set up your managed identity.
+
+Follow these steps to enable customer-managed keys for Translator:
+
+1. Create your new regional Translator or regional Cognitive Services resource (a CLI sketch follows this list). This will not work with a global resource.
+2. Enable Managed Identity in the Azure portal, and add your customer-managed key information.
+3. Create a new workspace in Custom Translator and associate this subscription information.
+
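A hedged sketch of step 1 with the Azure CLI; the resource name, group, SKU, and region are placeholders, and the kind value assumes a standalone Translator (Text Translation) resource.

```bash
# Create a regional (not global) Translator resource; pick a specific Azure region.
az cognitiveservices account create \
  --name <translator-resource-name> \
  --resource-group <resource-group> \
  --kind TextTranslation \
  --sku S1 \
  --location westus2
# Managed Identity (step 2) is then enabled on this resource in the Azure portal.
```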
+### Enable customer-managed keys
+
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
+
+A new Cognitive Services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Cognitive Services resource. The managed identity is available as soon as the resource is created.
+
+To learn how to use customer-managed keys with Azure Key Vault for Cognitive Services encryption, see:
+
+- [Configure customer-managed keys with Key Vault for Cognitive Services encryption from the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md)
+
+Enabling customer managed keys will also enable a system assigned managed identity, a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working. Any models that you have deployed will also be undeployed. All uploaded data will be deleted from Custom Translator. If the managed identities are re-enabled, we will not automatically redeploy the model for you.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+
+### Store customer-managed keys in Azure Key Vault
+
+To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+
+Only RSA keys of size 2048 are supported with Cognitive Services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
+
+> [!NOTE]
+> If the entire key vault is deleted, your data will no longer be displayed and all your models will be undeployed. All uploaded data will be deleted from Custom Translator.
+
+### Revoke access to customer-managed keys
+
+To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Cognitive Services resource and your models will be undeployed, as the encryption key is inaccessible by Cognitive Services. All uploaded data will also be deleted from Custom Translator.
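For example, a hedged Azure CLI sketch of removing the resource identity's access policy from the key vault; the vault name and the identity's object ID are placeholders.

```bash
# Removing the managed identity's access policy revokes access to the customer-managed key,
# which blocks access to the data and undeploys models as described above.
az keyvault delete-policy --name <key-vault-name> --object-id <resource-identity-object-id>
```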
++
+## Next steps
+
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/encrypt-data-at-rest.md
@@ -0,0 +1,29 @@
+
+ Title: Form Recognizer service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Form Recognizer, and how to enable and manage CMK.
+ Last updated: 08/28/2020
+#Customer intent: As a user of the Form Recognizer service, I want to learn how encryption at rest works.
++
+# Form Recognizer encryption of data at rest
+
+Azure Form Recognizer automatically encrypts your data when persisting it to the cloud. Form Recognizer encryption protects your data to help you meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available for resources created after May 11, 2020. To use CMK with Form Recognizer, you will need to create a new Form Recognizer resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* [Form Recognizer Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/personalizer/encrypt-data-at-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/encrypt-data-at-rest.md
@@ -0,0 +1,29 @@
+
+ Title: Personalizer service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Personalizer, and how to enable and manage CMK.
+ Last updated: 08/28/2020
+#Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
++
+# Personalizer service encryption of data at rest
+
+The Personalizer service automatically encrypts your data when it is persisted to the cloud. Personalizer service encryption protects your data and helps you meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Personalizer service, you will need to create a new Personalizer resource and select E0 as the Pricing Tier. Once your Personalizer resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-customer-managed-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-customer-managed-keys.md
@@ -122,11 +122,11 @@ az keyvault set-policy \
--key-permissions get unwrapKey wrapKey ```
-Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) (preview) to assign permissions to the identity to access the key vault. For example, assign the Key Vault Crypto Service Encryption role to the identity using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command:
+Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) to assign permissions to the identity to access the key vault. For example, assign the Key Vault Crypto Service Encryption role to the identity using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command:
```azurecli az role assignment create --assignee $identityPrincipalID \
- --role "Key Vault Crypto Service Encryption (preview)" \
+ --role "Key Vault Crypto Service Encryption User" \
--scope $keyvaultID ```
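For context, a sketch of how the `$identityPrincipalID` and `$keyvaultID` variables referenced above might be populated for a user-assigned managed identity and key vault (all names are placeholders):

```azurecli
# Placeholders: replace the identity, key vault, and resource group names with your own.
identityPrincipalID=$(az identity show \
  --name my-registry-identity \
  --resource-group my-resource-group \
  --query principalId --output tsv)

keyvaultID=$(az keyvault show \
  --name my-key-vault \
  --resource-group my-resource-group \
  --query id --output tsv)
```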
@@ -262,12 +262,12 @@ Configure a policy for the key vault so that the identity can access it.
:::image type="content" source="media/container-registry-customer-managed-keys/add-key-vault-access-policy.png" alt-text="Create key vault access policy":::
-Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) (preview) to assign permissions to the identity to access the key vault. For example, assign the Key Vault Crypto Service Encryption role to the identity.
+Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) to assign permissions to the identity to access the key vault. For example, assign the Key Vault Crypto Service Encryption role to the identity.
1. Navigate to your key vault. 1. Select **Access control (IAM)** > **+Add** > **Add role assignment**. 1. In the **Add role assignment** window:
- 1. Select **Key Vault Crypto Service Encryption (preview)** role.
+ 1. Select **Key Vault Crypto Service Encryption User** role.
1. Assign access to **User assigned managed identity**. 1. Select the resource name of your user-assigned managed identity, and select **Save**.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-reserved-capacity-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-reserved-capacity-overview.md
@@ -9,6 +9,8 @@ Last updated 02/05/2021
# Save costs for resources with reserved capacity - Azure Data Factory data flows + Save money on Azure Data Factory data flow costs by committing to a reservation for compute resources instead of paying pay-as-you-go prices. With reserved capacity, you make a commitment for ADF data flow usage for a period of one or three years to get a significant discount on the compute costs. To purchase reserved capacity, you need to specify the Azure region, compute type, core count quantity, and term. You do not need to assign the reservation to a specific factory or integration runtime. Existing factories or newly deployed factories automatically get the benefit. By purchasing a reservation, you commit to usage for the data flow compute costs for a period of one or three years. As soon as you buy a reservation, the compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
@@ -14,7 +14,7 @@ Last updated 09/11/2020
This article explores common troubleshooting methods for mapping data flows in Azure Data Factory.
-## Common errors and messages
+## Common error codes and messages
### Error code: DF-Executor-SourceInvalidPayload - **Message**: Data preview, debug, and pipeline data flow execution failed because container does not exist
@@ -24,7 +24,7 @@ This article explores common troubleshooting methods for mapping data flows in A
### Error code: DF-Executor-SystemImplicitCartesian - **Message**: Implicit cartesian product for INNER join is not supported, use CROSS JOIN instead. Columns used in join should create a unique key for rows.-- **Causes**: Implicit cartesian product for INNER join between logical plans is not supported. If the columns used in the join create the unique key, at least one column from both sides of the relationship are required.
+- **Causes**: Implicit cartesian product for INNER join between logical plans is not supported. The columns used in the join should form a unique key, and at least one column from each side of the relationship is required.
- **Recommendation**: For non-equality based joins you have to opt for CUSTOM CROSS JOIN. ### Error code: DF-Executor-SystemInvalidJson
@@ -37,9 +37,11 @@ This article explores common troubleshooting methods for mapping data flows in A
- **Message**: Broadcast join timeout error, make sure broadcast stream produces data within 60 secs in debug runs and 300 secs in job runs - **Causes**: Broadcast has a default timeout of 60 secs in debug runs and 300 seconds in job runs. Stream chosen for broadcast seems too large to produce data within this limit.-- **Recommendation**: Check the Optimize tab on your data flow transformations for Join, Exists, and Lookup. The default option for Broadcast is "Auto". If "Auto" is set, or if you are manually setting the left or right side to broadcast under "Fixed", then you can either set a larger Azure Integration Runtime configuration, or switch off broadcast. The recommended approach for best performance in data flows is to allow Spark to broadcast using "Auto" and use a Memory Optimized Azure IR.
+- **Recommendation**: Check the Optimize tab on your data flow transformations for Join, Exists, and Lookup. The default option for Broadcast is "Auto". If "Auto" is set, or if you are manually setting the left or right side to broadcast under "Fixed", then you can either set a larger Azure Integration Runtime configuration, or switch off broadcast. The recommended approach for best performance in data flows is to allow Spark to broadcast using "Auto" and use a Memory Optimized Azure IR. If you are executing the data flow in a debug test execution from a debug pipeline run, you may run into this condition more frequently. This is because ADF throttles the broadcast timeout to 60 secs in order to maintain a faster debug experience. If you would like to extend that to the 300-seconds timeout from a triggered run, you can use the Debug > Use Activity Runtime option to utilize the Azure IR defined in your Execute Data Flow pipeline activity.
-If you are executing the data flow in a debug test execution from a debug pipeline run, you may run into this condition more frequently. This is because ADF throttles the broadcast timeout to 60 secs in order to maintain a faster debug experience. If you would like to extend that to the 300-seconds timeout from a triggered run, you can use the Debug > Use Activity Runtime option to utilize the Azure IR defined in your Execute Data Flow pipeline activity.
+- **Message**: Broadcast join timeout error, you can choose 'Off' of broadcast option in join/exists/lookup transformation to avoid this issue. If you intend to broadcast join option to improve performance then make sure broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.
+- **Causes**: Broadcast has a default timeout of 60 secs in debug runs and 300 secs in job runs. On broadcast join, the stream chosen for broadcast seems too large to produce data within this limit. If a broadcast join is not used, the default broadcast done by dataflow can reach the same limit
+- **Recommendation**: Turn off the broadcast option or avoid broadcasting large data streams where the processing can take more than 60 secs. Choose a smaller stream to broadcast instead. Large SQL/DW tables and source files are typically bad candidates. In the absence of a broadcast join, use a larger cluster if the error occurs.
### Error code: DF-Executor-Conversion
@@ -48,158 +50,108 @@ If you are executing the data flow in a debug test execution from a debug pipeli
- **Recommendation**: Use the correct data type ### Error code: DF-Executor-InvalidColumn- - **Message**: Column name needs to be specified in the query, set an alias if using a SQL function - **Causes**: No column name was specified - **Recommendation**: Set an alias if using a SQL function such as min()/max(), etc.
- ### Error code: DF-Executor-DriverError
+### Error code: DF-Executor-DriverError
- **Message**: INT96 is legacy timestamp type which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types. - **Causes**: Driver error - **Recommendation**: INT96 is legacy timestamp type, which is not supported by ADF Dataflow. Consider upgrading the column type to the latest types.
- ### Error code: DF-Executor-BlockCountExceedsLimitError
+### Error code: DF-Executor-BlockCountExceedsLimitError
- **Message**: The uncommitted block count cannot exceed the maximum limit of 100,000 blocks. Check blob configuration. - **Causes**: There can be a maximum of 100,000 uncommitted blocks in a blob. - **Recommendation**: Contact Microsoft product team regarding this issue for more details
- ### Error code: DF-Executor-PartitionDirectoryError
+### Error code: DF-Executor-PartitionDirectoryError
- **Message**: The specified source path has either multiple partitioned directories (for e.g. <Source Path>/<Partition Root Directory 1>/a=10/b=20, <Source Path>/<Partition Root Directory 2>/c=10/d=30) or partitioned directory with other file or non-partitioned directory (for example <Source Path>/<Partition Root Directory 1>/a=10/b=20, <Source Path>/Directory 2/file1), remove partition root directory from source path and read it through separate source transformation. - **Causes**: Source path has either multiple partitioned directories or partitioned directory with other file or non-partitioned directory. - **Recommendation**: Remove partitioned root directory from source path and read it through separate source transformation.
- ### Error code: DF-Executor-OutOfMemoryError
+### Error code: DF-Executor-OutOfMemoryError
- **Message**: Cluster ran into out of memory issue during execution, please retry using an integration runtime with bigger core count and/or memory optimized compute type - **Causes**: Cluster is running out of memory - **Recommendation**: Debug clusters are meant for development purposes. Leverage data sampling, appropriate compute type, and size to run the payload. Refer to the [mapping data flow performance guide](concepts-data-flow-performance.md) for tuning to achieve best performance.
- ### Error code: DF-Executor-illegalArgument
-- **Message**: Please make sure that the access key in your Linked Service is correct-- **Causes**: Account Name or Access Key incorrect-- **Recommendation**: Ensure the account name or access key specified in your linked service is correct. -
- ### Error code: DF-Executor-InvalidType
+### Error code: DF-Executor-InvalidType
- **Message**: Please make sure that the type of parameter matches with type of value passed in. Passing float parameters from pipelines isn't currently supported. - **Causes**: Incompatible data types between declared type and actual parameter value-- **Recommendation**: Check that your parameter values passed into a data flow match the declared type.
+- **Recommendation**: Please check that your parameter values passed into a data flow match the declared type.
- ### Error code: DF-Executor-ColumnUnavailable
+### Error code: DF-Executor-ColumnUnavailable
- **Message**: Column name used in expression is unavailable or invalid - **Causes**: Invalid or unavailable column name used in expressions-- **Recommendation**: Check column name(s) used in expressions
+- **Recommendation**: Please check column name(s) used in expressions
- ### Error code: DF-Executor-ParseError
+### Error code: DF-Executor-ParseError
- **Message**: Expression cannot be parsed - **Causes**: Expression has parsing errors due to formatting-- **Recommendation**: Check formatting in expression-
-### Error code: GetCommand OutputAsync failed
--- **Message**: During Data Flow debug and data preview: GetCommand OutputAsync failed with ...-- **Causes**: This is a back-end service error. You can retry the operation and also restart your debug session.-- **Recommendation**: If retry and restart do not resolve the issue, contact customer support.-
-### Error code: Hit unexpected exception and execution failed
--- **Message**: During Data Flow activity execution: Hit unexpected exception and execution failed.-- **Causes**: This is a back-end service error. You can retry the operation and also restart your debug session.-- **Recommendation**: If retry and restart do not resolve the issue, contact customer support.-
-### Error code: Debug data preview No Output Data on Join
+- **Recommendation**: Please check formatting in expression.
-- **Message**: There are a high number of null values or missing values which may be caused by having too few rows sampled. Try updating the debug row limit and refreshing the data.-- **Causes**: Join condition did not match any rows or resulted in high number of NULLs during data preview.-- **Recommendation**: Go to Debug Settings and increase the number of rows in the source row limit. Make sure that you have select an Azure IR with a large enough data flow cluster to handle more data.-
-### Error code: Validation Error at Source with multiline CSV files
--- **Message**: You might see one of the following error messages:
- - The last column is null or missing.
- - Schema validation at source fails.
- - Schema import fails to show correctly in the UX and the last column has a new line character in the name.
-- **Causes**: In the Mapping data flow, currently, the multiline CSV source does not work with the \r\n as row delimiter. Sometimes extra lines at carriage returns break source values. -- **Recommendation**: Either generate the file at the source with \n as row delimiter rather than \r\n. Or, use Copy Activity to convert CSV file with \r\n to \n as a row delimiter.-
-### Error code: DF-Executor-SourceInvalidPayload
-- **Message**: Data preview, debug, and pipeline data flow execution failed because container does not exist-- **Causes**: When dataset contains a container that does not exist in the storage-- **Recommendation**: Make sure that the container referenced in your dataset exists or accessible.--
- ### Error code: DF-Executor-SystemImplicitCartesian
+### Error code: DF-Executor-SystemImplicitCartesian
- **Message**: Implicit cartesian product for INNER join is not supported, use CROSS JOIN instead. Columns used in join should create a unique key for rows. - **Causes**: Implicit cartesian product for INNER join between logical plans is not supported. If the columns used in the join creates the unique key - **Recommendation**: For non-equality based joins you have to opt for CROSS JOIN. -
- ### Error code: DF-Executor-SystemInvalidJson
+### Error code: DF-Executor-SystemInvalidJson
- **Message**: JSON parsing error, unsupported encoding or multiline - **Causes**: Possible issues with the JSON file: unsupported encoding, corrupt bytes, or using JSON source as single document on many nested lines - **Recommendation**: Verify the JSON file's encoding is supported. On the Source transformation that is using a JSON dataset, expand 'JSON Settings' and turn on 'Single Document'.
- ### Error code: DF-Executor-BroadcastTimeout
-- **Message**: Broadcast join timeout error, you can choose 'Off' of broadcast option in join/exists/lookup transformation to avoid this issue. If you intend to broadcast join option to improve performance then make sure broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.-- **Causes**: Broadcast has a default timeout of 60 secs in debug runs and 300 secs in job runs. On broadcast join, the stream chosen for broadcast seems too large to produce data within this limit. If a broadcast join is not used, the default broadcast done by dataflow can reach the same limit-- **Recommendation**: Turn off the broadcast option or avoid broadcasting large data streams where the processing can take more than 60 secs. Choose a smaller stream to broadcast instead. Large SQL/DW tables and source files are typically bad candidates. In the absence of a broadcast join, use a larger cluster if the error occurs.-
- ### Error code: DF-Executor-Conversion
+### Error code: DF-Executor-Conversion
- **Message**: Converting to a date or time failed due to an invalid character - **Causes**: Data is not in the expected format-- **Recommendation**: Use the correct data type--
- ### Error code: DF-Executor-InvalidColumn
-- **Message**: Column name needs to be specified in the query, set an alias if using a SQL function-- **Causes**: No column name was specified.
+- **Recommendation**: Please use the correct data type.
- ### Error code: DF-Executor-DriverError
-- **Message**: INT96 is legacy timestamp type which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types.-- **Causes**: It is a driver error.-- **Recommendation**: INT96 is legacy timestamp type which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types.--
- ### Error code: DF-Executor-BlockCountExceedsLimitError
+### Error code: DF-Executor-BlockCountExceedsLimitError
- **Message**: The uncommitted block count cannot exceed the maximum limit of 100,000 blocks. Check blob configuration. - **Causes**: There can be a maximum of 100,000 uncommitted blocks in a blob. - **Recommendation**: Please contact Microsoft product team regarding this issue for more details
- ### Error code: DF-Executor-PartitionDirectoryError
-- **Message**: The specified source path has either multiple partitioned directories (for e.g. <Source Path>/<Partition Root Directory 1>/a=10/b=20, <Source Path>/<Partition Root Directory 2>/c=10/d=30) or partitioned directory with other file or non-partitioned directory (for e.g. <Source Path>/<Partition Root Directory 1>/a=10/b=20, <Source Path>/Directory 2/file1), remove partition root directory from source path and read it through separate source transformation.
+### Error code: DF-Executor-PartitionDirectoryError
+- **Message**: The specified source path has either multiple partitioned directories (for e.g. *<Source Path>/<Partition Root Directory 1>/a=10/b=20, <Source Path>/<Partition Root Directory 2>/c=10/d=30*) or partitioned directory with other file or non-partitioned directory (for e.g. *<Source Path>/<Partition Root Directory 1>/a=10/b=20, <Source Path>/Directory 2/file1*), remove partition root directory from source path and read it through separate source transformation.
- **Causes**: Source path has either multiple partitioned directories or partitioned directory with other file or non-partitioned directory. - **Recommendation**: Remove partitioned root directory from source path and read it through separate source transformation.
+### Error code: GetCommand OutputAsync failed
+- **Message**: During Data Flow debug and data preview: GetCommand OutputAsync failed with ...
+- **Causes**: This is a back-end service error. You can retry the operation and also restart your debug session.
+- **Recommendation**: If retry and restart do not resolve the issue, contact customer support.
- ### Error code: DF-Executor-OutOfMemoryError
+### Error code: DF-Executor-OutOfMemoryError
+
- **Message**: Cluster ran into out of memory issue during execution, please retry using an integration runtime with bigger core count and/or memory optimized compute type - **Causes**: Cluster is running out of memory. - **Recommendation**: Debug clusters are meant for development purposes. Leverage data sampling appropriate compute type and size to run the payload. Refer to [Dataflow Performance Guide](https://docs.microsoft.com/azure/data-factory/concepts-data-flow-performance) for tuning the dataflows for best performance. -
- ### Error code: DF-Executor-illegalArgument
+### Error code: DF-Executor-illegalArgument
- **Message**: Please make sure that the access key in your Linked Service is correct. - **Causes**: Account Name or Access Key is incorrect. - **Recommendation**: Please supply right account name or access key.
+- **Message**: Please make sure that the access key in your Linked Service is correct
+- **Causes**: Account Name or Access Key incorrect
+- **Recommendation**: Ensure the account name or access key specified in your linked service is correct.
- ### Error code: DF-Executor-InvalidType
+### Error code: DF-Executor-InvalidType
- **Message**: Please make sure that the type of parameter matches with type of value passed in. Passing float parameters from pipelines isn't currently supported. - **Causes**: Incompatible data types between declared type and actual parameter value - **Recommendation**: Please supply right data types. -
- ### Error code: DF-Executor-ColumnUnavailable
+### Error code: DF-Executor-ColumnUnavailable
- **Message**: Column name used in expression is unavailable or invalid. - **Causes**: Invalid or unavailable column name is used in expressions.-- **Recommendation**: Check column name(s) used in expressions.
+- **Recommendation**: Please check column name(s) used in expressions.
- ### Error code: DF-Executor-ParseError
+### Error code: DF-Executor-ParseError
- **Message**: Expression cannot be parsed. - **Causes**: Expression has parsing errors due to formatting.-- **Recommendation**: Check formatting in expression.
+- **Recommendation**: Please check formatting in expression.
### Error code: DF-Executor-OutOfDiskSpaceError
@@ -213,59 +165,83 @@ If you are executing the data flow in a debug test execution from a debug pipeli
- **Causes**: Undetermined - **Recommendation**: Please check parameter value assignment in the pipeline. Parameter expression may contain invalid characters. -
- ### Error code: DF-Excel-InvalidConfiguration
+### Error code: DF-Excel-InvalidConfiguration
- **Message**: Excel sheet name or index is required. - **Causes**: Undetermined - **Recommendation**: Please check parameter value and specify sheet name or index to read Excel data. -
- ### Error code: DF-Excel-InvalidConfiguration
- **Message**: Excel sheet name and index cannot exist at the same time. - **Causes**: Undetermined - **Recommendation**: Please check parameter value and specify sheet name or index to read Excel data. -
- ### Error code: DF-Excel-InvalidConfiguration
- **Message**: Invalid range is provided. - **Causes**: Undetermined - **Recommendation**: Please check parameter value and specify valid range by reference: [Excel properties](https://docs.microsoft.com/azure/data-factory/format-excel#dataset-properties).
+- **Message**: Invalid excel file is provided while only .xlsx and .xls are supported
+- **Causes**: Undetermined
+- **Recommendation**: Make sure Excel file extension is either .xlsx or .xls.
+ ### Error code: DF-Excel-InvalidData - **Message**: Excel worksheet does not exist. - **Causes**: Undetermined - **Recommendation**: Please check parameter value and specify valid sheet name or index to read Excel data.
- ### Error code: DF-Excel-InvalidData
- **Message**: Reading excel files with different schema is not supported now. - **Causes**: Undetermined - **Recommendation**: Use correct Excel file. -
- ### Error code: DF-Excel-InvalidData
- **Message**: Data type is not supported. - **Causes**: Undetermined-- **Recommendation**: Use Excel file right data types.-
- ### Error code: DF-Excel-InvalidConfiguration
-- **Message**: Invalid excel file is provided while only .xlsx and .xls are supported-- **Causes**: Undetermined-- **Recommendation**: Make sure Excel file extension is either .xlsx or .xls.
+- **Recommendation**: Please use Excel file right data types.
+
+### Error code: 4502
+- **Message**: There are substantial concurrent MappingDataflow executions which are causing failures due to throttling under Integration Runtime.
+- **Causes**: Many Data Flow activity runs are executing concurrently on the integration runtime. Please learn more about the [Azure Data Factory limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#data-factory-limits).
+- **Recommendation**: If you want to run more Data Flow activities in parallel, please distribute them across multiple integration runtimes.
++
+### Error code: InvalidTemplate
+- **Message**: The pipeline expression cannot be evaluated.
+- **Causes**: The pipeline expression passed in the Data Flow activity is not being processed correctly because of a syntax error.
+- **Recommendation**: Please check your activity in activity monitoring to verify the expression.
+
+### Error code: 2011
+- **Message**: The activity was running on Azure Integration Runtime and failed to decrypt the credential of data store or compute connected via a Self-hosted Integration Runtime. Please check the configuration of linked services associated with this activity, and make sure to use the proper integration runtime type.
+- **Causes**: Data flow does not support linked services that are connected through a self-hosted integration runtime.
+- **Recommendation**: Please configure the data flow to run on an integration runtime with 'Managed Virtual Network'.
+
+## Miscellaneous troubleshooting tips
+- **Issue**: Hit unexpected exception and execution failed
+ - **Message**: During Data Flow activity execution: Hit unexpected exception and execution failed.
+ - **Causes**: This is a back-end service error. You can retry the operation and also restart your debug session.
+ - **Recommendation**: If retry and restart do not resolve the issue, contact customer support.
+
+- **Issue**: Debug data preview No Output Data on Join
+ - **Message**: There are a high number of null values or missing values which may be caused by having too few rows sampled. Try updating the debug row limit and refreshing the data.
+ - **Causes**: Join condition did not match any rows or resulted in high number of NULLs during data preview.
+ - **Recommendation**: Go to Debug Settings and increase the number of rows in the source row limit. Make sure that you have selected an Azure IR with a large enough data flow cluster to handle more data.
+
+- **Issue**: Validation Error at Source with multiline CSV files
+ - **Message**: You might see one of the following error messages:
+ - The last column is null or missing.
+ - Schema validation at source fails.
+ - Schema import fails to show correctly in the UX and the last column has a new line character in the name.
+ - **Causes**: In the Mapping data flow, the multiline CSV source currently does not work with \r\n as the row delimiter. Sometimes extra lines at carriage returns break source values.
+ - **Recommendation**: Either generate the file at the source with \n as the row delimiter rather than \r\n, or use the Copy activity to convert the CSV file from \r\n to \n as the row delimiter.
## General troubleshooting guidance 1. Check the status of your dataset connections. In each Source and Sink transformation, visit the Linked Service for each dataset that you are using and test connections. 2. Check the status of your file and table connections from the data flow designer. Switch on Debug and click on Data Preview on your Source transformations to ensure that you are able to access your data. 3. If everything looks good from data preview, go into the Pipeline designer and put your data flow in a pipeline activity. Debug the pipeline for an end-to-end test. - ## Next steps
-For more troubleshooting help, try these resources:
-* [Data Factory blog](https://techcommunity.microsoft.com/t5/azure-data-factory/bg-p/AzureDataFactoryBlog)
+For more help with troubleshooting, try the following resources:
+
+* [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
-* [Azure videos](https://www.youtube.com/channel/UC2S0k7NeLcEm5_IhHUwpN0g/videos)
-* [Microsoft Q&A question page](/answers/topics/azure-data-factory.html)
-* [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
+* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
+* [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
-* [ADF mapping data flows Performance Guide](concepts-data-flow-performance.md)
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-rapid-response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-rapid-response.md
@@ -25,7 +25,7 @@ During an active access, Azure DDoS Protection Standard customers have access to
You should only engage DRR if: -- During a DDoS attack if you find that the performance of the protected resource is severely degraded, or the resource is not available. Review step 2 above on configuring monitors to detect resource availability and performance issues.
+- During a DDoS attack if you find that the performance of the protected resource is severely degraded, or the resource is not available.
- You think your resource is under DDoS attack, but DDoS Protection service is not mitigating the attack effectively. - You're planning a viral event that will significantly increase your network traffic. - For attacks that have a critical business impact.
@@ -52,4 +52,4 @@ To learn more, read the [DDoS Protection Standard documentation](./ddos-protecti
- Learn how to [test through simulations](test-through-simulations.md). - Learn how to [view and configure DDoS protection telemetry](telemetry.md).-- Learn how to [view and configure DDoS diagnostic logging](diagnostic-logging.md).
+- Learn how to [view and configure DDoS diagnostic logging](diagnostic-logging.md).
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/telemetry.md
@@ -65,7 +65,7 @@ The following [metrics](../azure-monitor/platform/metrics-supported.md#microsoft
## View DDoS protection telemetry
-Telemetry for an attack is provided through Azure Monitor in real time. The telemetry is available only for the duration that a public IP address is under mitigation. You don't see telemetry before or after an attack is mitigated.
+Telemetry for an attack is provided through Azure Monitor in real time. Telemetry is available only when a public IP address has been under mitigation.
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your DDoS Protection Plan. 2. Under **Monitoring**, select **Metrics**.
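The same telemetry can also be queried outside the portal through the Azure Monitor CLI; a hedged sketch, assuming the `IfUnderDDoSAttack` metric name for public IP addresses and placeholder resource IDs:

```azurecli
# Placeholders: replace the subscription, resource group, and public IP name with your own.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Network/publicIPAddresses/my-public-ip" \
  --metric IfUnderDDoSAttack \
  --interval PT1M
```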
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-accelerate-alert-incident-response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-accelerate-alert-incident-response.md
@@ -66,11 +66,11 @@ The relevant alert group appears in partner output solutions.
The alert group will appear in supported partner solutions with the following prefixes:
- - **cat** for QRadar, ArcSight, Syslog CEF, Syslog LEEF
+- **cat** for QRadar, ArcSight, Syslog CEF, Syslog LEEF
- - **Alert Group** for Syslog text messages
+- **Alert Group** for Syslog text messages
- - **alert_group** for Syslog objects
+- **alert_group** for Syslog objects
These fields should be configured in the partner solution to display the alert group name. If there is no alert associated with an alert group, the field in the partner solution will display **NA**.
@@ -88,11 +88,29 @@ The following alert groups are automatically defined:
| Command failures | Operational issues | | | Configuration changes | Programming | |
-Alert groups are predefined. For details about alerts associated with alert groups, and about creating custom alert groups, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
+Alert groups are predefined. For details about alerts associated with alert groups, and about creating custom alert groups, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
## Customize alert rules
-You can add custom alert rules based on information that individual sensors detect. For example, define a rule that instructs a sensor to trigger an alert based on a source IP, destination IP, or command (within a protocol). When the sensor detects the traffic defined in the rule, an alert or event is generated.
+Use custom alert rules to more specifically pinpoint activity of interest to you.
+
+You can add custom alert rules based on:
+
+- A category, for example a protocol, port, or file.
+- Source and destination addresses.
+- A condition based on the category chosen, for example a function associated with a protocol, a file name, port, or transport number.
+- A condition based on a date and time reference, for example whether a detection was made on a specific day or during a certain part of the day.
+
+If the sensor detects the activity described in the rule, an alert or event is generated. For example, you can define a rule that instructs a sensor to trigger an alert based on a source IP, destination IP, or command (within a protocol).
+
+You can also use alert rule actions to instruct Defender for IoT to:
+
+- Allow users to access the PCAP file from the alert.
+- Assign an alert severity.
+- Generate an event rather than an alert. The detected information will appear in the event timeline.
+ The alert message indicates that a user-defined rule triggered the alert.
@@ -102,16 +120,16 @@ To create a custom alert rule:
1. Select **Custom Alerts** from the side menu of a sensor. 1. Select the plus sign (**+**) to create a rule.-
- :::image type="content" source="media/how-to-work-with-alerts-sensor/user-defined-rule.png" alt-text="Screenshot that shows a user-defined rule.":::
- 1. Define a rule name. 1. Select a category or protocol from the **Categories** pane. 1. Define a specific source and destination IP or MAC address, or choose any address.
-1. Add a condition. A list of conditions and their properties is unique for each category. You can select more than one condition for each alert.
-1. Indicate if the rule triggers an **Alarm** or **Event**.
-1. Assign a severity level to the alert.
-1. Indicate if the alert will include a PCAP file.
+1. Define one or several rule conditions. Two categories of conditions can be created:
+ - Conditions based on unique values associated with the category selected. Select Add and define the values.
+ - Conditions based on when the activity was detected. In the Detections section, select a time period and day in which the detection must occur in order to send the alert. You can choose to send the alert if the activity is detected anytime, during working hours, or after working hours. Use the Define working hours option to define your organization's working hours for Defender for IoT.
+1. Define rule actions:
+ - Indicate if the rule triggers an **Alarm** or **Event**.
+ - Assign a severity level to the alert.
+ - Indicate if the alert will include a PCAP file.
1. Select **Save**. The rule is added to the **Customized Alerts Rules** list, where you can review basic rule parameters, the last time the rule was triggered, and more. You can also enable and disable the rule from the list.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-individual-sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-individual-sensors.md
@@ -4,7 +4,7 @@ description: Learn how to manage individual sensors, including managing activati
Previously updated : 1/12/2021 Last updated : 02/02/2021
@@ -81,7 +81,7 @@ You'll receive an error message if the activation file could not be uploaded. Th
- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that \*.azure-devices.net:443 is allowed in the firewall and/or proxy. If wildcards are not supported or you want more control, the FQDN for your specific Defender for IoT hub should be opened in your firewall and/or proxy. For details, see [Reference - IoT Hub endpoints](../iot-hub/iot-hub-devguide-endpoints.md). -- **For cloud-connected sensors**: The activation file is valid but Defender for IoT rejected it. If you can't resolve this problem, you can download another activation from the **Sensor Management** page of the Defender for IoT portal. If this doesn't work, contact Microsoft Support.
+- **For cloud-connected sensors**: The activation file is valid but Defender for IoT rejected it. If you can't resolve this problem, you can download another activation from the Sites and Sensors page of the Defender for IoT portal. If this doesn't work, contact Microsoft Support.
## Manage certificates
@@ -109,7 +109,7 @@ The Defender for IoT sensor, and on-premises management console use SSL, and TLS
- Secure communications between the sensors and an on-premises management console.
-Once installed, the appliance generates a local self-signed certificate to allow preliminary access to the web console. Enterprise SSL, and TLS certificates may be installed using the [`cyberx-xsense-certificate-import`](#cli-commands) command line tool.
+Once installed, the appliance generates a local self-signed certificate to allow preliminary access to the web console. Enterprise SSL, and TLS certificates may be installed using the [`cyberx-xsense-certificate-import`](#cli-commands) command line tool.
> [!NOTE] > For integrations and forwarding rules where the appliance is the client and initiator of the session, specific certificates are used and are not related to the system certificates.
@@ -358,15 +358,23 @@ If your sensor was registered as a cloud-connected sensor, the sensor name is de
To change the name:
-1. In the Azure Defender for IoT portal, go to the **Sensor Management** page.
+1. In the Azure Defender for IoT portal, go to the Sites and Sensors page.
-1. Delete the sensor from the **Sensor Management** window.
+1. Delete the sensor from the Sites and Sensors page.
-1. Re-register with the new name.
+1. Register with the new name by selecting **Onboard sensor** from the Getting Started page.
1. Download the new activation file.
-1. Sign in to the sensor and upload the new activation file.
+1. Sign in to the Defender for IoT sensor console.
+
+1. In the sensor console, select **System Settings** and then select **Reactivation**.
+
+ :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/reactivate.png" alt-text="Upload your activation file to reactivate the sensor.":::
+
+1. Select **Upload** and select the file you saved.
+
+1. Select **Activate**.
## Update the sensor network configuration
@@ -382,7 +390,7 @@ To change the configuration:
:::image type="content" source="media/how-to-manage-individual-sensors/edit-network-configuration-screen.png" alt-text="Configure your network settings.":::
-3. Set the parameters as follows:
+3. Set the parameters:
| Parameter | Description | |--|--|
@@ -453,7 +461,7 @@ To save the backup to an external SMB server:
- `sudo chmod 777 /<backup_folder_name_on_cyberx_server>/`
-3. Edit `fstab`:
+3. Edit `fstab`:
- `sudo nano /etc/fstab`
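As a rough sketch only (exact options depend on your SMB server and credentials), the `fstab` entry added in this step might look like the following; the server, share, and credentials file are placeholders:

```bash
# Illustrative /etc/fstab entry that mounts the SMB backup share at boot.
//smb-server/sensor-backups  /<backup_folder_name_on_cyberx_server>  cifs  credentials=/etc/samba/backup-creds,vers=3.0,dir_mode=0777,file_mode=0777  0  0
```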
@@ -521,7 +529,7 @@ The following procedure describes how to update a standalone sensor by using the
:::image type="content" source="media/how-to-manage-individual-sensors/defender-for-iot-version.png" alt-text="Screenshot of the upgrade version that appears after you sign in.":::
-## Forward sensor failure alerts
+## Forward sensor failure alerts
You can forward alerts to third parties to provide details about:
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-sensors-on-the-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-on-the-cloud.md
@@ -47,12 +47,10 @@ To download an activation file:
## View onboarded sensors
-On the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started), you can view basic information about onboarded sensors.
+On the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started), you can view basic information about onboarded sensors.
1. Select **Sites and Sensors**.
-1. On the **Sites and Sensors** page, use filter and search tools to find sensor information that you need.
-
-The available information includes:
+1. Use filter and search tools to find sensor and threat intelligence information that you need.
- How many sensors were onboarded - The number of sensors that are cloud connected and locally managed
@@ -63,32 +61,40 @@ The available information includes:
You use the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) for management tasks related to sensors.
-### Export
+Onboarded sensors can be viewed on the **Sites and Sensors** page. You can also edit sensor information from this page.
+
+### Export sensor details
To export onboarded sensor information, select the **Export** icon on the top of the **Sites and Sensors** page.
-### Edit
+### Edit sensor zone details
+
+Use the **Sites and Sensors** edit options to edit the sensor name and zone.
+
+To edit:
-Use the **Sites and Sensors** editing tools to add and edit the site name, zone, and tags.
+1. Right-click the ellipsis (**...**) for the sensor you want to edit.
+1. Select Edit.
+1. Update the sensor zone or create a new zone.
-### Delete
+### Delete a sensor
If you delete a cloud-connected sensor, information won't be sent to the IoT hub. Delete locally connected sensors when you're no longer working with them. To delete a sensor:
-1. Select the ellipsis (**...**) for the sensor you want to delete.
+1. Select the ellipsis (**...**) for the sensor you want to delete.
1. Confirm the deletion.
-### Reactivate
+### Reactivate a sensor
-You might want to update the mode that your sensor is managed in. For example:
+You may need to reactivate your sensor because you want to:
-- **Work in cloud-connected mode instead of locally managed mode**: To do this, update the activation file for your locally connected sensor with an activation file for a cloud-connected sensor. After reactivation, sensor detections are displayed in both the sensor and the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). After the reactivation file is successfully uploaded, newly detected alert information is sent to Azure.
+- **Work in cloud-connected mode instead of locally managed mode**: After reactivation, sensor detections are displayed in the sensor and newly detected alert information is delivered through the IoT hub. This information can be shared with other Azure services, such as Azure Sentinel.
-- **Work in locally connected mode instead of cloud-connected mode**: To do this, update the activation file for a cloud-connected sensor with an activation file for a locally managed sensor. After reactivation, sensor detection information is displayed only in the sensor.
+- **Work in locally managed mode instead of cloud-connected mode**: After reactivation, sensor detection information is displayed only in the sensor.
-- **Associate the sensor to a new IoT hub**: To do this, re-register then sensor and upload a new activation file.
+- **Associate the sensor to a new IoT hub**: To do this, re-register the sensor with a new hub, and then download a new activation file.
To reactivate a sensor:
@@ -98,19 +104,19 @@ To reactivate a sensor:
3. Delete the sensor.
-4. Onboard the sensor again from the **Onboarding** page in the new mode or with a new IoT hub.
+4. Onboard the sensor again in the new mode or with a new IoT hub by selecting **Onboard a sensor** from the Getting Started page.
-5. Download the activation file from the **Download Activation File** page.
+5. Download the activation file.
-6. Sign in to the Defender for IoT sensor console.
+1. Sign in to the Defender for IoT sensor console.
7. In the sensor console, select **System Settings** and then select **Reactivation**. :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/reactivate.png" alt-text="Upload your activation file to reactivate the sensor.":::
-8. Select **Upload** and select the file you saved.
+8. Select **Upload** and select the file you saved from the Onboard sensor page.
-9. Select **Activate**.
+9. Select **Activate**.
## See also
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-the-alert-event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-the-alert-event.md
@@ -18,7 +18,8 @@ The following options are available for managing alert events:
| **Learn** | Authorize the detected event. For more information, see [About learning and unlearning events](#about-learning-and-unlearning-events). | | **Acknowledge** | Hide the alert once for the detected event. The alert will be triggered again if the event is detected again. For more information, see [About acknowledging and unacknowledging events](#about-acknowledging-and-unacknowledging-events). | | **Mute** | Continuously ignore activity with identical devices and comparable traffic. For more information, see [About muting and unmuting events](#about-muting-and-unmuting-events). |-
+
+You can also export alert information.
## About learning and unlearning events Events that indicate deviations of the learned network might reflect valid network changes. Examples might include a new authorized device that joined the network or an authorized firmware update.
@@ -63,9 +64,9 @@ In these situations, learning is not available. When learning can't be carried o
> [!NOTE] > You can't mute events in which an internet device is defined as the source or destination.
-### What traffic is muted?
+### What alert activity is muted?
-A muted scenario includes the network devices, and traffic detected for an event. The alert title describes the traffic that's being muted.
+A muted scenario includes the network devices and traffic detected for an event. The alert title describes the traffic that is being muted.
The device or devices being muted will be displayed as an image in the alert. If two devices are shown, the specific alerted traffic between them will be muted.
@@ -83,7 +84,7 @@ When an event is muted, it's ignored any time the source sends an HTTP header wi
**After an event is muted:** -- The alert will be accessible in the **Acknowledged** alert view until it's unmuted.
+- The alert will be accessible in the **Acknowledged** alert view until it is unmuted.
- The mute action will appear in the **Event Timeline**.
@@ -101,6 +102,25 @@ When an event is muted, it's ignored any time the source sends an HTTP header wi
2. Hover over an alert to see if it's muted.
+## Export alert information
+
+Export alert information to a .csv file. You can export information for all detected alerts or only for the filtered view. The following information is exported:
+
+- Source address
+- Destination address
+- Alert title
+- Alert severity
+- Alert message
+- Additional information
+- Acknowledged status
+- PCAP availability
+
+To export:
+
+1. Select Alerts from the side menu.
+1. Select Export.
+1. Select Export Extended Alerts to export alert information in separate rows for each alert that covers multiple devices. When Export Extended Alerts is selected, the .csv file will create a duplicate row of the alert event with the unique items in each row. Using this option makes it easier to investigate exported alert events.
+ ## See also [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-set-up-your-network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-your-network.md
@@ -529,24 +529,23 @@ Review this list before site deployment:
| **#** | **Task or activity** | **Status** | **Comments** | |--|--|--|--|
-| 1 | Provide global. | ☐ | |
-| 3 | Order appliances. | ☐ | |
-| 4 | Prepare a list of subnets in the network. | ☐ | |
-| 5 | Provide a VLAN list of the production networks. | ☐ | |
-| 6 | Provide a list of switch models in the network. | ☐ | |
-| 7 | Provide a list of vendors and protocols of the industrial equipment. | ☐ | |
-| 8 | Provide network details for sensors (IP address, subnet, D-GW, DNS). | ☐ | |
-| 9 | Create necessary firewall rules and the access list. | ☐ | |
-| 10 | Create spanning ports on switches for port monitoring, or configure network taps as desired. | ☐ | |
-| 11 | Prepare rack space for sensor appliances. | ☐ | |
-| 12 | Prepare a workstation for personnel. | ☐ | |
-| 13 | Provide a keyboard, monitor, and mouse for the Defender for IoT rack devices. | ☐ | |
-| 14 | Rack and cable the appliances. | ☐ | |
-| 15 | Allocate site resources to support deployment. | ☐ | |
-| 16 | Create Active Directory groups or local users. | ☐ | |
-| 17 | Set-up training (self-learning). | ☐ | |
-| 18 | Go or no-go. | ☐ | |
-| 19 | Schedule the deployment date. | ☐ | |
+| 1 | Order appliances. | ☐ | |
+| 2 | Prepare a list of subnets in the network. | ☐ | |
+| 3 | Provide a VLAN list of the production networks. | ☐ | |
+| 4 | Provide a list of switch models in the network. | ☐ | |
+| 5 | Provide a list of vendors and protocols of the industrial equipment. | ☐ | |
+| 6 | Provide network details for sensors (IP address, subnet, D-GW, DNS). | ☐ | |
+| 7 | Create necessary firewall rules and the access list. | ☐ | |
+| 8 | Create spanning ports on switches for port monitoring, or configure network taps as desired. | ☐ | |
+| 9 | Prepare rack space for sensor appliances. | ☐ | |
+| 10 | Prepare a workstation for personnel. | ☐ | |
+| 11 | Provide a keyboard, monitor, and mouse for the Defender for IoT rack devices. | ☐ | |
+| 12 | Rack and cable the appliances. | ☐ | |
+| 13 | Allocate site resources to support deployment. | ☐ | |
+| 14 | Create Active Directory groups or local users. | ☐ | |
+| 15 | Set-up training (self-learning). | ☐ | |
+| 16 | Go or no-go. | ☐ | |
+| 17 | Schedule the deployment date. | ☐ | |
| **Date** | **Note** | **Deployment date** | **Note** |
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-view-alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-view-alerts.md
@@ -1,6 +1,6 @@
Title: View alerts
-description: View alerts according to various categories, and use search features to help you find alerts of interest.
+description: View alerts according to various categories, and use search features to help you find alerts of interest.
@@ -13,13 +13,13 @@
This article describes how to view alerts triggered by your sensor and manage them with alert tools.
-You can view alerts based on various categories, such as alerts that have been archived or pinned. Or you can search for alerts of interest, such as alerts based on an IP or MAC address.
+You can view alerts based on various categories, such as alerts that have been archived or pinned. You can also search for alerts of interest, such as alerts based on an IP or MAC address.
You can also view alerts from the sensor dashboard. To view alerts: -- Select **Alerts** from the side menu. The **Alerts** window displays the alerts that your sensor has detected.
+- Select **Alerts** from the side menu. The Alerts window displays the alerts that your sensor has detected.
:::image type="content" source="media/how-to-work-with-alerts-sensor/alerts-screen.png" alt-text="View of the Alerts screen.":::
@@ -37,21 +37,21 @@ You can view alerts according to various categories from the **Alerts** main vie
## Search for alerts of interest
-The **Alerts** main view provides various search features to help you find alerts of interest.
+The Alerts main view provides various search features to help you find alerts of interest.
:::image type="content" source="media/how-to-work-with-alerts-sensor/main-alerts-view.png" alt-text="Alerts learning screenshot.":::
-### Text search
+### Text search
-Use the **Free Search** option to search for alerts by text, numbers, or characters.
+Use the Free Search option to search for alerts by text, numbers, or characters.
To search: -- Type the required text in the **Free Search** field and press Enter on your keyboard.
+- Type the required text in the Free Search field and press Enter on your keyboard.
To clear the search: -- Delete the text in the **Free Search** field and press Enter on your keyboard.
+- Delete the text in the Free Search field and press Enter on your keyboard.
### Device group or device IP address search
@@ -95,7 +95,7 @@ Alert messages provide the following actions:
- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/learn-and-acknowledge-all-alerts.png" border="false"::: to learn and acknowledge all alerts. -- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/export-to-csv.png" border="false"::: to export the alert list to a CSV file and select the export option. Choose **Alert Export** for the regular export-to-CSV option. Or choose **Extended Alert Export** for the possibility to add separate rows for additional information about an alert in the CSV file.
+- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/export-to-csv.png" border="false"::: to export alert information to a .csv file. Use the **Extended Alert Export** option to export alert information in separate rows for each alert that covers multiple devices.
## Alert pop-up window options
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-work-with-alerts-on-premises-management-console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-alerts-on-premises-management-console.md
@@ -114,7 +114,7 @@ To view the devices in a zone map:
## Manage alert events
-You can manage alert events detected by organizational sensors as follows:
+Several options are available for managing alert events from the on-premises management console.
- Learn or acknowledge alert events. Select **Learn & Acknowledge** to learn all alert events that can be authorized and to acknowledge all alert events that are currently not acknowledged.
@@ -122,6 +122,27 @@ You can manage alert events detected by organizational sensors as follows:
- Mute and unmute alert events.
+To learn more about learning, acknowledging and muting alert events, see the sensor [Manage alert events](how-to-manage-the-alert-event.md) article.
+
+## Export alert information
+
+Export alert information to a .csv file. You can export information for all detected alerts or only for the filtered view. The following information is exported:
+
+- Source Address
+- Destination Address
+- Alert title
+- Alert severity
+- Alert message
+- Additional information
+- Acknowledged status
+- PCAP availability
+
+To export:
+
+1. Select Alerts from the side menu.
+1. Select Export.
+1. Select Export Extended Alerts to export alert information in separate rows for each alert that covers multiple devices. When Export Extended Alerts is selected, the .csv file will create a duplicate row of the alert with the unique items in each row. Using this option makes it easier to investigate exported alert events.
+ ## Create alert exclusion rules Instruct Defender for IoT to ignore alert triggers based on:
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
@@ -50,11 +50,11 @@ To create a relationship, you need to specify:
The relationship ID must be unique within the given source twin. It doesn't need to be globally unique. For example, for the twin *foo*, each specific relationship ID must be unique. However, another twin *bar* can have an outgoing relationship that matches the same ID of a *foo* relationship.
-The following code sample illustrates how to create a relationship in your Azure Digital Twins instance.
+The following code sample illustrates how to create a relationship in your Azure Digital Twins instance. It uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
-In your main method, you can now call the `CreateRelationship()` function to create a _contains_ relationship like this:
+This custom function can now be called to create a _contains_ relationship like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseCreateRelationship":::
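The article's sample code lives in the linked repository; purely as a rough sketch of the pattern (assuming an already-authenticated `DigitalTwinsClient` and illustrative twin IDs, not the article's own sample), such a custom method might look like this:

```csharp
using System;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

// Sketch only: a custom method wrapping the CreateOrReplaceRelationshipAsync SDK call.
// The relationship ID is built so it's unique within the source twin.
public static async Task CreateRelationshipAsync(
    DigitalTwinsClient client, string sourceId, string targetId, string relationshipName)
{
    var relationship = new BasicRelationship
    {
        TargetId = targetId,
        Name = relationshipName,
    };

    string relationshipId = $"{sourceId}-{relationshipName}->{targetId}";
    await client.CreateOrReplaceRelationshipAsync(sourceId, relationshipId, relationship);
    Console.WriteLine($"Created '{relationshipName}' relationship from {sourceId} to {targetId}.");
}
```

Calling it with `("Floor1", "Room1", "contains")`, for example, would create a _contains_ relationship from the hypothetical twin *Floor1* to *Room1*.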
@@ -83,11 +83,11 @@ To access the list of **outgoing** relationships for a given twin in the graph,
This returns an `Azure.Pageable<T>` or `Azure.AsyncPageable<T>`, depending on whether you use the synchronous or asynchronous version of the call.
-Here is an example that retrieves a list of relationships:
+Here is an example that retrieves a list of relationships. It uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
-You can now call this method to see the outgoing relationships of the twins like this:
+You can now call this custom method to see the outgoing relationships of the twins like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseFindOutgoingRelationships":::
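As an informal illustration only (not the repository sample), iterating the returned pageable for a twin's outgoing relationships might be sketched like this:

```csharp
using System;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

// Sketch only: enumerate the AsyncPageable<BasicRelationship> returned by GetRelationshipsAsync.
public static async Task ListOutgoingRelationshipsAsync(DigitalTwinsClient client, string twinId)
{
    await foreach (BasicRelationship rel in client.GetRelationshipsAsync<BasicRelationship>(twinId))
    {
        Console.WriteLine($"{rel.SourceId} --{rel.Name}--> {rel.TargetId} (relationship ID: {rel.Id})");
    }
}
```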
@@ -97,23 +97,24 @@ You can use the retrieved relationships to navigate to other twins in your graph
Azure Digital Twins also has an API to find all **incoming** relationships to a given twin. This is often useful for reverse navigation, or when deleting a twin.
-The previous code sample was focused on finding outgoing relationships from a twin. The following example is structured similarly, but finds *incoming* relationships to the twin instead.
+>[!NOTE]
+> `IncomingRelationship` calls don't return the full body of the relationship. For more information on the `IncomingRelationship` class, see its [reference documentation](/dotnet/api/azure.digitaltwins.core.incomingrelationship?view=azure-dotnet&preserve-view=true).
-Note that the `IncomingRelationship` calls don't return the full body of the relationship.
+The code sample in the previous section focused on finding outgoing relationships from a twin. The following example is structured similarly, but finds *incoming* relationships to the twin instead. This example also uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
-You can now call this method to see the incoming relationships of the twins like this:
+You can now call this custom method to see the incoming relationships of the twins like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseFindIncomingRelationships"::: ### List all twin properties and relationships
-Using the above methods for listing outgoing and incoming relationships to a twin, you can create a method that prints full twin information, including the twin's properties and both types of its relationships. Here is an example method, called `FetchAndPrintTwinAsync()`, showing how to do this.
+Using the above methods for listing outgoing and incoming relationships to a twin, you can create a method that prints full twin information, including the twin's properties and both types of its relationships. Here is an example custom method showing how to combine the above custom methods for this purpose.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="FetchAndPrintMethod":::
-You can now call this function in your main method like this:
+You can now call this custom function like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseFetchAndPrint":::
@@ -126,9 +127,11 @@ Relationships are updated using the `UpdateRelationship` method.
The required parameters for the client call are the ID of the source twin (the twin where the relationship originates), the ID of the relationship to update, and a [JSON Patch](http://jsonpatch.com/) document containing the properties and new values you'd like to update.
+Here is sample code showing how to use this method. This example uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
+
-Here is an example of a call to this method, passing in a JSON Patch document with the information to update a property.
+Here is an example of a call to this custom method, passing in a JSON Patch document with the information to update a property.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseUpdateRelationship":::
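For illustration only (the property path and value below are made-up placeholders, not taken from the article), a call that patches a single relationship property might be sketched like this:

```csharp
using System.Threading.Tasks;
using Azure;
using Azure.DigitalTwins.Core;

// Sketch only: patch one property on an existing relationship.
// Both IDs are required because relationship IDs are only unique per source twin.
public static async Task UpdateRelationshipPropertyAsync(
    DigitalTwinsClient client, string sourceTwinId, string relationshipId)
{
    var patch = new JsonPatchDocument();
    patch.AppendReplace("/ownershipUser", "updated-owner");  // placeholder property path and value

    await client.UpdateRelationshipAsync(sourceTwinId, relationshipId, patch);
}
```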
@@ -136,9 +139,11 @@ Here is an example of a call to this method, passing in a JSON Patch document wi
The first parameter specifies the source twin (the twin where the relationship originates). The other parameter is the relationship ID. You need both the twin ID and the relationship ID, because relationship IDs are only unique within the scope of a twin.
+Here is sample code showing how to use this method. This example uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
+
-You can now call this method to delete a relationship like this:
+You can now call this custom method to delete a relationship like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseDeleteRelationship":::
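Again as a sketch rather than the repository sample, the deletion itself is a single SDK call that takes both IDs:

```csharp
using System;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

// Sketch only: delete a relationship identified by its source twin ID and relationship ID.
public static async Task DeleteRelationshipExampleAsync(
    DigitalTwinsClient client, string sourceTwinId, string relationshipId)
{
    await client.DeleteRelationshipAsync(sourceTwinId, relationshipId);
    Console.WriteLine($"Deleted relationship {relationshipId} from {sourceTwinId}.");
}
```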
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
@@ -87,7 +87,7 @@ You can access the details of any digital twin by calling the `GetDigitalTwin()`
This call returns twin data as a strongly-typed object type such as `BasicDigitalTwin`. `BasicDigitalTwin` is a serialization helper class included with the SDK, which will return the core twin metadata and properties in pre-parsed form. Here's an example of how to use this to view twin details: Only properties that have been set at least once are returned when you retrieve a twin with the `GetDigitalTwin()` method.
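The article's sample is in the linked repository; as a rough, illustrative sketch only, reading a twin back as a `BasicDigitalTwin` and printing its metadata and set properties could look like this:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.DigitalTwins.Core;

// Sketch only: retrieve a twin as the BasicDigitalTwin helper type and print its
// pre-parsed metadata and properties. Properties that were never set won't appear in Contents.
public static async Task PrintTwinDetailsAsync(DigitalTwinsClient client, string twinId)
{
    Response<BasicDigitalTwin> response = await client.GetDigitalTwinAsync<BasicDigitalTwin>(twinId);
    BasicDigitalTwin twin = response.Value;

    Console.WriteLine($"Twin {twin.Id} (model: {twin.Metadata.ModelId})");
    foreach (var property in twin.Contents)
    {
        Console.WriteLine($"  {property.Key}: {property.Value}");
    }
}
```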
@@ -209,9 +209,9 @@ The two calls that modify *Twin1* are executed one after another, and change mes
You can delete twins using the `DeleteDigitalTwin()` method. However, you can only delete a twin when it has no more relationships. So, delete the twin's incoming and outgoing relationships first.
-Here is an example of the code to delete twins and their relationships:
+Here is an example of the code to delete twins and their relationships. The `DeleteDigitalTwin` SDK call is highlighted to clarify where it falls in the wider example context.
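The full sample is in the linked repository; the sketch below only illustrates the required ordering (remove outgoing and incoming relationships, then delete the twin) and is not the article's code:

```csharp
using System;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

// Sketch only: a twin can be deleted only after all of its relationships are gone.
public static async Task DeleteTwinAndRelationshipsAsync(DigitalTwinsClient client, string twinId)
{
    // Outgoing relationships are owned by this twin.
    await foreach (BasicRelationship rel in client.GetRelationshipsAsync<BasicRelationship>(twinId))
    {
        await client.DeleteRelationshipAsync(twinId, rel.Id);
    }

    // Incoming relationships are owned by the source twins that point to this twin.
    await foreach (IncomingRelationship rel in client.GetIncomingRelationshipsAsync(twinId))
    {
        await client.DeleteRelationshipAsync(rel.SourceId, rel.RelationshipId);
    }

    await client.DeleteDigitalTwinAsync(twinId);
    Console.WriteLine($"Deleted twin {twinId}.");
}
```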
### Delete all digital twins
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-administer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-administer.md
@@ -46,7 +46,7 @@ To learn more, see the following GitHub repositories and packages:
| Language | Repository | Package | | | - | - |
-| Node | [https://github.com/Azure/azure-sdk-for-node](https://github.com/Azure/azure-sdk-for-node) | [https://www.npmjs.com/package/azure-arm-iotcentral](https://www.npmjs.com/package/azure-arm-iotcentral)
+| Node | [https://github.com/Azure/azure-sdk-for-js](https://github.com/Azure/azure-sdk-for-js) | [https://www.npmjs.com/package/@azure/arm-iotcentral](https://www.npmjs.com/package/@azure/arm-iotcentral)
| Python |[https://github.com/Azure/azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | [https://pypi.org/project/azure-mgmt-iotcentral](https://pypi.org/project/azure-mgmt-iotcentral) | C# | [https://github.com/Azure/azure-sdk-for-net](https://github.com/Azure/azure-sdk-for-net) | [https://www.nuget.org/packages/Microsoft.Azure.Management.IotCentral](https://www.nuget.org/packages/Microsoft.Azure.Management.IotCentral) | Ruby | [https://github.com/Azure/azure-sdk-for-ruby](https://github.com/Azure/azure-sdk-for-ruby) | [https://rubygems.org/gems/azure_mgmt_iot_central](https://rubygems.org/gems/azure_mgmt_iot_central)
@@ -55,4 +55,4 @@ To learn more, see the following GitHub repositories and packages:
## Next steps
-Now that you've learned about how to administer your Azure IoT Central application, the suggested next step is to learn about [Manage users and roles](howto-manage-users-roles.md) in Azure IoT Central.
+Now that you've learned about how to administer your Azure IoT Central application, the suggested next step is to learn about [Manage users and roles](howto-manage-users-roles.md) in Azure IoT Central.
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-export-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
@@ -163,14 +163,14 @@ Now that you have a destination to export your data to, set up data export in yo
## Monitor your export
-In addition to seeing the status of your exports in IoT Central, you can monitor how much data is flowing through your exports and observe export errors in the Azure Monitor data platform. You can access metrics about your exports and device health in charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI. Currently, you can monitor these data export metrics in Azure Monitor:
-
-1. Number of messages incoming to export before filters are applied
-2. Number of messages that pass through filters
-3. Number of messages successfully exported to destinations
-4. Number of errors encountered
-
-[Learn more about how to access IoT Central metrics.](howto-monitor-application-health.md)
+In addition to seeing the status of your exports in IoT Central, you can use [Azure Monitor](../../azure-monitor/overview.md) to see how much data you're exporting and any export errors. You can access export and device health metrics in charts in the Azure portal, with a REST API, or with queries in PowerShell or the Azure CLI. Currently, you can monitor the following data export metrics in Azure Monitor:
+
+- Number of messages incoming to export before filters are applied.
+- Number of messages that pass through filters.
+- Number of messages successfully exported to destinations.
+- Number of errors encountered.
+
+To learn more, see [Monitor the overall health of an IoT Central application](howto-monitor-application-health.md).
## Destinations
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/overview-iot-central-tour https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-tour.md
@@ -92,8 +92,8 @@ The top menu appears on every page:
* To search for device templates and devices, enter a **Search** value. * To change the UI language or theme, choose the **Settings** icon. Learn more about [managing your application preferences](howto-manage-preferences.md)
-* To sign out of the application, choose the **Account** icon.
* To get help and support, choose the **Help** drop-down for a list of resources. You can [get information about your application](./howto-get-app-info.md) from the **About your app** link. In an application on the free pricing plan, the support resources include access to [live chat](howto-show-hide-chat.md).
+* To sign out of the application, choose the **Account** icon.
You can choose between a light theme or a dark theme for the UI:
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/rbac-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/rbac-guide.md
@@ -11,7 +11,7 @@ Last updated 8/30/2020
-# Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control (preview)
+# Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control
> [!NOTE] > Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md).
@@ -44,20 +44,20 @@ More about Azure Key Vault management guidelines, see:
- [Azure Key Vault security overview](security-overview.md) - [Azure Key Vault service limits](service-limits.md)
-## Azure built-in roles for Key Vault data plane operations (preview)
+## Azure built-in roles for Key Vault data plane operations
> [!NOTE] > `Key Vault Contributor` role is for management plane operations to manage key vaults. It does not allow access to keys, secrets and certificates. | Built-in role | Description | ID | | | | |
-| Key Vault Administrator (preview) | Perform all data plane operations on a key vault and all objects in it, including certificates, keys, and secrets. Cannot manage key vault resources or manage role assignments. Only works for key vaults that use the 'Azure role-based access control' permission model. | 00482a5a-887f-4fb3-b363-3b7fe8e74483 |
-| Key Vault Certificates Officer (preview) | Perform any action on the certificates of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |
-| Key Vault Crypto Officer (preview)| Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 |
-| Key Vault Crypto Service Encryption (preview) | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. | e147488a-f6f5-4113-8e2d-b22465e65bf6 |
-| Key Vault Crypto User (preview) | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 |
-| Key Vault Reader (preview)| Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 |
-| Key Vault Secrets Officer (preview)| Perform any action on the secrets of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | b86a8fe4-44ce-4948-aee5-eccb2c155cd7 |
-| Key Vault Secrets User (preview)| Read secret contents. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 |
+| Key Vault Administrator| Perform all data plane operations on a key vault and all objects in it, including certificates, keys, and secrets. Cannot manage key vault resources or manage role assignments. Only works for key vaults that use the 'Azure role-based access control' permission model. | 00482a5a-887f-4fb3-b363-3b7fe8e74483 |
+| Key Vault Certificates Officer | Perform any action on the certificates of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |
+| Key Vault Crypto Officer | Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 |
+| Key Vault Crypto Service Encryption User | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. | e147488a-f6f5-4113-8e2d-b22465e65bf6 |
+| Key Vault Crypto User | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 |
+| Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 |
+| Key Vault Secrets Officer| Perform any action on the secrets of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | b86a8fe4-44ce-4948-aee5-eccb2c155cd7 |
+| Key Vault Secrets User | Read secret contents. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 |
For more information about Azure built-in roles definitions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
@@ -74,8 +74,8 @@ To add role assignments, you must have:
### Enable Azure RBAC permissions on Key Vault
-> [!IMPORTANT]
-> Setting Azure RBAC permission model invalidates all access policies permissions. It can cause outages when equivalent Azure roles aren't assigned.
+> [!NOTE]
+> Changing the permission model requires the 'Microsoft.Authorization/roleAssignments/write' permission, which is part of the [Owner](../../role-based-access-control/built-in-roles.md#owner) and [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) roles. Classic subscription administrator roles like 'Service Administrator' and 'Co-Administrator' are not supported.
1. Enable Azure RBAC permissions on new key vault:
@@ -85,10 +85,13 @@ To add role assignments, you must have:
![Enable Azure RBAC permissions - existing vault](../media/rbac/image-2.png)
+> [!IMPORTANT]
+> Setting Azure RBAC permission model invalidates all access policies permissions. It can cause outages when equivalent Azure roles aren't assigned.
+ ### Assign role > [!Note]
-> It's recommended to use the unique role ID instead of the role name in scripts. Therefore, if a role is renamed, your scripts would continue to work. During preview every role would have "(preview)" suffix, which would be removed later. In this document role name is used only for readability.
+> It's recommended to use the unique role ID instead of the role name in scripts. Then, if a role is renamed, your scripts will continue to work. In this document, role names are used only for readability.
Azure CLI command to create a role assignment:
@@ -107,13 +110,13 @@ In the Azure portal, the Azure role assignments screen is available for all reso
2. Click Access control (IAM) \> Add-role assignment\>Add
-3. Create Key Vault Reader role "Key Vault Reader (preview)" for current user
+3. Create Key Vault Reader role "Key Vault Reader" for current user
![Add role - resource group](../media/rbac/image-5.png) Azure CLI: ```azurecli
-az role assignment create --role "Key Vault Reader (preview)" --assignee {i.e user@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}
+az role assignment create --role "Key Vault Reader" --assignee {i.e user@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}
``` The above role assignment provides the ability to list key vault objects in the key vault.
@@ -124,14 +127,14 @@ Above role assignment provides ability to list key vault objects in key vault.
2. Click Add-role assignment\>Add
-3. Create Key Secrets Officer role "Key Vault Secrets Officer (preview)" for current user.
+3. Create Key Secrets Officer role "Key Vault Secrets Officer" for current user.
![Role assignment - key vault](../media/rbac/image-6.png) Azure CLI: ```azurecli
-az role assignment create --role "Key Vault Secrets Officer (preview)" --assignee {i.e jalichwa@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
+az role assignment create --role "Key Vault Secrets Officer" --assignee {i.e jalichwa@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
``` After creating the above role assignment, you can create, update, and delete secrets.
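As a rough illustration of what that data-plane access looks like in code (a sketch assuming the Azure.Security.KeyVault.Secrets and Azure.Identity packages and a placeholder vault name, not part of this article's portal and CLI steps):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Sketch only: with the data-plane role assigned, an Azure AD caller can manage secrets.
public static async Task SetAndReadSecretAsync()
{
    var client = new SecretClient(
        new Uri("https://<your-key-vault-name>.vault.azure.net/"),  // placeholder vault URI
        new DefaultAzureCredential());

    await client.SetSecretAsync("RBACSecret", "example-value");        // needs Key Vault Secrets Officer
    KeyVaultSecret secret = await client.GetSecretAsync("RBACSecret"); // needs at least Key Vault Secrets User
    Console.WriteLine($"Read back secret '{secret.Name}'.");
}
```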
@@ -142,18 +145,18 @@ After creating above role assignment you can create/update/delete secrets.
### Secret scope role assignment
-1. Open one of previously created secrets, notice Overview and Access control (IAM) (preview)
+1. Open one of previously created secrets, notice Overview and Access control (IAM)
-2. Click Access control(IAM)(preview) tab
+2. Click Access control(IAM) tab
![Role assignment - secret](../media/rbac/image-8.png)
-3. Create Key Secrets Officer role "Key Vault Secrets Officer (preview)" for current user, same like it was done above for the Key Vault.
+3. Create the "Key Vault Secrets Officer" role assignment for the current user, just as was done above for the key vault.
Azure CLI: ```azurecli
-az role assignment create --role "Key Vault Secrets Officer (preview)" --assignee {i.e user@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}/secrets/RBACSecret
+az role assignment create --role "Key Vault Secrets Officer" --assignee {i.e user@microsoft.com} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}/secrets/RBACSecret
``` ### Test and verify
@@ -164,7 +167,7 @@ az role assignment create --role "Key Vault Secrets Officer (preview)" --assigne
1. Validate adding new secret without "Key Vault Secrets Officer" role on key vault level.
-Go to key vault Access control (IAM) tab and remove "Key Vault Secrets Officer (preview)" role assignment for this resource.
+Go to key vault Access control (IAM) tab and remove "Key Vault Secrets Officer" role assignment for this resource.
![Remove assignment - key vault](../media/rbac/image-9.png)
@@ -178,8 +181,8 @@ Create new secret ( Secrets \> +Generate/Import) should show below error:
2. Validate secret editing without "Key Vault Secret Officer" role on secret level. -- Go to previously created secret Access Control (IAM) (preview) tab
- and remove "Key Vault Secrets Officer (preview)" role assignment for
+- Go to previously created secret Access Control (IAM) tab
+ and remove "Key Vault Secrets Officer" role assignment for
this resource. - Navigate to previously created secret. You can see secret properties.
@@ -188,7 +191,7 @@ Create new secret ( Secrets \> +Generate/Import) should show below error:
3. Validate secrets read without reader role on key vault level. -- Go to key vault resource group Access control (IAM) tab and remove "Key Vault Reader (preview)" role assignment.
+- Go to key vault resource group Access control (IAM) tab and remove "Key Vault Reader" role assignment.
- Navigating to key vault's Secrets tab should show below error:
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/rbac-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/rbac-migration.md
@@ -11,11 +11,11 @@ Last updated 8/30/2020
-# Migrate from vault access policy to an Azure role-based access control (preview) permission model
+# Migrate from vault access policy to an Azure role-based access control permission model
Vault access policy model is an existing authorization system built into Key Vault to provide access to keys, secrets, and certificates. You can control access by assigning individual permissions to a security principal (user, group, service principal, or managed identity) at Key Vault scope.
-Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources. Azure RBAC for Key Vault keys, secrets, and certificates access management is currently in Public Preview. With Azure RBAC you control access to resources by creating roles assignments, which consists of three elements: security principal, role definition (predefined set of permissions), and scope (group of resources or individual resource). For more information, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources. With Azure RBAC, you control access to resources by creating role assignments, which consist of three elements: security principal, role definition (predefined set of permissions), and scope (group of resources or individual resource). For more information, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
Before migrating to Azure RBAC, it's important to understand its benefits and limitations.
@@ -34,13 +34,14 @@ Azure RBAC disadvantages:
Azure RBAC has several Azure built-in roles that you can assign to users, groups, service principals, and managed identities. If the built-in roles don't meet the specific needs of your organization, you can create your own [Azure custom roles](../../role-based-access-control/custom-roles.md). Key Vault built-in roles for keys, certificates, and secrets access management:-- Key Vault Administrator (preview)-- Key Vault Reader (preview)-- Key Vault Certificate Officer (preview)-- Key Vault Crypto Officer (preview)-- Key Vault Crypto User (preview)-- Key Vault Secrets Officer (preview)-- Key Vault Secrets User (preview)
+- Key Vault Administrator
+- Key Vault Reader
+- Key Vault Certificate Officer
+- Key Vault Crypto Officer
+- Key Vault Crypto User
+- Key Vault Crypto Service Encryption User
+- Key Vault Secrets Officer
+- Key Vault Secrets User
For more information about existing built-in roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md)
@@ -63,17 +64,17 @@ Access policies predefined permission templates:
### Access policies templates to Azure roles mapping | Access policy template | Operations | Azure role | | | | |
-| Key, Secret, Certificate Management | Keys: all operations <br>Certificates: all operations<br>Secrets: all operations | Key Vault Administrator (preview) |
-| Key & Secret Management | Keys: all operations <br>Secrets: all operations| Key Vault Crypto Officer (preview)<br> Key Vault Secrets Officer (preview)|
-| Secret & Certificate Management | Certificates: all operations <br>Secrets: all operations| Key Vault Certificates Officer (preview)<br> Key Vault Secrets Officer (preview)|
-| Key Management | Keys: all operations| Key Vault Crypto Officer (preview)|
-| Secret Management | Secrets: all operations| Key Vault Secrets Officer (preview)|
-| Certificate Management | Certificates: all operations | Key Vault Certificates Officer (preview)|
-| SQL Server Connector | Keys: get, list, wrap key, unwrap key | Key Vault Crypto Service Encryption (preview)|
+| Key, Secret, Certificate Management | Keys: all operations <br>Certificates: all operations<br>Secrets: all operations | Key Vault Administrator |
+| Key & Secret Management | Keys: all operations <br>Secrets: all operations| Key Vault Crypto Officer <br> Key Vault Secrets Officer |
+| Secret & Certificate Management | Certificates: all operations <br>Secrets: all operations| Key Vault Certificates Officer <br> Key Vault Secrets Officer|
+| Key Management | Keys: all operations| Key Vault Crypto Officer|
+| Secret Management | Secrets: all operations| Key Vault Secrets Officer|
+| Certificate Management | Certificates: all operations | Key Vault Certificates Officer|
+| SQL Server Connector | Keys: get, list, wrap key, unwrap key | Key Vault Crypto Service Encryption User|
| Azure Data Lake Storage or Azure Storage | Keys: get, list, unwrap key | N/A<br> Custom role required| | Azure Backup | Keys: get, list, backup<br> Certificate: get, list, backup | N/A<br> Custom role required|
-| Exchange Online Customer Key | Keys: get, list, wrap key, unwrap key | Key Vault Crypto Service Encryption (preview)|
-| Exchange Online Customer Key | Keys: get, list, wrap key, unwrap key | Key Vault Crypto Service Encryption (preview)|
+| Exchange Online Customer Key | Keys: get, list, wrap key, unwrap key | Key Vault Crypto Service Encryption User|
+| Exchange Online Customer Key | Keys: get, list, wrap key, unwrap key | Key Vault Crypto Service Encryption User|
| Azure Information BYOK | Keys: get, decrypt, sign | N/A<br>Custom role required|
@@ -97,11 +98,14 @@ In general, it's best practice to have one key vault per application and manage
## Vault access policy to Azure RBAC migration steps There are many differences between the Azure RBAC and vault access policy permission models. To avoid outages during migration, the following steps are recommended.
-1. **Identify and assign roles**: identify built-in roles based on mapping table above and create custom roles when needed. Assign roles at scopes, based on scopes mapping guidance. For more information on how to assign roles to key vault, see [Provide access to Key Vault with an Azure role-based access control (preview)](rbac-guide.md)
+1. **Identify and assign roles**: identify built-in roles based on mapping table above and create custom roles when needed. Assign roles at scopes, based on scopes mapping guidance. For more information on how to assign roles to key vault, see [Provide access to Key Vault with an Azure role-based access control](rbac-guide.md)
1. **Validate role assignments**: role assignments in Azure RBAC can take several minutes to propagate. For a guide on how to check role assignments, see [List role assignments at scope](../../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-at-a-scope) 1. **Configure monitoring and alerting on key vault**: it's important to enable logging and set up alerting for access denied exceptions. For more information, see [Monitoring and alerting for Azure Key Vault](./alert.md) 1. **Set Azure role-based access control permission model on Key Vault**: enabling the Azure RBAC permission model invalidates all existing access policies. If an error occurs, the permission model can be switched back with all existing access policies remaining untouched.
+> [!NOTE]
+> Changing the permission model requires the 'Microsoft.Authorization/roleAssignments/write' permission, which is part of the [Owner](../../role-based-access-control/built-in-roles.md#owner) and [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) roles. Classic subscription administrator roles like 'Service Administrator' and 'Co-Administrator' are not supported.
+ > [!NOTE] > When Azure RBAC permission model is enabled, all scripts which attempt to update access policies will fail. It is important to update those scripts to use Azure RBAC.
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/secure-your-key-vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/secure-your-key-vault.md
@@ -22,7 +22,7 @@ For more information on Key Vault, see [About Azure Key Vault](overview.md); for
Access to a key vault is controlled through two interfaces: the **management plane** and the **data plane**. The management plane is where you manage Key Vault itself. Operations in this plane include creating and deleting key vaults, retrieving Key Vault properties, and updating access policies. The data plane is where you work with the data stored in a key vault. You can add, delete, and modify keys, secrets, and certificates.
-Both planes use [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) for authentication. For authorization, the management plane uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and the data plane uses a [Key Vault access policy](./assign-access-policy-portal.md) and [Azure RBAC for Key Vault data plane operations (preview)](./rbac-guide.md).
+Both planes use [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) for authentication. For authorization, the management plane uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and the data plane uses a [Key Vault access policy](./assign-access-policy-portal.md) and [Azure RBAC for Key Vault data plane operations](./rbac-guide.md).
To access a key vault in either plane, all callers (users or applications) must have proper authentication and authorization. Authentication establishes the identity of the caller. Authorization determines which operations the caller can execute. Authentication with Key Vault works in conjunction with [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md), which is responsible for authenticating the identity of any given **security principal**.
@@ -108,7 +108,7 @@ When an Azure role is assigned to an Azure AD security principal, Azure grants a
Key benefits of using Azure RBAC permission over vault access policies are centralized access control management and its integration with [Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md). Privileged Identity Management provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions on resources that you care about.
-For more information about Key Vault data plane with Azure RBAC, see [Key Vault keys, certificates, and secrets with an Azure role-based access control (preview)](rbac-guide.md)
+For more information about Key Vault data plane with Azure RBAC, see [Key Vault keys, certificates, and secrets with an Azure role-based access control](rbac-guide.md)
## Firewalls and virtual networks
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/soft-delete-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/soft-delete-overview.md
@@ -14,6 +14,9 @@ Last updated 12/15/2020
> [!IMPORTANT] > You must enable soft-delete on your key vaults immediately. The ability to opt out of soft-delete will be deprecated soon. See full details [here](soft-delete-change.md)
+> [!IMPORTANT]
+> Deleting a vault that has soft-delete enabled also deletes the settings of services integrated with Key Vault, such as Azure RBAC role assignments, Event Grid subscriptions, and Azure Monitor diagnostic settings. After a soft-deleted key vault is recovered, the settings for integrated services must be recreated manually.
+ Key Vault's soft-delete feature allows recovery of deleted vaults and deleted key vault objects (for example, keys, secrets, certificates), known as soft-delete. Specifically, we address the following scenarios: This safeguard offers the following protections: - Once a secret, key, certificate, or key vault is deleted, it will remain recoverable for a configurable period of 7 to 90 calendar days. If no configuration is specified, the default recovery period will be set to 90 days. This provides users with sufficient time to notice an accidental secret deletion and respond.
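As a rough illustration of that recovery window (a sketch using the Azure.Security.KeyVault.Secrets .NET SDK with placeholder vault and secret names, not part of this article's content):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Sketch only: soft-delete a secret, then recover it within the retention period.
public static async Task DeleteAndRecoverSecretAsync()
{
    var client = new SecretClient(
        new Uri("https://<your-key-vault-name>.vault.azure.net/"),  // placeholder vault URI
        new DefaultAzureCredential());

    DeleteSecretOperation deleteOp = await client.StartDeleteSecretAsync("app-connection-string");
    await deleteOp.WaitForCompletionAsync();  // the secret now sits in the deleted-secrets collection

    // Recoverable any time within the configured 7-90 day retention period.
    RecoverDeletedSecretOperation recoverOp = await client.StartRecoverDeletedSecretAsync("app-connection-string");
    await recoverOp.WaitForCompletionAsync();
}
```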
@@ -22,7 +25,7 @@ Key Vault's soft-delete feature allows recovery of the deleted vaults and delete
## Supporting interfaces
-The soft-delete feature is available through the [REST API](/rest/api/keyvault/), the [Azure CLI](./key-vault-recovery.md), [Azure PowerShell](./key-vault-recovery.md), and [.NET/C#](/dotnet/api/microsoft.azure.keyvault?view=azure-dotnet) interfaces, as well as [ARM templates](/azure/templates/microsoft.keyvault/2019-09-01/vaults).
+The soft-delete feature is available through the [REST API](/rest/api/keyvault/), the [Azure CLI](./key-vault-recovery.md), [Azure PowerShell](./key-vault-recovery.md), and [.NET/C#](/dotnet/api/microsoft.azure.keyvault?view=azure-dotnet&preserve-view=true) interfaces, as well as [ARM templates](/azure/templates/microsoft.keyvault/2019-09-01/vaults).
## Scenarios
key-vault https://docs.microsoft.com/en-us/azure/key-vault/secrets/about-secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/about-secrets.md
@@ -70,7 +70,7 @@ How-to guides to control access in Key Vault:
- [Assign a Key Vault access policy using CLI](../general/assign-access-policy-cli.md) - [Assign a Key Vault access policy using PowerShell](../general/assign-access-policy-powershell.md) - [Assign a Key Vault access policy using the Azure portal](../general/assign-access-policy-portal.md)-- [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control (preview)](../general/rbac-guide.md)
+- [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../general/rbac-guide.md)
## Secret tags You can specify additional application-specific metadata in the form of tags. Key Vault supports up to 15 tags, each of which can have a 256 character name and a 256 character value.
@@ -120,7 +120,7 @@ How-to guides to control access in Key Vault:
- [Assign a Key Vault access policy using CLI](../general/assign-access-policy-cli.md) - [Assign a Key Vault access policy using PowerShell](../general/assign-access-policy-powershell.md) - [Assign a Key Vault access policy using the Azure portal](../general/assign-access-policy-portal.md)-- [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control (preview)](../general/rbac-guide.md)
+- [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../general/rbac-guide.md)
## Next steps
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-output-json-v2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/video-indexer-output-json-v2.md
@@ -100,7 +100,7 @@ This section shows the summary of the insights.
|faces/animatedCharacters|May contain zero or more faces. For more detailed information, see [faces/animatedCharacters](#facesanimatedcharacters).| |keywords|May contain zero or more keywords. For more detailed information, see [keywords](#keywords).| |sentiments|May contain zero or more sentiments. For more detailed information, see [sentiments](#sentiments).|
-|audioEffects| May contain zero or more audioEffects. For more detailed information, see [audioEffects](#audioeffects).|
+|audioEffects| May contain zero or more audioEffects. For more detailed information, see [audioEffects](#audioeffects-public-preview).|
|labels| May contain zero or more labels. For detailed more information, see [labels](#labels).| |brands| May contain zero or more brands. For more detailed information, see [brands](#brands).| |statistics | For more detailed information, see [statistics](#statistics).|
@@ -177,7 +177,7 @@ A face might have an ID, a name, a thumbnail, other metadata, and a list of its
|labels|The [labels](#labels) insight.| |shots|The [shots](#shots) insight.| |brands|The [brands](#brands) insight.|
-|audioEffects|The [audioEffects](#audioeffects) insight.|
+|audioEffects|The [audioEffects](#audioeffects-public-preview) insight.|
|sentiments|The [sentiments](#sentiments) insight.| |visualContentModeration|The [visualContentModeration](#visualcontentmoderation) insight.| |textualContentModeration|The [textualContentModeration](#textualcontentmoderation) insight.|
@@ -586,26 +586,28 @@ Business and product brand names detected in the speech to text transcript and/o
|SpeakerLongestMonolog|The speaker's longest monolog. If the speaker has silences inside the monolog it is included. Silence at the beginning and the end of the monolog is removed.| |SpeakerTalkToListenRatio|The calculation is based on the time spent on the speaker's monolog (without the silence in between) divided by the total time of the video. The time is rounded to the third decimal point.|
-#### audioEffects
+#### audioEffects (public preview)
-|Name|Description|
+|Name|Description
|||
-|id|The audio effect ID.|
-|type|The audio effect type (for example, Clapping, Speech, Silence).|
-|instances|A list of time ranges where this audio effect appeared.|
+|id|The audio effect ID|
+|type|The audio effect type|
+|instances|A list of time ranges where this audio effect appeared. Each instance has a confidence field.|
```json "audioEffects": [ { "id": 0,
- "type": "Clapping",
+ "type": "Siren",
"instances": [ {
+ "confidence": 0.87,
"start": "00:00:00", "end": "00:00:03" }, {
- "start": "00:01:13",
+ "confidence": 0.87,
+ "start": "00:01:13",
"end": "00:01:21" } ]
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/video-indexer-overview.md
@@ -9,7 +9,7 @@
Previously updated : 09/11/2020 Last updated : 02/05/2021
@@ -80,7 +80,7 @@ The following list shows the insights you can retrieve from your videos using Vi
* **Speaker enumeration**: Maps and understands which speaker spoke which words and when. Sixteen speakers can be detected in a single audio-file. * **Speaker statistics**: Provides statistics for speakers' speech ratios. * **Textual content moderation**: Detects explicit text in the audio transcript.
-* **Audio effects**: Identifies audio effects like hand claps, speech, and silence.
+* **Audio effects** (public preview): Detects the following audio effects in the non-speech segments of the content: Gunshot, Glass shatter, Alarm, Siren, Explosion, Dog Bark, Screaming, Laughter, Crowd reactions (cheering, clapping, and booing) and Silence. Note: the full set of events is available only when choosing 'Advanced Audio Analysis' in upload preset, otherwise only 'Silence' and 'Crowd reaction' will be available.
* **Emotion detection**: Identifies emotions based on speech (what's being said) and voice tonality (how it's being said). The emotion could be joy, sadness, anger, or fear. * **Translation**: Creates translations of the audio transcript to 54 different languages.
mysql https://docs.microsoft.com/en-us/azure/mysql/flexible-server/quickstart-create-server-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/quickstart-create-server-portal.md
@@ -80,17 +80,35 @@ By default, these databases are created under your server: information_schema, m
## Connect to the server by using mysql.exe
-If you created your flexible server by using private access (VNet Integration), you'll need to connect to your server from a resource within the same virtual network as your server. You can create a virtual machine and add it to the virtual network created with your flexible server.
+If you created your flexible server by using private access (VNet Integration), you'll need to connect to your server from a resource within the same virtual network as your server. You can create a virtual machine and add it to the virtual network created with your flexible server. To learn more, see the [private access documentation](how-to-manage-virtual-network-portal.md).
-If you created your flexible server by using public access (allowed IP addresses), you can add your local IP address to the list of firewall rules on your server.
+If you created your flexible server by using public access (allowed IP addresses), you can add your local IP address to the list of firewall rules on your server. For step-by-step guidance, see the [create or manage firewall rules documentation](how-to-manage-firewall-portal.md).
You can use either [mysql.exe](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) or [MySQL Workbench](./connect-workbench.md) to connect to the server from your local environment.
-If you're using mysql.exe, connect by using the following command. Use your server name, user name, and password in the command.
- ```bash
- mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p
+wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
+mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl=true --ssl-ca=DigiCertGlobalRootCA.crt.pem
```+
+If you provisioned your flexible server by using **public access**, you can also use [Azure Cloud Shell](https://shell.azure.com/bash) to connect to your flexible server by using the pre-installed mysql client, as shown below:
+
+To use Azure Cloud Shell to connect to your flexible server, you must allow network access from Azure Cloud Shell to your flexible server. To do this, go to the **Networking** blade for your MySQL flexible server in the Azure portal, select the **Allow public access from any Azure service within Azure to this server** checkbox under the **Firewall** section, and then select **Save** to persist the setting.
+
+> [!NOTE]
+> Checking the **Allow public access from any Azure service within Azure to this server** should be used for development or testing only. It configures the firewall to allow connections from IP addresses allocated to any Azure service or asset, including connections from the subscriptions of other customers.
+
+Select **Try it** to launch Azure Cloud Shell, and then use the following commands to connect to your flexible server. Use your server name, user name, and password in the command.
+
+```azurecli-interactive
+wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
+mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl=true --ssl-ca=DigiCertGlobalRootCA.crt.pem
+```
+
+If you see the following error message when connecting to your flexible server with the preceding command, the **Allow public access from any Azure service within Azure to this server** firewall option mentioned earlier either wasn't set or wasn't saved. Set the firewall rule and then try again.
+
+`ERROR 2002 (HY000): Can't connect to MySQL server on <servername> (115)`
+ ## Clean up resources You have now created an Azure Database for MySQL flexible server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the MySQL server. To delete the resource group, complete these steps:
@@ -107,4 +125,4 @@ To delete the server, you can select **Delete** on **Overview** page for your se
## Next steps > [!div class="nextstepaction"]
-> [Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
+> [Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
@@ -355,7 +355,7 @@ https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecurity
- Performance Tier: Currently, only standard tier storage accounts are supported. - Self-manage key rotation: If you change/rotate the access keys to your storage account, NSG Flow Logs will stop working. To fix this issue, you must disable and then re-enable NSG Flow Logs.
-**Flow Logging Costs**: NSG flow logging is billed on the volume of logs produced. High traffic volume can result in large flow log volume and the associated costs. NSG Flow log pricing does not include the underlying costs of storage. Using the retention policy feature with NSG Flow Logging means incurring separate storage costs for extended periods of time. If you do not require the retention policy feature, we recommend that you set this value to 0. For more information, see [Network Watcher Pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/) for additional details.
+**Flow Logging Costs**: NSG flow logging is billed on the volume of logs produced. High traffic volume can result in large flow log volume and the associated costs. NSG Flow log pricing does not include the underlying costs of storage. Using the retention policy feature with NSG Flow Logging means incurring separate storage costs for extended periods of time. If you want to retain data forever and do not want to apply any retention policy, set retention (days) to 0. For more information, see [Network Watcher Pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/) for additional details.
**Issues with User-defined Inbound TCP rules**: [Network Security Groups (NSGs)](../virtual-network/network-security-groups-overview.md) are implemented as a [Stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). However, due to current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless fashion. Due to this, flows affected by user-defined inbound rules become non-terminating. Additionally byte and packet counts are not recorded for these flows. Consequently the number of bytes and packets reported in NSG Flow Logs (and Traffic Analytics) could be different from actual numbers. An opt-in flag that fixes these issues is scheduled to be available by March 2021 latest. In the interim, customers facing severe issues due to this behavior can request opting-in via Support, please raise a support request under Network Watcher > NSG Flow Logs.
remote-rendering https://docs.microsoft.com/en-us/azure/remote-rendering/concepts/sessions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/concepts/sessions.md
@@ -34,9 +34,9 @@ Every session undergoes multiple phases.
### Session startup
-When you ask ARR to [create a new session](../how-tos/session-rest-api.md#create-a-session), the first thing it does is to return a session [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier). This UUID allows you to query information about the session. The UUID and some basic information about the session are persisted for 30 days, so you can query that information even after the session has been stopped. At this point, the **session state** will be reported as **Starting**.
+When you ask ARR to [create a new session](../how-tos/session-rest-api.md), the first thing it does is to return a session [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier). This UUID allows you to query information about the session. The UUID and some basic information about the session are persisted for 30 days, so you can query that information even after the session has been stopped. At this point, the **session state** will be reported as **Starting**.
-Next, Azure Remote Rendering tries to find a server that can host your session. There are two parameters for this search. First, it will only reserve servers in your [region](../reference/regions.md). That's because the network latency across regions may be too high to guarantee a decent experience. The second factor is the desired *size* that you specified. In each region, there is a limited number of servers that can fulfill the [*Standard*](../reference/vm-sizes.md) or [*Premium*](../reference/vm-sizes.md) size request. Consequently, if all servers of the requested size are currently in use in your region, session creation will fail. The reason for failure [can be queried](../how-tos/session-rest-api.md#get-sessions-properties).
+Next, Azure Remote Rendering tries to find a server that can host your session. There are two parameters for this search. First, it will only reserve servers in your [region](../reference/regions.md). That's because the network latency across regions may be too high to guarantee a decent experience. The second factor is the desired *size* that you specified. In each region, there is a limited number of servers that can fulfill the [*Standard*](../reference/vm-sizes.md) or [*Premium*](../reference/vm-sizes.md) size request. Consequently, if all servers of the requested size are currently in use in your region, session creation will fail. The reason for failure [can be queried](../how-tos/session-rest-api.md).
> [!IMPORTANT] > If you request a *Standard* server size and the request fails due to high demand, that doesn't imply that requesting a *Premium* server will fail, as well. So if it is an option for you, you can try falling back to a *Premium* server size.
@@ -72,7 +72,7 @@ In all cases, you won't be billed further once a session is stopped.
#### Extend a session's lease time
-You can [extend the lease time](../how-tos/session-rest-api.md#modify-and-query-session-properties) of an active session, if it turns out that you need it longer.
+You can [extend the lease time](../how-tos/session-rest-api.md) of an active session, if it turns out that you need it longer.
## Example code
remote-rendering https://docs.microsoft.com/en-us/azure/remote-rendering/how-tos/conversion/conversion-rest-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/how-tos/conversion/conversion-rest-api.md
@@ -9,140 +9,15 @@
# Use the model conversion REST API
-The [model conversion](model-conversion.md) service is controlled through a [REST API](https://en.wikipedia.org/wiki/Representational_state_transfer). This article describes the conversion service API details.
+The [model conversion](model-conversion.md) service is controlled through a [REST API](https://en.wikipedia.org/wiki/Representational_state_transfer). This API can be used to create conversions, get conversion properties, and list existing conversions.
-## Regions
+## REST API reference
-See the [list of available regions](../../reference/regions.md) for the base URLs to send the requests to.
+See the [Remote Rendering REST API reference documentation](https://docs.microsoft.com/rest/api/mixedreality/2021-01-01preview/remoterendering) and the [swagger definitions](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/mixedreality/data-plane/Microsoft.MixedReality).
-## Common headers
-
-### Common request headers
-
-These headers must be specified for all requests:
--- The **Authorization** header must have the value of "Bearer [*TOKEN*]", where [*TOKEN*] is a [service access token](../tokens.md).-
-### Common response headers
-
-All responses contain these headers:
--- The **MS-CV** header contains a unique string that can be used to trace the call within the service.-
-## Endpoints
-
-The conversion service provides three REST API endpoints to:
--- start model conversion using a storage account linked with your Azure Remote Rendering account. -- start model conversion using provided *Shared Access Signatures (SAS)*.-- query the conversion status-
-### Start conversion using a linked storage account
-Your Azure Remote Rendering Account needs to have access to the provided storage account by following the steps on how to [Link storage accounts](../create-an-account.md#link-storage-accounts).
-
-| Endpoint | Method |
-|--|:--|
-| /v1/accounts/**accountID**/conversions/create | POST |
-
-Returns the ID of the ongoing conversion, wrapped in a JSON document. The field name is "conversionId".
-
-#### Request body
-
-> [!NOTE]
-> Everything under `input.folderPath` will get retrieved to perform the conversion on Azure. If `input.folderPath` is not specified, the whole contents of the container will get retrieved. All blobs and folders which get retrieved must have [valid Windows file names](/windows/win32/fileio/naming-a-file#naming-conventions).
-
-```json
-{
- "input":
- {
- "storageAccountname": "<the name of a connected storage account - this does not include the domain suffix (.blob.core.windows.net)>",
- "blobContainerName": "<the name of the blob container containing your input asset data>",
- "folderPath": "<optional: can be omitted or empty - a subpath in the input blob container>",
- "inputAssetPath" : "<path to the model in the input blob container relative to the folderPath (or container root if no folderPath is specified)>"
- },
- "output":
- {
- "storageAccountname": "<the name of a connected storage account - this does not include the domain suffix (.blob.core.windows.net)>",
- "blobContainerName": "<the name of the blob container where the converted asset will be copied to>",
- "folderPath": "<optional: can be omitted or empty - a subpath in the output blob container. Will contain the asset and log files>",
- "outputAssetFileName": "<optional: can be omitted or empty. The filename of the converted asset. If provided the filename needs to end in .arrAsset>"
- }
-}
-```
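As a rough illustration of how the endpoint and request body above fit together, the following PowerShell sketch posts a conversion request; `$endPoint`, `$accountId`, `$token`, and the `conversion.json` file are placeholder assumptions.

```PowerShell
# Sketch: start a conversion using a linked storage account.
# conversion.json contains the request body shown above.
$body = Get-Content -Raw -Path "conversion.json"
$response = Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/conversions/create" `
    -Method Post -ContentType "application/json" -Body $body `
    -Headers @{ Authorization = "Bearer $token" }
$conversionId = (ConvertFrom-Json $response.Content).conversionId
```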
-### Start conversion using provided shared access signatures
-If your ARR account isn't linked to your storage account, this REST interface allows you to provide access using *Shared Access Signatures (SAS)*.
-
-| Endpoint | Method |
-|--|:--|
-| /v1/accounts/**accountID**/conversions/createWithSharedAccessSignature | POST |
-
-Returns the ID of the ongoing conversion, wrapped in a JSON document. The field name is `conversionId`.
-
-#### Request body
-
-The request body is the same as in the create REST call above, but input and output contain *Shared Access Signatures (SAS) tokens*.
-These tokens provide access to the storage account for reading the input and writing the conversion result.
-
-> [!NOTE]
-> These SAS URI tokens are the query strings and not the full URI.
-
-> [!NOTE]
-> Everything under `input.folderPath` will get retrieved to perform the conversion on Azure. If `input.folderPath` is not specified, the whole contents of the container will get retrieved. All blobs and folders which get retrieved must have [valid Windows file names](/windows/win32/fileio/naming-a-file#naming-conventions).
-
-```json
-{
- "input":
- {
- "storageAccountname": "<the name of a connected storage account - this does not include the domain suffix (.blob.core.windows.net)>",
- "blobContainerName": "<the name of the blob container containing your input asset data>",
- "folderPath": "<optional: can be omitted or empty - a subpath in the input blob container>",
- "inputAssetPath" : "<path to the model in the input blob container relative to the folderPath (or container root if no folderPath is specified)>",
- "containerReadListSas" : "<a container SAS token which gives read and list access to the given input blob container>"
- },
- "output":
- {
- "storageAccountname": "<the name of a connected storage account - this does not include the domain suffix (.blob.core.windows.net)>",
- "blobContainerName": "<the name of the blob container where the converted asset will be copied to>",
- "folderPath": "<optional: can be omitted or empty - a subpath in the output blob container. Will contain the asset and log files>",
- "outputAssetFileName": "<optional: can be omitted or empty. The filename of the converted asset. If provided the filename needs to end in .arrAsset>",
- "containerWriteSas" : "<a container SAS token which gives write access to the given output blob container>"
- }
-}
-```
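For context, the SAS values above are plain query strings; the following Az.Storage sketch is one hedged way to produce them (storage account, key, and container names are placeholder assumptions).

```PowerShell
# Sketch: generate container SAS query strings for the request body above.
# The cmdlet returns the SAS query string, not a full URI.
$ctx = New-AzStorageContext -StorageAccountName "arrstorage" -StorageAccountKey $storageKey
$expiry = (Get-Date).AddHours(24)
$containerReadListSas = New-AzStorageContainerSASToken -Name "arrinput" -Permission rl -ExpiryTime $expiry -Context $ctx
$containerWriteSas = New-AzStorageContainerSASToken -Name "arroutput" -Permission w -ExpiryTime $expiry -Context $ctx
```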
-
-### Poll conversion status
-The status of an ongoing conversion started with one of the REST calls above can be queried using the following interface:
--
-| Endpoint | Method |
-|--|:--|
-| /v1/accounts/**accountID**/conversions/**conversionId** | GET |
-
-Returns a JSON document with a "status" field that can have the following values:
--- "Created"-- "Running"-- "Success"-- "Failure"-
-If the status is "Failure", there will be an additional "error" field with a "message" subfield containing error information. Additional logs will be uploaded to your output container.
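A minimal PowerShell sketch of the status poll, assuming the same placeholder variables as the conversion example above:

```PowerShell
# Sketch: query the status of the conversion started earlier.
$result = Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/conversions/$conversionId" `
    -Method Get -Headers @{ Authorization = "Bearer $token" }
(ConvertFrom-Json $result.Content).status
```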
-
-## List Conversions
-
-To get a list of all conversions for an account, use the interface:
-
-| Endpoint | Method |
-|--|:--|
-| /v1/accounts/**accountID**/conversions?skiptoken=**skipToken** | GET |
-
-| Parameter | Required |
-|--|:--|
-| accountID | Yes |
-| skiptoken | No |
-
-Returns a json document that contains an array of conversions and their details. This query returns a maximum of 50 conversions at a time. In the situation where there are more conversions to retrieve, the response will contain a **nextLink** property containing the skipToken that can be queried to retrieve the next set of results.
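A hedged sketch of paging through the list, reusing the placeholders from the earlier examples:

```PowerShell
# Sketch: list conversions; pass the skipToken from a previous page's nextLink to fetch the next 50.
Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/conversions?skiptoken=$skipToken" `
    -Method Get -Headers @{ Authorization = "Bearer $token" }
```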
+We provide a PowerShell script in the [ARR samples repository](https://github.com/Azure/azure-remote-rendering) in the *Scripts* folder, called *Conversion.ps1*, which demonstrates the use of our service. The script and its configuration are described here: [Example PowerShell scripts](../../samples/powershell-example-scripts.md). We also provide SDKs for [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/mixedreality/Azure.MixedReality.RemoteRendering), Java, and Python.
## Next steps - [Use Azure Blob Storage for model conversion](blob-storage.md)-- [Model conversion](model-conversion.md)
+- [Model conversion](model-conversion.md)
remote-rendering https://docs.microsoft.com/en-us/azure/remote-rendering/how-tos/session-rest-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/how-tos/session-rest-api.md
@@ -9,283 +9,21 @@
# Use the session management REST API
-To use Azure Remote Rendering functionality, you need to create a *session*. Each session corresponds to a virtual machine (VM) being allocated in Azure and waiting for a client device to connect. When a device connects, the VM renders the requested data and serves the result as a video stream. During session creation, you chose which kind of server you want to run on, which determines pricing. Once the session is not needed anymore, it should be stopped. If not stopped manually, it will be shut down automatically when the session's *lease time* expires.
+To use Azure Remote Rendering functionality, you need to create a *session*. Each session corresponds to a server being allocated in Azure to which a client device can connect. When a device connects, the server renders the requested data and serves the result as a video stream. During session creation, you choose which [kind of server](../reference/vm-sizes.md) you want to run on, which determines pricing. Once the session isn't needed anymore, it should be stopped. If not stopped manually, it will be shut down automatically when the session's *lease time* expires.
-We provide a PowerShell script in the [ARR samples repository](https://github.com/Azure/azure-remote-rendering) in the *Scripts* folder, called *RenderingSession.ps1*, which demonstrates the use of our service. The script and its configuration are described here: [Example PowerShell scripts](../samples/powershell-example-scripts.md)
+## REST API reference
-> [!TIP]
-> The PowerShell commands listed on this page are meant to complement each other. If you run all scripts in sequence within the same PowerShell command prompt, they will build on top of each other.
-
-## Regions
-
-See the [list of available regions](../reference/regions.md) for the base URLs to send the requests to.
-
-For the sample scripts below we chose the region *westus2*.
-
-### Example script: Choose an endpoint
-
-```PowerShell
-$endPoint = "https://remoterendering.westus2.mixedreality.azure.com"
-```
-
-## Accounts
-
-If you don't have a Remote Rendering account, [create one](create-an-account.md). Each resource is identified by an *accountId*, which is used throughout the session APIs.
-
-### Example script: Set accountId, accountKey and account domain
-
-The account domain is the location of the remote rendering account. In this example, the account's location is the *eastus* region.
-
-```PowerShell
-$accountId = "********-****-****-****-************"
-$accountKey = "*******************************************="
-$accountDomain = "eastus.mixedreality.azure.com"
-```
-
-## Common request headers
-
-* The *Authorization* header must have the value of "`Bearer TOKEN`", where "`TOKEN`" is the authentication token [returned by the Secure Token Service](tokens.md).
-
-### Example script: Request a token
-
-```PowerShell
-[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12
-$webResponse = Invoke-WebRequest -Uri "https://sts.$accountDomain/accounts/$accountId/token" -Method Get -ContentType "application/json" -Headers @{ Authorization = "Bearer ${accountId}:$accountKey" }
-$response = ConvertFrom-Json -InputObject $webResponse.Content
-$token = $response.AccessToken;
-```
-
-## Common response headers
-
-* The *MS-CV* header can be used by the product team to trace the call within the service.
-
-## Create a session
-
-This command creates a session. It returns the ID of the new session. You need the session ID for all other commands.
-
-| URI | Method |
-|--|:--|
-| /v1/accounts/*accountId*/sessions/create | POST |
-
-**Request body:**
-
-* maxLeaseTime (timespan): a timeout value when the session will be decommissioned automatically
-* models (array): asset container URLs to preload
-* size (string): the server size to configure ([**"standard"**](../reference/vm-sizes.md) or [**"premium"**](../reference/vm-sizes.md)). See specific [size limitations](../reference/limits.md#overall-number-of-polygons).
-
-**Responses:**
-
-| Status code | JSON payload | Comments |
-|--|:--|:--|
-| 202 | - sessionId: GUID | Success |
-
-### Example script: Create a session
-
-```PowerShell
-Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/sessions/create" -Method Post -ContentType "application/json" -Body "{ 'maxLeaseTime': '4:0:0', 'models': [], 'size': 'standard' }" -Headers @{ Authorization = "Bearer $token" }
-```
-
-Example output:
-
-```PowerShell
-StatusCode : 202
-StatusDescription : Accepted
-Content : {"sessionId":"d31bddca-dab7-498e-9bc9-7594bc12862f"}
-RawContent : HTTP/1.1 202 Accepted
- MS-CV: 5EqPJ1VdTUakDJZc6/ZhTg.0
- Content-Length: 52
- Content-Type: application/json; charset=utf-8
- Date: Thu, 09 May 2019 16:17:50 GMT
- Location: accounts/11111111-1111-1111-11...
-Forms : {}
-Headers : {[MS-CV, 5EqPJ1VdTUakDJZc6/ZhTg.0], [Content-Length, 52], [Content-Type, application/json;
- charset=utf-8], [Date, Thu, 09 May 2019 16:17:50 GMT]...}
-Images : {}
-InputFields : {}
-Links : {}
-ParsedHtml : mshtml.HTMLDocumentClass
-RawContentLength : 52
-```
-
-### Example script: Store sessionId
-
-The response from the request above includes a **sessionId**, which you need for all followup requests.
-
-```PowerShell
-$sessionId = "d31bddca-dab7-498e-9bc9-7594bc12862f"
-```
-
-## Modify and query session properties
-
-There are a few commands to query or modify the parameters of existing sessions.
-
-> [!CAUTION]
-> As for all REST calls, sending these commands too frequently will cause the server to throttle and return failure eventually. The status code in this case is 429 ("too many requests"). As a rule of thumb, there should be a delay of **5-10 seconds between subsequent calls**.
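One way to honor that guidance (a sketch only, using the session properties endpoint documented below and the placeholder variables from the earlier scripts):

```PowerShell
# Sketch: poll session properties no more than once every 10 seconds to avoid HTTP 429 responses.
do {
    Start-Sleep -Seconds 10
    $props = Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/sessions/$sessionId/properties" `
        -Method Get -Headers @{ Authorization = "Bearer $token" }
    $status = (ConvertFrom-Json $props.Content).sessionStatus
} while ($status -eq "Starting")
```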
-
-### Update session parameters
-
-This command updates a session's parameters. Currently you can only extend the lease time of a session.
+The REST API reference can be found [here](https://docs.microsoft.com/rest/api/mixedreality/2021-01-01preview/remoterendering) and the swagger definitions [here](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/mixedreality/data-plane/Microsoft.MixedReality).
+We provide a PowerShell script in the [ARR samples repository](https://github.com/Azure/azure-remote-rendering) in the *Scripts* folder, called *RenderingSession.ps1*, which demonstrates the use of our service. The script and its configuration are described here: [Example PowerShell scripts](../samples/powershell-example-scripts.md).
+We also provide SDKs for [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/mixedreality/Azure.MixedReality.RemoteRendering), Java, and Python.
> [!IMPORTANT]
-> The lease time is always given as a total time since the session's beginning. That means if you created a session with a lease time of one hour, and you want to extend its lease time for another hour, you have to update its maxLeaseTime to two hours.
-
-| URI | Method |
-|--|:--|
-| /v1/accounts/*accountID*/sessions/*sessionId* | PATCH |
-
-**Request body:**
-
-* maxLeaseTime (timespan): a timeout value when the session will be decommissioned automatically
-
-**Responses:**
+> Latency is an important factor when using remote rendering. For the best experience, create sessions in the region that is closest to you. The [Azure Latency Test](https://www.azurespeed.com/Azure/Latency) can help you determine which region that is.
-| Status code | JSON payload | Comments |
-|--|:--|:--|
-| 200 | | Success |
-
-#### Example script: Update a session
-
-```PowerShell
-Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/sessions/$sessionId" -Method Patch -ContentType "application/json" -Body "{ 'maxLeaseTime': '5:0:0' }" -Headers @{ Authorization = "Bearer $token" }
-```
-
-Example output:
-
-```PowerShell
-StatusCode : 200
-StatusDescription : OK
-Content : {}
-RawContent : HTTP/1.1 200 OK
- MS-CV: Fe+yXCJumky82wuoedzDTA.0
- Content-Length: 0
- Date: Thu, 09 May 2019 16:27:31 GMT
--
-Headers : {[MS-CV, Fe+yXCJumky82wuoedzDTA.0], [Content-Length, 0], [Date, Thu, 09 May 2019 16:27:31 GMT]}
-RawContentLength : 0
-```
-
-### Get active sessions
-
-This command returns a list of active sessions.
-
-| URI | Method |
-|--|:--|
-| /v1/accounts/*accountId*/sessions | GET |
-
-**Responses:**
-
-| Status code | JSON payload | Comments |
-|--|:--|:--|
-| 200 | - sessions: array of session properties | see "Get session properties" section for a description of session properties |
-
-#### Example script: Query active sessions
-
-```PowerShell
-Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/sessions" -Method Get -Headers @{ Authorization = "Bearer $token" }
-```
-
-Example output:
-
-```PowerShell
-StatusCode : 200
-StatusDescription : OK
-Content : []
-RawContent : HTTP/1.1 200 OK
- MS-CV: WfB9Cs5YeE6S28qYkp7Bhw.1
- Content-Length: 15
- Content-Type: application/json; charset=utf-8
- Date: Thu, 25 Jul 2019 16:23:50 GMT
-
- {"sessions":[]}
-Forms : {}
-Headers : {[MS-CV, WfB9Cs5YeE6S28qYkp7Bhw.1], [Content-Length, 2], [Content-Type, application/json;
- charset=utf-8], [Date, Thu, 25 Jul 2019 16:23:50 GMT]}
-Images : {}
-InputFields : {}
-Links : {}
-ParsedHtml : mshtml.HTMLDocumentClass
-RawContentLength : 2
-```
-
-### Get session properties
-
-This command returns information about a session, such as its VM hostname.
-
-| URI | Method |
-|--|:--|
-| /v1/accounts/*accountId*/sessions/*sessionId*/properties | GET |
-
-**Responses:**
-
-| Status code | JSON payload | Comments |
-|--|:--|:--|
-| 200 | - message: string<br/>- sessionElapsedTime: timespan<br/>- sessionHostname: string<br/>- sessionId: string<br/>- sessionMaxLeaseTime: timespan<br/>- sessionSize: enum<br/>- sessionStatus: enum | enum sessionStatus { starting, ready, stopping, stopped, expired, error}<br/>If the status is 'error' or 'expired', the message will contain more information |
-
-#### Example script: Get session properties
-
-```PowerShell
-Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/sessions/$sessionId/properties" -Method Get -Headers @{ Authorization = "Bearer $token" }
-```
-
-Example output:
-
-```PowerShell
-StatusCode : 200
-StatusDescription : OK
-Content : {"message":null,"sessionElapsedTime":"00:00:01","sessionHostname":"5018fee8-817e-4366-9179-556af79a4240.remoterenderingvm.westeurope.mixedreality.azure.com","sessionId":"e13d2c44-63e0-4591-991e-f9e05e599a93","sessionMaxLeaseTime":"04:00:00","sessionStatus":"Ready"}
-RawContent : HTTP/1.1 200 OK
- MS-CV: CMXegpZRMECH4pbOW2j5GA.0
- Content-Length: 60
- Content-Type: application/json; charset=utf-8
- Date: Thu, 09 May 2019 16:30:38 GMT
-
- {"message":null,...
-Forms : {}
-Headers : {[MS-CV, CMXegpZRMECH4pbOW2j5GA.0], [Content-Length, 60], [Content-Type, application/json;
- charset=utf-8], [Date, Thu, 09 May 2019 16:30:38 GMT]}
-Images : {}
-InputFields : {}
-Links : {}
-ParsedHtml : mshtml.HTMLDocumentClass
-RawContentLength : 60
-```
-
-## Stop a session
-
-This command stops a session. The allocated VM will be reclaimed shortly after.
-
-| URI | Method |
-|--|:--|
-| /v1/accounts/*accountId*/sessions/*sessionId* | DELETE |
-
-**Responses:**
-
-| Status code | JSON payload | Comments |
-|--|:--|:--|
-| 204 | | Success |
-
-### Example script: Stop a session
-
-```PowerShell
-Invoke-WebRequest -Uri "$endPoint/v1/accounts/$accountId/sessions/$sessionId" -Method Delete -Headers @{ Authorization = "Bearer $token" }
-```
-
-Example output:
-
-```PowerShell
-StatusCode : 204
-StatusDescription : No Content
-Content : {}
-RawContent : HTTP/1.1 204 No Content
- MS-CV: YDxR5/7+K0KstH54WG443w.0
- Date: Thu, 09 May 2019 16:45:41 GMT
--
-Headers : {[MS-CV, YDxR5/7+K0KstH54WG443w.0], [Date, Thu, 09 May 2019 16:45:41 GMT]}
-RawContentLength : 0
-```
+> [!IMPORTANT]
+> An ARR runtime SDK is needed for a client device to connect to a rendering session. These SDKs are available in [.NET](https://docs.microsoft.com/dotnet/api/microsoft.azure.remoterendering?view=remoterendering) and [C++](https://docs.microsoft.com/cpp/api/remote-rendering/). Apart from connecting to the service, these SDKs can also be used to start and stop sessions.
## Next steps
+* [Using the Azure Frontend APIs for authentication](frontend-apis.md)
* [Example PowerShell scripts](../samples/powershell-example-scripts.md)
remote-rendering https://docs.microsoft.com/en-us/azure/remote-rendering/quickstarts/native-cpp/hololens/deploy-native-cpp-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/quickstarts/native-cpp/hololens/deploy-native-cpp-tutorial.md
@@ -76,7 +76,7 @@ Since the account credentials are hardcoded in the tutorial's source code, chang
Specifically, change the following values: * `init.AccountId`, `init.AccountKey`, and `init.AccountDomain` to use your account data. See paragraph about how to [retrieve account information](../../../how-tos/create-an-account.md#retrieve-the-account-information). * Specify where to create the remote rendering session by modifying the region part of the `init.RemoteRenderingDomain` string for other regions than `westus2`, for instance `"westeurope.mixedreality.azure.com"`.
-* In addition, `m_sessionOverride` can be changed to an existing session ID. Sessions can be created outside this sample, for instance by using [the PowerShell script](../../../samples/powershell-example-scripts.md#script-renderingsessionps1) or using the [session REST API](../../../how-tos/session-rest-api.md#create-a-session) directly.
+* In addition, `m_sessionOverride` can be changed to an existing session ID. Sessions can be created outside this sample, for instance by using [the PowerShell script](../../../samples/powershell-example-scripts.md#script-renderingsessionps1) or using the [session REST API](../../../how-tos/session-rest-api.md) directly.
Creating a session outside the sample is recommended when you plan to run the sample multiple times. If no session is passed in, the sample will create a new session upon each startup, which may take several minutes. Now the application can be compiled.
remote-rendering https://docs.microsoft.com/en-us/azure/remote-rendering/reference/limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/reference/limits.md
@@ -31,7 +31,7 @@ The following limitations apply to the frontend API (C++ and C#):
### Overall number of polygons
-The allowable number of polygons for all loaded models depends on the size of the VM as passed to [the session management REST API](../how-tos/session-rest-api.md#create-a-session):
+The allowable number of polygons for all loaded models depends on the size of the VM as passed to [the session management REST API](../how-tos/session-rest-api.md):
| Server size | Maximum number of polygons | |:--|:|
remote-rendering https://docs.microsoft.com/en-us/azure/remote-rendering/samples/sample-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/samples/sample-model.md
@@ -21,7 +21,7 @@ Model statistics:
| Name | Value | |--|:--|
-| [Required server size](../how-tos/session-rest-api.md#create-a-session) | standard |
+| [Required server size](../reference/vm-sizes.md) | standard |
| Number of triangles | 18.7 Million | | Number of movable parts | 2073 | | Number of materials | 94 |
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-pricing.md
@@ -120,12 +120,10 @@ No. When you enable [Azure Defender for servers](defender-for-servers-introducti
| Starting | VM is starting up. | Not billed | | Running | Normal working state for a VM | Billed | | Stopping | This is a transitional state. When completed, it will show as Stopped. | Billed |
-| Stopped | The VM has been shut down from within the guest OS or using the PowerOff APIs. Hardware is still allocated to the VM and it remains on the host. | Billed (1) |
-| Deallocating | Transitional state. When completed, the VM will show as Deallocated. | Not billed (1) |
+| Stopped | The VM has been shut down from within the guest OS or using the PowerOff APIs. Hardware is still allocated to the VM and it remains on the host. | Billed |
+| Deallocating | Transitional state. When completed, the VM will show as Deallocated. | Not billed |
| Deallocated | The VM has been stopped successfully and removed from the host. | Not billed |
-(1) Some Azure resources, such as Disks and Networking, incur charges. Software licenses on the instance do not incur charges.
- :::image type="content" source="media/security-center-pricing/deallocated-virtual-machines.png" alt-text="Azure Virtual Machines showing a deallocated machine"::: ### Will I be charged for machines without the Log Analytics agent installed?
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
@@ -10,7 +10,7 @@ ms.devlang: na
na Previously updated : 10/20/2020 Last updated : 02/15/2021
@@ -38,13 +38,12 @@ Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security
| Release state: | Generally available (GA) | | Pricing: | Requires [Azure Defender for servers](security-center-pricing.md) | | Supported platforms: | Azure machines running Windows<br>Azure Arc machines running Windows|
-| Supported versions of Windows: | • Security Center supports detection on Windows Server 2016, 2012 R2, and 2008 R2 SP1<br> • Server endpoint monitoring using this integration has been disabled for Office 365 GCC customers|
-| Unsupported operating systems: | • Windows Server 2019<br> • Windows 10<br> • Linux|
+| Supported versions of Windows: | • Security Center supports detection on Windows Server 2019, 2016, 2012 R2, and 2008 R2 SP1<br> • Server endpoint monitoring using this integration has been disabled for Office 365 GCC customers<br> • [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.md) (formerly Enterprise for Virtual Desktops (EVD))<br> • [Windows Virtual Desktop (WVD)](../virtual-desktop/overview.md)|
+| Unsupported operating systems: | • Windows 10 (other than EVD or WVD)<br> • Linux|
| Required roles and permissions: | To enable/disable the integration: **Security admin** or **Owner**<br>To view MDATP alerts in Security Center: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor**| | Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![Yes](./media/icons/yes-icon.png) US Gov<br>![No](./media/icons/no-icon.png) China Gov, Other Gov<br>![No](./media/icons/no-icon.png) GCC customers running workloads in global Azure clouds | | | | - ## Microsoft Defender for Endpoint features in Security Center Microsoft Defender for Endpoint provides:
@@ -57,7 +56,7 @@ Microsoft Defender for Endpoint provides:
By integrating Defender for Endpoint with Security Center, you'll benefit from the following additional capabilities: -- **Automated onboarding**. Security Center automatically enables the Microsoft Defender for Endpoint sensor for all Windows servers monitored by Security Center. Except for those that are running Windows Server 2019, which must be onboarded via local script, Group Policy Object (GPO), or [Microsoft Endpoint Configuration Manager](/mem/configmgr/) (formerly SCCM).
+- **Automated onboarding**. Security Center automatically enables the Microsoft Defender for Endpoint sensor for all Windows servers monitored by Security Center.
- **Single pane of glass**. The Security Center console displays Microsoft Defender for Endpoint alerts. To investigate further, use Microsoft Defender for Endpoint's own portal pages where you'll see additional information such as the alert process tree and the incident graph. You can also see a detailed machine timeline that shows every behavior for a historical period of up to six months.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cisco-ucs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cisco-ucs.md
@@ -49,7 +49,7 @@ Configure Cisco UCS to forward Syslog messages to your Azure Sentinel workspace
1. Configure the logs to be collected
- - Select the facilities and severities in the workspace advanced settings configuration
+ - Select the facilities and severities in the workspace agents configuration.
1. Configure and connect the Cisco UCS
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cyberark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cyberark.md
@@ -35,7 +35,7 @@ CyberArk EPV logs are sent from the Vault to a Linux-based log forwarding server
1. In the Azure Sentinel portal, click **Data connectors**, select **CyberArk Enterprise Password Vault (EPV) Events (Preview)** and then **Open connector page**.
-1. Follow the CyberArk EPV instructions to configure sending syslog data to the log forwarding server.
+1. Follow the [CyberArk EPV instructions](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) to configure sending syslog data to the log forwarding server.
1. Validate your connection and verify data ingestion using [these instructions](connect-cef-verify.md). It may take up to 20 minutes until your logs start to appear in Log Analytics.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-juniper-srx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-juniper-srx.md
@@ -49,7 +49,7 @@ Configure Juniper SRX to forward Syslog messages to your Azure Sentinel workspac
1. Configure the logs to be collected
- - Select the facilities and severities in the workspace advanced settings configuration
+ - Select the facilities and severities in the workspace agents configuration.
1. Configure and connect the Juniper SRX
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-syslog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-syslog.md
@@ -63,17 +63,17 @@ For more information, see [Syslog data sources in Azure Monitor](../azure-monito
### Configure the Log Analytics agent
-1. At the bottom of the Syslog connector blade, click the **Open your workspace advanced settings configuration >** link.
+1. At the bottom of the Syslog connector blade, click the **Open your workspace agents configuration >** link.
-1. On the **Advanced settings** blade, select **Data** > **Syslog**. Then add the facilities for the connector to collect.
+1. On the **Agents configuration** blade, select the **Syslog** tab. Then add the facilities for the connector to collect. Select **Add facility** and choose from the drop-down list of facilities.
- Add the facilities that your syslog appliance includes in its log headers. - If you want to use anomalous SSH login detection with the data that you collect, add **auth** and **authpriv**. See the [following section](#configure-the-syslog-connector-for-anomalous-ssh-login-detection) for additional details.
-1. When you have added all the facilities that you want to monitor, and adjusted any severity options for each one, select the checkbox **Apply below configuration to my machines**.
+1. When you have added all the facilities that you want to monitor, verify that the check boxes for all the desired severities are marked.
-1. Select **Save**.
+1. Select **Apply**.
1. On your VM or appliance, make sure you're sending the facilities that you specified.
@@ -84,7 +84,6 @@ For more information, see [Syslog data sources in Azure Monitor](../azure-monito
> [!NOTE] > **Using the same machine to forward both plain Syslog *and* CEF messages** >
->
> You can use your existing [CEF log forwarder machine](connect-cef-agent.md) to collect and forward logs from plain Syslog sources as well. However, you must perform the following steps to avoid sending events in both formats to Azure Sentinel, as that will result in duplication of events. > > Having already set up [data collection from your CEF sources](connect-common-event-format.md), and having configured the Log Analytics agent as above:
@@ -94,7 +93,6 @@ For more information, see [Syslog data sources in Azure Monitor](../azure-monito
> 1. You must run the following command on those machines to disable the synchronization of the agent with the Syslog configuration in Azure Sentinel. This ensures that the configuration change you made in the previous step does not get overwritten.<br> > `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'` - ### Configure the Syslog connector for anomalous SSH login detection > [!IMPORTANT]
@@ -109,10 +107,7 @@ Azure Sentinel can apply machine learning (ML) to the syslog data to identify an
This detection requires a specific configuration of the Syslog data connector:
-1. For step 5 in the previous procedure, make sure that both **auth** and **authpriv** are selected as facilities to monitor. Keep the default settings for the severity options, so that they are all selected. For example:
-
- > [!div class="mx-imgBorder"]
- > ![Facilities required for anomalous SSH login detection](./media/connect-syslog/facilities-ssh-detection.png)
+1. For step 2 under [Configure the Log Analytics agent](#configure-the-log-analytics-agent) above, make sure that both **auth** and **authpriv** are selected as facilities to monitor, and that all the severities are selected.
2. Allow sufficient time for syslog information to be collected. Then, navigate to **Azure Sentinel - Logs**, and copy and paste the following query:
sentinel https://docs.microsoft.com/en-us/azure/sentinel/enable-entity-behavior-analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/enable-entity-behavior-analytics.md
@@ -37,6 +37,9 @@ To enable or disable this feature (these prerequisites are not required to use t
- Your workspace must not have any Azure resource locks applied to it. [Learn more about Azure resource locking](../azure-resource-manager/management/lock-resources.md).
+> [!NOTE]
+> No special license is required to add UEBA functionality to Azure Sentinel, but **additional charges** may apply.
+ ## How to enable User and Entity Behavior Analytics 1. From the Azure Sentinel navigation menu, select **Entity behavior**.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/quickstart-onboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/quickstart-onboard.md
@@ -42,7 +42,7 @@ After you connect your data sources, choose from a gallery of expertly created w
| Workspace geography | Azure Sentinel-generated data geography | | | |
- | United States<br>India<br>Brazil<br>Africa<br>Korea | United States |
+ | United States<br>India<br>Brazil<br>Africa<br>Korea<br>United Arab Emirates | United States |
| Europe<br>France<br>Switzerland | Europe | | Australia | Australia | | United Kingdom | United Kingdom |
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-content-roadmap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-content-roadmap.md
@@ -82,7 +82,7 @@ A [guest executable](service-fabric-guest-executables-introduction.md) is an exi
## Application lifecycle As with other platforms, an application on Service Fabric usually goes through the following phases: design, development, testing, deployment, upgrade, maintenance, and removal. Service Fabric provides first-class support for the full application lifecycle of cloud applications, from development through deployment, daily management, and maintenance to eventual decommissioning. The service model enables several different roles to participate independently in the application lifecycle. [Service Fabric application lifecycle](service-fabric-application-lifecycle.md) provides an overview of the APIs and how they are used by the different roles throughout the phases of the Service Fabric application lifecycle.
-The entire app lifecycle can be managed using [PowerShell cmdlets](/powershell/module/ServiceFabric/), [CLI commands](service-fabric-sfctl.md), [C# APIs](/dotnet/api/system.fabric.fabricclient.applicationmanagementclient), [Java APIs](/jav) or [Jenkins](/azure/developer/jenkins/deploy-to-service-fabric-cluster).
+The entire app lifecycle can be managed using [PowerShell cmdlets](/powershell/module/ServiceFabric/New-ServiceFabricService), [CLI commands](service-fabric-sfctl.md), [C# APIs](/dotnet/api/system.fabric.fabricclient.applicationmanagementclient), [Java APIs](/jav) or [Jenkins](/azure/developer/jenkins/deploy-to-service-fabric-cluster).
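As a hedged illustration of the PowerShell path, the following sketch deploys an application with the ServiceFabric module; the cluster endpoint, package path, and type/version names are placeholder assumptions, and an unsecured dev cluster is assumed for the connection.

```PowerShell
# Sketch: copy an application package to the image store, then register and create the application.
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\MyAppPkg" `
    -ImageStoreConnectionString "fabric:ImageStore" -ApplicationPackagePathInImageStore "MyAppPkg"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyAppPkg"
New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" `
    -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0"
# Upgrades and removal follow the same pattern, for example with
# Start-ServiceFabricApplicationUpgrade and Remove-ServiceFabricApplication.
```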
## Test applications and services To create truly cloud-scale services, it is critical to verify that your applications and services can withstand real-world failures. The Fault Analysis Service is designed for testing services that are built on Service Fabric. With the [Fault Analysis Service](service-fabric-testability-overview.md), you can induce meaningful faults and run complete test scenarios against your applications. These faults and scenarios exercise and validate the numerous states and transitions that a service will experience throughout its lifetime, all in a controlled, safe, and consistent manner.
@@ -155,7 +155,7 @@ Out of the box, Service Fabric components report health on all entities in the c
Service Fabric provides multiple ways to [view health reports](service-fabric-view-entities-aggregated-health.md) aggregated in the health store: * [Service Fabric Explorer](service-fabric-visualizing-your-cluster.md) or other visualization tools.
-* Health queries (through [PowerShell](/powershell/module/ServiceFabric/), [CLI](service-fabric-sfctl.md), the [C# FabricClient APIs](/dotnet/api/system.fabric.fabricclient.healthclient) and [Java FabricClient APIs](/java/api/system.fabric), or [REST APIs](/rest/api/servicefabric)).
+* Health queries (through [PowerShell](/powershell/module/ServiceFabric/New-ServiceFabricService), [CLI](service-fabric-sfctl.md), the [C# FabricClient APIs](/dotnet/api/system.fabric.fabricclient.healthclient) and [Java FabricClient APIs](/java/api/system.fabric), or [REST APIs](/rest/api/servicefabric)).
* General queries that return a list of entities that have health as one of the properties (through PowerShell, CLI, the APIs, or REST). ## Monitoring and diagnostics
@@ -190,4 +190,4 @@ Multiple products are available that cover these three areas, and you are free t
[cluster-application-instances]: media/service-fabric-content-roadmap/cluster-application-instances.png
-[cluster-imagestore-apptypes]: ./media/service-fabric-content-roadmap/cluster-imagestore-apptypes.png
+[cluster-imagestore-apptypes]: ./media/service-fabric-content-roadmap/cluster-imagestore-apptypes.png
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/workspaces-encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/workspaces-encryption.md
@@ -47,7 +47,7 @@ Workspaces can be configured to enable double encryption with a customer-managed
### Key access and workspace activation
-The Azure Synapse encryption model with customer-managed keys involves the workspace accessing the keys in Azure Key Vault to encrypt and decrypt as needed. The keys are made accessible to the workspace either through an access policy or Azure Key Vault RBAC access ([preview](../../key-vault/general/rbac-guide.md)). When granting permissions via an Azure Key Vault access policy, choose the ["Application-only"](../../key-vault/general/secure-your-key-vault.md#key-vault-authentication-options) option during policy creation (select the workspace's managed identity and do not add it as an authorized application).
+The Azure Synapse encryption model with customer-managed keys involves the workspace accessing the keys in Azure Key Vault to encrypt and decrypt as needed. The keys are made accessible to the workspace either through an access policy or [Azure Key Vault RBAC access](../../key-vault/general/rbac-guide.md). When granting permissions via an Azure Key Vault access policy, choose the ["Application-only"](../../key-vault/general/secure-your-key-vault.md#key-vault-authentication-options) option during policy creation (select the workspace's managed identity and do not add it as an authorized application).
The workspace managed identity must be granted the permissions it needs on the key vault before the workspace can be activated. This phased approach to workspace activation ensures that data in the workspace is encrypted with the customer-managed key. Note that encryption can be enabled or disabled for dedicated SQL pools; each pool is not enabled for encryption by default.
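A hedged PowerShell sketch of granting that access via a Key Vault access policy; the vault and workspace names are placeholders, and you should confirm the exact key permissions your setup requires.

```PowerShell
# Sketch: give the workspace's managed identity key access before activating the workspace.
$workspace = Get-AzSynapseWorkspace -Name "contoso-synapse"
Set-AzKeyVaultAccessPolicy -VaultName "contoso-keyvault" `
    -ObjectId $workspace.Identity.PrincipalId `
    -PermissionsToKeys get, wrapKey, unwrapKey
```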
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-public-ip-address-upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-public-ip-address-upgrade.md
@@ -32,7 +32,7 @@ The following scenarios are reviewed in this article:
In order to upgrade a public IP, it must not be associated with any resource (see [this page](./virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address) for more information about how to disassociate public IPs). >[!IMPORTANT]
->Public IPs upgraded from Basic to Standard SKU continue to have no [availability zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones). This means they cannot be associated with an Azure resource that is either zone-redundant or tied to a pre-specified zone in regions where this is offered.
+>Public IPs upgraded from Basic to Standard SKU continue to have no guaranteed [availability zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones). Keep this in mind when choosing which resources to associate the IP address with.
# [**Basic to Standard - PowerShell**](#tab/option-upgrade-powershell)