Updates from: 01/04/2021 04:03:07
Service Microsoft Docs article Related commit history on GitHub Change details
app-service https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/web-sites-integrate-with-vnet.md
@@ -147,6 +147,7 @@ Three charges are related to the use of the gateway-required VNet Integration fe
> [!NOTE] > VNET integration is not supported for Docker Compose scenarios in App Service.
+> Azure Functions Access Restrictions are ignored if there is a private endpoint present.
> [!INCLUDE [app-service-web-vnet-troubleshooting](../../includes/app-service-web-vnet-troubleshooting.md)]
@@ -232,4 +233,4 @@ For gateway-required VNet Integration, you can integrate App Service with an Azu
[setp2saddresses]: ../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#addresspool [VNETRouteTables]: ../virtual-network/manage-route-table.md [installCLI]: /cli/azure/install-azure-cli?view=azure-cli-latest%2f
-[privateendpoints]: networking/private-endpoint.md
\ No newline at end of file
+[privateendpoints]: networking/private-endpoint.md
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/configure-web-app-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configure-web-app-portal.md
@@ -6,13 +6,13 @@ services: application-gateway
author: surajmb ms.service: application-gateway ms.topic: how-to
-ms.date: 09/23/2020
+ms.date: 01/02/2021
ms.author: victorh --- # Configure App Service with Application Gateway
-Since app service is a multi-tenant service instead of a dedicate deployment, it uses host header in the incoming request to resolve the request to the correct app service endpoint. Usually, the DNS name of the application, which in turn is the DNS name associated with the application gateway fronting the app service, is different from the domain name of the backend app service. Therefore, the host header in the original request received by the application gateway is not the same as the host name of the backend service. Because of this, unless the host header in the request from the application gateway to the backend is changed to the host name of the backend service, the multi-tenant backends are not able to resolve the request to the correct endpoint.
+Since app service is a multi-tenant service instead of a dedicated deployment, it uses the host header in the incoming request to resolve the request to the correct app service endpoint. Usually, the DNS name of the application, which in turn is the DNS name associated with the application gateway fronting the app service, is different from the domain name of the backend app service. Therefore, the host header in the original request received by the application gateway is not the same as the host name of the backend service. Because of this, unless the host header in the request from the application gateway to the backend is changed to the host name of the backend service, the multi-tenant backends are not able to resolve the request to the correct endpoint.
Application Gateway provides a switch called `Pick host name from backend target` which overrides the host header in the request with the host name of the back-end when the request is routed from the Application Gateway to the backend. This capability enables support for multi-tenant back ends such as Azure app service and API management.
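To make the host-header rewrite concrete, here is a minimal, hedged Python sketch of a toy reverse proxy that does what `Pick host name from backend target` does: it replaces the incoming Host header with the backend's own host name before forwarding. The `contoso.azurewebsites.net` name and the local port are placeholders, and this only illustrates the concept; it is not how Application Gateway itself is implemented or configured.

```python
# Toy reverse proxy: rewrite the Host header to the backend's own host name
# before forwarding, so a multi-tenant frontend can resolve the request.
# BACKEND_HOST is a hypothetical placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

BACKEND_HOST = "contoso.azurewebsites.net"  # placeholder backend app

class HostRewritingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Present the backend's host name instead of the gateway's DNS name.
        upstream = Request(f"https://{BACKEND_HOST}{self.path}",
                           headers={"Host": BACKEND_HOST})
        with urlopen(upstream) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Type", resp.headers.get("Content-Type", "text/html"))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), HostRewritingProxy).serve_forever()
```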
@@ -67,4 +67,4 @@ One way you can restrict access to your web apps is to use [Azure App Service st
## Next steps
-To learn more about the App service and other multi-tenant support with application gateway, see [multi-tenant service support with application gateway](./application-gateway-web-app-overview.md).
\ No newline at end of file
+To learn more about the App service and other multi-tenant support with application gateway, see [multi-tenant service support with application gateway](./application-gateway-web-app-overview.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-troubleshoot-metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-troubleshoot-metric.md
@@ -4,7 +4,7 @@ description: Common issues with Azure Monitor metric alerts and possible solutio
author: harelbr ms.author: harelbr ms.topic: troubleshooting
-ms.date: 11/25/2020
+ms.date: 01/03/2021
ms.subservice: alerts --- # Troubleshooting problems in Azure Monitor metric alerts
@@ -260,6 +260,23 @@ We recommend choosing an *Aggregation granularity (Period)* that is larger than
- Metric alert rule that monitors multiple resources – When a new resource is added to the scope - Metric alert rule that monitors a metric that isn't emitted continuously (sparse metric) – When the metric is emitted after a period longer than 24 hours in which it wasn't emitted
+## The Dynamic Thresholds borders don't seem to fit the data
+
+If the behavior of a metric changed recently, the changes won't necessarily be reflected in the Dynamic Thresholds borders (upper and lower bounds) immediately, as those are calculated based on metric data from the last 10 days. When viewing the Dynamic Thresholds borders for a given metric, make sure to look at the metric trend over the last week, and not only at recent hours or days.
+
+## Why is weekly seasonality not detected by Dynamic Thresholds?
+
+To identify weekly seasonality, the Dynamic Thresholds model requires at least three weeks of historical data. Once enough historical data is available, any weekly seasonality that exists in the metric data will be identified and the model will be adjusted accordingly.
+
+## Dynamic Thresholds shows a negative lower bound for a metric even though the metric always has positive values
+
+When a metric exhibits large fluctuation, Dynamic Thresholds will build a wider model around the metric values, which can result in the lower border being below zero. Specifically, this can happen in the following cases:
+1. The sensitivity is set to low
+2. The median values are close to zero
+3. The metric exhibits an irregular behavior with high variance (there are spikes or dips in the data)
+
+When the lower bound has a negative value, this means that it's plausible for the metric to reach a zero value given the metric's irregular behavior. You may consider choosing a higher sensitivity or a larger *Aggregation granularity (Period)* to make the model less sensitive, or using the *Ignore data before* option to exclude a recent irregularity from the historical data used to build the model.
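As an illustration of why this happens, the sketch below (not the actual Dynamic Thresholds algorithm; the band-width factor is an assumption) builds a symmetric band around a spiky series whose median is close to zero. The lower edge drops below zero even though every sample is non-negative.

```python
# Illustrative only -- not the Dynamic Thresholds implementation.
import statistics

metric = [0, 2, 0, 1, 35, 0, 3, 0, 40, 1, 0, 2]   # spiky data, median near zero

median = statistics.median(metric)
spread = statistics.pstdev(metric)
band_width = 2.0        # assumed factor; lower sensitivity => wider band

upper_bound = median + band_width * spread
lower_bound = median - band_width * spread  # easily drops below zero here

print(f"median={median}, upper={upper_bound:.1f}, lower={lower_bound:.1f}")
```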
+ ## Next steps - For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-unified-log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-unified-log.md
@@ -116,6 +116,8 @@ In workspaces and Application Insights, it's supported only in **Metric measurem
Split alerts by number or string columns into separate alerts by grouping into unique combinations. When creating resource-centric alerts at scale (subscription or resource group scope), you can split by Azure resource ID column. Splitting on Azure resource ID column will change the target of the alert to the specified resource.
+Splitting by Azure resource ID column is recommended when you want to monitor the same condition on multiple Azure resources. For example, monitoring all virtual machines for CPU usage over 80%. You may also decide not to split when you want a condition on multiple resources in the scope, such as monitoring that at least five machines in the resource group scope have CPU usage over 80%.
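As a conceptual sketch only (the sample rows and the 80% threshold are made up, and this is not how the alert service is implemented), splitting by the resource ID column amounts to grouping query results per resource and firing one alert per group:

```python
# One alert per unique _ResourceId instead of one alert for the whole scope.
from collections import defaultdict

rows = [
    {"_ResourceId": "/subscriptions/xxx/.../virtualMachines/vm1", "cpu": 91},
    {"_ResourceId": "/subscriptions/xxx/.../virtualMachines/vm1", "cpu": 88},
    {"_ResourceId": "/subscriptions/xxx/.../virtualMachines/vm2", "cpu": 84},
]

by_resource = defaultdict(list)
for row in rows:
    by_resource[row["_ResourceId"]].append(row["cpu"])

for resource_id, values in by_resource.items():
    if max(values) > 80:
        print(f"fire alert targeting {resource_id} (max CPU {max(values)}%)")
```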
+ In workspaces and Application Insights, it's supported only in the **Metric measurement** measure type. The field is called **Aggregate On**. It's limited to three columns. Having more than three group-by columns in the query could lead to unexpected results. In all other resource types, it's configured in the **Split by dimensions** section of the condition (limited to six splits). #### Example of splitting by alert dimensions
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/it-service-management-connector-secure-webhook-connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/it-service-management-connector-secure-webhook-connections.md
@@ -50,63 +50,6 @@ The main benefits of the integration are:
* **Alerts resolved in the ITSM tool**: Metric alerts implement "fired" and "resolved" states. When the condition is met, the alert state is "fired." When condition is not met anymore, the alert state is "resolved." In ITSMC, alerts can't be resolved automatically. With Secure Export, the resolved state flows to the ITSM tool and so is updated automatically. * **[Common alert schema](./alerts-common-schema.md)**: In ITSMC, the schema of the alert payload differs based on the alert type. In Secure Export, there's a common schema for all alert types. This common schema contains the CI for all alert types. All alert types will be able to bind their CI with the CMDB.
-Start using the ITSM Connector tool with these steps:
-
-1. Register your app with Azure AD.
-1. Define Service Principle.
-1. Create a Secure Webhook action group.
-1. Configure your partner environment.
-
-Secure Export supports connections with the following ITSM tools:
-* [ServiceNow](./itsmc-secure-webhook-connections-servicenow.md)
-* [BMC Helix](./itsmc-secure-webhook-connections-bmc.md)
-
-## Register with Azure Active Directory
-
-Follow these steps to register the application with Azure AD:
-
-1. Follow the steps in [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
-2. In Azure AD, select **Expose application**.
-3. Select **Set** for **Application ID URI**.
-
- [![Screenshot of the option for setting the U R I of the application I D.](media/it-service-management-connector-secure-webhook-connections/azure-ad.png)](media/it-service-management-connector-secure-webhook-connections/azure-ad-expand.png#lightbox)
-4. Select **Save**.
-
-## Define Service Principle
-
-The Action Group service will need permission to acquire authentication tokens from your AAD application in order to authentication with Service now. To grant those permissions you will need to create a service principal for the Action Group service in your tenant.
-You can use this [PowerShell commands](./action-groups.md#secure-webhook-powershell-script) for this purpose. (Requires tenant admin privileges).
-As an optional step you can define application role in the created app's manifest which can allow you to further restrict access in a way that only certain applications with that specific role can send messages. This role has to be then assigned to the Action Group service principal.
-This can be done through the same [PowerShell commands](./action-groups.md#secure-webhook-powershell-script).
-
-## Create a Secure Webhook action group
-
-After your application is registered with Azure AD, you can create work items in your ITSM tool based on Azure alerts, by using the Secure Webhook action in action groups.
-
-Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, Activity Log alerts, and Azure Log Analytics alerts in the Azure portal.
-To learn more about action groups, see [Create and manage action groups in the Azure portal](./action-groups.md).
-
-To add a webhook to an action, follow these instructions for Secure Webhook:
-
-1. In the [Azure portal](https://portal.azure.com/), search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.
-2. Select **Alerts** > **Manage actions**.
-3. Select [Add action group](./action-groups.md#create-an-action-group-by-using-the-azure-portal), and fill in the fields.
-4. Enter a name in the **Action group name** box, and enter a name in the **Short name** box. The short name is used in place of a full action group name when notifications are sent using this group.
-5. Select **Secure Webhook**.
-6. Select these details:
- 1. Select the object ID of the Azure Active Directory instance that you registered.
- 2. For the URI, paste in the webhook URL that you copied from the [ITSM tool environment](#configure-the-itsm-tool-environment).
- 3. Set **Enable the common Alert Schema** to **Yes**.
-
- The following image shows the configuration of a sample Secure Webhook action:
-
- ![Screenshot that shows a Secure Webhook action.](media/it-service-management-connector-secure-webhook-connections/secure-webhook.png)
-
-## Configure the ITSM tool environment
-
-The configuration contain 2 steps:
-1. Get the URI for the secure export definition.
-2. Definitions according to the flow of the ITSM tool.
- ## Next steps+ * [Create ITSM work items from Azure alerts](./itsmc-overview.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsm-connector-secure-webhook-connections-azure-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsm-connector-secure-webhook-connections-azure-configuration.md new file mode 100644
@@ -0,0 +1,76 @@
+---
+title: IT Service Management Connector - Secure Export in Azure Monitor - Azure Configurations
+description: This article shows you how to configure Azure in order to connect your ITSM products/services with Secure Export in Azure Monitor to centrally monitor and manage ITSM work items.
+ms.subservice: logs
+ms.topic: conceptual
+author: nolavime
+ms.author: v-jysur
+ms.date: 01/03/2021
+
+---
+
+# Configure Azure to connect ITSM tools using Secure Export
+
+This article provides information about how to configure Azure in order to use "Secure Export".
+In order to use "Secure Export", follow these steps:
+
+1. [Register your app with Azure AD.](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory)
+1. [Define Service principal.](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal)
+1. [Create a Secure Webhook action group.](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group)
+1. Configure your partner environment.
+ Secure Export supports connections with the following ITSM tools:
+ * [ServiceNow](./itsmc-secure-webhook-connections-servicenow.md)
+ * [BMC Helix](./itsmc-secure-webhook-connections-bmc.md)
+
+## Register with Azure Active Directory
+
+Follow these steps to register the application with Azure AD:
+
+1. Follow the steps in [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+2. In Azure AD, select **Expose application**.
+3. Select **Set** for **Application ID URI**.
+
+ [![Screenshot of the option for setting the U R I of the application I D.](media/it-service-management-connector-secure-webhook-connections/azure-ad.png)](media/it-service-management-connector-secure-webhook-connections/azure-ad-expand.png#lightbox)
+4. Select **Save**.
+
+## Define service principal
+
+The Action Group service will need permission to acquire authentication tokens from your AAD application in order to authenticate with ServiceNow. To grant those permissions, you will need to create a service principal for the Action Group service in your tenant.
+You can use these [PowerShell commands](./action-groups.md#secure-webhook-powershell-script) for this purpose (requires tenant admin privileges).
+As an optional step, you can define an application role in the created app's manifest, which allows you to further restrict access so that only applications with that specific role can send messages. This role then has to be assigned to the Action Group service principal. \
+This step can be done through the same [PowerShell commands](./action-groups.md#secure-webhook-powershell-script).
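For illustration only — this is not the Microsoft-provided PowerShell script linked above. A minimal Python/MSAL sketch of the token flow that the Secure Webhook action relies on might look like the following; the tenant ID, client ID, secret, and Application ID URI are placeholders.

```python
# Acquire an Azure AD token whose audience is the registered app's
# Application ID URI; the ITSM tool validates this token on its webhook.
import msal

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_ID = "11111111-1111-1111-1111-111111111111"   # placeholder
CLIENT_SECRET = "<client-secret>"                    # placeholder
APP_ID_URI = "api://my-itsm-webhook-app"             # the Application ID URI you set

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)
result = app.acquire_token_for_client(scopes=[f"{APP_ID_URI}/.default"])
print(result.get("access_token", result))  # token presented to the webhook
```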
+
+## Create a Secure Webhook action group
+
+After your application is registered with Azure AD, you can create work items in your ITSM tool based on Azure alerts, by using the Secure Webhook action in action groups.
+
+Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, Activity Log alerts, and Azure Log Analytics alerts in the Azure portal.
+To learn more about action groups, see [Create and manage action groups in the Azure portal](./action-groups.md).
+
+To add a webhook to an action, follow these instructions for Secure Webhook:
+
+1. In the [Azure portal](https://portal.azure.com/), search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.
+2. Select **Alerts** > **Manage actions**.
+3. Select [Add action group](./action-groups.md#create-an-action-group-by-using-the-azure-portal), and fill in the fields.
+4. Enter a name in the **Action group name** box, and enter a name in the **Short name** box. The short name is used in place of a full action group name when notifications are sent using this group.
+5. Select **Secure Webhook**.
+6. Select these details:
+ 1. Select the object ID of the Azure Active Directory instance that you registered.
+ 2. For the URI, paste in the webhook URL that you copied from the [ITSM tool environment](#configure-the-itsm-tool-environment).
+ 3. Set **Enable the common Alert Schema** to **Yes**.
+
+ The following image shows the configuration of a sample Secure Webhook action:
+
+ ![Screenshot that shows a Secure Webhook action.](media/it-service-management-connector-secure-webhook-connections/secure-webhook.png)
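If you prefer to script these steps rather than use the portal, a hedged sketch with the azure-mgmt-monitor Python SDK could look like the following. All names, IDs, and the webhook URI are placeholders, and the parameter names should be verified against the SDK version you have installed.

```python
# Sketch: create an action group containing a Secure Webhook (AAD-authenticated) receiver.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import ActionGroupResource, WebhookReceiver

SUBSCRIPTION_ID = "<subscription-id>"
client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

secure_webhook = WebhookReceiver(
    name="ItsmSecureWebhook",
    service_uri="https://example-itsm-instance/webhook",  # URI copied from the ITSM tool
    use_common_alert_schema=True,   # "Enable the common Alert Schema" = Yes
    use_aad_auth=True,              # marks this receiver as a Secure Webhook
    object_id="<service-principal-object-id>",
    identifier_uri="api://my-itsm-webhook-app",
    tenant_id="<tenant-id>",
)

client.action_groups.create_or_update(
    resource_group_name="my-monitoring-rg",
    action_group_name="itsm-secure-webhook-ag",
    action_group=ActionGroupResource(
        location="Global",
        group_short_name="itsmhook",   # short name used in notifications
        enabled=True,
        webhook_receivers=[secure_webhook],
    ),
)
```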
+
+## Configure the ITSM tool environment
+
+The configuration contains two steps:
+
+1. Get the URI for the secure export definition.
+2. Complete the definitions according to the flow of the ITSM tool.
+
+## Next steps
+
+* [ServiceNow Secure Export Configuration](./itsmc-secure-webhook-connections-servicenow.md)
+* [BMC Secure Export Configuration](./itsmc-secure-webhook-connections-bmc.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/custom-speech-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
@@ -39,7 +39,7 @@ This diagram highlights the pieces that make up the [Custom Speech portal](https
You need to have an Azure account and Speech service subscription before you can use the [Custom Speech portal](https://speech.microsoft.com/customspeech) to create a custom model. If you don't have an account and subscription, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
-If you plan to train a custom model with audio data, pick one of the following regions. This will reduce the time it takes to train a model.
+If you plan to train a custom model with audio data, pick one of the following regions that have dedicated hardware available for training. This will reduce the time it takes to train a model.
* Australia East * Canada Central
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/faq-stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
@@ -72,9 +72,15 @@ Both base models and custom models will be retired after some time (see [Model l
**A**: You can run a custom model locally in a [Docker container](speech-container-howto.md?tabs=cstt).
+**Q: Can I copy or move my datasets, models, and deployments to another region or subscription?**
+
+**A**: You can use the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy a custom model to another region or subscription. Datasets or deployments cannot be copied. You can import a dataset again in another subscription and create endpoints there using the model copies.
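As a hedged sketch of that copy call (the route and request-body field below are assumptions based on the linked CopyModelToSubscription operation — confirm them against that reference; keys, region, and model ID are placeholders):

```python
# Copy a custom model to another subscription/region via the Speech to Text v3.0 REST API.
import requests

SOURCE_REGION = "westus2"                      # placeholder source region
MODEL_ID = "<model-id>"                        # placeholder custom model ID
SOURCE_KEY = "<source-subscription-key>"
TARGET_KEY = "<target-subscription-key>"       # subscription in the target region

url = (f"https://{SOURCE_REGION}.api.cognitive.microsoft.com/"
       f"speechtotext/v3.0/models/{MODEL_ID}/copyto")   # assumed path, see linked reference
resp = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": SOURCE_KEY,
             "Content-Type": "application/json"},
    json={"targetSubscriptionKey": TARGET_KEY},          # assumed body field
)
resp.raise_for_status()
print(resp.headers.get("Location", resp.text))  # tracks the copy operation
```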
+ **Q: Are my requests logged?**
-**A**: By default requests are not logged (neither audio, nor transcription). If required you may select *Log content from this endpoint* option when you [create a custom endpoint](./how-to-custom-speech-train-model.md) to enable tracing. Then requests will be logged in Azure in secure storage.
+**A**: By default, requests are not logged (neither audio nor transcription). If necessary, you may select the *Log content from this endpoint* option when you [create a custom endpoint](./how-to-custom-speech-train-model.md). You can also enable audio logging in the [Speech SDK](speech-sdk.md) on a per-request basis without creating a custom endpoint. In both cases, the audio and recognition results of requests will be stored in secure storage. For subscriptions that use Microsoft-owned storage, the logs will be available for 30 days.
+
+You can export the logged files on the deployment page in Speech Studio if you use a custom endpoint with *Log content from this endpoint* enabled. If audio logging is enabled via the SDK, call the [API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs) to access the files.
**Q: Are my requests throttled?**
@@ -95,7 +101,7 @@ See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md).
**Q: What is the limit on the size of a dataset, and why is it the limit?**
-**A**: The limit is due to the restriction on the size of a file for HTTP upload. See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md) for the actual limit.
+**A**: The limit is due to the restriction on the size of a file for HTTP upload. See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md) for the actual limit. You can split your data into multiple datasets and select all of them to train the model.
**Q: Can I zip my text files so I can upload a larger text file?**
@@ -121,22 +127,20 @@ See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md).
**Q: Do I need to transcribe adaptation data myself?**
-**A**: Yes! You can transcribe it yourself or use a professional transcription service. Some users prefer professional transcribers and others use crowdsourcing or do the transcriptions themselves.
+**A**: Yes. You can transcribe it yourself or use a professional transcription service. Some users prefer professional transcribers and others use crowdsourcing or do the transcriptions themselves.
-## Accuracy testing
-
-**Q: Can I perform offline testing of my custom acoustic model by using a custom language model?**
+**Q: How long will it take to train a custom model with audio data?**
-**A**: Yes, just select the custom language model in the drop-down menu when you set up the offline test.
+**A**: Training a model with audio data is a lengthy process. Depending on the amount of data, it can take several days to create a custom model. If it cannot be finished within one week, the service might abort the training operation and report the model as failed. For faster results, use one of the [regions](custom-speech-overview.md#set-up-your-azure-account) where dedicated hardware is available for training. You can copy the fully trained model to another region using the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription). Training with just text is much faster and typically finishes within minutes.
-**Q: Can I perform offline testing of my custom language model by using a custom acoustic model?**
+Some base models cannot be customized with audio data. For those models, the service will just use the text of the transcription for training and discard the audio data. Training will then finish much faster and results will be the same as training with just text.
-**A**: Yes, just select the custom acoustic model in the drop-down menu when you set up the offline test.
+## Accuracy testing
**Q: What is word error rate (WER) and how is it computed?** **A**: WER is the evaluation metric for speech recognition. WER is counted as the total number of errors,
-which includes insertions, deletions, and substitutions, divided by the total number of words in the reference transcription. For more information, see [word error rate](https://en.wikipedia.org/wiki/Word_error_rate).
+which includes insertions, deletions, and substitutions, divided by the total number of words in the reference transcription. For more information, see [Evaluate Custom Speech accuracy](how-to-custom-speech-evaluate-data.md#evaluate-custom-speech-accuracy).
**Q: How do I determine whether the results of an accuracy test are good?**
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
@@ -18,7 +18,7 @@ In this article, you learn how to quantitatively measure and improve the accurac
## Evaluate Custom Speech accuracy
-The industry standard to measure model accuracy is *Word Error Rate* (WER). WER counts the number of incorrect words identified during recognition,
+The industry standard to measure model accuracy is [Word Error Rate](https://en.wikipedia.org/wiki/Word_error_rate) (WER). WER counts the number of incorrect words identified during recognition,
then divides by the total number of words provided in the human-labeled transcript (shown below as N). Finally, that number is multiplied by 100% to calculate the WER. ![WER formula](./media/custom-speech/custom-speech-wer-formula.png)
@@ -33,6 +33,8 @@ Here's an example:
![Example of incorrectly identified words](./media/custom-speech/custom-speech-dis-words.png)
+If you want to replicate WER measurements locally, you can use sclite from [SCTK](https://github.com/usnistgov/SCTK).
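For a quick local check, a small self-contained WER function (standard word-level edit distance; sclite and the portal remain the reference implementations) might look like this:

```python
# WER = (insertions + deletions + substitutions) / number of reference words * 100
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref) * 100

print(word_error_rate("good quality speech model", "good quality speech models"))  # 25.0
```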
+ ## Resolve errors and improve WER You can use the WER from the machine recognition results to evaluate the quality of the model you are using with your app, tool, or product. A WER of 5%-10% is considered to be good quality and is ready to use. A WER of 20% is acceptable, however you may want to consider additional training. A WER of 30% or more signals poor quality and requires customization and training.
@@ -92,7 +94,7 @@ The following sections describe how each kind of additional training data can re
### Add related text sentences
-Additional related text sentences can primarily reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
+When you train a new custom model, start by adding related text to improve the recognition of domain-specific words and phrases. Related text sentences can primarily reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
> [!NOTE] > Avoid related text sentences that include noise such as unrecognizable characters or words.
@@ -107,6 +109,12 @@ Consider these details:
* Avoid samples that include transcription errors, but do include a diversity of audio quality. * Avoid sentences that are not related to your problem domain. Unrelated sentences can harm your model. * When the quality of transcripts vary, you can duplicate exceptionally good sentences (like excellent transcriptions that include key phrases) to increase their weight.
+* The Speech service will automatically use the transcripts to improve the recognition of domain-specific words and phrases, as if they were added as related text.
+* Training with audio will bring the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by just using related text.
+* It can take several days for a training operation to complete. To improve the speed of training, make sure to create your Speech service subscription in a [region with dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+
+> [!NOTE]
+> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio.
### Add new words with pronunciation
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-train-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
@@ -62,7 +62,7 @@ After your endpoint is deployed, the endpoint name appears as a link. Select the
## View logging data
-Logging data is available for download under **Endpoint** > **Details**.
+Logging data is available for export if you go to the endpoint's page under **Deployments**.
> [!NOTE] >Logging data is available for 30 days on Microsoft-owned storage. It will be removed afterwards. If a customer-owned storage account is linked to the Cognitive Services subscription, the logging data won't be automatically deleted.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/regions.md
@@ -40,6 +40,8 @@ The Speech service is available in these regions for **speech recognition**, **t
If you use the [Speech SDK](speech-sdk.md), regions are specified by the **Region identifier** (for example, as a parameter to `SpeechConfig.FromSubscription`). Make sure the region is matching the region of your subscription.
+If you plan to train a custom model with audio data, use one of the [regions with dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for faster training. You can use the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy the fully trained model to another region later.
+ ### Intent recognition Available regions for **intent recognition** via the Speech SDK are the following:
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
@@ -127,6 +127,11 @@ These actions can be controlled by Azure RBAC:
* *Microsoft.MachineLearningServices/workspaces/computes/stop/action* * *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
+Please note that to create a compute instance, the user needs to have permissions for the following actions (see the sketch after this list):
+* *Microsoft.MachineLearningServices/workspaces/computes/write*
+* *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
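One way to grant exactly these actions is a custom role. The following is a hedged sketch using the azure-mgmt-authorization Python SDK; the scope, role name, and SDK parameter names are assumptions to verify against your installed SDK version.

```python
# Sketch: custom role carrying the compute-instance creation actions listed above.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleDefinition, Permission

SUBSCRIPTION_ID = "<subscription-id>"              # placeholder
scope = f"/subscriptions/{SUBSCRIPTION_ID}"

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

client.role_definitions.create_or_update(
    scope=scope,
    role_definition_id=str(uuid.uuid4()),
    role_definition=RoleDefinition(
        role_name="Compute Instance Creator (sample)",
        description="Can create compute instances in Azure ML workspaces",
        role_type="CustomRole",
        assignable_scopes=[scope],
        permissions=[Permission(actions=[
            "Microsoft.MachineLearningServices/workspaces/computes/write",
            "Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action",
        ])],
    ),
)
```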
++ ### <a name="create"></a>Create a compute instance In your workspace in Azure Machine Learning studio, [create a new compute instance](how-to-create-attach-compute-studio.md#compute-instance) from either the **Compute** section or in the **Notebooks** section when you are ready to run one of your notebooks.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-migrate-physical-virtual-machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-migrate-physical-virtual-machines.md
@@ -5,7 +5,7 @@ author: rahulg1190
ms.author: rahugup ms.manager: bsiva ms.topic: tutorial
-ms.date: 04/15/2020
+ms.date: 01/02/2021
ms.custom: MVC ---
@@ -105,7 +105,7 @@ Prepare for appliance deployment as follows:
- You prepare a machine to host the replication appliance. [Review](migrate-replication-appliance.md#appliance-requirements) the machine requirements. - The replication appliance uses MySQL. Review the [options](migrate-replication-appliance.md#mysql-installation) for installing MySQL on the appliance. - Review the Azure URLs required for the replication appliance to access [public](migrate-replication-appliance.md#url-access) and [government](migrate-replication-appliance.md#azure-government-url-access) clouds.-- Review [port] (migrate-replication-appliance.md#port-access) access requirements for the replication appliance.
- Review [port](migrate-replication-appliance.md#port-access) access requirements for the replication appliance.
> [!NOTE] > The replication appliance shouldn't be installed on a source machine that you want to replicate or on the Azure Migrate discovery and assessment appliance you may have installed before.
@@ -346,4 +346,4 @@ After you've verified that the test migration works as expected, you can migrate
## Next steps
-Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
\ No newline at end of file
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
@@ -38,9 +38,9 @@ Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security
| Release state: | Generally available (GA) | | Pricing: | Requires [Azure Defender for servers](security-center-pricing.md) | | Supported platforms: | Azure machines running Windows<br>Azure Arc machines running Windows|
-| Supported versions of Windows: | • Defender for Endpoint is built into Windows 10 1703 (and newer) and Windows Server 2019.<br> • Security Center supports detection on Windows Server 2016, 2012 R2, and 2008 R2 SP1.<br> • Server endpoint monitoring using this integration has been disabled for Office 365 GCC customers.<br> • No support for Windows Server 2019 or Linux|
-| Required roles and permissions: | To enable/disable the integration: **Security admin** or **Owner**<br>To view MDATP alerts in Security Center: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** |
-| Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![No](./media/icons/no-icon.png) GCC customers running workloads in global Azure clouds<br>![Yes](./media/icons/yes-icon.png) US Gov<br>![No](./media/icons/no-icon.png) China Gov, Other Gov |
+| Supported versions of Windows: | • Security Center supports detection on Windows Server 2016, 2012 R2, and 2008 R2 SP1<br> • Server endpoint monitoring using this integration has been disabled for Office 365 GCC customers<br> • No support for Windows Server 2019, Windows 10 1703 (and newer), or Linux|
+| Required roles and permissions: | To enable/disable the integration: **Security admin** or **Owner**<br>To view MDATP alerts in Security Center: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor**|
+| Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![Yes](./media/icons/yes-icon.png) US Gov<br>![No](./media/icons/no-icon.png) China Gov, Other Gov<br>![No](./media/icons/no-icon.png) GCC customers running workloads in global Azure clouds |
| | |