Updates from: 12/24/2020 04:03:29
Service Microsoft Docs article Related commit history on GitHub Change details
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-rabbitmq-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
@@ -310,6 +310,21 @@ If you are testing locally without a connection string, you should set the "host
|userName|n/a|(ignored if using connectionString) <br>Name to access the queue | |password|n/a|(ignored if using connectionString) <br>Password to access the queue| +
+## Enable Runtime Scaling
+
+In order for the RabbitMQ trigger to scale out to multiple instances, the **Runtime Scale Monitoring** setting must be enabled.
+
+In the portal, this setting can be found under **Configuration** > **Function runtime settings** for your function app.
+
+:::image type="content" source="media/functions-networking-options/virtual-network-trigger-toggle.png" alt-text="VNETToggle":::
+
+In the CLI, you can enable **Runtime Scale Monitoring** by using the following command:
+
+```azurecli-interactive
+az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites
+```
+ ## Monitoring RabbitMQ endpoint To monitor your queues and exchanges for a certain RabbitMQ endpoint:
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/manage-connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/manage-connections.md
@@ -100,7 +100,7 @@ public static async Task Run(string input)
// Rest of function } ```
-If you are working with functions v3.x, you need a refernce to Microsoft.Azure.DocumentDB.Core. Add a reference in the code:
+If you are working with functions v3.x, you need a reference to Microsoft.Azure.DocumentDB.Core. Add a reference in the code:
```cs #r "Microsoft.Azure.DocumentDB.Core"
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-platform-metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/data-platform-metrics.md
@@ -53,7 +53,7 @@ There are three fundamental sources of metrics collected by Azure Monitor. Once
## Metrics explorer Use [Metrics Explorer](metrics-charts.md) to interactively analyze the data in your metric database and chart the values of multiple metrics over time. You can pin the charts to a dashboard to view them with other visualizations. You can also retrieve metrics by using the [Azure monitoring REST API](rest-api-walkthrough.md).
-![Metrics Explorer](media/data-platform/metrics-explorer.png)
+![Metrics Explorer](media/data-platform-metrics/metrics-explorer.png)
- See [Getting started with Azure Monitor metrics explorer](metrics-getting-started.md) to get started using metrics explorer.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-resync-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-resync-servicenow.md
@@ -34,7 +34,27 @@ If you're using Service Map, you can view the service desk items created in ITSM
![Screenshot that shows the Log Analytics screen.](media/itsmc-overview/itsmc-overview-integrated-solutions.png)
-## How to manually fix ServiceNow sync problems
+## Troubleshoot ITSM connections
+
+- If a connection fails from the connected source's UI and you get an **Error in saving connection** message, take the following steps:
+ - For ServiceNow, Cherwell, and Provance connections:
+ - Ensure that you correctly entered the user name, password, client ID, and client secret for each of the connections.
+ - Ensure that you have sufficient privileges in the corresponding ITSM product to make the connection.
+ - For Service Manager connections:
+ - Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify the connection is successfully established with the on-premises Service Manager computer, go to the web app URL as described in the documentation for making the [hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
+
+- If data from ServiceNow isn't getting synced to Log Analytics, ensure that the ServiceNow instance isn't sleeping. ServiceNow dev instances sometimes go to sleep when they're idle for a long time. If that isn't what's happening, report the problem.
+- If Log Analytics alerts fire but work items aren't created in the ITSM product, if configuration items aren't created/linked to work items, or for other information, see these resources:
+ - ITSMC: The solution shows a summary of connections, work items, computers, and more. Select the tile that has the **Connector Status** label. Doing so takes you to **Log Search** with the relevant query. Look at log records with a `LogType_S` of `ERROR` for more information.
+ - **Log Search** page: View the errors and related information directly by using the query `*ServiceDeskLog_CL*`.
+
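The `ServiceDeskLog_CL` records described above can also be queried outside the portal. The following is only a sketch: the workspace GUID is a placeholder, the table and column names are the ones mentioned above, and depending on your CLI version the `log-analytics` extension may be required:

```azurecli-interactive
# List the most recent ITSMC error records in the Log Analytics workspace (workspace GUID is a placeholder).
az monitor log-analytics query \
  --workspace <workspace_guid> \
  --analytics-query "ServiceDeskLog_CL | where LogType_S == 'ERROR' | sort by TimeGenerated desc"
```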
+### Troubleshoot Service Manager web app deployment
+
+- If you have problems with web app deployment, ensure that you have permissions to create/deploy resources in the subscription.
+- If you get an **Object reference not set to instance of an object** error when you run the [script](itsmc-service-manager-script.md), ensure that you entered valid values in the **User Configuration** section.
+- If you fail to create the service bus relay namespace, ensure that the required resource provider is registered in the subscription. If it's not registered, manually create the service bus relay namespace from the Azure portal. You can also create it when you [create the hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection) in the Azure portal.
+
+### How to manually fix ServiceNow sync problems
Azure Monitor can connect to third-party IT Service Management (ITSM) providers. ServiceNow is one of those providers.
@@ -71,26 +91,6 @@ Use the following synchronization process to reactivate the connection and refre
f. Review the notifications to see if the process finished successfully
-## Troubleshoot ITSM connections
--- If a connection fails from the connected source's UI and you get an **Error in saving connection** message, take the following steps:
- - For ServiceNow, Cherwell, and Provance connections:
- - Ensure that you correctly entered the user name, password, client ID, and client secret for each of the connections.
- - Ensure that you have sufficient privileges in the corresponding ITSM product to make the connection.
- - For Service Manager connections:
- - Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify the connection is successfully established with the on-premises Service Manager computer, go to the web app URL as described in the documentation for making the [hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
--- If data from ServiceNow isn't getting synced to Log Analytics, ensure that the ServiceNow instance isn't sleeping. ServiceNow dev instances sometimes go to sleep when they're idle for a long time. If that isn't what's happening, report the problem.-- If Log Analytics alerts fire but work items aren't created in the ITSM product, if configuration items aren't created/linked to work items, or for other information, see these resources:
- - ITSMC: The solution shows a summary of connections, work items, computers, and more. Select the tile that has the **Connector Status** label. Doing so takes you to **Log Search** with the relevant query. Look at log records with a `LogType_S` of `ERROR` for more information.
- - **Log Search** page: View the errors and related information directly by using the query `*ServiceDeskLog_CL*`.
-
-## Troubleshoot Service Manager web app deployment
--- If you have problems with web app deployment, ensure that you have permissions to create/deploy resources in the subscription.-- If you get an **Object reference not set to instance of an object** error when you run the [script](itsmc-service-manager-script.md), ensure that you entered valid values in the **User Configuration** section.-- If you fail to create the service bus relay namespace, ensure that the required resource provider is registered in the subscription. If it's not registered, manually create the service bus relay namespace from the Azure portal. You can also create it when you [create the hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection) in the Azure portal.- ## Next Steps * [ITSM Definition](./itsmc-definition.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metric-chart-samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metric-chart-samples.md
@@ -20,7 +20,7 @@ Want to share your great charts examples with the world? Contribute to this page
This chart shows if CPU for an App Service was within the acceptable range and breaks it down by instance to determine whether the load was properly distributed. You can see from the chart that the app was running on a single server instance before 6 AM, and then scaled up by adding another instance.
-![Line chart of average cpu percentage by server instance](./media/metric-chart-samples/cpu-by-instance.png)
+![Line chart of average cpu percentage by server instance](./media/metrics-charts/cpu-by-instance.png)
### How to configure this chart?
@@ -30,17 +30,17 @@ Select your App Service resource and find the **CPU Percentage** metric. Then cl
View your application's availability by region to identify which geographic locations are having problems. This chart shows the Application Insights availability metric. You can see that the monitored application has no problem with availability from the East US datacenter, but it is experiencing a partial availability problem from West US, and East Asia.
-![Chart of average availability by locations](./media/metric-chart-samples/availability-run-location.png)
+![Chart of average availability by locations](./media/metrics-charts/availability-by-location.png)
### How to configure this chart? You first need to turn on [Application Insights availability](../app/monitor-web-app-availability.md) monitoring for your website. After that, pick your Application Insights resource and select the Availability metric. Apply splitting on the **Run location** dimension.
-## Volume of storage account transactions by API name
+## Volume of failed storage account transactions by API name
-Your storage account resource is experiencing an excess volume of transactions. You can use the transactions metric to identify which API is responsible for the excess load. Notice that the following chart is configured with the same dimension (API name) in filtering and splitting to narrow down the view to only the API calls of the interest:
+Your storage account resource is experiencing an excess volume of failed transactions. You can use the transactions metric to identify which API is responsible for the excess failure. Notice that the following chart is configured with the same dimension (API name) in splitting and filtered by failed response type:
-![Bar graph of API transactions](./media/metric-chart-samples/transactions-by-api.png)
+![Bar graph of API transactions](./media/metrics-charts/split-and-filter-example.png)
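The same breakdown can be pulled with the Azure CLI if you want the raw numbers. This is only a sketch: the storage account resource ID is a placeholder, and `ClientOtherError` stands in for whichever failed response type you're investigating:

```azurecli-interactive
# Storage Transactions split by ApiName and filtered to one failed response type (placeholders shown).
az monitor metrics list \
  --resource <storage_account_resource_id> \
  --metric Transactions \
  --aggregation Total \
  --interval PT1H \
  --filter "ApiName eq '*' and ResponseType eq 'ClientOtherError'"
```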
### How to configure this chart?
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-troubleshoot.md
@@ -75,10 +75,10 @@ This problem may happen when your dashboard was created with a metric that was l
## Chart shows dashed line Azure metrics charts use dashed line style to indicate that there is a missing value (also known as "null value") between two known time grain data points. For example, if in the time selector you picked "1 minute" time granularity but the metric was reported at 07:26, 07:27, 07:29, and 07:30 (note a minute gap between second and third data points), then a dashed line will connect 07:27 and 07:29 and a solid line will connect all other data points. The dashed line drops down to zero when the metric uses **count** and **sum** aggregation. For the **avg**, **min** or **max** aggregations, the dashed line connects two nearest known data points. Also, when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.
- ![Screenshot that shows how when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.](./media/metrics-troubleshoot/missing-data-point-line-chart.png)
+ ![Screenshot that shows how when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.](./media/metrics-troubleshoot/dashed-line.png)
**Solution:** This behavior is by design. It is useful for identifying missing data points. The line chart is a superior choice for visualizing trends of high-density metrics but may be difficult to interpret for metrics with sparse values, especially when correlating values with time grain is important. The dashed line makes these charts easier to read, but if your chart is still unclear, consider viewing your metrics with a different chart type. For example, a scatter plot chart for the same metric clearly shows each time grain by only visualizing a dot when there is a value and skipping the data point altogether when the value is missing:
- ![Screenshot that highlights the Scatter chart menu option.](./media/metrics-troubleshoot/missing-data-point-scatter-chart.png)
+ ![Screenshot that highlights the Scatter chart menu option.](./media/metrics-troubleshoot/scatter-plot.png)
> [!NOTE] > If you still prefer a line chart for your metric, moving the mouse over the chart may help you assess the time granularity by highlighting the data point at the location of the mouse pointer.
@@ -86,7 +86,7 @@ Azure metrics charts use dashed line style to indicate that there is a missing v
## Chart shows unexpected drop in values In many cases, the perceived drop in the metric values is a misunderstanding of the data shown on the chart. You can be misled by a drop in sums or counts when the chart shows the most-recent minutes because the last metric data points haven't been received or processed by Azure yet. Depending on the service, the latency of processing metrics can be within a couple of minutes. For charts showing a recent time range with a 1- or 5- minute granularity, a drop of the value over the last few minutes becomes more noticeable:
- ![Screenshot that shows a drop of the value over the last few minutes.](./media/metrics-troubleshoot/drop-in-values.png)
+ ![Screenshot that shows a drop of the value over the last few minutes.](./media/metrics-troubleshoot/unexpected-dip.png)
**Solution:** This behavior is by design. We believe that showing data as soon as we receive it is beneficial even when the data is *partial* or *incomplete*. Doing so allows you to draw important conclusions sooner and start investigating right away. For example, for a metric that shows the number of failures, seeing a partial value X tells you that there were at least X failures on a given minute. You can start investigating the problem right away, rather than waiting to see the exact count of failures that happened on this minute, which might not be as important. The chart will update once we receive the entire set of data, but at that time it may also show new incomplete data points from more recent minutes.
@@ -96,7 +96,7 @@ Virtual machines and virtual machine scale sets have two categories of metrics:
By default, Guest OS metrics are stored in an Azure Storage account, which you pick from the **Diagnostic settings** tab of your resource. If Guest OS metrics aren't collected or metrics explorer cannot access them, you will only see the **Virtual Machine Host** metric namespace:
-![metric image](./media/metrics-troubleshoot/cannot-pick-guest-os-namespace.png)
+![metric image](./media/metrics-troubleshoot/vm.png)
**Solution:** If you don't see **Guest OS (classic)** namespace and metrics in metrics explorer:
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/linked-templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/linked-templates.md
@@ -1,16 +1,16 @@
--- title: Link templates for deployment
-description: Describes how to use linked templates in an Azure Resource Manager template to create a modular template solution. Shows how to pass parameters values, specify a parameter file, and dynamically created URLs.
+description: Describes how to use linked templates in an Azure Resource Manager template (ARM template) to create a modular template solution. Shows how to pass parameter values, specify a parameter file, and dynamically create URLs.
ms.topic: conceptual ms.date: 12/07/2020 --- # Using linked and nested templates when deploying Azure resources
-To deploy complex solutions, you can break your template into many related templates, and then deploy them together through a main template. The related templates can be separate files or template syntax that is embedded within the main template. This article uses the term **linked template** to refer to a separate template file that is referenced via a link from the main template. It uses the term **nested template** to refer to embedded template syntax within the main template.
+To deploy complex solutions, you can break your Azure Resource Manager template (ARM template) into many related templates, and then deploy them together through a main template. The related templates can be separate files or template syntax that is embedded within the main template. This article uses the term **linked template** to refer to a separate template file that is referenced via a link from the main template. It uses the term **nested template** to refer to embedded template syntax within the main template.
For small to medium solutions, a single template is easier to understand and maintain. You can see all the resources and values in a single file. For advanced scenarios, linked templates enable you to break down the solution into targeted components. You can easily reuse these templates for other scenarios.
-For a tutorial, see [Tutorial: create linked Azure Resource Manager templates](./deployment-tutorial-linked-template.md).
+For a tutorial, see [Tutorial: Deploy a linked template](./deployment-tutorial-linked-template.md).
> [!NOTE] > For linked or nested templates, you can only set the deployment mode to [Incremental](deployment-modes.md). However, the main template can be deployed in complete mode. If you deploy the main template in the complete mode, and the linked or nested template targets the same resource group, the resources deployed in the linked or nested template are included in the evaluation for complete mode deployment. The combined collection of resources deployed in the main template and linked or nested templates is compared against the existing resources in the resource group. Any resources not included in this combined collection are deleted.
@@ -20,7 +20,7 @@ For a tutorial, see [Tutorial: create linked Azure Resource Manager templates](.
## Nested template
-To nest a template, add a [deployments resource](/azure/templates/microsoft.resources/deployments) to your main template. In the **template** property, specify the template syntax.
+To nest a template, add a [deployments resource](/azure/templates/microsoft.resources/deployments) to your main template. In the `template` property, specify the template syntax.
```json {
@@ -277,7 +277,7 @@ The following example deploys a SQL server and retrieves a key vault secret to u
## Linked template
-To link a template, add a [deployments resource](/azure/templates/microsoft.resources/deployments) to your main template. In the **templateLink** property, specify the URI of the template to include. The following example links to a template that is in a storage account.
+To link a template, add a [deployments resource](/azure/templates/microsoft.resources/deployments) to your main template. In the `templateLink` property, specify the URI of the template to include. The following example links to a template that is in a storage account.
```json {
@@ -304,9 +304,9 @@ To link a template, add a [deployments resource](/azure/templates/microsoft.reso
} ```
-When referencing a linked template, the value of `uri` can't be a local file or a file that is only available on your local network. Azure Resource Manager must be able to access the template. Provide a URI value that downloadable as **http** or **https**.
+When referencing a linked template, the value of `uri` can't be a local file or a file that is only available on your local network. Azure Resource Manager must be able to access the template. Provide a URI value that's downloadable as HTTP or HTTPS.
-You may reference templates using parameters that include **http** or **https**. For example, a common pattern is to use the `_artifactsLocation` parameter. You can set the linked template with an expression like:
+You may reference templates using parameters that include HTTP or HTTPS. For example, a common pattern is to use the `_artifactsLocation` parameter. You can set the linked template with an expression like:
```json "uri": "[concat(parameters('_artifactsLocation'), '/shared/os-disk-parts-md.json', parameters('_artifactsLocationSasToken'))]"
@@ -318,47 +318,49 @@ If you're linking to a template in GitHub, use the raw URL. The link has the for
### Parameters for linked template
-You can provide the parameters for your linked template either in an external file or inline. When providing an external parameter file, use the **parametersLink** property:
+You can provide the parameters for your linked template either in an external file or inline. When providing an external parameter file, use the `parametersLink` property:
```json "resources": [ {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
- "name": "linkedTemplate",
- "properties": {
- "mode": "Incremental",
- "templateLink": {
- "uri":"https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.json",
- "contentVersion":"1.0.0.0"
- },
- "parametersLink": {
- "uri":"https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.parameters.json",
- "contentVersion":"1.0.0.0"
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2019-10-01",
+ "name": "linkedTemplate",
+ "properties": {
+ "mode": "Incremental",
+ "templateLink": {
+ "uri": "https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.json",
+ "contentVersion": "1.0.0.0"
+ },
+ "parametersLink": {
+ "uri": "https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.parameters.json",
+ "contentVersion": "1.0.0.0"
+ }
} }
- }
] ```
-To pass parameter values inline, use the **parameters** property.
+To pass parameter values inline, use the `parameters` property.
```json "resources": [ {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
- "name": "linkedTemplate",
- "properties": {
- "mode": "Incremental",
- "templateLink": {
- "uri":"https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.json",
- "contentVersion":"1.0.0.0"
- },
- "parameters": {
- "storageAccountName":{"value": "[parameters('storageAccountName')]"}
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2019-10-01",
+ "name": "linkedTemplate",
+ "properties": {
+ "mode": "Incremental",
+ "templateLink": {
+ "uri": "https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.json",
+ "contentVersion": "1.0.0.0"
+ },
+ "parameters": {
+ "storageAccountName": {
+ "value": "[parameters('storageAccountName')]"
+ }
+ }
}
- }
} ] ```
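A main template that uses `templateLink`, `parametersLink`, or inline `parameters` as shown above is deployed like any other template. The following is a sketch; the resource group and template URL are placeholders, and the URL must be reachable over HTTP or HTTPS:

```azurecli-interactive
# Deploy a main template that references linked templates by URL (placeholders shown).
az deployment group create \
  --resource-group <resource_group> \
  --template-uri "https://mystorageaccount.blob.core.windows.net/AzureTemplates/mainTemplate.json"
```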
@@ -388,7 +390,7 @@ You don't have to provide the `contentVersion` property for the `templateLink` o
The previous examples showed hard-coded URL values for the template links. This approach might work for a simple template, but it doesn't work well for a large set of modular templates. Instead, you can create a static variable that stores a base URL for the main template and then dynamically create URLs for the linked templates from that base URL. The benefit of this approach is that you can easily move or fork the template because you need to change only the static variable in the main template. The main template passes the correct URIs throughout the decomposed template.
-The following example shows how to use a base URL to create two URLs for linked templates (**sharedTemplateUrl** and **vmTemplate**).
+The following example shows how to use a base URL to create two URLs for linked templates (`sharedTemplateUrl` and `vmTemplateUrl`).
```json "variables": {
@@ -398,7 +400,7 @@ The following example shows how to use a base URL to create two URLs for linked
} ```
-You can also use [deployment()](template-functions-deployment.md#deployment) to get the base URL for the current template, and use that to get the URL for other templates in the same location. This approach is useful if your template location changes or you want to avoid hard coding URLs in the template file. The templateLink property is only returned when linking to a remote template with a URL. If you're using a local template, that property isn't available.
+You can also use [deployment()](template-functions-deployment.md#deployment) to get the base URL for the current template, and use that to get the URL for other templates in the same location. This approach is useful if your template location changes or you want to avoid hard coding URLs in the template file. The `templateLink` property is only returned when linking to a remote template with a URL. If you're using a local template, that property isn't available.
```json "variables": {
@@ -417,50 +419,50 @@ Ultimately, you would use the variable in the `uri` property of a `templateLink`
## Using copy
-To create multiple instances of a resource with a nested template, add the copy element at the level of the **Microsoft.Resources/deployments** resource. Or, if the scope is inner, you can add the copy within the nested template.
+To create multiple instances of a resource with a nested template, add the `copy` element at the level of the `Microsoft.Resources/deployments` resource. Or, if the scope is `inner`, you can add the copy within the nested template.
-The following example template shows how to use copy with a nested template.
+The following example template shows how to use `copy` with a nested template.
```json "resources": [ {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
- "name": "[concat('nestedTemplate', copyIndex())]",
- // yes, copy works here
- "copy":{
- "name": "storagecopy",
- "count": 2
- },
- "properties": {
- "mode": "Incremental",
- "expressionEvaluationOptions": {
- "scope": "inner"
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2019-10-01",
+ "name": "[concat('nestedTemplate', copyIndex())]",
+ // yes, copy works here
+ "copy": {
+ "name": "storagecopy",
+ "count": 2
},
- "template": {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[concat(variables('storageName'), copyIndex())]",
- "location": "West US",
- "sku": {
- "name": "Standard_LRS"
+ "properties": {
+ "mode": "Incremental",
+ "expressionEvaluationOptions": {
+ "scope": "inner"
},
- "kind": "StorageV2"
- // Copy works here when scope is inner
- // But, when scope is default or outer, you get an error
- //"copy":{
- // "name": "storagecopy",
- // "count": 2
- //}
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-04-01",
+ "name": "[concat(variables('storageName'), copyIndex())]",
+ "location": "West US",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2"
+ // Copy works here when scope is inner
+ // But, when scope is default or outer, you get an error
+ //"copy":{
+ // "name": "storagecopy",
+ // "count": 2
+ //}
+ }
+ ]
}
- ]
} }
- }
] ```
@@ -470,7 +472,7 @@ To get an output value from a linked template, retrieve the property value with
When getting an output property from a linked template, the property name must not include a dash.
-The following examples demonstrate how to reference a linked template and retrieve an output value. The linked template returns a simple message. First, the linked template:
+The following examples demonstrate how to reference a linked template and retrieve an output value. The linked template returns a simple message. First, the linked template:
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/linkedtemplates/helloworld.json":::
@@ -607,28 +609,28 @@ The following example shows how to pass a SAS token when linking to a template:
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": {
- "containerSasToken": { "type": "securestring" }
+ "containerSasToken": { "type": "securestring" }
}, "resources": [
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
- "name": "linkedTemplate",
- "properties": {
- "mode": "Incremental",
- "templateLink": {
- "uri": "[concat(uri(deployment().properties.templateLink.uri, 'helloworld.json'), parameters('containerSasToken'))]",
- "contentVersion": "1.0.0.0"
- }
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2019-10-01",
+ "name": "linkedTemplate",
+ "properties": {
+ "mode": "Incremental",
+ "templateLink": {
+ "uri": "[concat(uri(deployment().properties.templateLink.uri, 'helloworld.json'), parameters('containerSasToken'))]",
+ "contentVersion": "1.0.0.0"
+ }
+ }
}
- }
], "outputs": { } } ```
-In PowerShell, you get a token for the container and deploy the templates with the following commands. Notice that the **containerSasToken** parameter is defined in the template. It isn't a parameter in the **New-AzResourceGroupDeployment** command.
+In PowerShell, you get a token for the container and deploy the templates with the following commands. Notice that the `containerSasToken` parameter is defined in the template. It isn't a parameter in the `New-AzResourceGroupDeployment` command.
```azurepowershell-interactive Set-AzCurrentStorageAccount -ResourceGroupName ManageGroup -Name storagecontosotemplates
@@ -674,7 +676,7 @@ The following examples show common uses of linked templates.
## Next steps
-* To go through a tutorial, see [Tutorial: create linked Azure Resource Manager templates](./deployment-tutorial-linked-template.md).
-* To learn about the defining the deployment order for your resources, see [Defining dependencies in Azure Resource Manager templates](define-resource-dependency.md).
-* To learn how to define one resource but create many instances of it, see [Create multiple instances of resources in Azure Resource Manager](copy-resources.md).
-* For steps on setting up a template in a storage account and generating a SAS token, see [Deploy resources with Resource Manager templates and Azure PowerShell](deploy-powershell.md) or [Deploy resources with Resource Manager templates and Azure CLI](deploy-cli.md).
+* To go through a tutorial, see [Tutorial: Deploy a linked template](./deployment-tutorial-linked-template.md).
+* To learn about defining the deployment order for your resources, see [Define the order for deploying resources in ARM templates](define-resource-dependency.md).
+* To learn how to define one resource but create many instances of it, see [Resource iteration in ARM templates](copy-resources.md).
+* For steps on setting up a template in a storage account and generating a SAS token, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md) or [Deploy resources with ARM templates and Azure CLI](deploy-cli.md).
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-expressions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-expressions.md
@@ -1,13 +1,13 @@
--- title: Template syntax and expressions
-description: Describes the declarative JSON syntax for Azure Resource Manager templates.
+description: Describes the declarative JSON syntax for Azure Resource Manager templates (ARM templates).
ms.topic: conceptual ms.date: 03/17/2020 --- # Syntax and expressions in Azure Resource Manager templates
-The basic syntax of the template is JSON. However, you can use expressions to extend the JSON values available within the template. Expressions start and end with brackets: `[` and `]`, respectively. The value of the expression is evaluated when the template is deployed. An expression can return a string, integer, boolean, array, or object.
+The basic syntax of the Azure Resource Manager template (ARM template) is JavaScript Object Notation (JSON). However, you can use expressions to extend the JSON values available within the template. Expressions start and end with brackets: `[` and `]`, respectively. The value of the expression is evaluated when the template is deployed. An expression can return a string, integer, boolean, array, or object.
A template expression can't exceed 24,576 characters.
@@ -26,7 +26,7 @@ Azure Resource Manager provides [functions](template-functions.md) that you can
Within the expression, the syntax `resourceGroup()` calls one of the functions that Resource Manager provides for use within a template. In this case, it's the [resourceGroup](template-functions-resource.md#resourcegroup) function. Just like in JavaScript, function calls are formatted as `functionName(arg1,arg2,arg3)`. The syntax `.location` retrieves one property from the object returned by that function.
-Template functions and their parameters are case-insensitive. For example, Resource Manager resolves **variables('var1')** and **VARIABLES('VAR1')** as the same. When evaluated, unless the function expressly modifies case (such as toUpper or toLower), the function preserves the case. Certain resource types may have case requirements that are separate from how functions are evaluated.
+Template functions and their parameters are case-insensitive. For example, Resource Manager resolves `variables('var1')` and `VARIABLES('VAR1')` as the same. When evaluated, unless the function expressly modifies case (such as `toUpper` or `toLower`), the function preserves the case. Certain resource types may have case requirements that are separate from how functions are evaluated.
To pass a string value as a parameter to a function, use single quotes.
@@ -118,7 +118,7 @@ The same formatting applies when passing values in from a parameter file. The ch
## Null values
-To set a property to null, you can use **null** or **[json('null')]**. The [json function](template-functions-object.md#json) returns an empty object when you provide `null` as the parameter. In both cases, Resource Manager templates treat it as if the property isn't present.
+To set a property to null, you can use `null` or `[json('null')]`. The [json function](template-functions-object.md#json) returns an empty object when you provide `null` as the parameter. In both cases, Resource Manager templates treat it as if the property isn't present.
```json "stringValue": null,
@@ -127,5 +127,5 @@ To set a property to null, you can use **null** or **[json('null')]**. The [json
## Next steps
-* For the full list of template functions, see [Azure Resource Manager template functions](template-functions.md).
-* For more information about template files, see [Understand the structure and syntax of Azure Resource Manager templates](template-syntax.md).
+* For the full list of template functions, see [ARM template functions](template-functions.md).
+* For more information about template files, see [Understand the structure and syntax of ARM templates](template-syntax.md).
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/signalr-tutorial-build-blazor-server-chat-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
@@ -428,7 +428,6 @@ From Visual Studio 2019 version 16.2.0, Azure SignalR Service is a built-in web app
> "Azure": { > "SignalR": { > "Enabled": true,
-> "ServerStickyMode": "Required",
> "ConnectionString": <your-connection-string> > } > }
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/deploy-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
@@ -31,7 +31,7 @@ Use the information you gathered in the [Planning the Azure VMware Solution depl
>[!IMPORTANT] >If you left the **Virtual Network** option blank during the initial provisioning step on the **Create a Private Cloud** screen, complete the [Configure networking for your VMware private cloud](tutorial-configure-networking.md) tutorial **before** you proceed with this section.
-After deploying Azure VMware Solution, you'll create the virtual network's jump box that connects to vCenter and NSX. Once you've configured ExpressRoute circuits and ExpressRoute Global Reach, the jump box isn't needed. But it's handy to reach vCenter and NSX in your Azure VMware Solution.
+After you deploy Azure VMware Solution, you'll create the virtual network's jump box that connects to vCenter and NSX. Once you've configured ExpressRoute circuits and ExpressRoute Global Reach, the jump box isn't needed. But it's handy to reach vCenter and NSX in your Azure VMware Solution.
:::image type="content" source="media/pre-deployment/jump-box-diagram.png" alt-text="Create the Azure VMware Solution jump box" border="false" lightbox="media/pre-deployment/jump-box-diagram.png":::
@@ -41,15 +41,16 @@ To create a virtual machine (VM) in the virtual network you [identified or creat
## Connect to a virtual network with ExpressRoute
-If you didn't define a virtual network in the deployment step and your intent is to connect the Azure VMware Solution's ExpressRoute to an existing ExpressRoute Gateway, follow the steps below.
+>[!IMPORTANT]
+>If you've already defined a virtual network in the deployment screen in Azure, then skip to the next section.
-If you've already defined a virtual network in the deployment screen in Azure, then skip to the next section.
+If you didn't define a virtual network in the deployment step and your intent is to connect the Azure VMware Solution's ExpressRoute to an existing ExpressRoute Gateway, follow these steps.
[!INCLUDE [connect-expressroute-to-vnet](includes/connect-expressroute-vnet.md)] ## Verify network routes advertised
-The jump box is in the virtual network where Azure VMware Solution connects via its ExpressRoute circuit. In Azure, go to the jump box's network interface and [view the effective routes](../virtual-network/manage-route-table.md#view-effective-routes).
+The jump box is in the virtual network where Azure VMware Solution connects through its ExpressRoute circuit. In Azure, go to the jump box's network interface and [view the effective routes](../virtual-network/manage-route-table.md#view-effective-routes).
In the effective routes list, you should see the networks created as part of the Azure VMware Solution deployment. You'll see multiple networks that were derived from the [`/22` network you defined](production-ready-deployment-steps.md#ip-address-segment) during the [deployment step](#deploy-azure-vmware-solution) earlier in this article.
@@ -65,7 +66,7 @@ You can identify the vCenter, and NSX-T admin console's IP addresses and credent
## Create a network segment on Azure VMware Solution
-You use NSX-T to create new network segments in your Azure VMware Solution environment. You defined the network(s) you want to create in the [planning section](production-ready-deployment-steps.md). If you haven't defined them, go back to the [planning section](production-ready-deployment-steps.md) before proceeding.
+You use NSX-T to create new network segments in your Azure VMware Solution environment. You defined the networks you want to create in the [planning section](production-ready-deployment-steps.md). If you haven't defined them, go back to the [planning section](production-ready-deployment-steps.md) before proceeding.
>[!IMPORTANT] >Make sure the CIDR network address block you defined doesn't overlap with anything in your Azure or on-premises environments.
@@ -74,9 +75,9 @@ Follow the [Create an NSX-T network segment in Azure VMware Solution](tutorial-n
## Verify advertised NSX-T segment
-Go back to the [Verify network routes advertised](#verify-network-routes-advertised) step. You'll see an additional route(s) in the list representing the network segment(s) you created in the previous step.
+Go back to the [Verify network routes advertised](#verify-network-routes-advertised) step. You'll see other routes in the list representing the network segments you created in the previous step.
-For virtual machines, you'll assign the segment(s) you created in the [Create a network segment on Azure VMware Solution](#create-a-network-segment-on-azure-vmware-solution) step.
+For virtual machines, you'll assign the segments you created in the [Create a network segment on Azure VMware Solution](#create-a-network-segment-on-azure-vmware-solution) step.
Because DNS is required, identify what DNS server you want to use.
@@ -89,7 +90,7 @@ Because DNS is required, identify what DNS server you want to use.
## (Optional) Provide DHCP services to NSX-T network segment
-If you plan to use DHCP on your NSX-T segment(s), continue with this section. Otherwise, skip to the [Add a VM on the NSX-T network segment](#add-a-vm-on-the-nsx-t-network-segment) section.
+If you plan to use DHCP on your NSX-T segments, continue with this section. Otherwise, skip to the [Add a VM on the NSX-T network segment](#add-a-vm-on-the-nsx-t-network-segment) section.
Now that you've created your NSX-T network segment, you can create and manage DHCP in Azure VMware Solution in two ways:
@@ -99,13 +100,13 @@ Now that you've created your NSX-T network segment, you can create and manage DH
## Add a VM on the NSX-T network segment
-In your Azure VMware Solution vCenter, deploy a VM and use it to verify connectivity from your Azure VMware Solution network(s) to:
+In your Azure VMware Solution vCenter, deploy a VM and use it to verify connectivity from your Azure VMware Solution networks to:
- The internet - Azure Virtual Networks - On-premises.
-Deploy the VM as you would in any vSphere environment. Attach the VM to one of the network segment(s) you previously created in NSX-T.
+Deploy the VM as you would in any vSphere environment. Attach the VM to one of the network segments you previously created in NSX-T.
>[!NOTE] >If you set up a DHCP server, you get your network configuration for the VM from it (don't forget to set up the scope). If you are going to configure statically, then configure as you normally would.
@@ -114,15 +115,14 @@ Deploy the VM as you would in any vSphere environment. Attach the VM to one of
Log into the VM created in the previous step and verify connectivity:
-1. Ping an IP on the Internet.
-2. Go to an Internet site via a web browser.
+1. Ping an IP on the internet.
+2. In a web browser, go to an internet site.
3. Ping the jump box that sits on the Azure Virtual Network.
->[!IMPORTANT]
->At this point, Azure VMware Solution is up and running, and you've successfully established connectivity to and from Azure Virtual Network and the internet.
+Azure VMware Solution is now up and running, and you've successfully established connectivity to and from Azure Virtual Network and the internet.
## Next steps
-In the next section, you'll connect Azure VMware Solution to your on-premises network via ExpressRoute.
+In the next section, you'll connect Azure VMware Solution to your on-premises network through ExpressRoute.
> [!div class="nextstepaction"] > [Connect Azure VMware Solution to your on-premises environment](azure-vmware-solution-on-premises.md)
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/faq.md
@@ -180,7 +180,7 @@ In the Azure portal, enable internet connectivity for a private cloud. With NSX-
#### Do I need to restrict access from the internet to VMs on logical networks in a private cloud?
-No. Network traffic inbound from the Internet directly to private clouds isn't allowed by default. However, you're able to expose Azure VMware Solution VMs to the Internet through the [Public IP](public-ip-usage.md) option in your Azure portal for your Azure VMware Solution private cloud.
+No. Network traffic inbound from the internet directly to private clouds isn't allowed by default. However, you're able to expose Azure VMware Solution VMs to the internet through the [Public IP](public-ip-usage.md) option in your Azure portal for your Azure VMware Solution private cloud.
#### Do I need to restrict internet access from VMs on logical networks to the internet?
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/batch-transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/batch-transcription.md
@@ -8,14 +8,14 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: speech-service ms.topic: conceptual
-ms.date: 11/03/2020
+ms.date: 12/23/2020
ms.author: wolfma ms.custom: devx-track-csharp --- # How to use batch transcription
-Batch transcription is a set of REST API operations that enables you to transcribe a large amount of audio in storage. You can point to audio files using a typical URI or a shared access signature (SAS) URI and asynchronously receive transcription results. With the v3.0 API, you can transcribe one or more audio files, or process a whole storage container.
+Batch transcription is a set of REST API operations that enables you to transcribe a large amount of audio in storage. You can point to audio files using a typical URI or a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI and asynchronously receive transcription results. With the v3.0 API, you can transcribe one or more audio files, or process a whole storage container.
You can use batch transcription REST APIs to call the following methods:
@@ -63,7 +63,7 @@ To create an ordered final transcript, use the timestamps generated per utteranc
### Configuration
-Configuration parameters are provided as JSON.
+Configuration parameters are provided as JSON.
**Transcribing one or more individual files.** If you have more than one file to transcribe, we recommend sending multiple files in one request. The example below uses three files:
@@ -82,7 +82,7 @@ Configuration parameters are provided as JSON.
} ```
-**Processing a whole storage container:**
+**Processing a whole storage container.** Container [SAS](../../storage/common/storage-sas-overview.md) should contain `r` (read) and `l` (list) permissions:
```json {
@@ -174,7 +174,7 @@ Use these optional properties to configure transcription:
`destinationContainerUrl` :::column-end::: :::column span="2":::
- Optional URL with [Service ad hoc SAS](../../storage/common/storage-sas-overview.md) to a writeable container in Azure. The result is stored in this container. SAS with stored access policy are **not** supported. When not specified, Microsoft stores the results in a storage container managed by Microsoft. When the transcription is deleted by calling [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription), the result data will also be deleted.
+ Optional URL with [ad hoc SAS](../../storage/common/storage-sas-overview.md) to a writeable container in Azure. The result is stored in this container. A SAS with a stored access policy is **not** supported. When not specified, Microsoft stores the results in a storage container managed by Microsoft. When the transcription is deleted by calling [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription), the result data will also be deleted.
:::row-end::: ### Storage
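The article assumes you pass [SAS](../../storage/common/storage-sas-overview.md) URIs for both the source container (`r` and `l` permissions) and, optionally, the `destinationContainerUrl`. An ad hoc container SAS can be generated with the CLI; this is only a sketch where the account name, key, container, and expiry are placeholders, and you would adjust `--permissions` (for example, to `w`) for a writeable destination container:

```azurecli-interactive
# Generate an ad hoc read/list SAS for the source container (placeholders shown).
az storage container generate-sas \
  --account-name <storage_account> \
  --account-key <storage_account_key> \
  --name <container_name> \
  --permissions rl \
  --expiry 2021-01-31T00:00Z \
  --output tsv
```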
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/rest-text-to-speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
@@ -10,6 +10,7 @@ ms.subservice: speech-service
ms.topic: conceptual ms.date: 03/23/2020 ms.author: trbye
+ms.custom: references_regions
--- # Text-to-speech REST API
@@ -29,7 +30,7 @@ Before using this API, understand:
* The text-to-speech REST API requires an Authorization header. This means that you need to complete a token exchange to access the service. For more information, see [Authentication](#authentication). > [!TIP]
-> See the Azure government [documentation](../../azure-government/compare-azure-government-global-azure.md) for government cloud (FairFax) endpoints.
+> See the [Azure government documentation](/azure/azure-government/compare-azure-government-global-azure) for government cloud (FairFax) endpoints.
[!INCLUDE [](../../../includes/cognitive-services-speech-service-rest-auth.md)]
@@ -61,6 +62,9 @@ The `voices/list` endpoint allows you to get a full list of voices for a specifi
| West US | `https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list` | | West US 2 | `https://westus2.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+> [!TIP]
+> [Voices in preview](language-support.md#neural-voices-in-preview) are only available in these 3 regions: East US, West Europe and Southeast Asia.
+ ### Request headers This table lists required and optional headers for text-to-speech requests.
@@ -93,46 +97,78 @@ This response has been truncated to illustrate the structure of a response.
```json [
+ {
+ "Name": "Microsoft Server Speech Text to Speech Voice (ar-EG, Hoda)",
+ "DisplayName": "Hoda",
+ "LocalName": "هدى",
+ "ShortName": "ar-EG-Hoda",
+ "Gender": "Female",
+ "Locale": "ar-EG",
+ "SampleRateHertz": "16000",
+ "VoiceType": "Standard",
+ "Status": "GA"
+ },
+
+...
+
{
- "Name": "Microsoft Server Speech Text to Speech Voice (ar-EG, Hoda)",
- "ShortName": "ar-EG-Hoda",
- "Gender": "Female",
- "Locale": "ar-EG",
- "SampleRateHertz": "16000",
- "VoiceType": "Standard"
- },
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (ar-SA, Naayf)",
- "ShortName": "ar-SA-Naayf",
- "Gender": "Male",
- "Locale": "ar-SA",
- "SampleRateHertz": "16000",
- "VoiceType": "Standard"
- },
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (bg-BG, Ivan)",
- "ShortName": "bg-BG-Ivan",
- "Gender": "Male",
- "Locale": "bg-BG",
- "SampleRateHertz": "16000",
- "VoiceType": "Standard"
- },
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (ca-ES, HerenaRUS)",
- "ShortName": "ca-ES-HerenaRUS",
- "Gender": "Female",
- "Locale": "ca-ES",
- "SampleRateHertz": "16000",
- "VoiceType": "Standard"
- },
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (zh-CN, XiaoxiaoNeural)",
- "ShortName": "zh-CN-XiaoxiaoNeural",
- "Gender": "Female",
- "Locale": "zh-CN",
- "SampleRateHertz": "24000",
- "VoiceType": "Neural"
- },
+ "Name": "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)",
+ "DisplayName": "Aria",
+ "LocalName": "Aria",
+ "ShortName": "en-US-AriaNeural",
+ "Gender": "Female",
+ "Locale": "en-US",
+ "StyleList": [
+ "chat",
+ "customerservice",
+ "newscast-casual",
+ "newscast-formal",
+ "cheerful",
+ "empathetic"
+ ],
+ "SampleRateHertz": "24000",
+ "VoiceType": "Neural",
+ "Status": "GA"
+ },
+
+ ...
+
+ {
+ "Name": "Microsoft Server Speech Text to Speech Voice (ga-IE, OrlaNeural)",
+ "DisplayName": "Orla",
+ "LocalName": "Orla",
+ "ShortName": "ga-IE-OrlaNeural",
+ "Gender": "Female",
+ "Locale": "ga-IE",
+ "SampleRateHertz": "24000",
+ "VoiceType": "Neural",
+ "Status": "Preview"
+ },
+
+ ...
+
+ {
+ "Name": "Microsoft Server Speech Text to Speech Voice (zh-CN, YunxiNeural)",
+ "DisplayName": "Yunxi",
+ "LocalName": "云希",
+ "ShortName": "zh-CN-YunxiNeural",
+ "Gender": "Male",
+ "Locale": "zh-CN",
+ "StyleList": [
+ "Calm",
+ "Fearful",
+ "Cheerful",
+ "Disgruntled",
+ "Serious",
+ "Angry",
+ "Sad",
+ "Depressed",
+ "Embarrassed"
+ ],
+ "SampleRateHertz": "24000",
+ "VoiceType": "Neural",
+ "Status": "Preview"
+ },
... ]
@@ -204,24 +240,18 @@ This HTTP request uses SSML to specify the voice and language. If the body lengt
```http POST /cognitiveservices/v1 HTTP/1.1
-X-Microsoft-OutputFormat: raw-16khz-16bit-mono-pcm
+X-Microsoft-OutputFormat: raw-24khz-16bit-mono-pcm
Content-Type: application/ssml+xml Host: westus.tts.speech.microsoft.com Content-Length: 225 Authorization: Bearer [Base64 access_token] <speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Female'
- name='en-US-AriaRUS'>
+ name='en-US-AriaNeural'>
Microsoft Speech Service Text-to-Speech API </voice></speak> ```
-See our quickstarts for language-specific examples:
-
-* [.NET Core, C#](./get-started-text-to-speech.md?pivots=programming-language-csharp&tabs=dotnetcore)
-* [Python](./get-started-text-to-speech.md?pivots=programming-language-python)
-* [Node.js](./get-started-text-to-speech.md)
- ### HTTP status codes The HTTP status code for each response indicates success or common errors.
@@ -231,7 +261,6 @@ The HTTP status code for each response indicates success or common errors.
| 200 | OK | The request was successful; the response body is an audio file. | | 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. | | 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. |
-| 413 | Request Entity Too Large | The SSML input is longer than 1024 characters. |
| 415 | Unsupported Media Type | It's possible that the wrong `Content-Type` was provided. `Content-Type` should be set to `application/ssml+xml`. | | 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. | | 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
@@ -241,5 +270,5 @@ If the HTTP status is `200 OK`, the body of the response contains an audio file
## Next steps - [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)-- [Asynchronous synthesis for long-form audio](./long-audio-api.md)-- [Get started with Custom Voice](how-to-custom-voice.md)\ No newline at end of file
+- [Asynchronous synthesis for long-form audio](quickstarts/text-to-speech/async-synthesis-long-form-audio.md)
+- [Get started with Custom Voice](how-to-custom-voice.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/speech-services-private-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-private-link.md
@@ -1,64 +1,86 @@
---
-title: Using Speech Services with private endpoints
+title: How to use private endpoints with Speech service
titleSuffix: Azure Cognitive Services
-description: HowTo on using Speech Services with private endpoints provided by Azure Private Link
+description: Learn how to use Speech service with private endpoints provided by Azure Private Link
services: cognitive-services author: alexeyo26 manager: nitinme ms.service: cognitive-services ms.subservice: speech-service ms.topic: conceptual
-ms.date: 12/04/2020
+ms.date: 12/15/2020
ms.author: alexeyo ---
-# Using Speech Services with private endpoints provided by Azure Private Link
+# Use Speech service through a private endpoint
-[Azure Private Link](../../private-link/private-link-overview.md) allows you to connect to various PaaS services in Azure via a [private endpoint](../../private-link/private-endpoint-overview.md). A private endpoint is a private IP address within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet.
+[Azure Private Link](../../private-link/private-link-overview.md) lets you connect to services in Azure using a [private endpoint](../../private-link/private-endpoint-overview.md).
+A private endpoint is a private IP address only accessible within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet.
-This article explains how to set up and use Private Link and private endpoints with Azure Cognitive Speech Services.
+This article explains how to set up and use Private Link and private endpoints with Azure Cognitive Speech Services.
> [!NOTE]
-> This article explains the specifics of setting up and using Private Link with Azure Cognitive Speech Services. Before proceeding further please get familiar with the general article on [using virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
+> This article explains the specifics of setting up and using Private Link with Azure Cognitive Speech Services.
+> Before proceeding, review how to [use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
-Enabling a Speech resource for the private endpoint scenarios requires performing of the following tasks:
-- [Create Speech resource custom domain name](#create-custom-domain-name)-- [Create and configure private endpoint(s)](#enabling-private-endpoints)-- [Adjust existing applications and solutions](#using-speech-resource-with-custom-domain-name-and-private-endpoint-enabled)
+Perform the following tasks to use a Speech service through a private endpoint:
-If later you decide to remove all private endpoints, but continue to use the resource, the necessary actions are described in [this section](#using-speech-resource-with-custom-domain-name-without-private-endpoints).
+1. [Create Speech resource custom domain name](#create-a-custom-domain-name)
+2. [Create and configure private endpoint(s)](#enable-private-endpoints)
+3. [Adjust existing applications and solutions](#use-speech-resource-with-custom-domain-name-and-private-endpoint-enabled)
-## Create custom domain name
+To remove private endpoints later but continue to use the Speech resource, follow the tasks described in [this section](#use-speech-resource-with-custom-domain-name-without-private-endpoints).
-Private endpoints require the usage of [Cognitive Services custom subdomain names](../cognitive-services-custom-subdomains.md). Use the instructions below to create one for your Speech resource.
+## Create a custom domain name
-> [!WARNING]
-> A Speech resource with custom domain name enabled uses a different way to interact with Speech Services. Most likely you will have to adjust your application code for both [private endpoint enabled](#using-speech-resource-with-custom-domain-name-and-private-endpoint-enabled) and [**not** private endpoint enabled](#using-speech-resource-with-custom-domain-name-without-private-endpoints) scenarios.
+Private endpoints require a [Cognitive Services custom subdomain name](../cognitive-services-custom-subdomains.md). Follow the instructions below to create one for your Speech resource.
+
+> [!CAUTION]
+> A Speech resource with custom domain name enabled uses a different way to interact with the Speech service.
+> You will likely need to adjust your application code for both the [private endpoint enabled](#use-speech-resource-with-custom-domain-name-and-private-endpoint-enabled) and [**not** private endpoint enabled](#use-speech-resource-with-custom-domain-name-without-private-endpoints) scenarios.
>
-> Operation of enabling custom domain name is [**not reversible**](../cognitive-services-custom-subdomains.md#can-i-change-a-custom-domain-name). The only way to go back to the [regional name](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) is to create a new Speech resource.
+> When you enable a custom domain name, the operation is [**not reversible**](../cognitive-services-custom-subdomains.md#can-i-change-a-custom-domain-name). The only way to go back to the [regional name](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) is to create a new Speech resource.
>
-> Especially in cases where your Speech resource has a lot of associated custom models and projects created via [Speech Studio](https://speech.microsoft.com/) we **strongly** recommend trying the configuration with a test resource and only then modifying the one used in production.
+> If your Speech resource has a lot of associated custom models and projects created via [Speech Studio](https://speech.microsoft.com/), we **strongly** recommend trying the configuration with a test resource before modifying the resource used in production.
# [Azure portal](#tab/portal) -- Go to [Azure portal](https://portal.azure.com/) and sign in to your Azure account-- Select the required Speech Resource-- Select *Networking* (*Resource management* group) -- In *Firewalls and virtual networks* tab (default) click **Generate Custom Domain Name** button-- A new panel will appear with instructions to create a unique custom subdomain for your resource
-> [!WARNING]
-> After you have created a custom domain name it **cannot** be changed. See more information in the Warning above.
-- After the operation is complete, you may want to select *Keys and Endpoint* (*Resource management* group) and verify the new endpoint name of your resource in the format of <p />`{your custom name}.cognitiveservices.azure.com`
+To create a custom domain name using Azure portal, follow these steps:
+
+1. Go to [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the required Speech Resource.
+1. In the **Resource Management** group in the left navigation pane, click **Networking**.
+1. In the **Firewalls and virtual networks** tab, click **Generate Custom Domain Name**. A new right panel appears with instructions to create a unique custom subdomain for your resource.
+1. In the Generate Custom Domain Name panel, enter a custom domain name portion. Your full custom domain will look like:
+ `https://{your custom name}.cognitiveservices.azure.com`.
+ **After you create a custom domain name, it _cannot_ be changed! Re-read the caution alert above.** After you've entered your custom domain name, click **Save**.
+1. After the operation completes, in the **Resource management** group, click **Keys and Endpoint**. Confirm the new endpoint name of your resource starts this way:
+
+ `https://{your custom name}.cognitiveservices.azure.com`
# [PowerShell](#tab/powershell)
-This section requires locally running PowerShell version 7.x or later with the Azure PowerShell module version 5.1.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+To create a custom domain name using PowerShell, confirm that your computer has PowerShell version 7.x or later with the Azure PowerShell module version 5.1.0 or later. To see the versions of these tools, follow these steps:
+
+1. In a PowerShell window, type:
+
+ `$PSVersionTable`
+
+   Confirm that the PSVersion value is 7.x or later. If you need to upgrade PowerShell, follow the instructions at [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell).
+
+1. In a PowerShell window, type:
-Before proceeding further run `Connect-AzAccount` to create a connection with Azure.
+ `Get-Module -ListAvailable Az`
-## Verify custom domain name availability
+   If nothing appears, or if the Azure PowerShell module version is lower than 5.1.0,
+   follow the instructions at [Install Azure PowerShell module](/powershell/azure/install-Az-ps) to upgrade.
-You need to check whether the custom domain you would like to use is free. We will use [Check Domain Availability](/rest/api/cognitiveservices/accountmanagement/checkdomainavailability/checkdomainavailability) method from Cognitive Services REST API. See comments in the code block below explaining the steps.
+Before proceeding, run `Connect-AzAccount` to create a connection with Azure.
+
+## Verify custom domain name is available
+
+You need to check whether the custom domain you would like to use is available.
+To confirm availability, use the [Check Domain Availability](/rest/api/cognitiveservices/accountmanagement/checkdomainavailability/checkdomainavailability) operation in the Cognitive Services REST API, as shown in the code below.
> [!TIP] > The code below will **NOT** work in Azure Cloud Shell.
@@ -67,18 +89,16 @@ You need to check whether the custom domain you would like to use is free. We wi
$subId = "Your Azure subscription Id" $subdomainName = "custom domain name"
-# Select the Azure subscription containing Speech resource
-# If your Azure account has only one active subscription
-# you can skip this step
+# Select the Azure subscription that contains Speech resource.
+# You can skip this step if your Azure account has only one active subscription.
Set-AzContext -SubscriptionId $subId
-# Preparing OAuth token which is used in request
-# to Cognitive Services REST API
+# Prepare OAuth token to use in request to Cognitive Services REST API.
$Context = Get-AzContext $AccessToken = (Get-AzAccessToken -TenantId $Context.Tenant.Id).Token $token = ConvertTo-SecureString -String $AccessToken -AsPlainText -Force
-# Preparing and executing the request to Cognitive Services REST API
+# Prepare and send the request to Cognitive Services REST API.
$uri = "https://management.azure.com/subscriptions/" + $subId + ` "/providers/Microsoft.CognitiveServices/checkDomainAvailability?api-version=2017-04-18" $body = @{
@@ -89,40 +109,40 @@ $jsonBody = $body | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $uri -ContentType "application/json" -Authentication Bearer ` -Token $token -Body $jsonBody | Format-List ```
-If the desired name is available, you will get a response like this:
+If the desired name is available, you will see a response like this:
```azurepowershell isSubdomainAvailable : True reason : type : subdomainName : my-custom-name ```
-If the name is already taken, then you will get the following response:
+If the name is already taken, then you will see the following response:
```azurepowershell isSubdomainAvailable : False reason : Sub domain name 'my-custom-name' is already used. Please pick a different name. type : subdomainName : my-custom-name ```
-## Enabling custom domain name
+## Create your custom domain name
-To enable custom domain name for the selected Speech Resource, we use [Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount) cmdlet. See comments in the code block below explaining the steps.
+To enable a custom domain name for the selected Speech resource, use the [Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount) cmdlet.
-> [!WARNING]
-> After successful execution of the code below you will create a custom domain name for your Speech resource. This name **cannot** be changed. See more information in the Warning above.
+> [!CAUTION]
+> After the code below runs successfully, your Speech resource will have a custom domain name.
+> This name **cannot** be changed. See more information in the **Caution** alert above.
```azurepowershell $resourceGroup = "Resource group name where Speech resource is located" $speechResourceName = "Your Speech resource name" $subdomainName = "custom domain name"
-# Select the Azure subscription containing Speech resource
-# If your Azure account has only one active subscription
-# you can skip this step
+# Select the Azure subscription that contains Speech resource.
+# You can skip this step if your Azure account has only one active subscription.
$subId = "Your Azure subscription Id" Set-AzContext -SubscriptionId $subId
-# Set the custom domain name to the selected resource
-# WARNING! THIS IS NOT REVERSIBLE!
+# Set the custom domain name to the selected resource.
+# CAUTION: THIS CANNOT BE CHANGED OR UNDONE!
Set-AzCognitiveServicesAccount -ResourceGroupName $resourceGroup ` -Name $speechResourceName -CustomSubdomainName $subdomainName ```
@@ -133,11 +153,11 @@ Set-AzCognitiveServicesAccount -ResourceGroupName $resourceGroup `
- This section requires the latest version of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-## Verify custom domain name availability
+## Verify the custom domain name is available
-You need to check whether the custom domain you would like to use is free. We will use [Check Domain Availability](/rest/api/cognitiveservices/accountmanagement/checkdomainavailability/checkdomainavailability) method from Cognitive Services REST API.
+You need to check whether the custom domain you would like to use is available. We will use the [Check Domain Availability](/rest/api/cognitiveservices/accountmanagement/checkdomainavailability/checkdomainavailability) operation in the Cognitive Services REST API.
-Copy the code block below, insert the custom domain name and save to the file `subdomain.json`.
+Copy the code block below, insert your preferred custom domain name, and save to the file `subdomain.json`.
```json {
@@ -146,12 +166,12 @@ Copy the code block below, insert the custom domain name and save to the file `s
} ```
-Copy the file to your current folder or upload it to Azure Cloud Shell and execute the following command. (Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with your Azure subscription ID).
+Copy the file to your current folder or upload it to Azure Cloud Shell and run the following command. (Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with your Azure subscription ID).
```azurecli-interactive az rest --method post --url "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.CognitiveServices/checkDomainAvailability?api-version=2017-04-18" --body @subdomain.json ```
-If the desired name is available, you will get a response like this:
+If the desired name is available, you will see a response like this:
```azurecli { "isSubdomainAvailable": true,
@@ -161,7 +181,7 @@ If the desired name is available, you will get a response like this:
} ```
-If the name is already taken, then you will get the following response:
+If the name is already taken, then you will see the following response:
```azurecli { "isSubdomainAvailable": false,
@@ -170,7 +190,7 @@ If the name is already taken, then you will get the following response:
"type": null } ```
-## Enabling custom domain name
+## Enable custom domain name
To enable a custom domain name for the selected Speech resource, use the [az cognitiveservices account update](/cli/azure/cognitiveservices/account#az_cognitiveservices_account_update) command.
@@ -178,16 +198,18 @@ Select the Azure subscription containing Speech resource. If your Azure account
```azurecli-interactive az account set --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ```
-Set the custom domain name to the selected resource. Replace the sample parameter values with the actual ones and execute the command below.
-> [!WARNING]
-> After successful execution of the command below you will create a custom domain name for your Speech resource. This name **cannot** be changed. See more information in the Warning above.
+Set the custom domain name to the selected resource. Replace the sample parameter values with the actual ones and run the command below.
+
+> [!CAUTION]
+> After the command below runs successfully, your Speech resource will have a custom domain name. This name **cannot** be changed. See more information in the caution alert above.
+ ```azurecli az cognitiveservices account update --name my-speech-resource-name --resource-group my-resource-group-name --custom-domain my-custom-name ``` ***
-## Enabling private endpoints
+## Enable private endpoints
Enable private endpoint using Azure portal, Azure PowerShell, or Azure CLI.
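As an illustration, the following is a minimal Azure CLI sketch of creating a private endpoint for a Speech resource. It is not the full procedure from the portal or PowerShell paths; the resource, network, and endpoint names are placeholders, and it assumes the `account` group ID that Cognitive Services resources expose for Private Link (depending on your CLI version, the parameter may be `--group-ids`).

```azurecli-interactive
# Look up the resource ID of the Speech resource (custom domain name already enabled).
speechResourceId=$(az cognitiveservices account show \
    --name my-private-link-speech \
    --resource-group my-resource-group \
    --query id --output tsv)

# Create the private endpoint in an existing virtual network and subnet.
az network private-endpoint create \
    --name my-speech-private-endpoint \
    --resource-group my-resource-group \
    --vnet-name my-virtual-network \
    --subnet my-subnet \
    --private-connection-resource-id $speechResourceId \
    --group-id account \
    --connection-name my-speech-private-endpoint-connection
```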
@@ -213,7 +235,7 @@ Get familiar with the general principles of [DNS for private endpoints in Cognit
We will use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name for this section.
-Log on to a virtual machine located in the virtual network to which you have attached your private endpoint. Open Windows Command Prompt or Bash shell, execute 'nslookup' command and ensure it successfully resolves your resource custom domain name:
+Log on to a virtual machine located in the virtual network to which you have attached your private endpoint. Open Windows Command Prompt or Bash shell, run `nslookup` and confirm it successfully resolves your resource custom domain name:
```dos C:\>nslookup my-private-link-speech.cognitiveservices.azure.com Server: UnKnown
@@ -228,11 +250,11 @@ Check that the IP address resolved corresponds to the address of your private en
#### (Optional check). DNS resolution from other networks
-This check is necessary if you plan to use your private endpoint enabled Speech resource in "hybrid" mode, that is you have enabled *All networks* or *Selected Networks and Private Endpoints* access option in the *Networking* section of your resource. If you plan to access the resource using only private endpoint, you can skip this section.
+This check is necessary if you plan to use your private endpoint enabled Speech resource in "hybrid" mode, where you have enabled either *All networks* or *Selected Networks and Private Endpoints* access option in the *Networking* section of your resource. If you plan to access the resource using only a private endpoint, you can skip this section.
-We will use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name for this section.
+We use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name for this section.
-On any machine attached to a network from which you allow access to the resource open Windows Command Prompt or Bash shell, execute 'nslookup' command and ensure it successfully resolves your resource custom domain name:
+On any computer attached to a network from which you allow access to the resource, open Windows Command Prompt or Bash shell, run the `nslookup` command and confirm it successfully resolves your resource custom domain name:
```dos C:\>nslookup my-private-link-speech.cognitiveservices.azure.com Server: UnKnown
@@ -246,18 +268,18 @@ Aliases: my-private-link-speech.cognitiveservices.azure.com
westeurope.prod.vnet.cog.trafficmanager.net ```
-Note that IP address resolved points to a VNet Proxy endpoint, which is used for dispatching the network traffic to the private endpoint enabled Cognitive Services resource. This behavior will be different for a resource with custom domain name enabled, but *without* private endpoints configured. See [this section](#dns-configuration).
+Note that the resolved IP address points to a virtual network proxy endpoint, which dispatches the network traffic to the private endpoint for the Cognitive Services resource. The behavior will be different for a resource with a custom domain name but *without* private endpoints. See [this section](#dns-configuration) for details.
-## Adjusting existing applications and solutions
+## Adjust existing applications and solutions
-A Speech resource with a custom domain enabled uses a different way to interact with Speech Services. This is true for a custom domain enabled Speech resource both [with](#using-speech-resource-with-custom-domain-name-and-private-endpoint-enabled) and [without](#using-speech-resource-with-custom-domain-name-without-private-endpoints) private endpoints. The current section provides the necessary information for both cases.
+A Speech resource with a custom domain enabled uses a different way to interact with Speech Services. This is true for a custom domain enabled Speech resource both [with](#use-speech-resource-with-custom-domain-name-and-private-endpoint-enabled) and [without](#use-speech-resource-with-custom-domain-name-without-private-endpoints) private endpoints. The current section provides the necessary information for both cases.
-### Using Speech resource with custom domain name and private endpoint enabled
+### Use Speech resource with custom domain name and private endpoint enabled
A Speech resource with custom domain name and private endpoint enabled uses a different way to interact with Speech Services. This section explains how to use such a resource with the Speech Services REST API and the [Speech SDK](speech-sdk.md). > [!NOTE]
-> Please note, that a Speech Resource without private endpoints, but with **custom domain name** enabled also has a special way of interacting with Speech Services, but this way differs from scenario of a private endpoint enabled Speech Resource. If you have such resource (say, you had a resource with private endpoints, but then decided to remove them) ensure to get familiar with the [correspondent section](#using-speech-resource-with-custom-domain-name-without-private-endpoints).
+> A Speech resource that has a **custom domain name** but no private endpoints also interacts with the Speech service in a special way, which differs from the private endpoint scenario. If you have such a resource (for example, you had a resource with private endpoints but then removed them), review the [corresponding section](#use-speech-resource-with-custom-domain-name-without-private-endpoints).
#### Speech resource with custom domain name and private endpoint. Usage with REST API
@@ -325,11 +347,11 @@ We will use West Europe as a sample Azure Region and `my-private-link-speech.cog
To get the list of voices supported in the region, you need to perform the following two operations (see the shell sketch after this list): -- Obtain authorization token via
+- Obtain authorization token:
```http https://westeurope.api.cognitive.microsoft.com/sts/v1.0/issuetoken ```-- Using the obtained token get the list of voices via
+- Using the token, get the list of voices:
```http https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list ```
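For illustration, here is a hedged shell sketch of those two calls against the regional endpoints shown above. It assumes your Speech resource key is stored in the `SPEECH_KEY` environment variable; the header names are the standard Cognitive Services ones.

```azurecli-interactive
# Obtain an authorization token from the regional endpoint.
token=$(curl -s -X POST \
  "https://westeurope.api.cognitive.microsoft.com/sts/v1.0/issuetoken" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Content-Length: 0")

# Use the token to get the list of voices for the region.
curl -s "https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list" \
  -H "Authorization: Bearer $token"
```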
@@ -408,7 +430,7 @@ To apply the principle described in the previous section to your application cod
- Determine endpoint URL your application is using - Modify your endpoint URL as described in the previous section and create your `SpeechConfig` class instance using this modified URL explicitly
-###### Determining application endpoint URL
+###### Determine application endpoint URL
- [Enable logging for your application](how-to-use-logging.md) and run it to generate the log - In the log file search for `SPEECH-ConnectionUrl`. The string will contain `value` parameter, which in turn will contain the full URL your application was using
@@ -421,7 +443,7 @@ Thus the URL used by the application in this example is:
``` wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US ```
-###### Creating `SpeechConfig` instance using full endpoint URL
+###### Create `SpeechConfig` instance using full endpoint URL
Modify the endpoint you determined in the previous section as described in [General principle](#general-principle) above.
@@ -459,7 +481,7 @@ SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithE
After this modification, your application should work with private-endpoint-enabled Speech resources. We are working on more seamless support of the private endpoint scenario.
-### Using Speech resource with custom domain name without private endpoints
+### Use Speech resource with custom domain name without private endpoints
In this article, we have pointed out several times that enabling a custom domain for a Speech resource is **irreversible**, and that such a resource communicates with the Speech service differently from the "usual" ones (that is, resources that use [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints)).
@@ -524,7 +546,7 @@ To enable your application for the scenario of Speech resource with custom domai
- Request Authorization Token via Cognitive Services REST API - Instantiate `SpeechConfig` class using "from authorization token" / "with authorization token" method
-###### Requesting Authorization Token
+###### Request Authorization Token
See [this article](../authentication.md#authenticate-with-an-authentication-token) on how to get the token via the Cognitive Services REST API.
@@ -535,7 +557,7 @@ https://my-private-link-speech.cognitiveservices.azure.com/sts/v1.0/issueToken
> [!TIP] > You may find this URL in *Keys and Endpoint* (*Resource management* group) section of your Speech resource in Azure portal.
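For example, the token request against the custom-domain endpoint above can be made from a shell as follows. This is a sketch, not part of the original article; it assumes your resource key is in the `SPEECH_KEY` variable and uses the sample custom domain name from this article.

```azurecli-interactive
# Request an authorization token from the custom-domain endpoint.
curl -s -X POST \
  "https://my-private-link-speech.cognitiveservices.azure.com/sts/v1.0/issueToken" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Content-Length: 0"
```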
-###### Creating `SpeechConfig` instance using authorization token
+###### Create `SpeechConfig` instance using authorization token
You need to instantiate `SpeechConfig` class using the authorization token you obtained in the previous section. Suppose we have the following variables defined:
confidential-computing https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-aks-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
@@ -4,11 +4,11 @@ description: Learn to create an AKS cluster with confidential nodes and deploy a
author: agowdamsft ms.service: container-service ms.topic: quickstart
-ms.date: 9/22/2020
+ms.date: 12/11/2020
ms.author: amgowda ---
-# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes using Azure CLI (preview)
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes (DCsv2) using Azure CLI (preview)
This quickstart is intended for developers or cluster operators who want to quickly create an AKS cluster and deploy an application to monitor applications using the managed Kubernetes service in Azure.
@@ -19,21 +19,24 @@ In this quickstart, you'll learn how to deploy an Azure Kubernetes Service (AKS)
> [!NOTE] > Confidential computing DCsv2 VMs leverage specialized hardware that is subject to higher pricing and region availability. For more information, see the virtual machines page for [available SKUs and supported regions](virtual-machine-solutions.md).
+> DCsv2 VMs use Generation 2 virtual machines on Azure. Generation 2 VMs are a preview feature with AKS.
+ ### Deployment pre-requisites
+These deployment instructions assume that you:
1. Have an active Azure Subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin
1. Have the Azure CLI version 2.0.64 or later installed and configured on your deployment machine (Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](../container-registry/container-registry-get-started-azure-cli.md)
1. [aks-preview extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview) minimum version 0.4.62
-1. Have a minimum of six **DC<x>s-v2** cores available in your subscription for use. By default, the VM cores quota for the confidential computing per Azure subscription 8 cores. If you plan to provision a cluster that requires more than 8 cores, follow [these](../azure-portal/supportability/per-vm-quota-requests.md) instructions to raise a quota increase ticket
+1. VM cores quota availability. Have a minimum of six **DC<x>s-v2** cores available in your subscription for use; see the quota check example after this list. By default, the VM cores quota for confidential computing is 8 cores per Azure subscription. If you plan to provision a cluster that requires more than 8 cores, follow [these](../azure-portal/supportability/per-vm-quota-requests.md) instructions to raise a quota increase ticket.
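As a quick way to check the quota, you can list VM core usage for your target region with the Azure CLI. This is a sketch; the region and the `DCSv2` family name filter are assumptions to adjust for your subscription.

```azurecli-interactive
# Show current usage and limits for the DCsv2 VM family in the target region.
az vm list-usage --location westus2 --output table \
  --query "[?contains(name.value, 'DCSv2')]"
```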
### Confidential computing node features (DC<x>s-v2)

1. Linux Worker Nodes supporting Linux Containers Only
-1. Ubuntu Generation 2 18.04 Virtual Machines
+1. Generation 2 VM nodes with Ubuntu 18.04
1. Intel SGX-based CPU with Encrypted Page Cache Memory (EPC). Read more [here](./faq.md)
-1. Kubernetes version 1.16+
-1. Pre-installed Intel SGX DCAP Driver. Read more [here](./faq.md)
-1. CLI based deployed during preview
+1. Supports Kubernetes version 1.16+
+1. Intel SGX DCAP Driver Pre-installed on the AKS Nodes. Read more [here](./faq.md)
+1. Supports CLI-based deployment during preview, with portal-based provisioning after general availability.
## Installing the CLI pre-requisites
@@ -49,13 +52,13 @@ To update the aks-preview CLI extension, use the following Azure CLI commands:
```azurecli-interactive az extension update --name aks-preview ```-
-Register the Gen2VMPreview:
+### Generation 2 VM feature registration on Azure
+Register the Gen2VMPreview feature on your Azure subscription. This feature allows you to provision Generation 2 virtual machines as AKS node pools:
```azurecli-interactive az feature register --name Gen2VMPreview --namespace Microsoft.ContainerService ```
-It might take several minutes for the status to show as Registered. You can check the registration status by using the 'az feature list' command:
+It might take several minutes for the status to show as Registered. You can check the registration status by using the 'az feature list' command. This feature registration is done only once per subscription. If the feature was registered previously, you can skip the step above:
```azurecli-interactive az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/Gen2VMPreview')].{Name:name,State:properties.state}"
@@ -66,6 +69,25 @@ When the status shows as registered, refresh the registration of the Microsoft.C
az provider register --namespace Microsoft.ContainerService ```
+### Azure Confidential Computing feature registration (optional but recommended)
+Register the AKS-ConfidentialComputingAddon feature on your Azure subscription. This feature adds two daemon sets, as discussed in detail [here](./confidential-nodes-aks-overview.md#aks-provided-daemon-sets-addon):
+1. SGX Device Driver Plugin
+2. SGX Attestation Quote Helper
+
+```azurecli-interactive
+az feature register --name AKS-ConfidentialComputingAddon --namespace Microsoft.ContainerService
+```
+It might take several minutes for the status to show as Registered. You can check the registration status by using the 'az feature list' command. This feature registration is done only once per subscription. If the feature was registered previously, you can skip the step above:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ConfidentialComputingAddon')].{Name:name,State:properties.state}"
+```
+When the status shows as registered, refresh the registration of the Microsoft.ContainerService resource provider by using the 'az provider register' command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+ ## Creating an AKS cluster If you already have an AKS cluster that meets the above requirements, [skip to the existing cluster section](#existing-cluster) to add a new confidential computing node pool.
@@ -76,56 +98,65 @@ First, create a resource group for the cluster using the az group create command
az group create --name myResourceGroup --location westus2 ```
-Now create an AKS cluster using the az aks create command. The following example creates a cluster with a single node of size `Standard_DC2s_v2`. You can choose other supported list of DCsv2 SKUs from [here](../virtual-machines/dcv2-series.md):
+Now create an AKS cluster using the az aks create command.
+
+```azurecli-interactive
+# Create a new AKS cluster with a system node pool and the confidential computing add-on enabled
+az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom
+```
+The above command creates a new AKS cluster with a system node pool. Next, add a user node pool of the confidential computing node pool type (DCsv2) to the AKS cluster.
+
+The example below adds a user node pool with 3 nodes of `Standard_DC2s_v2` size. You can choose from the supported list of DCsv2 SKUs and regions [here](../virtual-machines/dcv2-series.md):
```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-vm-size Standard_DC2s_v2 \
- --node-count 3 \
- --enable-addon confcom \
- --network-plugin azure \
- --vm-set-type VirtualMachineScaleSets \
- --aks-custom-headers usegen2vm=true
+az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-vm-size Standard_DC2s_v2 --aks-custom-headers usegen2vm=true
```
-The above command should provision a new AKS cluster with **DC<x>s-v2** node pools and automatically install two daemon sets - ([SGX Device Plugin](confidential-nodes-aks-overview.md#sgx-plugin) & [SGX Quote Helper](confidential-nodes-aks-overview.md#sgx-quote))
+The above command should add a new node pool with **DC<x>s-v2** VMs and automatically run two daemon sets on it - ([SGX Device Plugin](confidential-nodes-aks-overview.md#sgx-plugin) & [SGX Quote Helper](confidential-nodes-aks-overview.md#sgx-quote))
Get the credentials for your AKS cluster using the az aks get-credentials command: ```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-Verify the nodes are created properly and the SGX-related daemon sets are running on **DC<x>s-v2** node pools using kubectl get pods & nodes command as shown below:
+Verify that the nodes are created properly and that the SGX-related daemon sets are running on the **DC<x>s-v2** node pools by using the `kubectl get pods` and `kubectl get nodes` commands, as shown below:
```console $ kubectl get pods --all-namespaces output kube-system sgx-device-plugin-xxxx 1/1 Running
+kube-system sgx-quote-helper-xxxx 1/1 Running
```

If the output matches the above, your AKS cluster is now ready to run confidential applications. Go to the [Hello World from Enclave](#hello-world) deployment section to test an app in an enclave. Or, follow the instructions below to add additional node pools on AKS (AKS supports mixing SGX node pools and non-SGX node pools)
->If the SGX related daemon sets are not installed on your DCSv2 node pools then run the below.
+## Adding confidential computing node pool to existing AKS cluster<a id="existing-cluster"></a>
+
+This section assumes you have an AKS cluster running already that meets the criteria listed in the pre-requisites section.
+
+First, let's add the feature to the Azure subscription:
```azurecli-interactive
-az aks update --enable-addons confcom --resource-group myResourceGroup --name myAKSCluster
+az feature register --name AKS-ConfidentialComputingAddon --namespace Microsoft.ContainerService
```
+It might take several minutes for the status to show as Registered. You can check the registration status by using the 'az feature list' command. This feature registration is done only once per subscription. If the feature was registered previously, you can skip the step above:
-![DCSv2 AKS Cluster Creation](./media/confidential-nodes-aks-overview/CLIAKSProvisioning.gif)
-
-## Adding confidential computing node to existing AKS cluster<a id="existing-cluster"></a>
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ConfidentialComputingAddon')].{Name:name,State:properties.state}"
+```
+When the status shows as registered, refresh the registration of the Microsoft.ContainerService resource provider by using the 'az provider register' command:
-This section assumes you have an AKS cluster running already that meets the criteria listed in the pre-requisites section.
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
-First, lets enable the confidential computing-related AKS add-ons on the existing cluster:
+Next, let's enable the confidential computing-related AKS add-ons on the existing cluster:
```azurecli-interactive az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup ```
-Now add a **DC<x>s-v2** node pool to the cluster
+Now add a **DC<x>s-v2** user node pool to the cluster
> [!NOTE] > To use the confidential computing capability your existing AKS cluster need to have at minimum one **DC<x>s-v2** VM SKU based node pool. Learn more on confidential computing DCsv2 VMs SKU's here [available SKUs and supported regions](virtual-machine-solutions.md).
@@ -155,7 +186,7 @@ kube-system sgx-quote-helper-xxxx 1/1 Running
If the output matches the above, your AKS cluster is now ready to run confidential applications.

## Hello World from isolated enclave application <a id="hello-world"></a>
-Create a file named *hello-world-enclave.yaml* and paste the following YAML manifest. This Open Enclave based sample application code can be found in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld).
+Create a file named *hello-world-enclave.yaml* and paste in the following YAML manifest. This Open Enclave-based sample application code can be found in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). The deployment below assumes that you have deployed the "confcom" add-on.
```yaml apiVersion: batch/v1
@@ -238,4 +269,4 @@ az aks nodepool delete --cluster-name myAKSCluster --name myNodePoolName --resou
Run Python, Node etc. Applications confidentially through confidential containers by visiting [confidential container samples](https://github.com/Azure-Samples/confidential-container-samples).
-Run Enclave aware applications by visiting [Enclave Aware Azure Container Samples](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/).
\ No newline at end of file
+Run Enclave aware applications by visiting [Enclave Aware Azure Container Samples](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/serverless.md
@@ -5,7 +5,7 @@ author: ThomasWeiss
ms.author: thweiss ms.service: cosmos-db ms.topic: conceptual
-ms.date: 11/25/2020
+ms.date: 12/23/2020
--- # Azure Cosmos DB serverless (Preview)
@@ -14,7 +14,7 @@ ms.date: 11/25/2020
> [!IMPORTANT] > Azure Cosmos DB serverless is currently in preview. This preview version is provided without a Service Level Agreement and is not recommended for production workloads. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Azure Cosmos DB serverless lets you use your Azure Cosmos account in a consumption-based fashion where you are only charged for the Request Units consumed by your database operations and the storage consumed by your data. There is no minimum charge involved when using Azure Cosmos DB in serverless mode.
+Azure Cosmos DB serverless lets you use your Azure Cosmos account in a consumption-based fashion where you are only charged for the Request Units consumed by your database operations and the storage consumed by your data. Serverless containers can serve thousands of requests per second with no minimum charge and no capacity planning required.
> [!IMPORTANT] > Do you have any feedback about serverless? We want to hear it! Feel free to drop a message to the Azure Cosmos DB serverless team: [azurecosmosdbserverless@service.microsoft.com](mailto:azurecosmosdbserverless@service.microsoft.com).
@@ -31,13 +31,12 @@ Azure Cosmos DB serverless best fits scenarios where you expect:
- **Low, intermittent and unpredictable traffic**: Because provisioning capacity in such situations isn't required and may be cost-prohibitive - **Moderate performance**: Because serverless containers have [specific performance characteristics](#performance)
-For these reasons, Azure Cosmos DB serverless should be considered for the following types of workload:
+For these reasons, Azure Cosmos DB serverless should be considered in the following situations:
-- Development-- Testing-- Prototyping-- Proof of concept-- Non-critical application with light traffic
+- Getting started with Azure Cosmos DB
+- Development, testing and prototyping of new applications
+- Running small-to-medium applications with intermittent traffic that is hard to forecast
+- Integrating with serverless compute services like [Azure Functions](../azure-functions/functions-overview.md)
See the [how to choose between provisioned throughput and serverless](throughput-serverless.md) article for more guidance on how to choose the offer that best fits your use-case.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/throughput-serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/throughput-serverless.md
@@ -5,7 +5,7 @@ author: ThomasWeiss
ms.author: thweiss ms.service: cosmos-db ms.topic: conceptual
-ms.date: 11/25/2020
+ms.date: 12/23/2020
--- # How to choose between provisioned throughput and serverless
@@ -20,7 +20,7 @@ Azure Cosmos DB is available in two different capacity modes: [provisioned throu
| Criteria | Provisioned throughput | Serverless | | --- | --- | --- | | Status | Generally available | In preview |
-| Best suited for | Mission-critical workloads requiring predictable performance | Small-to-medium non-critical workloads with light and intermittent traffic |
+| Best suited for | Mission-critical workloads requiring predictable performance | Small-to-medium workloads with light and intermittent traffic that is hard to forecast |
| How it works | For each of your containers, you provision some amount of throughput expressed in [Request Units](request-units.md) per second. Every second, this amount of Request Units is available for your database operations. Provisioned throughput can be updated manually or adjusted automatically with [autoscale](provision-throughput-autoscale.md). | You run your database operations against your containers without having to provision any capacity. |
| Geo-distribution | Available (unlimited number of Azure regions) | Unavailable (serverless accounts can only run in 1 Azure region) |
| Maximum storage per container | Unlimited | 50 GB |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-script.md
@@ -6,7 +6,7 @@ ms.author: nimoolen
ms.service: data-factory ms.topic: conceptual ms.custom: seo-lt-2019
-ms.date: 12/03/2020
+ms.date: 12/23/2020
--- # Data flow script (DFS)
@@ -242,6 +242,18 @@ derive(each(match(type=='string'), $$ = 'string'),
each(match(type=='double'), $$ = 'double')) ~> DerivedColumn1 ```
+### Fill down
+Here is how to solve the common "fill down" problem with data sets, where you want to replace NULL values with the value from the previous non-NULL value in the sequence. Note that this operation can have negative performance implications because you must create a synthetic window across your entire data set with a "dummy" category value. Additionally, you must sort by a value to create the proper data sequence to find the previous non-NULL value. The snippet below creates the synthetic category as "dummy" and sorts by a surrogate key. You can remove the surrogate key and use your own data-specific sort key. This code snippet assumes you've already added a Source transformation called ```source1```.
+
+```
+source1 derive(dummy = 1) ~> DerivedColumn
+DerivedColumn keyGenerate(output(sk as long),
+ startAt: 1L) ~> SurrogateKey
+SurrogateKey window(over(dummy),
+ asc(sk, true),
+ Rating2 = coalesce(Rating, last(Rating, true()))) ~> Window1
+```
+ ## Next steps Explore Data Flows by starting with the [data flows overview article](concepts-data-flow-overview.md)
governance https://docs.microsoft.com/en-us/azure/governance/policy/how-to/guest-configuration-create-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-linux.md
@@ -389,13 +389,22 @@ Configuration AuditFilePathExists
## Policy lifecycle
-To release an update to the policy definition, there are three fields that require attention.
+If you would like to release an update to the policy, make the change for both the Guest Configuration
+package and the Azure Policy definition details.
> [!NOTE] > The `version` property of the Guest Configuration assignment only affects packages that > are hosted by Microsoft. The best practice for versioning custom content is to include > the version in the file name.
+First, when running `New-GuestConfigurationPackage`, specify a name for the package that makes it
+unique from previous versions. You can include a version number in the name such as `PackageName_1.0.0`.
+The number in this example is only used to make the package unique, not to specify that the package
+should be considered newer or older than other packages.
+
+Second, update the parameters used with the `New-GuestConfigurationPolicy` cmdlet following each of
+the explanations below.
+ - **Version**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a version number greater than what is currently published. - **contentUri**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a URI
governance https://docs.microsoft.com/en-us/azure/governance/policy/how-to/guest-configuration-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create.md
@@ -591,13 +591,22 @@ New-GuestConfigurationPackage `
## Policy lifecycle
-If you would like to release an update to the policy, there are three fields that require attention.
+If you would like to release an update to the policy, make the change for both the Guest Configuration
+package and the Azure Policy definition details.
> [!NOTE] > The `version` property of the Guest Configuration assignment only affects packages that > are hosted by Microsoft. The best practice for versioning custom content is to include > the version in the file name.
+First, when running `New-GuestConfigurationPackage`, specify a name for the package that makes it
+unique from previous versions. You can include a version number in the name such as `PackageName_1.0.0`.
+The number in this example is only used to make the package unique, not to specify that the package
+should be considered newer or older than other packages.
+
+Second, update the parameters used with the `New-GuestConfigurationPolicy` cmdlet following each of
+the explanations below.
+ - **Version**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a version number greater than what is currently published. - **contentUri**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a URI
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/access-policies.md new file mode 100644
@@ -0,0 +1,91 @@
+---
+title: Use access policies in Azure HPC Cache
+description: How to create and apply custom access policies to limit client access to storage targets in Azure HPC Cache
+author: ekpgh
+ms.service: hpc-cache
+ms.topic: how-to
+ms.date: 12/22/2020
+ms.author: v-erkel
+---
+
+# Use client access policies
+
+This article explains how to create and apply custom client access policies for your storage targets.
+
+Client access policies control how clients are able to connect to the storage target exports. You can control things like root squash and read/write access at the client host or network level.
+
+Access policies are applied to a namespace path, which means that you can use different access policies for two different exports on an NFS storage system.
+
+This feature is for workflows where you need to control how different groups of clients access the storage targets.
+
+If you don't need fine-grained control over storage target access, you can use the default policy, or you can customize the default policy with extra rules.
+
+## Create a client access policy
+
+Use the **Client access policies** page in the Azure portal to create and manage policies.
+
+[![screenshot of client access policies page. Several policies are defined, and some are expanded to show their rules](media/policies-overview.png)](media/policies-overview.png#lightbox)
+
+Each policy is made up of rules. The rules are applied to hosts in order from the smallest scope (host) to the largest (default). The first rule that matches is applied and later rules are ignored.
+
+To create a new access policy, click the **+ Add access policy** button at the top of the list. Give the new access policy a name, and enter at least one rule.
+
+![screenshot of access policies edit blade with multiple rules filled in. Click ok to save the rule.](media/add-policy.png)
+
+The rest of this section explains the values you can use in rules.
+
+### Scope
+
+The scope term and the address filter work together to define which clients are affected by the rule.
+
+Use them to specify whether the rule applies to an individual client (host), a range of IP addresses (network), or all clients (default).
+
+Select the appropriate **Scope** value for your rule:
+
+* **Host** - The rule applies to an individual client
+* **Network** - The rule applies to clients in a range of IP addresses
+* **Default** - The rule applies to all clients.
+
+Rules in a policy are evaluated in that order. After a client mount request matches one rule, the others are ignored.
+
+### Address filter
+
+The **Address filter** value specifies which clients match the rule.
+
+If you set the scope to **host**, you can specify only one IP address in the filter. For the scope setting **default**, you can't enter any IP addresses in the **Address filter** field because the default scope matches all clients.
+
+Specify the IP address or range of addresses for this rule. Use CIDR notation (example: 0.1.0.0/16) to specify an address range.
+
+### Access level
+
+Set what privileges to grant the clients that match the scope and filter.
+
+Options are **read/write**, **read-only**, or **no access**.
+
+### SUID
+
+Check the **SUID** box to allow files in storage to set user IDs upon access.
+
+SUID typically is used to increase a user's privileges temporarily so that the user can accomplish a task related to that file.
+
+### Submount access
+
+Check this box to allow the specified clients to directly mount this export's subdirectories.
+
+### Root squash
+
+Choose whether or not to set root squash for clients that match this rule.
+
+This value lets you allow root squash at the storage export level. You also can [set root squash at the cache level](configuration.md#configure-root-squash).
+
+If you turn on root squash, you must also set the anonymous ID user value to one of these options:
+
+* **-2** (nobody)
+* **65534** (nobody)
+* **-1** (no access)
+* **65535** (no access)
+* **0** (unprivileged root)
+
+## Next steps
+
+* Apply access policies in the namespace paths for your storage targets. Read [Set up the aggregated namespace](add-namespace-paths.md) to learn how.
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/add-namespace-paths https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/add-namespace-paths.md
@@ -4,7 +4,7 @@ description: How to create client-facing paths for back-end storage with Azure H
author: ekpgh ms.service: hpc-cache ms.topic: how-to
-ms.date: 09/30/2020
+ms.date: 12/22/2020
ms.author: v-erkel ---
@@ -16,13 +16,13 @@ Read [Plan the aggregated namespace](hpc-cache-namespace.md) to learn more about
The **Namespace** page in the Azure portal shows the paths that clients use to access your data through the cache. Use this page to create, remove, or change namespace paths. You also can configure namespace paths by using the Azure CLI.
-All of the existing client-facing paths are listed on the **Namespace** page. If a storage target does not have any paths, it does not appear in the table.
+All of the client-facing paths that have been defined for this cache are listed on the **Namespace** page. Storage targets that don't have any namespace paths defined yet don't appear in the table.
-You can sort the table columns by clicking the arrows and better understand your cache's aggregated namespace.
+You can sort the table columns to better understand your cache's aggregated namespace. Click the arrows in the column headers to sort the paths.
-![screenshot of portal namespace page with two paths in a table. Column headers: Namespace path, Storage target, Export path, and Export subdirectory. The items in the first column are clickable links. Top buttons: Add namespace path, refresh, delete](media/namespace-page.png)
+[![screenshot of portal namespace page with two paths in a table. Column headers: Namespace path, Storage target, Export path, Export subdirectory, and Client access policy. The path names in the first column are clickable links. Top buttons: Add namespace path, refresh, delete](media/namespace-page.png) ](media/namespace-page.png#lightbox)
-## Add or edit client-facing namespace paths
+## Add or edit namespace paths
You must create at least one namespace path before clients can access the storage target. (Read [Mount the Azure HPC Cache](hpc-cache-mount.md) for more about client access.)
@@ -38,15 +38,17 @@ From the Azure portal, load the **Namespace** settings page. You can add, change
* **Add a new path:** Click the **+ Add** button at the top and fill in information in the edit panel.
- * Select the storage target from the drop-down list. (In this screenshot, the blob storage target can't be selected because it already has a namespace path.)
+ ![Screenshot of the add namespace edit fields with a blob storage target selected. The export and subdirectory paths are set to / and not editable.](media/namespace-add-blob.png)
- ![Screenshot of the new namespace edit fields with the storage target selector exposed](media/namespace-select-storage-target.png)
+ * Enter the path clients will use to access this storage target.
- * For an Azure Blob storage target, the export and subdirectory paths are automatically set to ``/``.
+ * Select which access policy to use for this path. Learn more about customizing client access in [Use client access policies](access-policies.md).
+
+ * Select the storage target from the drop-down list. If a blob storage target already has a namespace path, it can't be selected.
-* **Change an existing path:** Click the namespace path. The edit panel opens and you can modify the path.
+ * For an Azure Blob storage target, the export and subdirectory paths are automatically set to ``/``.
- ![Screenshot of the namespace page after clicking on a Blob namespace path - the edit fields appear on a pane to the right](media/edit-namespace-blob.png)
+* **Change an existing path:** Click the namespace path. The edit panel opens. You can modify the path and the access policy, but you can't change to a different storage target.
* **Delete a namespace path:** Select the checkbox to the left of the path and click the **Delete** button.
@@ -76,7 +78,7 @@ This list shows the maximum number of namespace paths per configuration.
* 3 TB cache - 10 namespace paths * 6 TB cache - 10 namespace paths
- * 23 TB cache - 20 namespace paths
+ * 12 TB cache - 20 namespace paths
* Up to 4 GB/s throughput:
@@ -104,13 +106,15 @@ Fill in these values for each namespace path:
* **Namespace path** - The client-facing file path.
+* **Client access policy** - Select which access policy to use for this path. Learn more about customizing client access in [Use client access policies](access-policies.md).
* **Storage target** - If creating a new namespace path, select a storage target from the drop-down menu.

* **Export path** - Enter the path to the NFS export. Make sure to type the export name correctly - the portal validates the syntax for this field but does not check the export until you submit the change.

* **Export subdirectory** - If you want this path to mount a specific subdirectory of the export, enter it here. If not, leave this field blank.
-![screenshot of the portal namespace page with the update page open at the right](media/update-namespace-nfs.png)
+![screenshot of the portal namespace page with the edit page open at the right. The edit form shows settings for an nfs storage target namespace path](media/namespace-edit-nfs.png)
### [Azure CLI](#tab/azure-cli)
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/configuration.md
@@ -4,7 +4,7 @@ description: Explains how to configure additional settings for the cache like MT
author: ekpgh ms.service: hpc-cache ms.topic: how-to
-ms.date: 05/06/2020
+ms.date: 12/21/2020
ms.author: v-erkel ---
@@ -38,7 +38,7 @@ If you don't want to change the MTU settings on other system components, you sho
Learn more about MTU settings in Azure virtual networks by reading [TCP/IP performance tuning for Azure VMs](../virtual-network/virtual-network-tcpip-performance-tuning.md). ## Configure root squash
-<!-- linked from troubleshoot -->
+<!-- linked from troubleshoot and from access policies -->
The **Enable root squash** setting controls how Azure HPC Cache treats requests from the root user on client machines.
@@ -50,6 +50,9 @@ Setting root squash on the cache can help compensate for the required ``no_root_
The default setting is **Yes**. (Caches created before April 2020 might have the default setting **No**.)
+> [!TIP]
+> You can also set root squash for specific storage exports by customizing [client access policies](access-policies.md#root-squash).
+ ## View snapshots for blob storage targets Azure HPC Cache automatically saves storage snapshots for Azure Blob storage targets. Snapshots provide a quick reference point for the contents of the back-end storage container.
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/directory-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/directory-services.md new file mode 100644
@@ -0,0 +1,87 @@
+---
+title: Use extended groups in Azure HPC Cache
+description: How to configure directory services for client access to storage targets in Azure HPC Cache
+author: ekpgh
+ms.service: hpc-cache
+ms.topic: how-to
+ms.date: 12/22/2020
+ms.author: v-erkel
+---
+
+# Configure directory services
+
+The **Directory services** settings allow your Azure HPC Cache to use an outside source to authenticate users for accessing back-end storage.
+
+You might need to enable **Extended groups** if your workflow includes NFS storage targets and clients that are members of more than 16 groups.
+
+After you click the button to enable extended groups, you must choose the source that Azure HPC Cache will use to get user and group credentials.
+
+* [Active Directory](#configure-active-directory) - Get credentials from an external Active Directory server. You can't use Azure Active Directory for this task.
+* [Flat file](#configure-file-download) - Download `/etc/group` and `/etc/passwd` files from a network location.
+* [LDAP](#configure-ldap) - Get credentials from a Lightweight Directory Access Protocol (LDAP)-compatible source.
+
+> [!NOTE]
+> Make sure that your cache can access its group information source from inside its secure subnetwork.<!-- + details/examples -->
+
+The **Username downloaded** field shows the status of the most recent group information download.
+
+![screenshot of the directory services settings page in the portal, with the Yes option selected for extended groups, and the drop-down menu labeled Download source open](media/directory-services-select-group-source.png)
+
+## Configure Active Directory
+
+This section explains how to set up the cache to get user and group credentials from an external Active Directory (AD) server.
+
+Under **Active directory details**, supply these values:
+
+* **Primary DNS** - Specify the IP address of a domain name server that the cache can use to resolve the AD domain name.
+
+* **Secondary DNS** (optional) - Enter the address of a name server to use if the primary server is unavailable.
+
+* **AD DNS domain name** - Provide the fully qualified domain name of the AD server that the cache will join to get the credentials.
+
+* **Cache server name (computer account)** - Set the name that will be assigned to this HPC cache when it joins the AD domain. Specify a name that is easy to recognize as belonging to this cache. The name can be up to 15 characters long and can include uppercase or lowercase letters, numbers, hyphens (-), and underscores (_).
+
+* In the **Credentials** section, provide an AD administrator username and password that the Azure HPC Cache can use to access the AD server. This information is encrypted when stored, and can't be queried.
+
+Save the settings by clicking the button at the top of the page.
+
+![screenshot of Download details section with Active Directory values filled in](media/group-download-details-ad.png)
+
+## Configure file download
+
+These values are required if you want to download files with your user and group information. The files must be in the standard Linux/UNIX `/etc/group` and `/etc/passwd` format.
+
+* **User file URI** - Enter the complete URI for the `/etc/passwd` file.
+* **Group file URI** - Enter the complete URI for the `/etc/group` file.
+
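+For reference, here is a brief sketch of the expected entry formats. The first line is an `/etc/passwd`-style entry (`name:password:UID:GID:GECOS:home:shell`) and the second is an `/etc/group`-style entry (`group_name:password:GID:member_list`); the names, IDs, and memberships are placeholders.
+
+```
+alice:x:2001:2001:Alice Example:/home/alice:/bin/bash
+research:x:3001:alice,bob
+```
+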
+![screenshot of Download details section for a flat file download](media/group-download-details-file.png)
+
+## Configure LDAP
+
+Fill in these values if you want to use a non-AD LDAP source to get user and group credentials. Check with your LDAP administrator if you need help with these values.
+
+* **LDAP server** - Enter the fully qualified domain name or the IP address of the LDAP server to use. <!-- only one, not up to 3 -->
+
+* **LDAP base DN** - Specify the base distinguished name for the LDAP domain, in DN format. Ask your LDAP administrator if you don't know your base DN.
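+  For example, the base DN for a hypothetical domain `contoso.com` would be written as `dc=contoso,dc=com`.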
+
+The server and base DN are the only required settings to make LDAP work, but the additional options make your connection more secure.
+
+![screenshot of the LDAP configuration area of the directory services settings page](media/group-download-details-ldap.png)
+
+In the **Secure access** section, you can enable encryption and certificate validation for the LDAP connection. After you click **Yes** to enable encryption, you have these options:
+
+* **Require valid certificate** - When this is set, the LDAP server's certificate is verified against the certificate authority in the URI field below.
+
+* **CA certificate URI** - Specify the path to the authoritative certificate. This can be a link to a CA-validated certificate or to a self-signed certificate. This field is required if you use the **Require valid certificate** setting.
+
+* **Auto-download certificate** - Choose **Yes** if you want to try to download a certificate as soon as you submit these settings.
+
+Fill in the **Credentials** section if you want to use static credentials for LDAP security.
+
+* **Bind DN** - Enter the bind distinguished name to use to authenticate to the LDAP server. (Use DN format.)
+* **Bind password** - Provide the password for the bind DN.
+
+## Next steps
+
+* Learn more about client access in [Mount the Azure HPC Cache](hpc-cache-mount.md)
+* If your credentials don't download correctly, consult the administrator for your source of credentials. Open a [support ticket](hpc-cache-support-ticket.md) if needed.
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-store-data-blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-store-data-blob.md
@@ -76,7 +76,7 @@ The name of this setting is `deviceToCloudUploadProperties`. If you are using th
| ----- | ----- | ---- | | uploadOn | true, false | Set to `false` by default. If you want to turn the feature on, set this field to `true`. <br><br> Environment variable: `deviceToCloudUploadProperties__uploadOn={false,true}` | | uploadOrder | NewestFirst, OldestFirst | Allows you to choose the order in which the data is copied to Azure. Set to `OldestFirst` by default. The order is determined by last modified time of Blob. <br><br> Environment variable: `deviceToCloudUploadProperties__uploadOrder={NewestFirst,OldestFirst}` |
-| cloudStorageConnectionString | | `"DefaultEndpointsProtocol=https;AccountName=<your Azure Storage Account Name>;AccountKey=<your Azure Storage Account Key>;EndpointSuffix=<your end point suffix>"` is a connection string that allows you to specify the storage account to which you want your data uploaded. Specify `Azure Storage Account Name`, `Azure Storage Account Key`, `End point suffix`. Add appropriate EndpointSuffix of Azure where data will be uploaded, it varies for Global Azure, Government Azure, and Microsoft Azure Stack. <br><br> You can choose to specify Azure Storage SAS connection string here. But you have to update this property when it expires. <br><br> Environment variable: `deviceToCloudUploadProperties__cloudStorageConnectionString=<connection string>` |
+| cloudStorageConnectionString | | `"DefaultEndpointsProtocol=https;AccountName=<your Azure Storage Account Name>;AccountKey=<your Azure Storage Account Key>;EndpointSuffix=<your end point suffix>"` is a connection string that allows you to specify the storage account to which you want your data uploaded. Specify `Azure Storage Account Name`, `Azure Storage Account Key`, `End point suffix`. Add appropriate EndpointSuffix of Azure where data will be uploaded, it varies for Global Azure, Government Azure, and Microsoft Azure Stack. <br><br> You can choose to specify Azure Storage SAS connection string here. But you have to update this property when it expires. SAS permissions may include create access for containers and create, write, and add access for blobs. <br><br> Environment variable: `deviceToCloudUploadProperties__cloudStorageConnectionString=<connection string>` |
| storageContainersForUpload | `"<source container name1>": {"target": "<target container name>"}`,<br><br> `"<source container name1>": {"target": "%h-%d-%m-%c"}`, <br><br> `"<source container name1>": {"target": "%d-%c"}` | Allows you to specify the container names you want to upload to Azure. This module allows you to specify both source and target container names. If you don't specify the target container name, it will automatically assign the container name as `<IoTHubName>-<IotEdgeDeviceID>-<ModuleName>-<SourceContainerName>`. You can create template strings for target container name, check out the possible values column. <br>* %h -> IoT Hub Name (3-50 characters). <br>* %d -> IoT Edge Device ID (1 to 129 characters). <br>* %m -> Module Name (1 to 64 characters). <br>* %c -> Source Container Name (3 to 63 characters). <br><br>Maximum size of the container name is 63 characters, while automatically assigning the target container name if the size of container exceeds 63 characters it will trim each section (IoTHubName, IotEdgeDeviceID, ModuleName, SourceContainerName) to 15 characters. <br><br> Environment variable: `deviceToCloudUploadProperties__storageContainersForUpload__<sourceName>__target=<targetName>` | | deleteAfterUpload | true, false | Set to `false` by default. When it is set to `true`, it will automatically delete the data when upload to cloud storage is finished. <br><br> **CAUTION**: If you are using append blobs, this setting will delete append blobs from local storage after a successful upload, and any future Append Block operations to those blobs will fail. Use this setting with caution, do not enable this if your application does infrequent append operations or does not support continuous append operations<br><br> Environment variable: `deviceToCloudUploadProperties__deleteAfterUpload={false,true}`. |
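
The same settings can be supplied to the module as environment variables, using the double-underscore names shown in the table. A sketch (the source container name `localblobs` and the account values are placeholders):

```
deviceToCloudUploadProperties__uploadOn=true
deviceToCloudUploadProperties__uploadOrder=OldestFirst
deviceToCloudUploadProperties__cloudStorageConnectionString=DefaultEndpointsProtocol=https;AccountName=<your Azure Storage Account Name>;AccountKey=<your Azure Storage Account Key>;EndpointSuffix=<your end point suffix>
deviceToCloudUploadProperties__storageContainersForUpload__localblobs__target=%h-%d-%m-%c
deviceToCloudUploadProperties__deleteAfterUpload=false
```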
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/key-vault-integrate-kubernetes.md
@@ -76,7 +76,7 @@ Complete the "Create a resource group," "Create AKS cluster," and "Connect to th
```azurecli az aks upgrade --kubernetes-version 1.16.9 --name contosoAKSCluster --resource-group contosoResourceGroup ```
-1. To display the metadata of the AKS cluster that you've created, use the following command. Copy the **principalId**, **clientId**, **subscriptionId**, and **nodeResourceGroup** for later use. If the ASK cluster was not created with managed identities enabled, the **principalId** and **clientId** will be null.
+1. To display the metadata of the AKS cluster that you've created, use the following command. Copy the **principalId**, **clientId**, **subscriptionId**, and **nodeResourceGroup** for later use. If the AKS cluster was not created with managed identities enabled, the **principalId** and **clientId** will be null.
```azurecli az aks show --name contosoAKSCluster --resource-group contosoResourceGroup
@@ -358,4 +358,4 @@ Verify that the contents of the secret are displayed.
To help ensure that your key vault is recoverable, see: > [!div class="nextstepaction"]
-> [Turn on soft delete](./key-vault-recovery.md)
\ No newline at end of file
+> [Turn on soft delete](./key-vault-recovery.md)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/azure-machine-learning-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
@@ -15,8 +15,6 @@ ms.date: 09/10/2020
In this article, learn about Azure Machine Learning releases. For the full SDK reference content, visit the Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro?preserve-view=true&view=azure-ml-py) reference page.
-See [the list of known issues](resource-known-issues.md) to learn about known bugs and workarounds.
- ## 2020-12-07 ### Azure Machine Learning SDK for Python v1.19.0
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-models-with-mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-models-with-mlflow.md new file mode 100644
@@ -0,0 +1,147 @@
+---
+title: Deploy ML experiments with MLflow
+titleSuffix: Azure Machine Learning
+description: Set up MLflow with Azure Machine Learning to deploy your ML models as a web service.
+services: machine-learning
+author: shivp950
+ms.author: shipatel
+ms.service: machine-learning
+ms.subservice: core
+ms.reviewer: nibaccam
+ms.date: 12/23/2020
+ms.topic: conceptual
+ms.custom: how-to, devx-track-python
+---
+
+# Deploy MLflow models with Azure Machine Learning (preview)
+
+In this article, learn how to deploy your MLflow model as an Azure Machine Learning web service, so you can apply Azure Machine Learning's model management and data drift detection capabilities to your production models.
+
+Azure Machine Learning offers deployment configurations for:
+* Azure Container Instance (ACI), which is a suitable choice for a quick dev-test deployment.
+* Azure Kubernetes Service (AKS), which is recommended for scalable production deployments.
+
+[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. Its integration with Azure Machine Learning allows you to extend this management beyond the model training phase, to the deployment phase of your production model.
+
+>[!NOTE]
+> As an open source library, MLflow changes frequently. As such, the functionality made available via the Azure Machine Learning and MLflow integration should be considered as a preview, and not fully supported by Microsoft.
+
+The following diagram demonstrates that with the MLflow deploy API you can deploy your existing MLflow model as an Azure Machine Learning web service, regardless of its framework (PyTorch, TensorFlow, scikit-learn, ONNX, and so on), and manage your production models in your workspace.
+
+![Diagram of deploying MLflow models with Azure Machine Learning](./media/how-to-use-mlflow/mlflow-diagram-deploy.png)
+
+> [!TIP]
+> The information in this document is primarily for data scientists and developers who want to deploy their MLflow model to an Azure Machine Learning web service endpoint. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
+
+## Prerequisites
+
+* [Set up the MLflow Tracking URI to connect Azure Machine Learning](how-to-use-mlflow.md).
+* Install the `azureml-mlflow` package.
+  * This package automatically brings in `azureml-core` of the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py), which provides the connectivity for MLflow to access your workspace.
+
+## Deploy to ACI
+
+To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md).
+
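+A minimal sketch of that setup (assuming your workspace details are stored in a local `config.json` file) might look like this:
+
+```python
+import mlflow
+from azureml.core import Workspace
+
+# Load the workspace from a local config.json file (assumed to exist)
+ws = Workspace.from_config()
+
+# Point MLflow tracking at the Azure Machine Learning workspace
+mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+```
+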
+Set up your deployment configuration with the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice?preserve-view=true&view=azure-ml-py#&preserve-view=truedeploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method. You can also add tags and descriptions to help keep track of your web service.
+
+```python
+from azureml.core.webservice import AciWebservice, Webservice
+
+# Set the model path to the model folder created by your run
+model_path = "model"
+
+# Configure
+aci_config = AciWebservice.deploy_configuration(cpu_cores=1,
+ memory_gb=1,
+ tags={'method' : 'sklearn'},
+ description='Diabetes model',
+ location='eastus2')
+```
+
+Then, register and deploy the model in one step with MLflow's [deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) method for Azure Machine Learning.
+
+```python
+(webservice,model) = mlflow.azureml.deploy( model_uri='runs:/{}/{}'.format(run.id, model_path),
+ workspace=ws,
+ model_name='sklearn-model',
+ service_name='diabetes-model-1',
+ deployment_config=aci_config,
+ tags=None, mlflow_home=None, synchronous=True)
+
+webservice.wait_for_deployment(show_output=True)
+```
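+
+Once the deployment completes, you can send a test request to the service. This is only a sketch; the JSON payload below is a hypothetical placeholder, and the exact input format depends on how your model was logged and on your MLflow version.
+
+```python
+import json
+import requests
+
+# The scoring URI is exposed by the deployed web service
+print(webservice.scoring_uri)
+
+# Hypothetical sample payload; adjust it to the input schema your model expects
+sample = {"data": [[1.0, 2.0, 3.0, 4.0]]}
+
+response = requests.post(webservice.scoring_uri,
+                         data=json.dumps(sample),
+                         headers={"Content-Type": "application/json"})
+print(response.json())
+```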
+
+## Deploy to AKS
+
+To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md).
+
+To deploy to AKS, first create an AKS cluster using the [ComputeTarget.create()](/python/api/azureml-core/azureml.core.computetarget?preserve-view=true&view=azure-ml-py#&preserve-view=truecreate-workspace--name--provisioning-configuration-) method. It may take 20-25 minutes to create a new cluster.
+
+```python
+from azureml.core.compute import AksCompute, ComputeTarget
+
+# Use the default configuration (can also provide parameters to customize)
+prov_config = AksCompute.provisioning_configuration()
+
+aks_name = 'aks-mlflow'
+
+# Create the cluster
+aks_target = ComputeTarget.create(workspace=ws,
+ name=aks_name,
+ provisioning_configuration=prov_config)
+
+aks_target.wait_for_completion(show_output = True)
+
+print(aks_target.provisioning_state)
+print(aks_target.provisioning_errors)
+```
+Set up your deployment configuration with the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice?preserve-view=true&view=azure-ml-py#&preserve-view=truedeploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method. You can also add tags and descriptions to help keep track of your web service.
+
+```python
+from azureml.core.webservice import Webservice, AksWebservice
+
+# Set the web service configuration (using default here with app insights)
+aks_config = AksWebservice.deploy_configuration(enable_app_insights=True, compute_target_name='aks-mlflow')
+
+```
+
+Then, register and deploy the model in one step with MLflow's [deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) method for Azure Machine Learning.
+
+```python
+
+# Webservice creation using single command
+from azureml.core.webservice import AksWebservice, Webservice
+
+# set the model path
+model_path = "model"
+
+(webservice, model) = mlflow.azureml.deploy( model_uri='runs:/{}/{}'.format(run.id, model_path),
+ workspace=ws,
+ model_name='sklearn-model',
+ service_name='my-aks',
+ deployment_config=aks_config,
+ tags=None, mlflow_home=None, synchronous=True)
++
+webservice.wait_for_deployment()
+```
+
+The service deployment can take several minutes.
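+
+If the deployment is slow or fails, you can check on it by printing the service state and logs (a quick sketch using standard `Webservice` members, assuming `webservice` is the object returned above):
+
+```python
+# Inspect the current deployment state and recent service logs
+print(webservice.state)
+print(webservice.get_logs())
+```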
+
+## Clean up resources
+
+If you don't plan to use your deployed web service, use `service.delete()` to delete it from your notebook. For more information, see the documentation for [WebService.delete()](/python/api/azureml-core/azureml.core.webservice%28class%29?preserve-view=true&view=azure-ml-py#&preserve-view=truedelete--).
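+
+For example (a sketch, assuming `webservice` is the service object returned by the deployment call above):
+
+```python
+# Delete the deployed web service to stop incurring charges
+webservice.delete()
+```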
+
+## Example notebooks
+
+The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow) demonstrate and expand upon concepts presented in this article.
+
+> [!NOTE]
+> A community-driven repository of examples using MLflow can be found at https://github.com/Azure/azureml-examples.
+
+## Next steps
+
+* [Manage your models](concept-model-management-and-deployment.md).
+* Monitor your production models for [data drift](./how-to-enable-data-collection.md).
+* [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-datasets.md
@@ -363,5 +363,4 @@ Limitations and known issues for data drift monitors:
* Head to the [Azure Machine Learning studio](https://ml.azure.com) or the [Python notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.ipynb) to set up a dataset monitor. * See how to set up data drift on [models deployed to Azure Kubernetes Service](./how-to-enable-data-collection.md).
-* Set up dataset drift monitors with [event grid](how-to-use-event-grid.md).
-* See these common [troubleshooting tips](resource-known-issues.md#data-drift) if you're having problems.
+* Set up dataset drift monitors with [event grid](how-to-use-event-grid.md).
\ No newline at end of file
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-view-training-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-view-training-logs.md
@@ -116,7 +116,7 @@ Log files are an essential resource for debugging the Azure ML workloads. Drill
The tables below show the contents of the log files in the folders you'll see in this section. > [!NOTE]
-> Information the user should notice even if skimmingYou will not necessarily see every file for every run. For example, the 20_image_build_log*.txt only appears when a new image is built (e.g. when you change you environment).
+> You will not necessarily see every file for every run. For example, the 20_image_build_log*.txt file only appears when a new image is built (e.g., when you change your environment).
#### `azureml-logs` folder
@@ -183,4 +183,4 @@ Try these next steps to learn how to use Azure Machine Learning:
* Learn how to [track experiments and enable logs in the Azure Machine Learning designer](how-to-track-designer-experiments.md).
-* See an example of how to register the best model and deploy it in the tutorial, [Train an image classification model with Azure Machine Learning](tutorial-train-models-with-aml.md).
\ No newline at end of file
+* See an example of how to register the best model and deploy it in the tutorial, [Train an image classification model with Azure Machine Learning](tutorial-train-models-with-aml.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow-azure-databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
@@ -176,8 +176,8 @@ When you are ready to create an endpoint for your ML models. You can deploy as,
You can leverage the [mlflow.azureml.deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) API to deploy a model to your Azure Machine Learning workspace. If you only registered the model to the Azure Databricks workspace, as described in the [register models with MLflow](#register-models-with-mlflow) section, specify the `model_name` parameter to register the model into Azure Machine Learning workspace. Azure Databricks runs can be deployed to the following endpoints,
-* [Azure Container Instance](how-to-use-mlflow.md#deploy-to-aci)
-* [Azure Kubernetes Service](how-to-use-mlflow.md#deploy-to-aks)
+* [Azure Container Instance](how-to-deploy-models-with-mlflow.md#deploy-to-aci)
+* [Azure Kubernetes Service](how-to-deploy-models-with-mlflow.md#deploy-to-aks)
### Deploy models to ADB endpoints for batch scoring
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow.md
@@ -8,12 +8,12 @@ ms.author: shipatel
ms.service: machine-learning ms.subservice: core ms.reviewer: nibaccam
-ms.date: 09/08/2020
+ms.date: 12/23/2020
ms.topic: conceptual ms.custom: how-to, devx-track-python ---
-# Track experiment runs and deploy ML models with MLflow and Azure Machine Learning (preview)
+# Train and track ML models with MLflow and Azure Machine Learning (preview)
In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to connect Azure Machine Learning as the backend of your MLflow experiments.
@@ -21,12 +21,10 @@ Supported capabilities include:
+ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](./concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models.
-+ Submit training jobs with MLflow Projects with Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
++ Submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) using Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud, such as to an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md) cluster. + Track and manage models in MLflow and Azure Machine Learning model registry.
-+ Deploy your MLflow experiments as an Azure Machine Learning web service. By deploying as a web service, you can apply the Azure Machine Learning monitoring and data drift detection functionalities to your production models.
- [MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLFlow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](how-to-use-mlflow-azure-databricks.md). >[!NOTE]
@@ -205,9 +203,9 @@ run.get_metrics()
## Manage models
-Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere) which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow related metadata such as, run id is also tagged with the registered model for traceability. Users can submit training runs, register, and deploy models produced from MLflow runs.
+Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema, making it easy to export and import these models across different workflows. MLflow-related metadata, such as the run ID, is also tagged with the registered model for traceability. Users can submit training runs, and register and deploy models produced from MLflow runs.
-If you want to deploy and register your production ready model in one step, see [Deploy and register MLflow models](#deploy-and-register-mlflow-models).
+If you want to deploy and register your production-ready model in one step, see [Deploy and register MLflow models](how-to-deploy-models-with-mlflow.md).
To register and view a model from a run, use the following steps:
@@ -233,110 +231,6 @@ To register and view a model from a run, use the following steps:
![MLmodel-schema](./media/how-to-use-mlflow/mlmodel-view.png) -
-## Deploy and register MLflow models
-
-Deploying your MLflow experiments as an Azure Machine Learning web service allows you to leverage and apply the Azure Machine Learning model management and data drift detection capabilities to your production models.
-
-To do so, you need to
-
-1. Register your model.
-1. Determine which deployment configuration you want to use for your scenario.
-
- 1. [Azure Container Instance (ACI)](#deploy-to-aci) is a suitable choice for a quick dev-test deployment.
- 1. [Azure Kubernetes Service (AKS)](#deploy-to-aks) is suitable for scalable production deployments.
-
-The following diagram demonstrates that with the MLflow deploy API you can deploy your existing MLflow models as an Azure Machine Learning web service, despite their frameworks--PyTorch, Tensorflow, scikit-learn, ONNX, etc., and manage your production models in your workspace.
-
-![ deploy mlflow models with azure machine learning](./media/how-to-use-mlflow/mlflow-diagram-deploy.png)
--
-### Deploy to ACI
-
-Set up your deployment configuration with the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice?preserve-view=true&view=azure-ml-py#&preserve-view=truedeploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method. You can also add tags and descriptions to help keep track of your web service.
-
-```python
-from azureml.core.webservice import AciWebservice, Webservice
-
-# Set the model path to the model folder created by your run
-model_path = "model"
-
-# Configure
-aci_config = AciWebservice.deploy_configuration(cpu_cores=1,
- memory_gb=1,
- tags={'method' : 'sklearn'},
- description='Diabetes model',
- location='eastus2')
-```
-
-Then, register and deploy the model in one step with the Azure Machine Learning SDK [deploy](/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py&preserve-view=true#&preserve-view=truedeploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) method.
-
-```python
-(webservice,model) = mlflow.azureml.deploy( model_uri='runs:/{}/{}'.format(run.id, model_path),
- workspace=ws,
- model_name='sklearn-model',
- service_name='diabetes-model-1',
- deployment_config=aci_config,
- tags=None, mlflow_home=None, synchronous=True)
-
-webservice.wait_for_deployment(show_output=True)
-```
-
-### Deploy to AKS
-
-To deploy to AKS, first create an AKS cluster. Create an AKS cluster using the [ComputeTarget.create()](/python/api/azureml-core/azureml.core.computetarget?preserve-view=true&view=azure-ml-py#&preserve-view=truecreate-workspace--name--provisioning-configuration-) method. It may take 20-25 minutes to create a new cluster.
-
-```python
-from azureml.core.compute import AksCompute, ComputeTarget
-
-# Use the default configuration (can also provide parameters to customize)
-prov_config = AksCompute.provisioning_configuration()
-
-aks_name = 'aks-mlflow'
-
-# Create the cluster
-aks_target = ComputeTarget.create(workspace=ws,
- name=aks_name,
- provisioning_configuration=prov_config)
-
-aks_target.wait_for_completion(show_output = True)
-
-print(aks_target.provisioning_state)
-print(aks_target.provisioning_errors)
-```
-Set up your deployment configuration with the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice?preserve-view=true&view=azure-ml-py#&preserve-view=truedeploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method. You can also add tags and descriptions to help keep track of your web service.
-
-```python
-from azureml.core.webservice import Webservice, AksWebservice
-
-# Set the web service configuration (using default here with app insights)
-aks_config = AksWebservice.deploy_configuration(enable_app_insights=True, compute_target_name='aks-mlflow')
-
-```
-
-Then, register and deploy the model by using the Azure Machine Learning SDK [deploy](/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py&preserve-view=true#&preserve-view=truedeploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) method.
-
-```python
-
-# Webservice creation using single command
-from azureml.core.webservice import AksWebservice, Webservice
-
-# set the model path
-model_path = "model"
-
-(webservice, model) = mlflow.azureml.deploy( model_uri='runs:/{}/{}'.format(run.id, model_path),
- workspace=ws,
- model_name='sklearn-model',
- service_name='my-aks',
- deployment_config=aks_config,
- tags=None, mlflow_home=None, synchronous=True)
--
-webservice.wait_for_deployment()
-```
-
-The service deployment can take several minutes.
- ## Clean up resources If you don't plan to use the logged metrics and artifacts in your workspace, the ability to delete them individually is currently unavailable. Instead, delete the resource group that contains the storage account and workspace, so you don't incur any charges:
@@ -360,6 +254,7 @@ The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNot
## Next steps
-* [Manage your models](concept-model-management-and-deployment.md).
+* [Deploy models with MLflow](how-to-deploy-models-with-mlflow.md).
* Monitor your production models for [data drift](./how-to-enable-data-collection.md). * [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md).
+* [Manage your models](concept-model-management-and-deployment.md).
\ No newline at end of file
purview https://docs.microsoft.com/en-us/azure/purview/manage-credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-credentials.md
@@ -68,7 +68,7 @@ Credential type supported in Purview today:
* SQL authentication : You will add the **password** as a secret in key vault * Account Key : You will add the **account key** as a secret in key vault
-Here is more information on how to add secrets to a key vault: (Insert key vault article)
+For more information, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault).
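
For example, a password or account key can be added as a secret from the Azure CLI (the vault and secret names are placeholders):

```azurecli
az keyvault secret set --vault-name <your-key-vault-name> --name <secret-name> --value <password-or-account-key>
```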
After storing your secrets in your key vault, create your new credential by selecting **+New** from the Credentials command bar. Provide the required information, including the authentication method and the Key Vault instance from which to select a secret. Once all the details have been filled in, click **Create**.
security-center https://docs.microsoft.com/en-us/azure/security-center/defender-for-sql-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-sql-introduction.md
@@ -49,7 +49,9 @@ These two plans include functionality for identifying and mitigating potential d
- [Vulnerability assessment](../azure-sql/database/sql-vulnerability-assessment.md) - The scanning service to discover, track, and help you remediate potential database vulnerabilities. Assessment scans provide an overview of your SQL machines' security state, and details of any security findings. -- [Advanced threat protection](../azure-sql/database/threat-detection-overview.md) - The detection service that continuously monitors your SQL servers for threats such as SQL injection, brute-force attacks, and privilege abuse. This service provides action-oriented security alerts in Azure Security Center with details of the suspicious activity, guidance on how to mitigate to the threats, and options for continuing your investigations with Azure Sentinel.
+- [Advanced threat protection](../azure-sql/database/threat-detection-overview.md) - The detection service that continuously monitors your SQL servers for threats such as SQL injection, brute-force attacks, and privilege abuse. This service provides action-oriented security alerts in Azure Security Center with details of the suspicious activity, guidance on how to mitigate the threats, and options for continuing your investigations with Azure Sentinel.
+ > [!TIP]
+ > View the list of security alerts for SQL servers [in the alerts reference page](alerts-reference.md#alerts-sql-db-and-warehouse).
## What kind of alerts does Azure Defender for SQL provide?
@@ -69,9 +71,4 @@ Alerts include details of the incident that triggered them, as well as recommend
In this article, you learned about Azure Defender for SQL. > [!div class="nextstepaction"]
-> [Scan your SQL servers for vulnerabilities with Azure Defender](defender-for-sql-usage.md)
-
-For related material, see the following articles:
--- [How to enable Azure Defender for SQL database servers](../azure-sql/database/azure-defender-for-sql.md)-- [The list of security alerts for SQL servers](alerts-reference.md#alerts-sql-db-and-warehouse)
+> [Scan your SQL servers for vulnerabilities with Azure Defender](defender-for-sql-usage.md)
\ No newline at end of file
security-center https://docs.microsoft.com/en-us/azure/security-center/defender-for-sql-usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-sql-usage.md
@@ -11,7 +11,7 @@ ms.devlang: na
ms.topic: how-to ms.tgt_pltfrm: na ms.workload: na
-ms.date: 11/30/2020
+ms.date: 12/23/2020
ms.author: memildin ---
@@ -103,7 +103,7 @@ You can view the vulnerability assessment results directly from Security Center.
In each view, the security checks are sorted by **Severity**. Click a specific security check to see a details pane with a **Description**, how to **Remediate** it, and other related information such as **Impact** or **Benchmark**. ## Azure Defender for SQL alerts
-Alerts are generated by unusual and potentially harmful attempts to access or exploit SQL machines. These events can trigger alerts shown in the [Alerts for SQL Database and Azure Synapse Analytics section of the alerts reference page](alerts-reference.md#alerts-sql-db-and-warehouse).
+Alerts are generated by unusual and potentially harmful attempts to access or exploit SQL machines. These events can trigger alerts shown in the [alerts reference page](alerts-reference.md#alerts-sql-db-and-warehouse).
## Explore and investigate security alerts
@@ -125,5 +125,4 @@ For related material, see the following article:
- [Security alerts for SQL Database and Azure Synapse Analytics](alerts-reference.md#alerts-sql-db-and-warehouse) - [Set up email notifications for security alerts](security-center-provide-security-contact-details.md)-- [Learn more about Azure Sentinel](../sentinel/index.yml)-- [Azure Security Center's data security package](../azure-sql/database/azure-defender-for-sql.md)\ No newline at end of file
+- [Learn more about Azure Sentinel](../sentinel/index.yml)
\ No newline at end of file
security-center https://docs.microsoft.com/en-us/azure/security-center/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: na
-ms.date: 12/22/2020
+ms.date: 12/23/2020
ms.author: memildin ---
@@ -38,6 +38,7 @@ Updates in December include:
- [New security alerts page in the Azure portal (preview)](#new-security-alerts-page-in-the-azure-portal-preview) - [Revitalized Security Center experience in Azure SQL Database & SQL Managed Instance](#revitalized-security-center-experience-in-azure-sql-database--sql-managed-instance) - [Asset inventory tools and filters updated](#asset-inventory-tools-and-filters-updated)
+- [Recommendation about web apps requesting SSL certificates no longer part of secure score](#recommendation-about-web-apps-requesting-ssl-certificates-no-longer-part-of-secure-score)
### Azure Defender for SQL servers on machines is generally available
@@ -139,6 +140,17 @@ The inventory page in Azure Security Center has been refreshed with the followin
Learn more about inventory in [Explore and manage your resources with asset inventory](asset-inventory.md). +
+### Recommendation about web apps requesting SSL certificates no longer part of secure score
+
+The recommendation "Web apps should request an SSL certificate for all incoming requests" has been moved from the security control **Manage access and permissions** (worth a maximum of 4 pts) into **Implement security best practices** (which is worth no points).
+
+Ensuring your web apps request a certificate certainly makes them more secure. However, for public-facing web apps it's irrelevant. If you access your site over HTTP and not HTTPS, you will not receive any client certificate. So if your application requires client certificates, you should not allow requests to your application over HTTP. Learn more in [Configure TLS mutual authentication for Azure App Service](../app-service/app-service-web-configure-tls-mutual-auth.md).
+
+With this change, the recommendation is now a recommended best practice that does not impact your score.
+
+Learn which recommendations are in each security control in [Security controls and their recommendations](secure-score-security-controls.md#security-controls-and-their-recommendations).
+ ## November 2020 Updates in November include:
security-center https://docs.microsoft.com/en-us/azure/security-center/secure-score-security-controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/secure-score-security-controls.md
@@ -228,7 +228,7 @@ Even though Security Center's default security initiative is based on industry
</tr> <tr> <td class="tg-lboi"><strong><p style="font-size: 16px">Manage access and permissions (max score 4)</p></strong>A core part of a security program is ensuring your users have the necessary access to do their jobs but no more than that: the <a href="/windows-server/identity/ad-ds/plan/security-best-practices/implementing-least-privilege-administrative-models">least privilege access model</a>.<br>Control access to your resources by creating role assignments with <a href="/azure/role-based-access-control/overview">Azure role-based access control (Azure RBAC)</a>. A role assignment consists of three elements:<br>- <strong>Security principal</strong>: the object the user is requesting access to<br>- <strong>Role definition</strong>: their permissions<br>- <strong>Scope</strong>: the set of resources to which the permissions apply</td>
- <td class="tg-lboi"; width=55%>- Deprecated accounts should be removed from your subscription (Preview)<br>- Deprecated accounts with owner permissions should be removed from your subscription (Preview)<br>- External accounts with owner permissions should be removed from your subscription (Preview)<br>- External accounts with write permissions should be removed from your subscription (Preview)<br>- There should be more than one owner assigned to your subscription<br>- Azure role-based access control (Azure RBAC) should be used on Kubernetes Services (Preview)<br>- Service Fabric clusters should only use Azure Active Directory for client authentication<br>- Service principals should be used to protect your subscriptions instead of Management Certificates<br>- Least privileged Linux capabilities should be enforced for containers (preview)<br>- Immutable (read-only) root filesystem should be enforced for containers (preview)<br>- Container with privilege escalation should be avoided (preview)<br>- Running containers as root user should be avoided (preview)<br>- Containers sharing sensitive host namespaces should be avoided (preview)<br>- Usage of pod HostPath volume mounts should be restricted to a known list (preview)<br>- Privileged containers should be avoided (preview)<br>- Azure Policy add-on for Kubernetes should be installed and enabled on your clusters (preview)<br>- Web apps should request an SSL certificate for all incoming requests<br>- Managed identity should be used in your API App<br>- Managed identity should be used in your function App<br>- Managed identity should be used in your web App</td>
+ <td class="tg-lboi"; width=55%>- Deprecated accounts should be removed from your subscription (Preview)<br>- Deprecated accounts with owner permissions should be removed from your subscription (Preview)<br>- External accounts with owner permissions should be removed from your subscription (Preview)<br>- External accounts with write permissions should be removed from your subscription (Preview)<br>- There should be more than one owner assigned to your subscription<br>- Azure role-based access control (Azure RBAC) should be used on Kubernetes Services (Preview)<br>- Service Fabric clusters should only use Azure Active Directory for client authentication<br>- Service principals should be used to protect your subscriptions instead of Management Certificates<br>- Least privileged Linux capabilities should be enforced for containers (preview)<br>- Immutable (read-only) root filesystem should be enforced for containers (preview)<br>- Container with privilege escalation should be avoided (preview)<br>- Running containers as root user should be avoided (preview)<br>- Containers sharing sensitive host namespaces should be avoided (preview)<br>- Usage of pod HostPath volume mounts should be restricted to a known list (preview)<br>- Privileged containers should be avoided (preview)<br>- Azure Policy add-on for Kubernetes should be installed and enabled on your clusters (preview)<br>- Managed identity should be used in your API App<br>- Managed identity should be used in your function App<br>- Managed identity should be used in your web App</td>
</tr> <tr> <td class="tg-lboi"><strong><p style="font-size: 16px">Remediate security configurations (max score 4)</p></strong>Misconfigured IT assets have a higher risk of being attacked. Basic hardening actions are often forgotten when assets are being deployed and deadlines must be met. Security misconfigurations can be at any level in the infrastructure: from the operating systems and network appliances, to cloud resources.<br>Azure Security Center continually compares the configuration of your resources with requirements in industry standards, regulations, and benchmarks. When you've configured the relevant "compliance packages" (standards and baselines) that matter to your organization, any gaps will result in security recommendations that include the CCEID and an explanation of the potential security impact.<br>Commonly used packages are <a href="/azure/security/benchmarks/introduction">Azure Security Benchmark</a> and <a href="https://www.cisecurity.org/benchmark/azure/">CIS Microsoft Azure Foundations Benchmark version 1.1.0</a></td>
@@ -264,7 +264,7 @@ Even though Security CenterΓÇÖs default security initiative is based on industry
</tr> <tr> <td class="tg-lboi"><strong><p style="font-size: 16px">Implement security best practices (max score 0)</p></strong>Modern security practices "assume breach" of the network perimeter. For that reason, many of the best practices in this control focus on managing identities.<br>Losing keys and credentials is a common problem. <a href="/azure/key-vault/key-vault-overview">Azure Key Vault</a> protects keys and secrets by encrypting keys, .pfx files, and passwords.<br>Virtual private networks (VPNs) are a secure way to access your virtual machines. If VPNs aren't available, use complex passphrases and two-factor authentication such as <a href="/azure/active-directory/authentication/concept-mfa-howitworks">Azure AD Multi-Factor Authentication</a>. Two-factor authentication avoids the weaknesses inherent in relying only on usernames and passwords.<br>Using strong authentication and authorization platforms is another best practice. Using federated identities allows organizations to delegate management of authorized identities. This is also important when employees are terminated, and their access needs to be revoked.</td>
- <td class="tg-lboi"; width=55%>- A maximum of 3 owners should be designated for your subscription<br>- External accounts with read permissions should be removed from your subscription<br>- MFA should be enabled on accounts with read permissions on your subscription<br>- Access to storage accounts with firewall and virtual network configurations should be restricted<br>- All authorization rules except RootManageSharedAccessKey should be removed from Event Hub namespace<br>- An Azure Active Directory administrator should be provisioned for SQL servers<br>- Advanced data security should be enabled on your managed instances<br>- Authorization rules on the Event Hub instance should be defined<br>- Storage accounts should be migrated to new Azure Resource Manager resources<br>- Virtual machines should be migrated to new Azure Resource Manager resources<br>- Subnets should be associated with a Network Security Group<br>- [Preview] Windows exploit guard should be enabled <br>- [Preview] Guest configuration agent should be installed<br>- Non-internet-facing virtual machines should be protected with network security groups<br>- Azure Backup should be enabled for virtual machines<br>- Geo-redundant backup should be enabled for Azure Database for MariaDB<br>- Geo-redundant backup should be enabled for Azure Database for MySQL<br>- Geo-redundant backup should be enabled for Azure Database for PostgreSQL<br>- PHP should be updated to the latest version for your API app<br>- PHP should be updated to the latest version for your web app<br>- Java should be updated to the latest version for your API app<br>- Java should be updated to the latest version for your function app<br>- Java should be updated to the latest version for your web app<br>- Python should be updated to the latest version for your API app<br>- Python should be updated to the latest version for your function app<br>- Python should be updated to the latest version for your web app<br>- Audit retention for SQL servers should be set to at least 90 days</td>
+ <td class="tg-lboi"; width=55%>- A maximum of 3 owners should be designated for your subscription<br>- External accounts with read permissions should be removed from your subscription<br>- MFA should be enabled on accounts with read permissions on your subscription<br>- Access to storage accounts with firewall and virtual network configurations should be restricted<br>- All authorization rules except RootManageSharedAccessKey should be removed from Event Hub namespace<br>- An Azure Active Directory administrator should be provisioned for SQL servers<br>- Advanced data security should be enabled on your managed instances<br>- Authorization rules on the Event Hub instance should be defined<br>- Storage accounts should be migrated to new Azure Resource Manager resources<br>- Virtual machines should be migrated to new Azure Resource Manager resources<br>- Subnets should be associated with a Network Security Group<br>- [Preview] Windows exploit guard should be enabled <br>- [Preview] Guest configuration agent should be installed<br>- Non-internet-facing virtual machines should be protected with network security groups<br>- Azure Backup should be enabled for virtual machines<br>- Geo-redundant backup should be enabled for Azure Database for MariaDB<br>- Geo-redundant backup should be enabled for Azure Database for MySQL<br>- Geo-redundant backup should be enabled for Azure Database for PostgreSQL<br>- PHP should be updated to the latest version for your API app<br>- PHP should be updated to the latest version for your web app<br>- Java should be updated to the latest version for your API app<br>- Java should be updated to the latest version for your function app<br>- Java should be updated to the latest version for your web app<br>- Python should be updated to the latest version for your API app<br>- Python should be updated to the latest version for your function app<br>- Python should be updated to the latest version for your web app<br>- Audit retention for SQL servers should be set to at least 90 days<br>- Web apps should request an SSL certificate for all incoming requests</td>
</tr> </tbody> </table>
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-messaging-overview.md
@@ -6,7 +6,7 @@ ms.date: 11/20/2020
--- # What is Azure Service Bus?
-Microsoft Azure Service Bus is a fully managed enterprise message broker with message queues and public-subscribe topics. Service Bus is used to decouple applications and services from each other, providing the following benefits:
+Microsoft Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics. Service Bus is used to decouple applications and services from each other, providing the following benefits:
- Load-balancing work across competing workers - Safely routing and transferring data and control across service and application boundaries
@@ -163,4 +163,4 @@ To get started using Service Bus messaging, see the following articles:
* Try the quickstarts for [.NET](service-bus-dotnet-get-started-with-queues.md), [Java](service-bus-java-how-to-use-queues.md), or [JMS](service-bus-java-how-to-use-jms-api-amqp.md). * To manage Service Bus resources, see [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/releases). * To learn more about Standard and Premium tiers and their pricing, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
-* To learn about performance and latency for the Premium tier, see [Premium Messaging](https://techcommunity.microsoft.com/t5/Service-Bus-blog/Premium-Messaging-How-fast-is-it/ba-p/370722).
\ No newline at end of file
+* To learn about performance and latency for the Premium tier, see [Premium Messaging](https://techcommunity.microsoft.com/t5/Service-Bus-blog/Premium-Messaging-How-fast-is-it/ba-p/370722).
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-project-creation-next-step-tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-project-creation-next-step-tasks.md
@@ -3,10 +3,17 @@ title: Service Fabric project creation next steps
description: Learn about the application project you just created in Visual Studio. Learn how to build services using tutorials and learn more about developing services for Service Fabric. ms.topic: conceptual
-ms.date: 12/07/2017
+ms.date: 12/21/2020
+ms.custom: contperf-fy21q2
--- # Your Service Fabric application and next steps
-Your Azure Service Fabric application has been created. This article describes some tutorials to try out, the makeup of your project, some more information you might be interested in, and potential next steps.
+Your Azure Service Fabric application has been created. This article includes a number of resources, some more information you might be interested in, and potential [next steps](#next-steps).
+
+New users may find [tutorials, walkthroughs, and samples](#get-started-with-tutorials-walk-throughs-and-samples) helpful. It can also be useful to examine the [structure of the created application project](#the-application-project). Also included are descriptions of Service Fabric's [programming models](#learn-more-about-the-programming-models), [service communication](#learn-about-service-communication), [application security](#learn-about-configuring-application-security), and [application lifecycle](#learn-about-the-application-lifecycle).
+
+More experienced users may find the Service Fabric [best practices](#learn-about-best-practices) section useful for learning how to take advantage of the platform and structure applications as effectively as possible.
+
+For those with questions or feedback, or who are looking to report an issue, see the [corresponding section](#have-questions-or-feedback--need-to-report-an-issue).
## Get started with tutorials, walk-throughs, and samples Ready to get started?
@@ -21,11 +28,6 @@ Or, try out one of the following walk-throughs and create your first...
You may also be interested in trying out our [sample applications](/samples/browse/?products=azure).
-## Have questions or feedback? Need to report an issue?
-Read through [common questions](service-fabric-common-questions.md) and find answers on what Service Fabric can do and how it should be used.
-
-[Support options](service-fabric-support.md) lists forums on StackOverflow and MSDN for asking questions as well as options for reporting issues, getting support, and submitting product feedback.
- ## The application project Every new application includes an application project. There may be one or two additional projects, depending on the type of service chosen.
@@ -37,8 +39,6 @@ The application project consists of:
* A deployment script that you can use to deploy your application from the command line or as part of an automated continuous integration and deployment pipeline. Learn more about [deploying applications using PowerShell](service-fabric-deploy-remove-applications.md). * The application manifest, which describes the application. You can find the manifest under the ApplicationPackageRoot folder. Learn more about [application and service manifests](service-fabric-application-model.md). -- ## Learn more about the programming models Service Fabric offers multiple ways to write and manage your services. Here's overview and conceptual information on [stateless and stateful Reliable Services](service-fabric-reliable-services-introduction.md), [Reliable Actors](service-fabric-reliable-actors-introduction.md), [containers](service-fabric-containers-overview.md), [guest executables](service-fabric-guest-executables-introduction.md), and [stateless and stateful ASP.NET Core services](service-fabric-reliable-services-communication-aspnetcore.md).
@@ -53,6 +53,26 @@ Your application may contain sensitive information, such as storage connection s
## Learn about the application lifecycle As with other platforms, a Service Fabric application usually goes through the following phases: design, development, testing, deployment, upgrading, maintenance, and removal. [This article](service-fabric-application-lifecycle.md) provides an overview of the APIs and how they are used by the different roles throughout the phases of the Service Fabric application lifecycle.
+## Learn about best practices
+Service Fabric has a number of articles describing [best practices](./service-fabric-best-practices-overview.md). Take advantage of this information to help ensure your cluster and application run as well as possible.
+The topics covered include:
+* [Security](./service-fabric-best-practices-security.md)
+* [Networking](./service-fabric-best-practices-networking.md)
+* [Compute planning and scaling](./service-fabric-best-practices-capacity-scaling.md)
+* [Infrastructure as code](./service-fabric-best-practices-infrastructure-as-code.md)
+* [Monitoring and diagnostics](./service-fabric-best-practices-monitoring.md)
+* [Application design](./service-fabric-best-practices-applications.md)
+
+Also included is a [production readiness checklist](./service-fabric-production-readiness-checklist.md) that integrates all of the best practice information in an easy-to-consume format.
+
+## Have questions or feedback? Need to report an issue?
+Read through [common questions](service-fabric-common-questions.md) and find answers on what Service Fabric can do and how it should be used.
+
+[Troubleshooting guides](https://github.com/Azure/Service-Fabric-Troubleshooting-Guides) can be useful to help diagnose and solve common problems in Service Fabric clusters.
+
+[Support options](service-fabric-support.md) lists forums on StackOverflow and MSDN for asking questions as well as options for reporting issues, getting support, and submitting product feedback.
++ ## Next steps - [Create a Windows cluster in Azure](service-fabric-tutorial-create-vnet-and-windows-cluster.md). - Visualize your cluster, including deployed applications and physical layout, with [Service Fabric Explorer](service-fabric-visualizing-your-cluster.md).
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/connect-job-to-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/connect-job-to-vnet.md new file mode 100644
@@ -0,0 +1,46 @@
+---
+title: Connect Stream Analytics jobs to resources in an Azure Virtual Network (VNET)
+description: This article describes how to connect an Azure Stream Analytics job with resources that are in a VNET.
+author: sidram
+ms.author: sidram
+ms.reviewer: mamccrea
+ms.service: stream-analytics
+ms.topic: conceptual
+ms.date: 12/23/2020
+ms.custom: devx-track-js
+---
+# Connect Stream Analytics jobs to resources in an Azure Virtual Network (VNet)
+
+Your Stream Analytics jobs make outbound connections to your input and output Azure resources to process data in real time and produce results. These input and output resources (for example, Azure Event Hubs and Azure SQL Database) could be behind an Azure firewall or in an Azure Virtual Network (VNet). The Stream Analytics service operates from networks that can't be directly included in your network rules.
+
+However, there are two ways to securely connect your Stream Analytics jobs to your input and output resources in such scenarios.
+* Using private endpoints in Stream Analytics clusters.
+* Using Managed Identity authentication mode coupled with 'Allow trusted services' networking setting.
+
+Your Stream Analytics job does not accept any inbound connections.
+
+## Private endpoints in Stream Analytics clusters
+A [Stream Analytics cluster](https://docs.microsoft.com/azure/stream-analytics/cluster-overview) is a single-tenant, dedicated compute cluster where you can run your Stream Analytics jobs. You can create managed private endpoints in your cluster, which allow any jobs running on it to make secure outbound connections to your input and output resources.
+
+Creating private endpoints in your Stream Analytics cluster is a [two-step operation](https://docs.microsoft.com/azure/stream-analytics/private-endpoints). This option is best suited for medium to large streaming workloads because the minimum size of a Stream Analytics cluster is 36 streaming units (SUs), although those 36 SUs can be shared by different jobs across subscriptions or environments such as development, test, and production.
+
+## Managed identity authentication with 'Allow trusted services' configuration
+Some Azure services provide an **Allow trusted Microsoft services** networking setting which, when enabled, allows your Stream Analytics jobs to connect securely to your resource using strong authentication. This option lets you connect your jobs to your input and output resources without requiring a Stream Analytics cluster or private endpoints. Configuring your job to use this technique is a two-step operation:
+* Use Managed Identity authentication mode when configuring input or output in your Stream Analytics job.
+* Grant your specific Stream Analytics jobs explicit access to your target resources by assigning an Azure role to the job's system-assigned managed identity.
+
+Enabling **Allow trusted Microsoft services** does not grant blanket access to every job; you retain full control over which specific Stream Analytics jobs can access your resources securely. A minimal command-line sketch of these steps is shown below.
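As a hedged sketch of this approach for a Blob Storage output (an editorial addition; all resource names are placeholders, and it assumes the job's system-assigned managed identity is already enabled and selected as the authentication mode on the output), the networking and role-assignment side can look like this:

```azurecli-interactive
# On the storage account: deny public network access but allow trusted Microsoft services
az storage account update --resource-group myResourceGroup --name mystorageacct --default-action Deny --bypass AzureServices

# Look up the job's system-assigned managed identity (principal ID)
principalId=$(az resource show --resource-group myResourceGroup --name myAsaJob \
  --resource-type Microsoft.StreamAnalytics/streamingjobs \
  --query identity.principalId --output tsv)

# Grant that identity access to the storage account
storageId=$(az storage account show --resource-group myResourceGroup --name mystorageacct --query id --output tsv)
az role assignment create --assignee $principalId --role "Storage Blob Data Contributor" --scope $storageId
```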
+
+Your jobs can connect to the following Azure services using this technique:
+1. [Blob Storage or Azure Data Lake Storage Gen2](https://docs.microsoft.com/azure/stream-analytics/blob-output-managed-identity) - can be your job's storage account, streaming input or output.
+2. [Azure Event Hubs](https://docs.microsoft.com/azure/stream-analytics/event-hubs-managed-identity) - can be your job's streaming input or output.
+
+If your jobs need to connect to other input or output types, then the only option is to use private endpoints in Stream Analytics clusters.
+
+You can implement machine learning models as a user-defined function (UDF) in your Azure Stream Analytics jobs to do real-time scoring and predictions on your streaming input data. [Azure Machine Learning](../machine-learning/overview-what-is-azure-ml.md) allows you to use any popular open-source tool, such as TensorFlow, scikit-learn, or PyTorch, to prep, train, and deploy models.
+
+## Next steps
+
+* [Create and remove Private Endpoints in Stream Analytics clusters](https://docs.microsoft.com/azure/stream-analytics/private-endpoints)
+* [Connect to Event Hubs in a VNet using Managed Identity authentication](https://docs.microsoft.com/azure/stream-analytics/event-hubs-managed-identity)
+* [Connect to Blob storage and ADLS Gen2 in a VNet using Managed Identity authentication](https://docs.microsoft.com/azure/stream-analytics/blob-output-managed-identity)
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/generation-2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/generation-2.md
@@ -31,7 +31,9 @@ Generation 1 VMs are supported by all VM sizes in Azure (except for Mv2-series V
* [Dasv4-series](dav4-dasv4-series.md) * [Ddsv4-series](ddv4-ddsv4-series.md) * [Esv3-series](ev3-esv3-series.md)
+* [Esv4-series](ev4-esv4-series.md)
* [Easv4-series](eav4-easv4-series.md)
+* [Edsv4-series](edv4-edsv4-series.md)
* [Fsv2-series](fsv2-series.md) * [GS-series](sizes-previous-gen.md#gs-series) * [HB-series](hb-series.md)
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/azure-hybrid-benefit-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
@@ -167,7 +167,7 @@ For more information about Red Hat subscription compliance, software updates, an
### SUSE
-To use Azure Hybrid Benefit for your SLES VMs, you must first be registered with the [SUSE Public Cloud Program](https://www.suse.com/media/guide/suse_public_cloud_service_provider_program_overview.pdf). After you've purchased SUSE subscriptions, you must register your VMs that use those subscriptions to your own source of updates. Use SUSE Customer Center, the Subscription Management Tool server, or SUSE Manager for this registration.
+For information about using Azure Hybrid Benefit for your SLES VMs, including how to move from SLES PAYG to BYOS or from SLES BYOS to PAYG, see [SUSE Linux Enterprise and Azure Hybrid Benefit](https://www.suse.com/c/suse-linux-enterprise-and-azure-hybrid-benefit/).
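For reference, Azure Hybrid Benefit for Linux is applied by setting the VM's license type. A minimal sketch (an editorial addition), assuming a hypothetical resource group and an SLES VM that has been registered to its own update source per the SUSE guidance above:

```azurecli-interactive
# Switch an existing SLES pay-as-you-go VM to bring-your-own-subscription billing
az vm update --resource-group myResourceGroup --name mySlesVm --license-type SLES_BYOS
```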
## Frequently asked questions *Q: Can I use a license type of `RHEL_BYOS` with a SLES image, or vice versa?*