Updates from: 03/22/2021 04:04:10
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/embedded-login.md
Previously updated : 03/16/2021 Last updated : 03/21/2021
+zone_pivot_groups: b2c-policy-type
# Embedded sign-in experience +++++ For a simpler sign-in experience, you can avoid redirecting users to a separate sign-in page or generating a pop-up window. By using the inline frame element `<iframe>`, you can embed the Azure AD B2C sign-in user interface directly into your web application. [!INCLUDE [b2c-public-preview-feature](../../includes/active-directory-b2c-public-preview.md)]
See the following related articles:
- [User interface customization](customize-ui.md) - [RelyingParty](relyingparty.md) element reference - [Enable your policy for JavaScript](./javascript-and-page-layout.md)-- [Code samples](code-samples.md)
+- [Code samples](code-samples.md)
+
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
Please follow this link to read more about [auto upgrade](how-to-connect-install
>For version history information on retired versions, see [Azure AD Connect version release history archive](reference-connect-version-history-archive.md)
-## 1.6.2.3
+## 1.6.2.4
>[!NOTE] > - This release will be made available for download only.
Please follow this link to read more about [auto upgrade](how-to-connect-install
> - This release defaults the AADConnect server to the new V2 end point. Note that this end point is not supported in the German national cloud, the Chinese national cloud, and the US government cloud. If you need to deploy this version in those clouds, follow [these instructions](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2#rollback) to switch back to the V1 end point. Failure to do so will result in synchronization errors. ### Release status
-3/17/2021: Released for download
+3/19/2021: Released for download
### Functional changes
active-directory Non Gallery Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/non-gallery-apps.md
- Title: Using Azure AD for applications not listed in the app gallery
-description: Understand how to integrate apps not listed in the Azure AD gallery.
------- Previously updated : 07/27/2020----
-# Using Azure AD for applications not listed in the app gallery
-
-In the [Add an app](add-application-portal.md) quickstart, you learn how to add an app to your Azure AD tenant.
-
-In addition to the choices in the [Azure AD application gallery](../saas-apps/tutorial-list.md), you have the option to add a **non-gallery application**.
-
-## Capabilities for apps not listed in the Azure AD gallery
-
-You can add any application that already exists in your organization, or any third-party application from a vendor who is not already part of the Azure AD gallery. Depending on your [license agreement](https://azure.microsoft.com/pricing/details/active-directory/), the following capabilities are available:
--- Self-service integration of any application that supports [Security Assertion Markup Language (SAML) 2.0](https://wikipedia.org/wiki/SAML_2.0) identity providers (SP-initiated or IdP-initiated)-- Self-service integration of any web application that has an HTML-based sign-in page using [password-based SSO](sso-options.md#password-based-sso)-- Self-service connection of applications that use the [System for Cross-Domain Identity Management (SCIM) protocol for user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md)-- Ability to add links to any application in the [Office 365 app launcher](https://www.microsoft.com/microsoft-365/blog/2014/10/16/organize-office-365-new-app-launcher-2/) or [My Apps](sso-options.md#linked-sign-on)-
-If you're looking for developer guidance on how to integrate custom apps with Azure AD, see [Authentication Scenarios for Azure AD](../develop/authentication-vs-authorization.md). When you develop an app that uses a modern protocol like [OpenId Connect/OAuth](../develop/active-directory-v2-protocols.md) to authenticate users, you can register it with the Microsoft identity platform by using the [App registrations](../develop/quickstart-register-app.md) experience in the Azure portal.
-
-## Next steps
--- [Quickstart Series on App Management](view-applications-portal.md)
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-an-application-integration.md
Title: Get started integrating Azure AD with apps
+ Title: Get started integrating Azure Active Directory with apps
description: This article is a getting started guide for integrating Azure Active Directory (AD) with on-premises applications and cloud applications. + Previously updated : 07/16/2018 Last updated : 03/19/2021
Before integrating applications with Azure AD, it is important to know where you
* How are your groups organized? * Who are the group members? * What permissions/role assignments do the groups currently have?
-* Will you need to clean up user/group databases before integrating? (This is a pretty important question. Garbage in, garbage out.)
+* Will you need to clean up user/group databases before integrating? (This is an important question. Garbage in, garbage out.)
### Access management inventory * How do you currently manage user access to applications? Does that need to change? Have you considered other ways to manage access, such as with [Azure RBAC](../../role-based-access-control/role-assignments-portal.md) for example?
The following articles discuss the different ways applications integrate with Az
* [Using applications in the Azure application gallery](what-is-single-sign-on.md) * [Integrating SaaS applications tutorials list](../saas-apps/tutorial-list.md)
+## Capabilities for apps not listed in the Azure AD gallery
+
+You can add any application that already exists in your organization, or any third-party application from a vendor who is not already part of the Azure AD gallery. Depending on your [license agreement](https://azure.microsoft.com/pricing/details/active-directory/), the following capabilities are available:
+
+- Self-service integration of any application that supports [Security Assertion Markup Language (SAML) 2.0](https://wikipedia.org/wiki/SAML_2.0) identity providers (SP-initiated or IdP-initiated)
+- Self-service integration of any web application that has an HTML-based sign-in page using [password-based SSO](sso-options.md#password-based-sso)
+- Self-service connection of applications that use the [System for Cross-Domain Identity Management (SCIM) protocol for user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md)
+- Ability to add links to any application in the [Office 365 app launcher](https://www.microsoft.com/microsoft-365/blog/2014/10/16/organize-office-365-new-app-launcher-2/) or [My Apps](sso-options.md#linked-sign-on)
+
+If you're looking for developer guidance on how to integrate custom apps with Azure AD, see [Authentication Scenarios for Azure AD](../develop/authentication-vs-authorization.md). When you develop an app that uses a modern protocol like [OpenId Connect/OAuth](../develop/active-directory-v2-protocols.md) to authenticate users, you can register it with the Microsoft identity platform by using the [App registrations](../develop/quickstart-register-app.md) experience in the Azure portal.
+ ### Authentication Types
-Each of your applications may have different authentication requirements. With Azure AD, signing certificates can be used with applications that use SAML 2.0, WS-Federation, or OpenID Connect Protocols as well as Password Single Sign On. For more information about application authentication types for use with Azure AD see [Managing Certificates for Federated Single Sign-On in Azure Active Directory](manage-certificates-for-federated-single-sign-on.md) and [Password based single sign on](what-is-single-sign-on.md).
+Each of your applications may have different authentication requirements. With Azure AD, signing certificates can be used with applications that use the SAML 2.0, WS-Federation, or OpenID Connect protocols, as well as Password Single Sign-On. For more information about application authentication types, see [Managing Certificates for Federated Single Sign-On in Azure Active Directory](manage-certificates-for-federated-single-sign-on.md) and [Password based single sign on](what-is-single-sign-on.md).
### Enabling SSO with Azure AD App Proxy With Microsoft Azure AD Application Proxy, you can provide access to applications located inside your private network securely, from anywhere and on any device. After you have installed an application proxy connector within your environment, it can be easily configured with Azure AD.
The following articles describe ways you can manage access to applications once
* [Sharing accounts](../enterprise-users/users-sharing-accounts.md) ## Next steps
-For in-depth information, you can download Azure Active Directory deployment plans from [GitHub](../fundamentals/active-directory-deployment-plans.md). For gallery applications, you can download deployment plans for single sign-on, Conditional Access, and user provisioning through the [Azure portal](https://portal.azure.com).
+For in-depth information, you can download Azure Active Directory deployment plans from [GitHub](../fundamentals/active-directory-deployment-plans.md). For gallery applications, you can download deployment plans for single sign-on, Conditional Access, and user provisioning through the [Azure portal](https://portal.azure.com).
To download a deployment plan from the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **Enterprise Applications** | **Pick an App** | **Deployment Plan**.-
-Please provide feedback on deployment plans by taking the [Deployment plan survey](https://aka.ms/DeploymentPlanFeedback).
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Welcome to what's new in Azure Active Directory application management documenta
### New articles - [Configure SAML-based single sign-on](configure-saml-single-sign-on.md)-- [Using Azure AD for applications not listed in the app gallery](non-gallery-apps.md) - [Get It Now - add an app from the Azure Marketplace](get-it-now-azure-marketplace.md) - [Quickstart: Configure properties for an application in your Azure Active Directory (Azure AD) tenant](add-application-portal-configure.md) - [Quickstart: Set up single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant](add-application-portal-setup-sso.md)
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/api-server-authorized-ip-ranges.md
You need the Azure CLI version 2.0.76 or later installed and configured. Run `a
The API server Authorized IP ranges feature has the following limitations: - On clusters created after API server authorized IP address ranges moved out of preview in October 2019, API server authorized IP address ranges are only supported on the *Standard* SKU load balancer. Existing clusters with the *Basic* SKU load balancer and API server authorized IP address ranges configured will continue work as is but cannot be migrated to a *Standard* SKU load balancer. Those existing clusters will also continue to work if their Kubernetes version or control plane are upgraded. API server authorized IP address ranges are not supported for private clusters.-- This feature is not compatible with clusters that use [Public IP per Node node pools preview feature](use-multiple-node-pools.md#assign-a-public-ip-per-node-for-your-node-pools-preview).
+- This feature is not compatible with clusters that use [Public IP per Node](use-multiple-node-pools.md#assign-a-public-ip-per-node-for-your-node-pools).
## Overview of API server authorized IP ranges
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
Last updated 03/15/2021 -+ #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
Title: Use multiple node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) Previously updated : 04/08/2020 Last updated : 02/11/2021
A workload may require splitting a cluster's nodes into separate pools for logic
* If you expand your VNET after creating the cluster, you must update your cluster (perform any managed cluster operation; node pool operations don't count) before adding a subnet outside the original CIDR. AKS will now error out on the agent pool add, though it was originally allowed. If you don't know how to reconcile your cluster, file a support ticket. * Calico Network Policy is not supported. * Azure Network Policy is not supported.
-* Kube-proxy expects a single contiguous cidr and uses it this for three optmizations. See this [K.E.P.](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20191104-iptables-no-cluster-cidr.md ) and --cluster-cidr [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) for details. In azure cni your first node pool's subnet will be given to kube-proxy.
+* Kube-proxy expects a single contiguous CIDR and uses it for three optimizations. See this [K.E.P.](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/2450-Remove-knowledge-of-pod-cluster-CIDR-from-iptables-rules) and `--cluster-cidr` [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) for details. In Azure CNI, your first node pool's subnet will be given to kube-proxy.
To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool.
az deployment group create \
It may take a few minutes to update your AKS cluster depending on the node pool settings and operations you define in your Resource Manager template.
-## Assign a public IP per node for your node pools (preview)
+## Assign a public IP per node for your node pools
-> [!WARNING]
-> You must install the CLI preview extension 0.4.43 or greater to use the public IP per node feature.
+AKS nodes do not require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by using Node Public IP.
-AKS nodes do not require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by registering for a preview feature, Node Public IP (preview).
-
-To install and update the latest aks-preview extension, use the following Azure CLI commands:
-
-```azurecli
-az extension add --name aks-preview
-az extension update --name aks-preview
-az extension list
-```
-
-Register for the Node Public IP feature with the following Azure CLI command:
-
-```azurecli-interactive
-az feature register --name NodePublicIPPreview --namespace Microsoft.ContainerService
-```
-It may take several minutes for the feature to register. You can check the status with the following command:
-
-```azurecli-interactive
- az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/NodePublicIPPreview')].{Name:name,State:properties.state}"
-```
-
-After successful registration, create a new resource group.
+First, create a new resource group.
```azurecli-interactive az group create --name myResourceGroup2 --location eastus
For existing AKS clusters, you can also add a new node pool, and attach a public
az aks nodepool add -g MyResourceGroup2 --cluster-name MyManagedCluster -n nodepool2 --enable-node-public-ip ```
-> [!Important]
-> During preview, the Azure Instance Metadata Service doesn't currently support retrieval of public IP addresses for the standard tier VM SKU. Due to this limitation, you can't use kubectl commands to display the public IPs assigned to the nodes. However, the IPs are assigned and function as intended. The public IPs for your nodes are attached to the instances in your Virtual Machine Scale Set.
- You can locate the public IPs for your nodes in various ways:
-* Use the Azure CLI command [az vmss list-instance-public-ips][az-list-ips]
+* Use the Azure CLI command [az vmss list-instance-public-ips][az-list-ips].
* Use [PowerShell or Bash commands][vmss-commands]. * You can also view the public IPs in the Azure portal by viewing the instances in the Virtual Machine Scale Set.
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
<!-- INTERNAL LINKS --> [aks-windows]: windows-container-cli.md
-[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
-[az-aks-create]: /cli/azure/aks#az-aks-create
-[az-aks-get-upgrades]: /cli/azure/aks#az-aks-get-upgrades
-[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add
-[az-aks-nodepool-list]: /cli/azure/aks/nodepool#az-aks-nodepool-list
-[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update
-[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az-aks-nodepool-upgrade
-[az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az-aks-nodepool-scale
-[az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az-aks-nodepool-delete
-[az-extension-add]: /cli/azure/extension#az-extension-add
-[az-extension-update]: /cli/azure/extension#az-extension-update
-[az-group-create]: /cli/azure/group#az-group-create
-[az-group-delete]: /cli/azure/group#az-group-delete
-[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create
+[az-aks-get-credentials]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_get_credentials
+[az-aks-create]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_create
+[az-aks-get-upgrades]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_get_upgrades
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_add
+[az-aks-nodepool-list]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_list
+[az-aks-nodepool-update]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_update
+[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_upgrade
+[az-aks-nodepool-scale]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_scale
+[az-aks-nodepool-delete]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_delete
+[az-extension-add]: /cli/azure/extension?view=azure-cli-latest&preserve-view=true#az_extension_add
+[az-extension-update]: /cli/azure/extension?view=azure-cli-latest&preserve-view=true#az_extension_update
+[az-group-create]: /cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_create
+[az-group-delete]: /cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_delete
+[az-deployment-group-create]: /cli/azure/deployment/group?view=azure-cli-latest&preserve-view=true#az_deployment_group_create
[gpu-cluster]: gpu-cluster.md [install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[ip-limitations]: ../virtual-network/virtual-network-ip-addresses-overview-arm#standard [node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks [vmss-commands]: ../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine
-[az-list-ips]: /cli/azure/vmss.md#az-vmss-list-instance-public-ips
+[az-list-ips]: /cli/azure/vmss?view=azure-cli-latest&preserve-view=true#az_vmss_list_instance_public_ips
[reduce-latency-ppg]: reduce-latency-ppg.md
automation Automation Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-webhooks.md
Title: Start an Azure Automation runbook from a webhook
description: This article tells how to use a webhook to start a runbook in Azure Automation from an HTTP call. Previously updated : 06/24/2020 Last updated : 03/18/2021 # Start a runbook from a webhook
Use the following procedure to create a new webhook linked to a runbook in the A
4. Fill in the **Name** and **Expiration Date** fields for the webhook and specify if it should be enabled. See [Webhook properties](#webhook-properties) for more information about these properties. 5. Click the copy icon and press Ctrl+C to copy the URL of the webhook. Then record it in a safe place.
- > [!NOTE]
- > Once you create the webhook, you cannot retrieve the URL again.
+ > [!IMPORTANT]
+ > Once you create the webhook, you cannot retrieve the URL again. Make sure you copy and record it as described above.
![Webhook URL](media/automation-webhooks/copy-webhook-url.png)
Assuming the request is successful, the webhook response contains the job ID in
The client can't determine when the runbook job completes or its completion status from the webhook. It can find out this information using the job ID with another mechanism, such as [Windows PowerShell](/powershell/module/servicemanagement/azure.service/get-azureautomationjob) or the [Azure Automation API](/rest/api/automation/job).
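As an illustration, a minimal C# client might post to the webhook and print the response body, which contains the job ID as described above. The webhook URL below is only a placeholder for the URL you recorded when creating the webhook, and the request body is just an example of what your runbook might expect.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class WebhookClient
{
    public static async Task Main()
    {
        // Placeholder: use the webhook URL you copied when you created the webhook.
        const string webhookUri = "https://<region>.webhook.azure-automation.net/webhooks?token=<token>";

        using var client = new HttpClient();

        // Optional request body; the runbook can read it from its webhook data parameter.
        var body = new StringContent(
            "[{\"Message\":\"Test message from a webhook client\"}]",
            Encoding.UTF8,
            "application/json");

        HttpResponseMessage response = await client.PostAsync(webhookUri, body);

        // A successful request returns an Accepted status and a body that includes the job ID.
        Console.WriteLine($"Status code: {(int)response.StatusCode}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```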
+### Use a webhook from an ARM template
+
+Automation webhooks can also be invoked by [Azure Resource Manager (ARM) templates](/azure/azure-resource-manager/templates/overview). The ARM template issues a `POST` request and receives a return code just like any other client. See [Use a webhook](#use-a-webhook).
+
+ > [!NOTE]
+ > For security reasons, the URI is only returned the first time a template is deployed.
+
+This sample template creates a test environment and returns the URI for the webhook it creates.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "automationAccountName": {
+ "type": "String",
+ "metadata": {
+ "description": "Automation account name"
+ }
+ },
+ "webhookName": {
+ "type": "String",
+ "metadata": {
+ "description": "Webhook Name"
+ }
+ },
+ "runbookName": {
+ "type": "String",
+ "metadata": {
+ "description": "Runbook Name for which webhook will be created"
+ }
+ },
+ "WebhookExpiryTime": {
+ "type": "String",
+ "metadata": {
+ "description": "Webhook Expiry time"
+ }
+ },
+ "_artifactsLocation": {
+ "defaultValue": "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-automation/",
+ "type": "String",
+ "metadata": {
+ "description": "URI to artifacts location"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Automation/automationAccounts",
+ "apiVersion": "2020-01-13-preview",
+ "name": "[parameters('automationAccountName')]",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "sku": {
+ "name": "Free"
+ }
+ },
+ "resources": [
+ {
+ "type": "runbooks",
+ "apiVersion": "2018-06-30",
+ "name": "[parameters('runbookName')]",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[parameters('automationAccountName')]"
+ ],
+ "properties": {
+ "runbookType": "Python2",
+ "logProgress": "false",
+ "logVerbose": "false",
+ "description": "Sample Runbook",
+ "publishContentLink": {
+ "uri": "[uri(parameters('_artifactsLocation'), 'scripts/AzureAutomationTutorialPython2.py')]",
+ "version": "1.0.0.0"
+ }
+ }
+ },
+ {
+ "type": "webhooks",
+ "apiVersion": "2018-06-30",
+ "name": "[parameters('webhookName')]",
+ "dependsOn": [
+ "[parameters('automationAccountName')]",
+ "[parameters('runbookName')]"
+ ],
+ "properties": {
+ "isEnabled": true,
+ "expiryTime": "[parameters('WebhookExpiryTime')]",
+ "runbook": {
+ "name": "[parameters('runbookName')]"
+ }
+ }
+ }
+ ]
+ }
+ ],
+ "outputs": {
+ "webhookUri": {
+ "type": "String",
+ "value": "[reference(parameters('webhookName')).uri]"
+ }
+ }
+}
+```
+ ## Renew a webhook When a webhook is created, it has a validity time period of ten years, after which it automatically expires. Once a webhook has expired, you can't reactivate it. You can only remove and then recreate it.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
You'll find these extension packages under [Microsoft.Azure.Functions.Worker.Ext
## Start-up and configuration
-When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. You can much more easily inject dependencies and run middleware when running out-of-process.
+When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. When running out-of-process, you can much more easily add configurations, inject dependencies, and run your own middleware.
-The following code shows an example of a `HostBuilder` pipeline:
+The following code shows an example of a [HostBuilder] pipeline:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/FunctionApp/Program.cs" id="docsnippet_startup":::
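A minimal sketch of such a pipeline, assuming the Microsoft.Azure.Functions.Worker packages are referenced, might look like the following:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static async Task Main()
    {
        // ConfigureFunctionsWorkerDefaults wires up the defaults for the out-of-process worker.
        var host = new HostBuilder()
            .ConfigureFunctionsWorkerDefaults()
            .Build();

        await host.RunAsync();
    }
}
```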
-A `HostBuilder` is used to build and return a fully initialized `IHost` instance, which you run asynchronously to start your function app.
+A [HostBuilder] is used to build and return a fully initialized [IHost] instance, which you run asynchronously to start your function app.
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/FunctionApp/Program.cs" id="docsnippet_host_run"::: ### Configuration
-Having access to the host builder pipeline means that you can set any app-specific configurations during initialization. These configurations apply to your function app running in a separate process. To make changes to the functions host or trigger and binding configuration, you'll still need to use the [host.json file](functions-host-json.md).
+The [ConfigureFunctionsWorkerDefaults] method is used to add the settings required for the function app to run out-of-process, which includes the following functionality:
-<!--The following example shows how to add configuration `args`, which are read as command-line arguments:
-
- .ConfigureAppConfiguration(c =>
- {
- c.AddCommandLine(args);
- })
- :::
++ Default set of converters.
++ Set the default [JsonSerializerOptions] to ignore casing on property names.
++ Integrate with Azure Functions logging.
++ Output binding middleware and features.
++ Function execution middleware.
++ Default gRPC support.
+
-The `ConfigureAppConfiguration` method is used to configure the rest of the build process and application. This example also uses an [IConfigurationBuilder](/dotnet/api/microsoft.extensions.configuration.iconfigurationbuilder?view=dotnet-plat-ext-5.0&preserve-view=true), which makes it easier to add multiple configuration items. Because `ConfigureAppConfiguration` returns the same instance of [`IConfiguration`](/dotnet/api/microsoft.extensions.configuration.iconfiguration?view=dotnet-plat-ext-5.0&preserve-view=true), you can also just call it multiple times to add multiple configuration items.-->
-You can access the full set of configurations from both [`HostBuilderContext.Configuration`](/dotnet/api/microsoft.extensions.hosting.hostbuildercontext.configuration?view=dotnet-plat-ext-5.0&preserve-view=true) and [`IHost.Services`](/dotnet/api/microsoft.extensions.hosting.ihost.services?view=dotnet-plat-ext-5.0&preserve-view=true).
+Having access to the host builder pipeline means that you can also set any app-specific configurations during initialization. You can call the [ConfigureAppConfiguration] method on [HostBuilder] one or more times to add the configurations required by your function app. To learn more about app configuration, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0&preserve-view=true).
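A minimal sketch of adding configuration sources this way; the settings file name is hypothetical, and the `AddJsonFile` and `AddEnvironmentVariables` calls assume the corresponding Microsoft.Extensions.Configuration packages are referenced:

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureAppConfiguration(config =>
    {
        // Hypothetical app-specific settings file plus environment variables.
        config.AddJsonFile("appsettings.json", optional: true)
              .AddEnvironmentVariables();
    })
    .Build();

host.Run();
```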
-To learn more about configuration, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0&preserve-view=true).
+These configurations apply to your function app running in a separate process. To make changes to the functions host or trigger and binding configuration, you'll still need to use the [host.json file](functions-host-json.md).
### Dependency injection
-Dependency injection is simplified, compared to .NET class libraries. Rather than having to create a startup class to register services, you just have to call `ConfigureServices` on the host builder and use the extension methods on [`IServiceCollection`](/dotnet/api/microsoft.extensions.dependencyinjection.iservicecollection?view=dotnet-plat-ext-5.0&preserve-view=true) to inject specific services.
+Dependency injection is simplified, compared to .NET class libraries. Rather than having to create a startup class to register services, you just have to call [ConfigureServices] on the host builder and use the extension methods on [IServiceCollection] to inject specific services.
The following example injects a singleton service dependency:
The following example injects a singleton service dependency:
To learn more, see [Dependency injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-5.0&preserve-view=true).
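A minimal sketch of such a singleton registration; `IGreetingService` and `GreetingService` are hypothetical names used only for illustration:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Hypothetical service contract and implementation.
public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}

public class Program
{
    public static void Main()
    {
        var host = new HostBuilder()
            .ConfigureFunctionsWorkerDefaults()
            .ConfigureServices(services =>
            {
                // Registered as a singleton; functions can take IGreetingService as a constructor dependency.
                services.AddSingleton<IGreetingService, GreetingService>();
            })
            .Build();

        host.Run();
    }
}
```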
-<!--### Middleware
+### Middleware
.NET isolated also supports middleware registration, again by using a model similar to what exists in ASP.NET. This model gives you the ability to inject logic into the invocation pipeline, before and after functions execute.
-While the full middleware registration set of APIs is not yet exposed, we do support middleware registration and have added an example to the sample application under the Middleware folder.
+The [ConfigureFunctionsWorkerDefaults] extension method has an overload that lets you register your own middleware, as you can see in the following example.
+
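A minimal sketch of that overload in use; the middleware class is hypothetical, and the sketch assumes the `IFunctionsWorkerMiddleware` and `UseMiddleware<T>` APIs from the Microsoft.Azure.Functions.Worker packages:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// Hypothetical middleware that logs before and after every function invocation.
public class InvocationLoggingMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        var logger = context.GetLogger("InvocationLoggingMiddleware");
        logger.LogInformation("Invoking {name}", context.FunctionDefinition.Name);

        await next(context);

        logger.LogInformation("Finished {name}", context.FunctionDefinition.Name);
    }
}

public class Program
{
    public static void Main()
    {
        var host = new HostBuilder()
            .ConfigureFunctionsWorkerDefaults(worker =>
            {
                // Register the middleware with the worker application builder.
                worker.UseMiddleware<InvocationLoggingMiddleware>();
            })
            .Build();

        host.Run();
    }
}
```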
+For a more complete example of using custom middleware in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
## Execution context
-.NET isolated passes a `FunctionContext` object to your function methods. This object lets you get an [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) instance to write to the logs by calling the `GetLogger` method and supplying a `categoryName` string. To learn more, see [Logging](#logging).
+.NET isolated passes a [FunctionContext] object to your function methods. This object lets you get an [ILogger] instance to write to the logs by calling the [GetLogger] method and supplying a `categoryName` string. To learn more, see [Logging](#logging).
## Bindings
-Bindings are defined by using attributes on methods, parameters, and return types. A function method is a method with a `Function` and a trigger attribute applied to an input parameter, as shown in the following example:
+Bindings are defined by using attributes on methods, parameters, and return types. A function method is a method with a `Function` attribute and a trigger attribute applied to an input parameter, as shown in the following example:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_trigger" :::
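A minimal sketch of such a queue-triggered function; the queue name and connection setting are placeholders, and the sketch assumes the Storage queues binding extension package is referenced:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class QueueFunction
{
    // Runs whenever a message arrives on the (placeholder) "myqueue-items" queue.
    [Function("QueueFunction")]
    public static void Run(
        [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string queueItem,
        FunctionContext context)
    {
        var logger = context.GetLogger("QueueFunction");
        logger.LogInformation("Queue message received: {message}", queueItem);
    }
}
```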
The trigger attribute specifies the trigger type and binds input data to a metho
The `Function` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name.
-Because .NET isolated projects run in a separate worker process, bindings can't take advantage of rich binding classes, such as `ICollector<T>`, `IAsyncCollector<T>`, and `CloudBlockBlob`. There's also no direct support for types inherited from underlying service SDKs, such as [DocumentClient](/dotnet/api/microsoft.azure.documents.client.documentclient) and [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage). Instead, bindings rely on strings, arrays, and serializable types, such as plain old class objects (POCOs).
+Because .NET isolated projects run in a separate worker process, bindings can't take advantage of rich binding classes, such as `ICollector<T>`, `IAsyncCollector<T>`, and `CloudBlockBlob`. There's also no direct support for types inherited from underlying service SDKs, such as [DocumentClient] and [BrokeredMessage]. Instead, bindings rely on strings, arrays, and serializable types, such as plain old class objects (POCOs).
+
+For HTTP triggers, you must use [HttpRequestData] and [HttpResponseData] to access the request and response data. This is because you don't have access to the original HTTP request and response objects when running out-of-process.
-For HTTP triggers, you must use `HttpRequestData` and `HttpResponseData` to access the request and response data. This is because you don't have access to the original HTTP request and response objects when running out-of-process.
+For a complete set of reference samples for using triggers and bindings when running out-of-process, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
### Input bindings
To write to an output binding, you must apply an output binding attribute to the
The data written to an output binding is always the return value of the function. If you need to write to more than one output binding, you must create a custom return type. This return type must have the output binding attribute applied to one or more properties of the class. The following example writes to both an HTTP response and a queue output binding: ### HTTP trigger
-HTTP triggers translates the incoming HTTP request message into an `HttpRequestData` object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optional a message `Body`. This object is a representation of the HTTP request object and not the request itself.
+HTTP triggers translate the incoming HTTP request message into an [HttpRequestData] object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optionally a message `Body`. This object is a representation of the HTTP request object and not the request itself.
-Likewise, the function returns an `HttpReponseData` object, which provides data used to create the HTTP response, including message `StatusCode`, `Headers`, and optionally a message `Body`.
+Likewise, the function returns an [HttpResponseData] object, which provides data used to create the HTTP response, including message `StatusCode`, `Headers`, and optionally a message `Body`.
The following code is an HTTP trigger
The following code is an HTTP trigger
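A minimal sketch of such a trigger; the function name and authorization level are illustrative:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class HttpFunction
{
    [Function("HttpFunction")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
    {
        // Build the response from the request; HttpResponseData carries status, headers, and body.
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
        response.WriteString("Welcome to Azure Functions!");

        return response;
    }
}
```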
## Logging
-In .NET isolated, you can write to logs by using an [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) instance obtained from a `FunctionContext` object passed to your function. Call the `GetLogger` method, passing a string value that is the name for the category in which the logs are written. The category is usually the name of the specific function from which the logs are written. To learn more about categories, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
+In .NET isolated, you can write to logs by using an [ILogger] instance obtained from a [FunctionContext] object passed to your function. Call the [GetLogger] method, passing a string value that is the name for the category in which the logs are written. The category is usually the name of the specific function from which the logs are written. To learn more about categories, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
-The following example shows how to get an `ILogger` and write logs inside a function:
+The following example shows how to get an [ILogger] and write logs inside a function:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Http/HttpFunction.cs" id="docsnippet_logging" :::
-Use various methods of `ILogger` to write various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
+Use various methods of [ILogger] to write various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
-An [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) is also provided when using [dependency injection](#dependency-injection).
+An [ILogger] is also provided when using [dependency injection](#dependency-injection).
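As a sketch of that combination, a function class can take an `ILogger<T>` through its constructor when the class is created by the service container; the class and function names here are hypothetical:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

public class OrderFunctions
{
    private readonly ILogger<OrderFunctions> _logger;

    // The logger is supplied by dependency injection.
    public OrderFunctions(ILogger<OrderFunctions> logger)
    {
        _logger = logger;
    }

    [Function("GetOrders")]
    public HttpResponseData GetOrders(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
    {
        _logger.LogInformation("GetOrders was called.");
        return req.CreateResponse(HttpStatusCode.OK);
    }
}
```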
## Differences with .NET class library functions
This section describes the current state of the functional and behavioral differ
| - | - | - | | .NET versions | LTS (.NET Core 3.1) | Current (.NET 5.0) | | Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
-| Binding extension packages | [`Microsoft.Azure.WebJobs.Extensions.*`](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | Under [`Microsoft.Azure.Functions.Worker.Extensions.*`](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
-| Logging | [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) passed to the function | [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) obtained from `FunctionContext` |
+| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | Under [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
+| Logging | [ILogger] passed to the function | [ILogger] obtained from [FunctionContext] |
| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | Not supported | | Output bindings | Out parameters | Return values |
-| Output binding types | `IAsyncCollector`, [DocumentClient](/dotnet/api/microsoft.azure.documents.client.documentclient?view=azure-dotnet&preserve-view=true), [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage?view=azure-dotnet&preserve-view=true), and other client-specific types | Simple types, JSON serializable types, and arrays. |
+| Output binding types | `IAsyncCollector`, [DocumentClient], [BrokeredMessage], and other client-specific types | Simple types, JSON serializable types, and arrays. |
| Multiple output bindings | Supported | [Supported](#multiple-output-bindings) |
-| HTTP trigger | [`HttpRequest`](/dotnet/api/microsoft.aspnetcore.http.httprequest?view=aspnetcore-5.0&preserve-view=true)/[`ObjectResult`](/dotnet/api/microsoft.aspnetcore.mvc.objectresult?view=aspnetcore-5.0&preserve-view=true) | `HttpRequestData`/`HttpResponseData` |
+| HTTP trigger | [HttpRequest]/[ObjectResult] | [HttpRequestData]/[HttpResponseData] |
| Durable Functions | [Supported](durable/durable-functions-overview.md) | Not supported | | Imperative bindings | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported | | function.json artifact | Generated | Not generated |
For information on workarounds to known issues running .NET isolated process func
## Next steps + [Learn more about triggers and bindings](functions-triggers-bindings.md)
-+ [Learn more about best practices for Azure Functions](functions-best-practices.md)
++ [Learn more about best practices for Azure Functions](functions-best-practices.md)++
+[HostBuilder]: /dotnet/api/microsoft.extensions.hosting.hostbuilder?view=dotnet-plat-ext-5.0&preserve-view=true
+[IHost]: /dotnet/api/microsoft.extensions.hosting.ihost?view=dotnet-plat-ext-5.0&preserve-view=true
+[ConfigureFunctionsWorkerDefaults]: /dotnet/api/microsoft.extensions.hosting.workerhostbuilderextensions.configurefunctionsworkerdefaults?view=azure-dotnet&preserve-view=true#Microsoft_Extensions_Hosting_WorkerHostBuilderExtensions_ConfigureFunctionsWorkerDefaults_Microsoft_Extensions_Hosting_IHostBuilder_
+[ConfigureAppConfiguration]: /dotnet/api/microsoft.extensions.hosting.hostbuilder.configureappconfiguration?view=dotnet-plat-ext-5.0&preserve-view=true
+[IServiceCollection]: /dotnet/api/microsoft.extensions.dependencyinjection.iservicecollection?view=dotnet-plat-ext-5.0&preserve-view=true
+[ConfigureServices]: /dotnet/api/microsoft.extensions.hosting.hostbuilder.configureservices?view=dotnet-plat-ext-5.0&preserve-view=true
+[FunctionContext]: /dotnet/api/microsoft.azure.functions.worker.functioncontext?view=azure-dotnet&preserve-view=true
+[ILogger]: /dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true
+[GetLogger]: /dotnet/api/microsoft.azure.functions.worker.functioncontextloggerextensions.getlogger?view=azure-dotnet&preserve-view=true
+[DocumentClient]: /dotnet/api/microsoft.azure.documents.client.documentclient
+[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
+[HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true
+[HttpResponseData]: /dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata?view=azure-dotnet&preserve-view=true
+[HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest?view=aspnetcore-5.0&preserve-view=true
+[ObjectResult]: /dotnet/api/microsoft.aspnetcore.mvc.objectresult?view=aspnetcore-5.0&preserve-view=true
+[JsonSerializerOptions]: /dotnet/api/system.text.json.jsonserializeroptions?view=net-5.0&preserve-view=true
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
Specifies the maximum number of language worker processes, with a default value
||| |FUNCTIONS\_WORKER\_PROCESS\_COUNT|2|
-## PYTHON\_THREADPOOL\_THREAD\_COUNT
-
-Specifies the maximum number of threads that a Python language worker would use to execute function invocations, with a default value of `1` for Python version `3.8` and below. For Python version `3.9` and above, the value is set to `None`. Note that this setting does not guarantee the number of threads that would be set during executions. The setting allows Python to expand the number of threads to the specified value. The setting only applies to Python functions apps. Additionally, the setting applies to synchronous functions invocation and not for coroutines.
-
-|Key|Sample value|Max value|
-||||
-|PYTHON\_THREADPOOL\_THREAD\_COUNT|2|32|
-- ## FUNCTIONS\_WORKER\_RUNTIME
-The language worker runtime to load in the function app. This will correspond to the language being used in your application (for example, "dotnet"). For functions in multiple languages you will need to publish them to multiple apps, each with a corresponding worker runtime value. Valid values are `dotnet` (C#/F#), `node` (JavaScript/TypeScript), `java` (Java), `powershell` (PowerShell), and `python` (Python).
+The language worker runtime to load in the function app. This corresponds to the language being used in your application (for example, `dotnet`). Starting with version 2.x of the Azure Functions runtime, a given function app can only support a single language.
|Key|Sample value| |||
-|FUNCTIONS\_WORKER\_RUNTIME|dotnet|
+|FUNCTIONS\_WORKER\_RUNTIME|node|
+
+Valid values:
+
+| Value | Language |
+|||
+| `dotnet` | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# (script)](functions-reference-csharp.md) |
+| `dotnet-isolated` | [C# (isolated process)](dotnet-isolated-process-guide.md) |
+| `java` | [Java](functions-reference-java.md) |
+| `node` | [JavaScript](functions-reference-node.md)<br/>[TypeScript](functions-reference-node.md#typescript) |
+| `powershell` | [PowerShell](functions-reference-powershell.md) |
+| `python` | [Python](functions-reference-python.md) |
## PIP\_EXTRA\_INDEX\_URL
The value for this setting indicates a custom package index URL for Python apps.
To learn more, see [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
+## PYTHON\_THREADPOOL\_THREAD\_COUNT
+
+Specifies the maximum number of threads that a Python language worker uses to execute function invocations, with a default value of `1` for Python version `3.8` and below. For Python version `3.9` and above, the value is set to `None`. This setting doesn't guarantee the number of threads used during execution; it allows Python to expand the thread count up to the specified value. The setting applies only to Python function apps, and only to synchronous function invocations, not to coroutines.
+
+|Key|Sample value|Max value|
+||||
+|PYTHON\_THREADPOOL\_THREAD\_COUNT|2|32|
+ ## SCALE\_CONTROLLER\_LOGGING\_ENABLED _This setting is currently in preview._
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-wwps.md
Government requests for customer data must comply with applicable laws.
Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court.
-Our [Law Enforcement Request Report](https://www.microsoft.com/about/corporate-responsibility/lerr) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data.
+Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data.
### CLOUD Act provisions
This section addresses common customer questions related to Azure public, privat
- **Data storage for non-regional - **Sovereign cloud deployment:** Why doesn't Microsoft deploy a sovereign, physically isolated cloud instance in every country that requests it? **Answer:** Microsoft is actively pursuing sovereign cloud deployments where a business case can be made with governments across the world. However, physical isolation or "air gapping", as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value proposition of the cloud, rapid feature growth, resiliency, and cost-effective operation, break down when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra sovereign cloud or fragmentation within a sovereign cloud. Whereas a sovereign cloud might prove to be the right solution for certain customers, it is not the only option available to worldwide public sector customers. - **Sovereign cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** Government customers can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by the customer's own security-cleared, in-country personnel. Customers can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes they use in Azure. With Azure Stack Hub, customers have sole control of their data, including storage, processing, transmission, and remote access.-- **Local jurisdiction:** Is Microsoft subject to local country jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court. Our [Law Enforcement Request Report](https://www.microsoft.com/about/corporate-responsibility/lerr) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States.
+- **Local jurisdiction:** Is Microsoft subject to local country jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court. Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States.
- **Autarky:** Can Microsoft cloud operations be separated from the Internet or the rest of Microsoft cloud and connected solely to local government network? Are operations possible without external connections to a third party? **Answer:** Yes, depending on the cloud deployment model. - **Public Cloud:** Azure regional datacenters can be connected to local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft is not possible in public cloud. - **Private Cloud:** With Azure Stack Hub, customers have full control over network connectivity and can operate Azure Stack Hub in [fully disconnected mode](/azure-stack/operator/azure-stack-disconnected-deployment).
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/data-retention-privacy.md
You can [switch off some of the data by editing ApplicationInsights.config][conf
> [!NOTE] > Client IP is used to infer geographic location, but by default IP data is no longer stored and all zeroes are written to the associated field. To understand more about personal data handling we recommend this [article](../logs/personal-data-mgmt.md#application-data). If you need to store IP address data our [IP address collection article](./ip-collection.md) will walk you through your options.
+## Can I modify or update data after it has been collected?
+
+No, data is read-only and can only be deleted via the purge functionality. To learn more, visit [Guidance for personal data stored in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#delete).
+ ## Credits This product includes GeoLite2 data created by MaxMind, available from [https://www.maxmind.com](https://www.maxmind.com).
azure-monitor Personal Data Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/personal-data-mgmt.md
Last updated 05/18/2018
Log Analytics is a data store where personal data is likely to be found. Application Insights stores its data in a Log Analytics partition. This article will discuss where in Log Analytics and Application Insights such data is typically found, as well as the capabilities available to you to handle such data. > [!NOTE]
-> For the purposes of this article _log data_ refers to data sent to a Log Analytics workspace, while _application data_ refers to data collected by Application Insights.
+> For the purposes of this article _log data_ refers to data sent to a Log Analytics workspace, while _application data_ refers to data collected by Application Insights. If you are using a workspace-based Application Insights resource, the information on log data applies; if you are using a classic Application Insights resource, the information on application data applies.
[!INCLUDE [gdpr-dsr-and-stp-note](../../../includes/gdpr-dsr-and-stp-note.md)] + ## Strategy for personal data handling While it will be up to you and your company to ultimately determine the strategy with which you will handle your private data (if at all), the following are some possible approaches. They are listed in order of preference from a technical point of view from most to least preferable: * Where possible, stop collection of, obfuscate, anonymize, or otherwise adjust the data being collected to exclude it from being considered "private". This is _by far_ the preferred approach, saving you the need to create a very costly and impactful data handling strategy. * Where not possible, attempt to normalize the data to reduce the impact on the data platform and performance. For example, instead of logging an explicit User ID, create a lookup data that will correlate the username and their details to an internal ID that can then be logged elsewhere. That way, should one of your users ask you to delete their personal information, it is possible that only deleting the row in the lookup table corresponding to the user will be sufficient.
-* Finally, if private data must be collected, build a process around the purge API path and the existing query API path to meet any obligations you may have around exporting and deleting any private data associated with a user.
+* Finally, if private data must be collected, build a process around the purge API path and the existing query API path to meet any obligations you may have around exporting and deleting any private data associated with a user.
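As a rough illustration of such a process, the sketch below first uses the query path to locate rows that mention a user, then submits a purge request for the matching rows. The workspace identifiers, table name, and column name are placeholders, and `az monitor log-analytics query` requires the log-analytics CLI extension.

```azurecli
# Locate rows that reference the user (query path). Placeholders throughout.
az monitor log-analytics query \
  --workspace "<workspace-customer-id>" \
  --analytics-query "search 'user@contoso.com' | take 10"

# Submit an asynchronous purge request for the matching rows (purge path).
az rest --method post \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/purge?api-version=2020-08-01" \
  --body '{"table": "MyApp_CL", "filters": [{"column": "UserId_s", "operator": "==", "value": "user@contoso.com"}]}'
```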
## Where to look for private data in Log Analytics?
azure-portal Azure Portal Add Remove Sort Favorites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-add-remove-sort-favorites.md
Title: Add, remove, and arrange favorites in Azure portal description: Learn how to add or remove items from the favorites list and rearrange the order of items keywords: favorites,portal Previously updated : 12/20/2019 Last updated : 03/16/2021
azure-portal Azure Portal Dashboard Share Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-dashboard-share-access.md
Title: Share Azure portal dashboards by using Azure role-based access control
description: This article explains how to share a dashboard in the Azure portal by using Azure role-based access control. ms.assetid: 8908a6ce-ae0c-4f60-a0c9-b3acfe823365 Previously updated : 03/23/2020 Last updated : 03/19/2021 # Share Azure dashboards by using Azure role-based access control
-After configuring a dashboard, you can publish it and share it with other users in your organization. You allow others to view your dashboard by using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). Assign a user or group of users to a role. That role defines whether those users can view or modify the published dashboard.
+After configuring a dashboard, you can publish it and share it with other users in your organization. You allow others to view your dashboard by using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). Assign a single user or a group of users to a role. That role defines whether those users can view or modify the published dashboard.
-All published dashboards are implemented as Azure resources. They exist as manageable items within your subscription and are contained in a resource group. From an access control perspective, dashboards are no different than other resources, such as a virtual machine or a storage account.
-
-> [!TIP]
-> Individual tiles on the dashboard enforce their own access control requirements based on the resources they display. You can share a dashboard broadly while protecting the data on individual tiles.
->
->
+All published dashboards are implemented as Azure resources. They exist as manageable items within your subscription and are contained in a resource group. From an access control perspective, dashboards are no different from other resources, such as a virtual machine or a storage account. Individual tiles on the dashboard enforce their own access control requirements based on the resources they display. You can share a dashboard broadly while protecting the data on individual tiles.
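Because a published dashboard is a regular Azure resource, you can confirm what exists in a subscription from the command line. A quick sketch, assuming dashboards were published to the default **dashboards** resource group:

```azurecli
# List published dashboards (Microsoft.Portal/dashboards resources) in the default resource group.
az resource list \
  --resource-group dashboards \
  --resource-type Microsoft.Portal/dashboards \
  --output table
```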
## Understanding access control for dashboards
The permissions you assign inherit from the subscription down to the resource. T
Let's say you have an Azure subscription and various members of your team have been assigned the roles of *owner*, *contributor*, or *reader* for the subscription. Users who are owners or contributors can list, view, create, modify, or delete dashboards within the subscription. Users who are readers can list and view dashboards, but can't modify or delete them. Users with reader access can make local edits to a published dashboard, such as when troubleshooting an issue, but they can't publish those changes back to the server. They can make a private copy of the dashboard for themselves.
-You could also assign permissions to the resource group that contains several dashboards or to an individual dashboard. For example, you may decide that a group of users should have limited permissions across the subscription but greater access to a particular dashboard. Assign those users to a role for that dashboard.
+You could assign permissions to the resource group that contains several dashboards or to an individual dashboard. For example, you may decide that a group of users should have limited permissions across the subscription but greater access to a particular dashboard. Assign those users to a role for that dashboard.
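A role assignment scoped to a single dashboard can also be created from the command line. The sketch below grants a group read-only access to one published dashboard; the subscription ID, resource group, dashboard name, and group object ID are placeholders:

```azurecli
# Grant a group Reader access to one published dashboard (all identifiers are placeholders).
az role assignment create \
  --assignee "<group-object-id>" \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/dashboards/providers/Microsoft.Portal/dashboards/<dashboard-name>"
```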
-## Publish dashboard
+## Publish a dashboard
Let's suppose you configure a dashboard that you want to share with a group of users in your subscription. The following steps show how to share a dashboard to a group called Storage Managers. You can name your group whatever you like. For more information, see [Managing groups in Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
Your dashboard is now published. If the permissions inherited from the subscript
You can assign a group of users to a role for that dashboard.
-1. After publishing the dashboard, select the **Share** or **Unshare** option to access **Sharing + access control**.
-
-1. In **Sharing + access control**, select **Manage users**.
-
- ![manage users for a dashboard](./media/azure-portal-dashboard-share-access/manage-users-for-access-control.png)
+1. After publishing the dashboard, select **Manage sharing**.
-1. Select **Role assignments** to see existing users that are already assigned a role for this dashboard.
+1. In **Access Control**, select **Role assignments** to see the users and groups already assigned a role for this dashboard.
1. To add a new user or group, select **Add** then **Add role assignment**. ![add a user for access to the dashboard](./media/azure-portal-dashboard-share-access/manage-users-existing-users.png)
-1. Select the role that represents the permissions to grant. For this example, select **Contributor**.
+1. Select the role that represents the permissions to grant, such as **Contributor**.
1. Select the user or group to assign to the role. If you don't see the user or group you're looking for in the list, use the search box. Your list of available groups depends on the groups you've created in Active Directory.
azure-portal Azure Portal Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-dashboards.md
Title: Create and share dashboards in the Azure portal
-description: This article describes how to create, customize, publish, and share dashboards in the Azure portal.
+ Title: Create a dashboard in the Azure portal
+description: This article describes how to create and customize a dashboard in the Azure portal.
ms.assetid: ff422f36-47d2-409b-8a19-02e24b03ffe7 Previously updated : 03/23/2020 Last updated : 03/16/2021
-# Create and share dashboards in the Azure portal
+# Create a dashboard in the Azure portal
-Dashboards are a focused and organized view of your cloud resources in the Azure portal. Use dashboards as a workspace where you can quickly launch tasks for day-to-day operations and monitor resources. Build custom dashboards based on projects, tasks, or user roles, for example.
+Dashboards are a focused and organized view of your cloud resources in the Azure portal. Use dashboards as a workspace where you can monitor resources and quickly launch tasks for day-to-day operations. Build custom dashboards based on projects, tasks, or user roles, for example.
-The Azure portal provides a default dashboard as a starting point. You can edit the default dashboard. Create and customize additional dashboards, and publish and share dashboards to make them available to other users. This article describes how to create a new dashboard, customize the interface, and publish and share dashboards.
+The Azure portal provides a default dashboard as a starting point. You can edit the default dashboard and create and customize additional dashboards. This article describes how to create a new dashboard and customize it. For information on sharing dashboards, see [Share Azure dashboards by using Azure role-based access control](azure-portal-dashboard-share-access.md).
## Create a new dashboard
-In this example, we create a new, private dashboard and assign a name. Follow these steps to get started:
+In this example, we create a new private dashboard and assign a name. Follow these steps to get started:
1. Sign in to the [Azure portal](https://portal.azure.com).
In this example, we create a new, private dashboard and assign a name. Follow th
![Open the dashboard](./media/azure-portal-dashboards/portal-menu-dashboard.png)
-1. Select **New dashboard**.
+1. Select **New dashboard** then **Blank dashboard**.
![Screenshot of new dashboard](./media/azure-portal-dashboards/create-new-dashboard.png) This action opens the **Tile Gallery**, from which you'll select tiles, and an empty grid where you'll arrange the tiles.
+1. Select the **My Dashboard** text in the dashboard label and enter a name that will help you easily identify the custom dashboard.
+ ![Screenshot of tile gallery and empty grid](./media/azure-portal-dashboards/dashboard-name.png)
-1. Select the **My Dashboard** text in the dashboard label and enter a name that will help you easily identify the custom dashboard.
+1. In the page header select **Done customizing** to exit edit mode, then select **Save**.
-1. Select **Done customizing** in the page header to exit edit mode.
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-save.png" alt-text="Screenshot of dashboard save process":::
The dashboard view now shows your new dashboard. Select the arrow next to the dashboard name to see dashboards available to you. The list might include dashboards that other users have created and shared.
The dashboard view now shows your new dashboard. Select the arrow next to the da
Now, let's edit the dashboard to add, resize, and arrange tiles that represent your Azure resources.
-### Add tiles from the dashboard
+### Add tiles from the tile gallery
To add tiles to a dashboard, follow these steps:
To add tiles to a dashboard, follow these steps:
1. Browse the **Tile Gallery** or use the search field to find the tile you want.
-1. Select **Add** to add the tile to the dashboard with a default size and location. Or, drag the tile to the grid and place it where you want.
+1. Select **Add** to add the tile to the dashboard with a default size and location. Or, drag the tile to the grid and place it where you want. Add any tiles you want, but here are a couple of ideas:
+
+ - Add **All resources** to see any resources you've already created.
-> [!TIP]
-> If you work with more than one organization, add the **Organization identity** tile to your dashboard to clearly show which organization the resources belong to.
+ - If you work with more than one organization, add the **Organization identity** tile to your dashboard to clearly show which organization the resources belong to.
+
+1. In the page header select **Save**.
### Add tiles from a resource page
To change the size of a tile or to rearrange the tiles on a dashboard, follow th
### Additional tile configuration
-Some tiles might require more configuration to show the information you want. For example, the **Metrics chart** tile has to be set up to display a metric from **Azure Monitor**. You can also customize tile data to override the dashboard's default time settings.
+Some tiles might require more configuration to show the information you want. For example, the **Metrics chart** tile has to be set up to display a metric from Azure Monitor. You can also customize tile data to override the dashboard's default time settings.
-Any tile that needs to be set up displays a **Configure tile** banner until you customize the tile. To customize the tile:
+Any tile that needs to be set up displays a banner until you customize the tile. For the **Metrics chart**, the banner is **Edit in Metrics**. To customize the tile:
-1. Select **Done customizing** in the page header to exit edit mode.
+1. In the page header select **Save** to exit edit mode.
1. Select the banner, then do the required setup.
Any tile that needs to be set up displays a **Configure tile** banner until you
Data on the dashboard automatically shows activity for the past 24 hours. To show a different time span for just this tile, follow these steps:
-1. Select **Customize tile data** from the context menu or the ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) filter from the upper left corner of the tile.
+1. Select **Customize tile data** from the context menu or from the ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) filter in the upper left corner of the tile.
![Screenshot of tile context menu](./media/azure-portal-dashboards/dashboard-customize-tile-data.png)
To permanently delete a private or shared dashboard, follow these steps:
![Screenshot of delete confirmation](./media/azure-portal-dashboards/dashboard-delete-dash.png)
+## Recover a deleted dashboard
+
+If you're in the global Azure cloud and you delete a _published_ dashboard in the Azure portal, you can recover that dashboard within 14 days of deleting it. For more information, see [Recover a deleted dashboard in the Azure portal](recover-shared-deleted-dashboard.md).
+ ## Next steps * [Share Azure dashboards by using Azure role-based access control](azure-portal-dashboard-share-access.md)
azure-portal Azure Portal Markdown Tile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-markdown-tile.md
Title: Use a custom markdown tile on Azure dashboards description: Learn how to add a markdown tile to an Azure dashboard to display static content Previously updated : 01/08/2020 Last updated : 03/19/2021
You can add a markdown tile to your Azure dashboards to display custom, static c
![Screenshot showing portal sidebar](./media/azure-portal-markdown-tile/azure-portal-nav.png)
-1. If you've created any custom dashboards, in the dashboard view, use the drop-down to select the dashboard where the custom markdown tile should appear. Select the edit icon to open the **Tile Gallery**.
+1. In the dashboard view, select the dashboard where the custom markdown tile should appear, then select **Edit**.
![Screenshot showing dashboard edit view](./media/azure-portal-markdown-tile/azure-portal-dashboard-edit.png)
You can add a markdown tile to your Azure dashboards to display custom, static c
You can use any combination of plain text, Markdown syntax, and HTML content on the markdown tile. The Azure portal uses an open-source library called _marked_ to transform your content into HTML that is shown on the tile. The HTML produced by _marked_ is pre-processed by the portal before it's rendered. This step helps make sure that your customization won't affect the security or layout of the portal. During that pre-processing, any part of the HTML that poses a potential threat is removed. The following types of content aren't allowed by the portal:
-* JavaScript – `<script>` tags and inline JavaScript evaluations will be removed.
-* iframes - `<iframe>` tags will be removed.
-* Style - `<style>` tags will be removed. Inline style attributes on HTML elements aren't officially supported. You may find that some inline style elements work for you, but if they interfere with the layout of the portal, they could stop working at any time. The Markdown tile is intended for basic, static content that uses the default styles of the portal.
+* JavaScript – `<script>` tags and inline JavaScript evaluations are removed.
+* iframes - `<iframe>` tags are removed.
+* Style - `<style>` tags are removed. Inline style attributes on HTML elements aren't officially supported. You may find that some inline style elements work for you, but if they interfere with the layout of the portal, they could stop working at any time. The Markdown tile is intended for basic, static content that uses the default styles of the portal.
## Next steps
azure-portal Azure Portal Video Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-video-series.md
Title: Azure portal how-to video series description: Find video demos for how to work with Azure services in the portal. View and link directly to the latest how-to videos. keywords: Previously updated : 10/05/2020 Last updated : 03/16/2021
The Azure portal how-to video series showcases how to work with Azure services i
## Featured video
-In this featured video, we show you how to use Azure Cost Management views.
+In this featured video, we show you how to build tabs and alerts in Azure workbooks.
-> [!VIDEO https://www.youtube.com/embed/VRJA5bn2VH0]
+> [!VIDEO https://www.youtube.com/embed/3XY3lYgrRvA]
-[How to use Azure Cost Management views](https://www.youtube.com/watch?v=VRJA5bn2VH0)
+[How to build tabs and alerts in Azure workbooks](https://www.youtube.com/watch?v=3XY3lYgrRvA)
Catch up on these recent videos you may have missed:
-| [How to use pills to filter in the Azure portal](https://www.youtube.com/watch?v=XyKh_3NxUlM) | [How to get a visualization view of your resources](https://www.youtube.com/watch?v=wudqkkJd5E4) | [How to pin content to your Azure portal dashboard](https://www.youtube.com/watch?v=eyOJkhYItSg) |
+| [How to easily manage your virtual machine](https://www.youtube.com/watch?v=vQClJHt2ulQ) | [How to use pills to filter in the Azure portal](https://www.youtube.com/watch?v=XyKh_3NxUlM) | [How to get a visualization view of your resources](https://www.youtube.com/watch?v=wudqkkJd5E4) |
| | | |
-| [![Image of YouTube video about how to use pills to filter in the Azure portal](https://i.ytimg.com/vi/XyKh_3NxUlM/hqdefault.jpg)](https://www.youtube.com/watch?XyKh_3NxUlM) | [![Image of YouTube video about how to get a visualization view of your resources](https://i.ytimg.com/vi/wudqkkJd5E4/hqdefault.jpg)](http://www.youtube.com/watch?v=wudqkkJd5E4) | [![Image of YouTube video about how to pin content to your Azure portal dashboard](https://i.ytimg.com/vi/eyOJkhYItSg/hqdefault.jpg)](http://www.youtube.com/watch?v=eyOJkhYItSg) |
+| [![Image of YouTube video about how to easily manage your virtual machine](https://i.ytimg.com/vi/vQClJHt2ulQ/hqdefault.jpg)](http://www.youtube.com/watch?v=vQClJHt2ulQ) | [![Image of YouTube video about how to use pills to filter in the Azure portal](https://i.ytimg.com/vi/XyKh_3NxUlM/hqdefault.jpg)](https://www.youtube.com/watch?v=XyKh_3NxUlM) | [![Image of YouTube video about how to get a visualization view of your resources](https://i.ytimg.com/vi/wudqkkJd5E4/hqdefault.jpg)](http://www.youtube.com/watch?v=wudqkkJd5E4) |
## Video playlist
azure-portal Manage Filter Resource Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/manage-filter-resource-views.md
Title: View and filter Azure resource information description: Filter information and use different views to better understand your Azure resources. Previously updated : 09/11/2020 Last updated : 03/16/2021 # View and filter Azure resource information
To delete a view:
1. Select **Manage view** then **Browse all views**.
-1. In the **Saved views for "All resources"** pane, select the view then select the **Delete** icon ![Delete view icon](media/manage-filter-resource-views/icon-delete.png).
+1. In the **Saved views** pane, select the view then select the **Delete** icon ![Delete view icon](media/manage-filter-resource-views/icon-delete.png).
## Export information from a view
As you move around the portal, you'll see other areas where you can export infor
## Summarize resources with visuals
-The views we've looked at so far have been _list views_, but there are also _summary views_ that include visuals. You can save and use these views just like you can list views. Filters persist between the two types of views. There are standard views, like the **Location** view shown below, as well as views that are relevant to specific services, such as the **Status** view for Azure Storage.
+The views we've looked at so far have been _list views_, but there are also _summary views_ that include visuals. You can save and use these views just like you can with list views. Filters persist between the two types of views. There are standard views, like the **Location** view shown below, as well as views that are relevant to specific services, such as the **Status** view for Azure Storage.
:::image type="content" source="media/manage-filter-resource-views/summary-map.png" alt-text="Summary of resources in a map view":::
azure-portal Recover Shared Deleted Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/recover-shared-deleted-dashboard.md
# Recover a deleted dashboard in the Azure portal
-If you're in the public Azure cloud, and you delete a _published_ dashboard in the Azure portal, you can recover that dashboard within 14 days of the delete. If you're in an Azure government cloud or the dashboard isn't published, you cannot recover it, and you must rebuild it. For more information about publishing a dashboard, see [Publish dashboard](azure-portal-dashboard-share-access.md#publish-dashboard). Follow these steps to recover a published dashboard:
+If you're in the global Azure cloud and you delete a _published_ dashboard in the Azure portal, you can recover that dashboard within 14 days of deleting it. If you're in an Azure Government cloud or the dashboard isn't published, you can't recover it, and you must rebuild it. For more information about publishing a dashboard, see [Publish a dashboard](azure-portal-dashboard-share-access.md#publish-a-dashboard). Follow these steps to recover a published dashboard:
1. From the Azure portal menu, select **Resource groups**, then select the resource group where you published the dashboard (by default, it's named **dashboards**).
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: You can change Azure portal default settings to meet your own preferences. Settings include inactive session timeout, default view, menu mode, contrast, theme, notifications, and language and regional formats keywords: settings, timeout, language, regional Previously updated : 08/05/2020 Last updated : 03/15/2021
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Title: How to create an Azure support request
description: Customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. ms.assetid: fd6841ea-c1d5-4bb7-86bd-0c708d193b89 Previously updated : 06/25/2020 Last updated : 03/16/2021 # Create an Azure support request
To create a support request, you must be an [Owner](../../role-based-access-cont
To start a support request from anywhere in the Azure portal:
-1. Select the **?** in the global header. Then select **Help + support**.
+1. Select the **?** in the global header, then select **Help + support**.
![Help and Support](./media/how-to-create-azure-support-request/helpandsupportnewlower.png)
To start a support request from anywhere in the Azure portal:
### Go to Help + support from a resource menu
-To start a support request in the context of the resource, you're currently working with:
+To start a support request in the context of the resource you're currently working with:
1. From the resource menu, in the **Support + Troubleshooting** section, select **New support request**.
azure-sql Single Database Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-scale.md
else {
- When downgrading a database with [geo-replication](active-geo-replication-configure-portal.md) enabled, downgrade its primary databases to the desired service tier and compute size before downgrading the secondary database (general guidance for best performance). When downgrading to a different edition, it's a requirement that the primary database is downgraded first. - The restore service offerings are different for the various service tiers. If you're downgrading to the **Basic** tier, there's a lower backup retention period. See [Azure SQL Database Backups](automated-backups-overview.md). - The new properties for the database aren't applied until the changes are complete.-- When data copying is required to scale a database (see [Latency](#latency)) when changing the service tier, high resource utilization concurrent to the scaling operation may cause longer scaling times. With [Accelerated Database Recovery (ADR)](/sql/relational-databases/accelerated-database-recovery-concepts.md), rollback of long running transactions is not a significant source of delay, but high concurrent resource usage may leave less compute, storage, and network bandwidth resources for scaling, particularly for smaller compute sizes.
+- When data copying is required to scale a database (see [Latency](#latency)) when changing the service tier, high resource utilization concurrent to the scaling operation may cause longer scaling times. With [Accelerated Database Recovery (ADR)](/sql/relational-databases/accelerated-database-recovery-concepts), rollback of long running transactions is not a significant source of delay, but high concurrent resource usage may leave less compute, storage, and network bandwidth resources for scaling, particularly for smaller compute sizes.
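When you do change the service tier or compute size, the scaling operation itself can be started from the CLI as well as the portal. A minimal sketch with placeholder names:

```azurecli
# Scale a single database to a different service objective (names are placeholders).
az sql db update \
  --resource-group <rg> \
  --server <server-name> \
  --name <database-name> \
  --service-objective S3
```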
## Billing
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Important updates to Azure VMware Solution will be applied starting in March 202
## March 15, 2021 -- Azure VMware Solution service will perform maintenance work through March 19, 20201, to update vCenter server in your private cloud to vCenter Server 6.7 Update 3l version.
+- Azure VMware Solution service will perform maintenance work through March 19, 2021, to update vCenter server in your private cloud to vCenter Server 6.7 Update 3l version.
- During this time, VMware vCenter will be unavailable, and you won't be able to manage VMs (stop, start, create, delete). Private cloud scaling (adding/removing servers and clusters) will also be unavailable. VMware High Availability (HA) will continue to operate to provide protection for existing VMs.
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
+
+ Title: Overview of BareMetal Infrastructure Preview in Azure
+description: Overview of the BareMetal Infrastructure in Azure.
+++ Last updated : 1/4/2021++
+# What is BareMetal Infrastructure Preview on Azure?
+
+Azure BareMetal Infrastructure provides a secure solution for migrating enterprise custom workloads. The BareMetal instances are non-shared host/server hardware assigned to you. It unlocks porting your on-premises solutions with specialized workloads that require certified hardware, licensing, and support agreements. Azure handles infrastructure monitoring and maintenance for you, while in-guest operating system (OS) monitoring and application monitoring fall within your ownership.
+
+BareMetal Infrastructure provides a path to modernize your infrastructure landscape while maintaining your existing investments and architecture. With BareMetal Infrastructure, you can bring specialized workloads to Azure, with low-latency access to and integration with Azure services.
+
+## SKU availability in Azure regions
+BareMetal Infrastructure for specialized and general-purpose workloads is available, starting with four regions based on Revision 4.2 (Rev 4.2) stamps:
+- West Europe
+- North Europe
+- East US 2
+- South Central US
+
+>[!NOTE]
+>**Rev 4.2** is the latest rebranded BareMetal Infrastructure using the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts. It has significant improvements in network latency between Azure VMs and BareMetal instance units deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal.
+
+## Support
+BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
+
+As soon as you receive root access and full control, you assume responsibility for:
+- Designing and implementing backup and recovery solutions, high availability, and disaster recovery
+- Licensing, security, and support for OS and third-party software
+
+Microsoft is responsible for:
+- Providing the hardware for specialized workloads
+- Provisioning the OS
++
+## Compute
+BareMetal Infrastructure offers multiple SKUs for specialized workloads. Available SKUs range from the smaller two-socket system to the 24-socket system. Use the workload-specific SKUs for your specialized workload.
+
+The BareMetal instance stamp itself combines the following components:
+
+- **Computing:** Servers based on different generations of Intel Xeon processors that provide the necessary computing capability and are certified for the specialized workload.
+
+- **Network:** A unified high-speed network fabric interconnects computing, storage, and LAN components.
+
+- **Storage:** An infrastructure accessed through a unified network fabric.
+
+Within the multi-tenant infrastructure of the BareMetal stamp, customers are deployed in isolated tenants. When deploying a tenant, you name an Azure subscription within your Azure enrollment. This Azure subscription is the one that the BareMetal instances are billed against.
+
+>[!NOTE]
+>A customer deployed in the BareMetal instance gets isolated into a tenant. A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to the different tenants cannot see each other or communicate with each other on the BareMetal instances.
+
+## OS
+During the provisioning of the BareMetal instance, you can select the OS you want to install on the machines.
+
+>[!NOTE]
+>Remember, BareMetal Infrastructure is a BYOL model.
+
+The available Linux OS versions are:
+- Red Hat Enterprise Linux (RHEL) 7.6
+- SUSE Linux Enterprise Server (SLES)
+ - SLES 12 SP2
+ - SLES 12 SP3
+ - SLES 12 SP4
+ - SLES 12 SP5
+ - SLES 15 SP1
+
+## Storage
+BareMetal instances based on a specific SKU type come with predefined NFS storage for the specific workload type. When you provision BareMetal, you can provision more storage based on your estimated growth by submitting a support request. All storage comes with an all-flash disk in Revision 4.2 with support for NFSv3 and NFSv4. In the newer Revision 4.5, NVMe SSD storage will be available. For more information on storage sizing, see the [BareMetal workload type](../virtual-machines/workloads/sap/get-started.md) section.
+
+>[!NOTE]
+>The storage used for BareMetal meets [Federal Information Processing Standard (FIPS) Publication 140-2](/microsoft-365/compliance/offering-fips-140-2) requirements offering Encryption at Rest by default. The data is stored securely on the disks.
+
+## Networking
+The architecture of Azure network services is a key component for a successful deployment of specialized workloads in BareMetal instances. It's likely that not all IT systems are located in Azure already. Azure offers you network technology to make Azure look like a virtual data center to your on-premises software deployments. The Azure network functionality required for BareMetal instances is:
+
+- Azure virtual networks are connected to the ExpressRoute circuit that connects to your on-premises network assets.
+- An ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps or higher.
+- Active Directory and DNS extended into Azure, or running completely in Azure.
+
+Using ExpressRoute lets you extend your on-premises network into Microsoft cloud over a private connection with a connectivity provider's help. You can enable **ExpressRoute Premium** to extend connectivity across geopolitical boundaries or use **ExpressRoute Local** for cost-effective data transfer between the location near the Azure region you want.
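For reference, the circuit itself is provisioned like any other ExpressRoute circuit. The sketch below creates a 1-Gbps Premium circuit; the provider, peering location, and other values are placeholders that depend on your connectivity provider:

```azurecli
# Create a 1-Gbps ExpressRoute circuit with the Premium SKU (all values are placeholders).
az network express-route create \
  --resource-group <rg> \
  --name <circuit-name> \
  --location westeurope \
  --provider "Equinix" \
  --peering-location "Amsterdam" \
  --bandwidth 1000 \
  --sku-tier Premium \
  --sku-family MeteredData
```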
+
+BareMetal instances are provisioned within your Azure VNET server IP address range.
++
+The architecture shown is divided into three sections:
+- **Left:** shows the customer's on-premises infrastructure that runs different applications, connecting through a partner or local edge router such as Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../expressroute/expressroute-locations.md).
+- **Center:** shows [ExpressRoute](../expressroute/expressroute-introduction.md) provisioned using your Azure subscription offering connectivity to Azure edge network.
+- **Right:** shows Azure IaaS, and in this case use of VMs to host your applications, which are provisioned within your Azure virtual network.
+- **Bottom:** shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
+ >[!TIP]
+ >To support this, your ExpressRoute Gateway should be UltraPerformance. For more information, see [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
+
+## Next steps
+
+The next step is to learn how to identify and interact with BareMetal Instance units through the Azure portal.
+
+> [!div class="nextstepaction"]
+> [Manage BareMetal Instances through the Azure portal](connect-baremetal-infrastructure.md)
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
+
+ Title: Connect BareMetal Instance units in Azure
+description: Learn how to identify and interact with BareMetal Instance units through the Azure portal or Azure CLI.
++ Last updated : 03/19/2021++
+# Connect BareMetal Instance units in Azure
+
+This article shows how the [Azure portal](https://portal.azure.com/) displays [BareMetal Instances](concepts-baremetal-infrastructure-overview.md). This article also shows you the activities you can do in the Azure portal with your deployed BareMetal Instance units.
+
+## Register the resource provider
+An Azure resource provider for BareMetal Instances provides visibility of the instances in the Azure portal, currently in public preview. By default, the Azure subscription you use for BareMetal Instance deployments registers the *BareMetalInfrastructure* resource provider. If you don't see your deployed BareMetal Instance units, you must register the resource provider with your subscription.
+
+You can register the BareMetal Instance resource provider by using the Azure portal or Azure CLI.
+
+### [Portal](#tab/azure-portal)
+
+You'll need to list your subscription in the Azure portal and then double-click on the subscription used to deploy your BareMetal Instance units.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. On the Azure portal menu, select **All services**.
+
+1. In the **All services** box, enter **subscription**, and then select **Subscriptions**.
+
+1. Select the subscription from the subscription list to view.
+
+1. Select **Resource providers** and enter **BareMetalInfrastructure** into the search. The resource provider should be **Registered**, as the image shows.
+
+>[!NOTE]
+>If the resource provider is not registered, select **Register**.
+
+
+### [Azure CLI](#tab/azure-cli)
+
+To begin using Azure CLI:
++
+Sign in to the Azure subscription you use for the BareMetal Instance deployment through the Azure CLI. Register the `BareMetalInfrastructure` resource provider with the [az provider register](/cli/azure/provider#az_provider_register) command:
+
+```azurecli
+az provider register --namespace Microsoft.BareMetalInfrastructure
+```
+
+You can use the [az provider list](/cli/azure/provider#az_provider_list) command to see all available providers.
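To confirm the registration state of this specific provider, you can also query it directly:

```azurecli
# Show whether the BareMetalInfrastructure resource provider is registered in the subscription.
az provider show --namespace Microsoft.BareMetalInfrastructure --query registrationState --output tsv
```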
+++
+For more information about resource providers, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+
+## BareMetal Instance units in the Azure portal
+
+When you submit a BareMetal Instance deployment request, you'll specify the Azure subscription that you're connecting to the BareMetal Instances. Use the same subscription you use to deploy the application layer that works against the BareMetal Instance units.
+
+During the deployment of your BareMetal Instances, a new [Azure resource group](../azure-resource-manager/management/manage-resources-portal.md) gets created in the Azure subscription you used in the deployment request. This new resource group lists all your BareMetal Instance units you've deployed in the specific subscription.
+
+### [Portal](#tab/azure-portal)
+
+1. In the BareMetal subscription, in the Azure portal, select **Resource groups**.
+
+ :::image type="content" source="media/baremetal-infrastructure-portal/view-baremetal-instance-units-azure-portal.png" alt-text="Screenshot that shows the list of Resource Groups":::
+
+1. In the list, locate the new resource group.
+
+ :::image type="content" source="media/baremetal-infrastructure-portal/filter-resource-groups.png" alt-text="Screenshot that shows the BareMetal Instance unit in a filtered Resource groups list" lightbox="media/baremetal-infrastructure-portal/filter-resource-groups.png":::
+
+ >[!TIP]
+ >You can filter on the subscription you used to deploy the BareMetal Instance. After you filter to the proper subscription, you might have a long list of resource groups. Look for one with a suffix of **-Txxx**, where xxx is three digits, like **-T250**.
+
+1. Select the new resource group to show the details of it. The image shows one BareMetal Instance unit deployed.
+
+ >[!NOTE]
+ >If you deployed several BareMetal Instance tenants under the same Azure subscription, you would see multiple Azure resource groups.
+
+### [Azure CLI](#tab/azure-cli)
+
+To see all your BareMetal Instances, run the [az baremetalinstance list](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_list) command for your resource group:
+
+```azurecli
+az baremetalinstance list --resource-group DSM05A-T550 --output table
+```
+
+> [!TIP]
+> The `--output` parameter is a global parameter, available for all commands. The **table** value presents output in a friendly format. For more information, see [Output formats for Azure CLI commands](/cli/azure/format-output-azure-cli).
+++
+## View the attributes of a single instance
+
+You can view the details of a single unit.
+
+### [Portal](#tab/azure-portal)
+
+In the list of the BareMetal instance, select the single instance you want to view.
+
+
+The attributes in the image don't look much different than the Azure virtual machine (VM) attributes. On the left, you'll see the Resource group, Azure region, and subscription name and ID. If you assigned tags, then you'll see them here as well. By default, the BareMetal Instance units don't have tags assigned.
+
+On the right, you'll see the unit's name, operating system (OS), IP address, and SKU that shows the number of CPU threads and memory. You'll also see the power state and hardware version (revision of the BareMetal Instance stamp). The power state indicates if the hardware unit is powered on or off. The operating system details, however, don't indicate whether it's up and running.
+
+The possible hardware revisions are:
+
+* Revision 3 (Rev 3)
+
+* Revision 4 (Rev 4)
+
+* Revision 4.2 (Rev 4.2)
+
+>[!NOTE]
+>Rev 4.2 is the latest rebranded BareMetal Infrastructure using the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts. It has significant improvements in network latency between Azure VMs and BareMetal instance units deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal. For more information, see [BareMetal Infrastructure on Azure](concepts-baremetal-infrastructure-overview.md).
+
+Also, on the right side, you'll find the [Azure Proximity Placement Group's](../virtual-machines/co-location.md) name, which is created automatically for each deployed BareMetal Instance unit. Reference the Proximity Placement Group when you deploy the Azure VMs that host the application layer. When you use the Proximity Placement Group associated with the BareMetal Instance unit, you ensure that the Azure VMs get deployed close to the BareMetal Instance unit.
+
+>[!TIP]
+>To locate the application layer in the same Azure datacenter as Revision 4.x, see [Azure proximity placement groups for optimal network latency](/azure/virtual-machines/workloads/sap/sap-proximity-placement-scenarios).
+
+### [Azure CLI](#tab/azure-cli)
+
+To see details of a BareMetal Instance, run the [az baremetalinstance show](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_show) command:
+
+```azurecli
+az baremetalinstance show --resource-group DSM05A-T550 --instance-name orcllabdsm01
+```
+
+If you're uncertain of the instance name, run the `az baremetalinstance list` command, described above.
++
+
+## Check activities of a single instance
+
+You can check the activities of a single unit. One of the main activities recorded is a restart of the unit. The data listed includes the activity's status, the timestamp when the activity was triggered, the subscription ID, and the Azure user who triggered the activity.
+
+
+Changes to the unit's metadata in Azure also get recorded in the Activity log. Besides restarts you initiate, you can see the activity of **Write BareMetalInstances**. This activity makes no changes on the BareMetal Instance unit itself but documents the changes to the unit's metadata in Azure.
+
+Another activity that gets recorded is when you add or delete a [tag](../azure-resource-manager/management/tag-resources.md) to an instance.
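The same entries can be pulled from the command line with the standard activity log command. A sketch, using the example resource group from earlier in this article:

```azurecli
# List the last seven days of activity log entries for the instance's resource group.
az monitor activity-log list \
  --resource-group DSM05A-T550 \
  --offset 7d \
  --output table
```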
+
+## Add and delete an Azure tag to an instance
+
+### [Portal](#tab/azure-portal)
+
+You can add Azure tags to a BareMetal Instance unit or delete them. The way tags get assigned doesn't differ from assigning tags to VMs. As with VMs, the tags exist in the Azure metadata, and for BareMetal Instances, they have the same restrictions as the tags for VMs.
+
+Deleting tags works the same way as with VMs. Applying and deleting a tag are listed in the BareMetal Instance unit's Activity log.
+
+### [Azure CLI](#tab/azure-cli)
+
+Assigning tags to BareMetal Instances works the same as for virtual machines. The tags exist in the Azure metadata, and for BareMetal Instances, they have the same restrictions as the tags for VMs.
+
+To add tags to a BareMetal Instance unit, run the [az baremetalinstance update](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_update) command:
+
+```azurecli
+az baremetalinstance update --resource-group DSM05a-T550 --instance-name orcllabdsm01 --set tags.Dept=Finance tags.Status=Normal
+```
+
+Use the same command to remove a tag:
+
+```azurecli
+az baremetalinstance update --resource-group DSM05a-T550 --instance-name orcllabdsm01 --remove tags.Dept
+```
+++
+## Check properties of an instance
+
+When you acquire the instances, you can go to the Properties section to view the data collected about the instances. The data collected includes the Azure connectivity, storage backend, ExpressRoute circuit ID, unique resource ID, and the subscription ID. You'll use this information in support requests or when setting up storage snapshot configuration.
+
+Another critical piece of information you'll see is the storage NFS IP address. It isolates your storage to your **tenant** in the BareMetal Instance stack. You'll use this IP address when you edit the [configuration file for storage snapshot backups](../virtual-machines/workloads/sap/hana-backup-restore.md#set-up-storage-snapshots).
+
+
+## Restart a unit through the Azure portal
+
+There are various situations where the OS won't finish a restart, which requires a power restart of the BareMetal Instance unit.
+
+### [Portal](#tab/azure-portal)
+
+You can do a power restart of the unit directly from the Azure portal:
+
+Select **Restart** and then **Yes** to confirm the restart of the unit.
+
+
+When you restart a BareMetal Instance unit, you'll experience a delay. During this delay, the power state moves from **Starting** to **Started**, which means the OS has started up completely. As a result, after a restart, you can't log into the unit as soon as the state switches to **Started**.
+
+### [Azure CLI](#tab/azure-cli)
+
+To restart a BareMetal Instance unit, use the [az baremetalinstance restart](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_restart) command:
+
+```azurecli
+az baremetalinstance restart --resource-group DSM05a-T550 --instance-name orcllabdsm01
+```
+++
+>[!IMPORTANT]
+>Depending on the amount of memory in your BareMetal Instance unit, a restart and a reboot of the hardware and the operating system can take up to one hour.
+
+## Open a support request for BareMetal Instances
+
+You can submit support requests specifically for a BareMetal Instance unit.
+1. In Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
+
+ - **Issue type:** Select an issue type
+
+ - **Subscription:** Select your subscription
+
+ - **Service:** BareMetal Infrastructure
+
+ - **Resource:** Provide the name of the instance
+
+ - **Summary:** Provide a summary of your request
+
+ - **Problem type:** Select a problem type
+
+ - **Problem subtype:** Select a subtype for the problem
+
+1. Select the **Solutions** tab to find a solution to your problem. If you can't find a solution, go to the next step.
+
+1. Select the **Details** tab and select whether the issue is with VMs or the BareMetal Instance units. This information helps direct the support request to the correct specialists.
+
+1. Indicate when the problem began and select the instance region.
+
+1. Provide more details about the request and upload a file if needed.
+
+1. Select **Review + Create** to submit the request.
+
+It takes up to five business days for a support representative to confirm your request.
+
+## Next steps
+
+If you want to learn more about the workloads, see [BareMetal workload types](../virtual-machines/workloads/sap/get-started.md).
baremetal-infrastructure Know Baremetal Terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/know-baremetal-terms.md
In this article, we'll cover some important BareMetal terms.
- **Tenant**: A customer deployed in BareMetal Instance stamp gets isolated into a *tenant.* A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to the different tenants can't see each other or communicate with each other on the BareMetal Instance stamp level. A customer can choose to have deployments into different tenants. Even then, there's no communication between tenants on the BareMetal Instance stamp level. ## Next steps
-Learn more about the [BareMetal Infrastructure](workloads/sap/baremetal-overview-architecture.md) or how to [identify and interact with BareMetal Instance units](workloads/sap/baremetal-infrastructure-portal.md).
+Learn more about the [BareMetal Infrastructure](concepts-baremetal-infrastructure-overview.md) or how to [identify and interact with BareMetal Instance units](connect-baremetal-infrastructure.md).
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
Remove old diagnostics settings for each role in the Service Configuration (.csc
## Required Service Definition file (.csdef) updates
+> [!NOTE]
+> Changes to the service definition file (.csdef) require the package file (.cspkg) to be generated again. Build and repackage your .cspkg after making the following changes in the .csdef file so that your cloud service picks up the latest settings.
+ ### 1) Virtual Machine sizes The following sizes are deprecated in Azure Resource Manager. However, if you want to continue to use them, update the `vmsize` name with the associated Azure Resource Manager naming convention.
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
## Deploy a Cloud Service (extended support) > [!NOTE]
-> An alternative way of deploying your cloud service (extended support) is via [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) via the portal for your future deployments
+> An easier and faster way to generate your ARM template and parameter file is via the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) via the portal and use it to create your Cloud Service with PowerShell.
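Once you have a template and parameter file, the deployment itself is a standard Resource Manager operation. A sketch using the Azure CLI, with placeholder file and group names:

```azurecli
# Deploy the Cloud Service (extended support) template and parameters to a resource group.
az deployment group create \
  --resource-group <resource-group> \
  --template-file template.json \
  --parameters parameters.json
```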
1. Create virtual network. The name of the virtual network must match the references in the Service Configuration (.cscfg) file. If using an existing virtual network, omit this section from the ARM template.
This tutorial explains how to create a Cloud Service (extended support) deployme
``` 6. (Optional) Create an extension profile to add extensions to your cloud service. For this example, we are adding the remote desktop and Windows Azure diagnostics extension.
-
+ > [!Note]
+ > The password for remote desktop must be between 8 and 123 characters long and must satisfy at least 3 of the following password complexity requirements: 1) Contains an uppercase character 2) Contains a lowercase character 3) Contains a numeric digit 4) Contains a special character 5) Control characters are not allowed
+ ```json "extensionProfile": { "extensions": [
cloud-services-extended-support Enable Rdp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/enable-rdp.md
The Azure portal uses the remote desktop extension to enable remote desktop even
2. Select **Add**. 3. Choose the roles to enable remote desktop for. 4. Fill in the required fields for user name, password, expiry, and certificate (not required).
+> [!NOTE]
+> The password for remote desktop must be between 8 and 123 characters long and must satisfy at least 3 of the following password complexity requirements: 1) Contains an uppercase character 2) Contains a lowercase character 3) Contains a numeric digit 4) Contains a special character 5) Control characters are not allowed
- :::image type="content" source="media/remote-desktop-2.png" alt-text="Image shows inputting the information required to connect to remote desktop.":::
+ :::image type="content" source="media/remote-desktop-2.png" alt-text="Image shows inputting the information required to connect to remote desktop.":::
5. When finished, select **Save**. It will take a few moments before your role instances are ready to receive connections.
Once remote desktop is enabled on the roles, you can initiate a connection direc
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). - Review [frequently asked questions](faq.md) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
**Note**: The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+**Known issues**
+
+**C++/C#/Java**: `DialogServiceConnector` cannot use a `CustomCommandsConfig` to access a Custom Commands application and will instead encounter a connection error. This can be worked around by manually adding your application ID to the request with `config.SetServiceProperty("X-CommandsAppId", "your-application-id", ServicePropertyChannel.UriQueryParameter)`. The expected behavior of `CustomCommandsConfig` will be restored in the next release.
+ **Highlights summary** - Smaller memory and disk footprint making the SDK more efficient - this time the focus was on Android. - Improved support for compressed audio for both speech-to-text and text-to-speech, creating more efficient client/server communication.
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Last updated 03/05/2021
## Prerequisites > [!NOTE]
-> Generally, when you create a Cognitive Service resource in the Azure portal, you have the option to create a multi-service subscription key or a single-service subscription key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
+>
+> 1. Generally, when you create a Cognitive Service resource in the Azure portal, you have the option to create a multi-service subscription key or a single-service subscription key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
+> 2. Document Translation is currently available in the **S1 Standard Service Plan**. _See_ [Cognitive Services pricing - Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
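If you're creating the Translator resource from the command line, a single-service resource on the S1 plan can be created along these lines. This is a sketch only; the resource kind and SKU names are assumptions to check against what's available in your subscription:

```azurecli
# Create a single-service Translator resource on the S1 Standard plan (a sketch; names are placeholders).
az cognitiveservices account create \
  --name <translator-resource-name> \
  --resource-group <resource-group> \
  --kind TextTranslation \
  --sku S1 \
  --location westus2 \
  --yes
```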
To get started, you'll need:
To get started, you'll need:
* An [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM). You will create containers to store and organize your blob data within your storage account.
-* A completed [**Document Translation (Preview) form**](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOEE4UVdFQVBRQVBWWDBRQUM3WjYxUEpUTC4u) to enable your Azure subscription to use the new Document Translation feature.
- ## Get your custom domain name and subscription key > [!IMPORTANT]
container-registry Container Registry Access Selected Networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-access-selected-networks.md
Title: Configure public registry access description: Configure IP rules to enable access to an Azure container registry from selected public IP addresses or address ranges. Previously updated : 08/17/2020 Last updated : 03/08/2021 # Configure public IP network rules
IP network rules are configured on the public registry endpoint. IP network rule
Configuring IP access rules is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry tiers](container-registry-skus.md).
+Each registry supports a maximum of 100 network access rules.
+ [!INCLUDE [container-registry-scanning-limitation](../../includes/container-registry-scanning-limitation.md)] ## Access from selected public network - CLI
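As a quick illustration of the CLI flow covered in this section, the sketch below sets the registry's default action to deny and then allows a single public address range; the registry name and CIDR range are placeholders:

```azurecli
# Deny public access by default, then allow one public IP range (placeholders).
az acr update --name <registry-name> --default-action Deny
az acr network-rule add --name <registry-name> --ip-address 203.0.113.0/24
```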
container-registry Container Registry Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-vnet.md
Configuring a registry service endpoint is available in the **Premium** containe
* Future development of service endpoints for Azure Container Registry isn't currently planned. We recommend using [private endpoints](container-registry-private-link.md) instead. * You can't use the Azure portal to configure service endpoints on a registry. * Only an [Azure Kubernetes Service](../aks/intro-kubernetes.md) cluster or Azure [virtual machine](../virtual-machines/linux/overview.md) can be used as a host to access a container registry using a service endpoint. *Other Azure services including Azure Container Instances aren't supported.*
-* Each registry supports a maximum of 100 network access rules.
* Service endpoints for Azure Container Registry aren't supported in the Azure US Government cloud or Azure China cloud. [!INCLUDE [container-registry-scanning-limitation](../../includes/container-registry-scanning-limitation.md)]
cosmos-db Mongodb Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-version-upgrade.md
Previously updated : 03/02/2021 Last updated : 03/19/2021
If you are upgrading from version 3.2, you will need to replace the existing end
## How to upgrade
-1. Go to the Azure portal and navigate to your Azure Cosmos DB API for MongoDB account overview blade. Verify your current server version is what you expect.
+1. Sign into the [Azure portal.](https://portal.azure.com/)
- :::image type="content" source="./media/mongodb-version-upgrade/1.png" alt-text="Azure portal with MongoDB account overview" border="false":::
+1. Navigate to your Azure Cosmos DB API for MongoDB account. Open the **Overview** pane and verify that your current **Server version** is either 3.2 or 3.6.
-2. From the options on the left, select the `Features` blade. This will reveal the Account level features that are available for your database account.
+ :::image type="content" source="./media/mongodb-version-upgrade/check-current-version.png" alt-text="Check the current version of your MongoDB account from the Azure portal." border="true":::
- :::image type="content" source="./media/mongodb-version-upgrade/2.png" alt-text="Azure portal with MongoDB account overview with Features blade highlighted" border="false":::
+1. From the left menu, open the `Features` pane. This pane shows the account level features that are available for your database account.
-3. Click on the `Upgrade Mongo server version` row. If you don't see this option, your account might not be eligible for this upgrade. Please file [a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) if that is the case.
+1. Select the `Upgrade MongoDB server version` row. If you don't see this option, your account might not be eligible for this upgrade. Please file [a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) if that is the case.
- :::image type="content" source="./media/mongodb-version-upgrade/3.png" alt-text="Features blade with options." border="false":::
+ :::image type="content" source="./media/mongodb-version-upgrade/upgrade-server-version.png" alt-text="Open the Features blade and upgrade your account." border="true":::
-4. Review the information displayed about the upgrade. Click on `Enable` as soon as you are ready to start the process.
+1. Review the information displayed about the upgrade. Select `Set server version to 4.0` (or 3.6 depending upon your current version).
- :::image type="content" source="./media/mongodb-version-upgrade/4.png" alt-text="Expanded upgrade guidance." border="false":::
+ :::image type="content" source="./media/mongodb-version-upgrade/select-upgrade.png" alt-text="Review upgrade guidance and select upgrade." border="true":::
-5. After starting the process, the `Features` menu will show the status of the upgrade. The status will go from `Pending`, to `In Progress`, to `Upgraded`. This process will not affect the existing functionality or operations of the database account.
+1. After you start the upgrade, the **Features** pane is greyed out and the status is set to *Pending*. The upgrade takes around 15 minutes to complete. This process will not affect the existing functionality or operations of your database account. After it's complete, the **Upgrade MongoDB server version** status will show the upgraded version. Please [contact support](https://azure.microsoft.com/en-us/support/create-ticket/) if there's an issue processing your request.
- :::image type="content" source="./media/mongodb-version-upgrade/5.png" alt-text="Upgrade status after initiating." border="false":::
+1. The following are some considerations after upgrading your account:
-6. Once the upgrade is completed, the status will show as `Upgraded`. Click on it to learn more about the next steps and actions you need to take to finalize the process. Please [contact support](https://azure.microsoft.com/en-us/support/create-ticket/) if there was an issue processing your request.
-
- :::image type="content" source="./media/mongodb-version-upgrade/6.png" alt-text="Upgraded account status." border="false":::
-
-7.
- 1. If you upgraded from 3.2, go back to the `Overview` blade, and copy the new connection string to use in your application. The old connection string running 3.2 will not be interrupted. To ensure a consistent experience, all your applications must use the new endpoint.
- 2. If you upgraded from 3.6, your existing connection string will be upgraded to the version specified and should continue to be used.
-
- :::image type="content" source="./media/mongodb-version-upgrade/7.png" alt-text="New overview blade." border="false":::
+ 1. If you upgraded from 3.2, go back to the **Overview** pane, and copy the new connection string to use in your application. The old connection string running 3.2 will not be interrupted. To ensure a consistent experience, all your applications must use the new endpoint.
+ 1. If you upgraded from 3.6, your existing connection string will be upgraded to the version specified and should continue to be used.
## How to downgrade
-You may also downgrade your account from 4.0 to 3.6 via the same steps in the 'How to Upgrade' section.
-If you upgraded from 3.2 to (4.0 or 3.6) and wish to downgrade back to 3.2, you can simply switch back to using your previous (3.2) connection string with the host `accountname.documents.azure.com` which remains active post-upgrade running version 3.2.
+You may also downgrade your account from 4.0 to 3.6 via the same steps in the 'How to Upgrade' section.
+If you upgraded from 3.2 to 4.0 or 3.6 and wish to downgrade back to 3.2, you can switch back to your previous (3.2) connection string with the host `accountname.documents.azure.com`, which remains active after the upgrade and continues to run version 3.2.
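As an illustrative sketch of what this looks like from an application's point of view, the Python snippet below connects through both endpoints; the host name formats and key placeholder are assumptions, so copy the exact connection strings from your account's **Connection String** pane.

```python
from pymongo import MongoClient

account = "accountname"
key = "<primary-key>"

# Upgraded (3.6/4.0) endpoint -- host format assumed; use the connection string
# shown on the account's Connection String pane after the upgrade.
upgraded_client = MongoClient(
    f"mongodb://{account}:{key}@{account}.mongo.cosmos.azure.com:10255/"
    "?ssl=true&retrywrites=false"
)

# Legacy 3.2 endpoint, which stays active after the upgrade; pointing the app
# back at this host effectively reverts it to 3.2 behavior.
legacy_client = MongoClient(
    f"mongodb://{account}:{key}@{account}.documents.azure.com:10255/"
    "?ssl=true"
)
```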
## Next steps
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partitioning-overview.md
Previously updated : 10/12/2020 Last updated : 03/19/2021
Azure Cosmos DB uses hash-based partitioning to spread logical partitions across
Transactions (in stored procedures or triggers) are allowed only against items in a single logical partition.
-You can learn more about [how Azure Cosmos DB manages partitions](partitioning-overview.md). (It's not necessary to understand the internal details to build or run your applications, but added here for a curious reader.)
- ## Replica sets Each physical partition consists of a set of replicas, also referred to as a [*replica set*](global-dist-under-the-hood.md). Each replica set hosts an instance of the database engine. A replica set makes the data stored within the physical partition durable, highly available, and consistent. Each replica that makes up the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput that's allocated to the physical partition. Azure Cosmos DB automatically manages replica sets.
cost-management-billing Prepay Hana Large Instances Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepay-hana-large-instances-reserved-capacity.md
Previously updated : 07/24/2020 Last updated : 03/19/2021
First, get the reservation order and price for the provisioned HANA large instan
The following example uses [armclient](https://github.com/projectkudu/ARMClient) to make REST API calls with PowerShell. Here's what the reservation order and Calculate Price API request and request body should resemble: ```azurepowershell-interactive
-armclient post /providers/Microsoft.Capacity/calculatePrice?api-version=2018-06-01 "{
+armclient post /providers/Microsoft.Capacity/calculatePrice?api-version=2019-04-01 "{
'sku': { 'name': 'SAP_HANA_On_Azure_S224om' },
armclient post /providers/Microsoft.Capacity/calculatePrice?api-version=2018-06-
'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111', 'term': 'P1Y', 'quantity': '1',
+ 'billingplan': 'Monthly',
'displayName': 'testreservation_S224om', 'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111'], 'appliedScopeType': 'Single',
Make your purchase using the returned `quoteId` and the `reservationOrderId` tha
Here's an example request: ```azurepowershell-interactive
-armclient put /providers/Microsoft.Capacity/reservationOrders/22222222-2222-2222-2222-222222222222?api-version=2018-06-01 "{
+armclient put /providers/Microsoft.Capacity/reservationOrders/22222222-2222-2222-2222-222222222222?api-version=2019-04-01 "{
'sku': { 'name': 'SAP_HANA_On_Azure_S224om' }, 'location': 'eastus', 'properties': {
- 'reservedResourceType': 'SapHana',
+ 'reservedResourceType': 'SapHana',
'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111', 'term': 'P1Y', 'quantity': '1',
+ 'billingplan': 'Monthly',
+ 'displayName': ' testreservation_S224om', 'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111/resourcegroups/123'], 'appliedScopeType': 'Single',
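If you prefer calling the Reservations REST API without armclient, here is a rough Python sketch of the same Calculate Price request; the request body mirrors the armclient example above, and obtaining the Azure Resource Manager bearer token is left as an assumption.

```python
import requests

# Assumes you already have an ARM bearer token, for example from
# `az account get-access-token --resource https://management.azure.com/`.
token = "<arm-bearer-token>"

url = (
    "https://management.azure.com/providers/Microsoft.Capacity/"
    "calculatePrice?api-version=2019-04-01"
)

body = {
    "sku": {"name": "SAP_HANA_On_Azure_S224om"},
    "location": "eastus",
    "properties": {
        "reservedResourceType": "SapHana",
        "billingScopeId": "/subscriptions/11111111-1111-1111-111111111111",
        "term": "P1Y",
        "quantity": "1",
        "billingplan": "Monthly",
        "displayName": "testreservation_S224om",
        "appliedScopes": ["/subscriptions/11111111-1111-1111-111111111111"],
        "appliedScopeType": "Single",
    },
}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(response.json())  # contains the quoteId needed for the purchase request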
data-factory Transform Data Using Hadoop Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-using-hadoop-hive.md
If you are new to Azure Data Factory, read through [Introduction to Azure Data F
| defines | Specify parameters as key/value pairs for referencing within the Hive script. | No | | queryTimeout | Query timeout value (in minutes). Applicable when the HDInsight cluster has the Enterprise Security Package enabled. | No |
+>[!NOTE]
+>The default value for queryTimeout is 120 minutes.
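To show where `queryTimeout` and `defines` sit in an activity definition, here is a minimal, illustrative sketch of an HDInsight Hive activity expressed as a Python dict that mirrors the pipeline JSON; the linked service and script names are placeholders, not from this article.

```python
# Illustrative HDInsight Hive activity; linked service and script names are placeholders.
hive_activity = {
    "name": "SampleHiveActivity",
    "type": "HDInsightHive",
    "linkedServiceName": {
        "referenceName": "HDInsightLinkedService",
        "type": "LinkedServiceReference",
    },
    "typeProperties": {
        "scriptPath": "scripts/sample.hql",
        "scriptLinkedService": {
            "referenceName": "StorageLinkedService",
            "type": "LinkedServiceReference",
        },
        "defines": {"Year": "2021"},   # key/value parameters referenced by the script
        "queryTimeout": 120,           # minutes; 120 is the default when omitted
    },
}
```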
+ ## Next steps See the following articles that explain how to transform data in other ways:
See the following articles that explain how to transform data in other ways:
* [Spark activity](transform-data-using-spark.md) * [.NET custom activity](transform-data-using-dotnet-custom-activity.md) * [Azure Machine Learning Studio (classic) Batch Execution activity](transform-data-using-machine-learning.md)
-* [Stored procedure activity](transform-data-using-stored-procedure.md)
+* [Stored procedure activity](transform-data-using-stored-procedure.md)
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-event-aggregation.md
Last updated 1/20/2021 -+ # Event aggregation (Preview)
digital-twins How To Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-postman.md
Last updated 11/10/2020
This article describes how to configure the [Postman REST client](https://www.getpostman.com/) to interact with the Azure Digital Twins APIs, through the following steps:
-1. Use the [Azure CLI](/cli/azure/install-azure-cli) to get a bearer token that you will use to make API requests in Postman.
-1. Set up a Postman collection and configure the Postman REST client to use your bearer token to authenticate.
-1. Use the configured Postman to create and send a request to the Azure Digital Twins APIs.
+1. Use the Azure CLI to [**get a bearer token**](#get-bearer-token) that you will use to make API requests in Postman.
+1. Set up a [**Postman collection**](#about-postman-collections) and configure the Postman REST client to use your bearer token to authenticate. When setting up the collection, you can choose either of these options:
+ 1. [**Import**](#import-collection-of-azure-digital-twins-apis) a pre-built collection of Azure Digital Twins API requests.
+ 1. [**Create**](#create-your-own-collection) your own collection from scratch.
+1. [**Add requests**](#add-an-individual-request) to your configured collection and send them to the Azure Digital Twins APIs.
## Prerequisites
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0 ```
-1. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authenticate your requests.
+1. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests.
- :::image type="content" source="media/how-to-use-postman/console-access-token.png" alt-text="View of a local console window showing the result of the az account get-access-token command. One of the fields in the result is called accessToken and its sample value--beginning with ey--is highlighted.":::
+ :::image type="content" source="media/how-to-use-postman/console-access-token.png" alt-text="Screenshot of a local console window showing the result of the az account get-access-token command. One of the fields in the result is called accessToken and its sample value--beginning with ey--is highlighted.":::
>[!TIP] >This token is valid for at least five minutes and a maximum of 60 minutes. If you run out of time allotted for the current token, you can repeat the steps in this section to get a new one.
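If you prefer to script this step, the following is a minimal Python sketch using the `azure-identity` package; it reuses your CLI login through `AzureCliCredential`, and the resource ID is the same one passed to `az account get-access-token` above.

```python
from azure.identity import AzureCliCredential

# 0b07f429-9f4b-4714-9392-cc5e8e80c8b0 is the Azure Digital Twins resource ID
# used in the CLI command above; "/.default" turns it into an OAuth scope.
credential = AzureCliCredential()
token = credential.get_token("0b07f429-9f4b-4714-9392-cc5e8e80c8b0/.default")

print(token.token)  # paste this value into Postman as the access token
```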
-## Set up Postman collection and authorization
+Next, you'll set up Postman to use this token to make API requests to Azure Digital Twins.
-Next, set up Postman to make API requests.
-These steps happen in your local Postman application, so go ahead and open the Postman application on your computer.
+## About Postman collections
+
+Requests in Postman are saved in **collections** (groups of requests). When you create a collection to group your requests, you can apply common settings to many requests at once. This can greatly simplify authorization if you plan to create more than one request against the Azure Digital Twins APIs, as you only have to configure these details once for the entire collection.
+
+When working with Azure Digital Twins, you can get started by importing a [pre-built collection of all the Azure Digital Twins requests](#import-collection-of-azure-digital-twins-apis). You may want to do this if you're exploring the APIs and want to quickly set up a project with request examples.
+
+Alternatively, you can also choose to start from scratch, by [creating your own empty collection](#create-your-own-collection) and populating it with individual requests that call only the APIs you need.
+
+The following sections describe both of these processes. The rest of the article takes place in your local Postman application, so go ahead and open the Postman application on your computer now.
+
+## Import collection of Azure Digital Twins APIs
+
+A quick way to get started with Azure Digital Twins in Postman is to import a pre-built collection of requests for the Azure Digital Twins APIs.
+
+### Download the collection file
+
+The first step in importing the API set is to download a collection.
+
+There are currently two Azure Digital Twins collections available for you to choose from:
+* [**Azure Digital Twins Postman Collection**](https://github.com/microsoft/azure-digital-twins-postman-samples): This collection provides a simple getting started experience for Azure Digital Twins in Postman. The requests include sample data, so you can run them with minimal edits required. Choose this collection if you want a digestible set of key API requests containing sample information.
+ - To find the collection, navigate to the repo link and open the file named *postman_collection.json*.
+* [**Azure Digital Twins data plane Swagger**](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins): This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request, but with empty data bodies rather than sample data. Choose this collection if you want to have access to every API call and fill in all the data yourself.
+ - To find the collection, navigate to the repo link and choose the folder for the latest spec version. From here, open the file called *digitaltwins.json*.
+
+Here's how to download your chosen collection to your machine so that you can import it into Postman.
+1. Use the links above to open the collection file in GitHub in your browser.
+1. Select the **Raw** button to open the raw text of the file.
+ :::image type="content" source="media/how-to-use-postman/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There is a highlight around the Raw button." lightbox="media/how-to-use-postman/swagger-raw.png":::
+1. Copy the text from the window, and paste it into a new file on your machine.
+1. Save the file with a *.json* extension (the file name can be whatever you want, as long as you can remember it to find the file later).
+
+### Import the collection
+
+Next, import the collection into Postman.
+
+1. From the main Postman window, select the **Import** button.
+ :::image type="content" source="media/how-to-use-postman/postman-import-collection.png" alt-text="Screenshot of a newly opened Postman window. The 'Import' button is highlighted." lightbox="media/how-to-use-postman/postman-import-collection.png":::
+
+1. In the Import window that follows, select **Upload Files** and navigate to the collection file on your machine that you created earlier. Select Open.
+1. Select the **Import** button to confirm.
+
+ :::image type="content" source="media/how-to-use-postman/postman-import-collection-2.png" alt-text="Screenshot of Postman's 'Import' window. The Azure Digital Twins API file is showing as a file to import as a collection. The 'Import' button is highlighted.":::
+
+The newly imported collection can now be seen from your main Postman view, in the Collections tab.
++
+Next, continue on to the next section to add a bearer token to the collection for authorization and connect it to your Azure Digital twins instance.
+
+### Configure authorization
+
+Next, edit the collection you've created to configure some access details. Highlight the collection you've created and select the **View more actions** icon to pull up a menu. Select **Edit**.
++
+Follow these steps to add a bearer token to the collection for authorization. This is where you'll use the **token value** you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
+
+1. In the edit dialog for your collection, make sure you're on the **Authorization** tab.
+
+ :::image type="content" source="media/how-to-use-postman/postman-authorization-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Authorization' tab." lightbox="media/how-to-use-postman/postman-authorization-imported.png":::
+
+1. Set the Type to **OAuth 2.0**, paste your access token into the Access Token box, and select **Save**.
+
+ :::image type="content" source="media/how-to-use-postman/postman-paste-token-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Authorization' tab. A Type of 'OAuth 2.0' is selected, and Access Token box where the access token value can be pasted is highlighted." lightbox="media/how-to-use-postman/postman-paste-token-imported.png":::
+
+### Configure collection variables
+
+Next, help the collection connect easily to your Azure Digital Twins resources by setting some collection-level **variables**. When many requests in a collection require the same value (like the host name of your Azure Digital Twins instance), you can store the value in a variable that applies to every request in the collection. Both of the downloadable collections for Azure Digital Twins come with pre-created variables that you can set at the collection level.
+
+1. Still in the edit dialog for your collection, move to the **Variables** tab.
+
+1. Use your instance's **host name** from the [*Prerequisites*](#prerequisites) section to set the CURRENT VALUE field of the relevant variable. Select **Save**.
+
+ :::image type="content" source="media/how-to-use-postman/postman-variables-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Variables' tab. The 'CURRENT VALUE' field is highlighted." lightbox="media/how-to-use-postman/postman-variables-imported.png":::
+
+1. If your collection has additional variables or if you'd like to add your own, fill and save those values as well.
+
+When you're finished with the above steps, you're done configuring the collection. You can close the editing tab for the collection if you want.
+
+### Explore requests
+
+Next, explore the requests inside the Azure Digital Twins API collection. You can expand the collection to view the pre-created requests (sorted by category of operation).
+
+Different requests require different information about your instance and its data. To see all the information required to craft a particular request, look up the request details in the [Azure Digital Twins REST API reference documentation](/rest/api/azure-digitaltwins/).
+
+You can edit the details of a request in the Postman collection using these steps:
+
+1. Select it from the list to pull up its editable details.
+
+1. Fill in values for the variables listed in the **Params** tab under **Path Variables**.
+
+ :::image type="content" source="media/how-to-use-postman/postman-request-details-imported.png" alt-text="Screenshot of the main Postman window. The Azure Digital Twins API collection is expanded to the 'Digital Twins Get Relationship By Id' request. Details of the request are shown in the center of the page, where the 'Path Variables' section is highlighted." lightbox="media/how-to-use-postman/postman-request-details-imported.png":::
+
+1. Provide any necessary **Headers** or **Body** details in the respective tabs.
+
+Once all the required details are provided, you can run the request with the **Send** button.
+
+You can also add your own requests to the collection, using the process described in the [*Add an individual request*](#add-an-individual-request) section below.
+
+## Create your own collection
+
+Instead of importing the existing collection of all Azure Digital Twins APIs, you can also create your own collection from scratch. You can then populate it with individual requests using the [Azure Digital Twins REST API reference documentation](/rest/api/azure-digitaltwins/).
### Create a Postman collection
-Requests in Postman are saved in **collections** (groups of requests). When you create a collection to group your requests, you can apply common settings to many requests at once. This can greatly simplify authorization if you plan to create more than one request against the Azure Digital Twins APIs, as you only have to configure authentication once for the entire collection.
+1. To create a collection, select the **New** button in the main Postman window.
-1. To create a collection, hit the *+ New Collection* button.
+ :::image type="content" source="media/how-to-use-postman/postman-new.png" alt-text="Screenshot of the main Postman window. The 'New' button is highlighted." lightbox="media/how-to-use-postman/postman-new.png":::
- :::image type="content" source="media/how-to-use-postman/postman-new-collection.png" alt-text="View of a newly opened Postman window. The 'New Collection' button is highlighted":::
+ Choose a type of **Collection**.
-1. In the *CREATE A NEW COLLECTION* window that follows, provide a **Name** and optional **Description** for your collection.
+ :::image type="content" source="media/how-to-use-postman/postman-new-collection-2.png" alt-text="Screenshot of the 'Create New' dialog in Postman. The 'Collection' option is highlighted.":::
+
+1. This will open a tab for filling the details of the new collection. Select the Edit icon next to the collection's default name (**New Collection**) to replace it with your own choice of name.
+
+ :::image type="content" source="media/how-to-use-postman/postman-new-collection-3.png" alt-text="Screenshot of the new collection's edit dialog in Postman. The Edit icon next to the name 'New Collection' is highlighted." lightbox="media/how-to-use-postman/postman-new-collection-3.png":::
Next, continue on to the next section to add a bearer token to the collection for authorization.
-### Add authorization token and finish collection
+### Configure authorization
+
+Follow these steps to add a bearer token to the collection for authorization. This is where you'll use the **token value** you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
-1. In the *CREATE A NEW COLLECTION* dialog, move to the *Authorization* tab. This is where you will place the **token value** you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
+1. Still in the edit dialog for your new collection, move to the **Authorization** tab.
- :::image type="content" source="media/how-to-use-postman/postman-authorization.png" alt-text="The 'CREATE A NEW COLLECTION' Postman window, showing the 'Authorization' tab.":::
+ :::image type="content" source="media/how-to-use-postman/postman-authorization-custom.png" alt-text="Screenshot of the new collection's edit dialog in Postman, showing the 'Authorization' tab." lightbox="media/how-to-use-postman/postman-authorization-custom.png":::
-1. Set the *Type* to _**OAuth 2.0**_, and paste your access token into the *Access Token* box.
+1. Set the Type to **OAuth 2.0**, paste your access token into the Access Token box, and select **Save**.
- :::image type="content" source="media/how-to-use-postman/postman-paste-token.png" alt-text="The 'CREATE A NEW COLLECTION' Postman window, showing the 'Authorization' tab. A Type of 'OAuth 2.0' is selected, and Access Token box where the access token value can be pasted is highlighted.":::
+ :::image type="content" source="media/how-to-use-postman/postman-paste-token-custom.png" alt-text="Screenshot of the new collection's edit dialog in Postman, showing the 'Authorization' tab. A Type of 'OAuth 2.0' is selected, and Access Token box where the access token value can be pasted is highlighted." lightbox="media/how-to-use-postman/postman-paste-token-custom.png":::
-1. After pasting in your bearer token, hit *Create* to finish creating your collection.
+When you're finished with the above steps, you're done configuring the collection. You can close the edit tab for the new collection if you want.
-Your new collection can now be seen from your main Postman view, under *Collections*.
+The new collection can be seen from your main Postman view, in the Collections tab.
-## Create a request
+## Add an individual request
-After completing the previous steps, you can create requests to the Azure Digital Twin APIs.
+Now that your collection is set up, you can add your own requests to the Azure Digital Twins APIs.
-1. To create a request, hit the *+ New* button.
+1. To create a request, use the **New** button again.
- :::image type="content" source="media/how-to-use-postman/postman-new-request.png" alt-text="View of the main Postman window. The 'New' button is highlighted":::
+ :::image type="content" source="media/how-to-use-postman/postman-new.png" alt-text="Screenshot of the main Postman window. The 'New' button is highlighted." lightbox="media/how-to-use-postman/postman-new.png":::
-1. Choose *Request*.
+ Choose a type of **Request**.
- :::image type="content" source="media/how-to-use-postman/postman-new-request-2.png" alt-text="View of the options you can select to create something new. The 'Request' option is highlighted":::
+ :::image type="content" source="media/how-to-use-postman/postman-new-request-2.png" alt-text="Screenshot of the 'Create New' dialog in Postman. The 'Request' option is highlighted.":::
-1. This action opens the *Save request* window, where you can enter a name for your request, give it an optional description, and choose the collection that it's a part of. Fill in the details and save the request to the collection you created earlier.
+1. This action opens the SAVE REQUEST window, where you can enter a name for your request, give it an optional description, and choose the collection that it's a part of. Fill in the details and save the request to the collection you created earlier.
:::row::: :::column:::
- :::image type="content" source="media/how-to-use-postman/postman-save-request.png" alt-text="View of the 'Save request' window where you can fill out the fields described. The 'Save to Azure Digital Twins collection' button is highlighted":::
+ :::image type="content" source="media/how-to-use-postman/postman-save-request.png" alt-text="Screenshot of the 'Save request' window in Postman, where you can fill out the fields described. The 'Save to Azure Digital Twins collection' button is highlighted.":::
:::column-end::: :::column::: :::column-end:::
After completing the previous steps, you can create requests to the Azure Digita
You can now view your request under the collection, and select it to pull up its editable details. ### Set request details
To proceed with an example query, this article will use the Query API (and its [
1. Get the request URL and type from the reference documentation. For the Query API, this is currently *POST `https://digitaltwins-hostname/query?api-version=2020-10-31`*. 1. In Postman, set the type for the request and enter the request URL, filling in placeholders in the URL as required. This is where you will use your instance's **host name** from the [*Prerequisites*](#prerequisites) section.
- :::image type="content" source="media/how-to-use-postman/postman-request-url.png" alt-text="In the details of the new request, the query URL from the reference documentation has been filled into the request URL box." lightbox="media/how-to-use-postman/postman-request-url.png":::
+ :::image type="content" source="media/how-to-use-postman/postman-request-url.png" alt-text="Screenshot of the new request's details in Postman. The query URL from the reference documentation has been filled into the request URL box." lightbox="media/how-to-use-postman/postman-request-url.png":::
-1. Check that the parameters shown for the request in the *Params* tab match those described in the reference documentation. For this request in Postman, the `api-version` parameter was automatically filled when the request URL was entered in the previous step. For the Query API, this is the only required parameter, so this step is done.
-1. In the *Authorization* tab, set the *Type* to *Inherit auth from parent*. This indicates that this request will use the authentication you set up earlier for the entire collection.
-1. Check that the headers shown for the request in the *Headers* tab match those described in the reference documentation. For this request, several headers have been automatically filled. For the Query API, none of the header options are required, so this step is done.
-1. Check that the body shown for the request in the *Body* tab matches the needs described in the reference documentation. For the Query API, a JSON body is required to provide the query text. Here is an example body for this request that queries for all the digital twins in the instance:
+1. Check that the parameters shown for the request in the **Params** tab match those described in the reference documentation. For this request in Postman, the `api-version` parameter was automatically filled when the request URL was entered in the previous step. For the Query API, this is the only required parameter, so this step is done.
+1. In the **Authorization** tab, set the Type to **Inherit auth from parent**. This indicates that this request will use the authorization you set up earlier for the entire collection.
+1. Check that the headers shown for the request in the **Headers** tab match those described in the reference documentation. For this request, several headers have been automatically filled. For the Query API, none of the header options are required, so this step is done.
+1. Check that the body shown for the request in the **Body** tab matches the needs described in the reference documentation. For the Query API, a JSON body is required to provide the query text. Here is an example body for this request that queries for all the digital twins in the instance:
- :::image type="content" source="media/how-to-use-postman/postman-request-body.png" alt-text="In the details of the new request, the Body tab is shown. It contains a raw JSON body with a query of 'SELECT * FROM DIGITALTWINS'." lightbox="media/how-to-use-postman/postman-request-body.png":::
+ :::image type="content" source="media/how-to-use-postman/postman-request-body.png" alt-text="Screenshot of the new request's details in Postman. The Body tab is shown, and it contains a raw JSON body with a query of 'SELECT * FROM DIGITALTWINS'." lightbox="media/how-to-use-postman/postman-request-body.png":::
For more information about crafting Azure Digital Twins queries, see [*How-to: Query the twin graph*](how-to-query-graph.md). 1. Check the reference documentation for any other fields that may be required for your type of request. For the Query API, all requirements have now been met in the Postman request, so this step is done.
-1. Use the *Send* button to send your completed request.
- :::image type="content" source="media/how-to-use-postman/postman-request-send.png" alt-text="Near the details of the new request, the Send button is highlighted." lightbox="media/how-to-use-postman/postman-request-send.png":::
+1. Use the **Send** button to send your completed request.
+ :::image type="content" source="media/how-to-use-postman/postman-request-send.png" alt-text="Screenshot of Postman showing the details of the new request. The Send button is highlighted." lightbox="media/how-to-use-postman/postman-request-send.png":::
After sending the request, the response details will appear in the Postman window below the request. You can view the response's status code and any body text. You can also compare the response to the expected response data given in the reference documentation, to verify the result or learn more about any errors that arise.
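If you want to double-check the same call outside of Postman, here is a minimal Python sketch of the Query API request described above; the host name and token values are placeholders, while the route, api-version, and body come from the steps in this section.

```python
import requests

host_name = "<your-instance-host-name>"         # from the Prerequisites section
token = "<bearer-token-from-get-bearer-token>"  # from the Get bearer token section

url = f"https://{host_name}/query?api-version=2020-10-31"
body = {"query": "SELECT * FROM DIGITALTWINS"}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(response.status_code)
print(response.json())
```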
hpc-cache Cache Usage Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/cache-usage-models.md
This table summarizes the usage model differences:
[!INCLUDE [usage-models-table.md](includes/usage-models-table.md)]
-<!-- | Usage model | Caching mode | Back-end verification | Maximum write-back delay |
-|-|--|--|--|
-| Read heavy, infrequent writes | Read | Never | None |
-| Greater than 15% writes | Read/write | 8 hours | 20 minutes |
-| Clients bypass the cache | Read | 30 seconds | None |
-| Greater than 15% writes, frequent back-end checking (30 seconds) | Read/write | 30 seconds | 20 minutes |
-| Greater than 15% writes, frequent back-end checking (60 seconds) | Read/write | 60 seconds | 20 minutes |
-| Greater than 15% writes, frequent write-back | Read/write | 30 seconds | 30 seconds |
-| Read heavy, checking the backing server every 3 hours | Read | 3 hours | None |
> If you have questions about the best usage model for your Azure HPC Cache workflow, talk to your Azure representative or open a support request for help. ## Next steps
hpc-cache Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/configuration.md
description: Explains how to configure additional settings for the cache like MT
Previously updated : 03/15/2021 Last updated : 03/17/2021
If you need to set a custom DNS server for your cache, use the provided fields:
> [!NOTE] > The cache will use only the first DNS server it successfully finds. -->
+Consider using a test cache to check and refine your DNS setup before you use it in a production environment.
+ ### Refresh storage target DNS If your DNS server updates IP addresses, the associated NFS storage targets will become temporarily unavailable. Read how to update your custom DNS system IP addresses in [Edit storage targets](hpc-cache-edit-storage.md#update-ip-address-custom-dns-configurations-only).
This feature is available for Azure Blob storage targets only, and its configura
Snapshots are taken every eight hours, at UTC 0:00, 08:00, and 16:00.
-Azure HPC Cache stores daily, weekly, and monthly snapshots until they are replaced by new ones. The limits are:
+Azure HPC Cache stores daily, weekly, and monthly snapshots until they are replaced by new ones. The snapshot retention limits are:
* Up to 20 daily snapshots * Up to 8 weekly snapshots * Up to 3 monthly snapshots
-Access the snapshots from the `.snapshot` directory in your blob storage target's namespace.
+Access the snapshots from the `.snapshot` directory in the root of your mounted blob storage target.
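As a small illustrative sketch (the mount point below is a placeholder), you could list the available snapshots from a client that has the blob storage target mounted:

```python
import os

mount_root = "/mnt/hpc-cache-target"  # placeholder path where the blob storage target is mounted

# Snapshots live under the .snapshot directory at the root of the mounted target.
for snapshot in sorted(os.listdir(os.path.join(mount_root, ".snapshot"))):
    print(snapshot)
```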
hpc-cache Customer Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/customer-keys.md
You can use Azure Key Vault to control ownership of the keys used to encrypt you
Azure HPC Cache also is protected by [VM host encryption](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) on the managed disks that hold your cached data, even if you add a customer key for the cache disks. Adding a customer-managed key for double encryption gives an extra level of security for customers with high security needs. Read [Server-side encryption of Azure disk storage](../virtual-machines/disk-encryption.md) for details.
-<!-- This feature is available only in some of the Azure regions where Azure HPC Cache is available. Refer to the [Region availability](hpc-cache-overview.md#region-availability) list for details. -->
- There are three steps to enable customer-managed key encryption for Azure HPC Cache: 1. Set up an Azure Key Vault to store the keys.
At cache creation time you must specify a vault, key, and key version to use for
Read the [Azure Key Vault documentation](../key-vault/general/overview.md) for details. > [!NOTE]
-> The Azure Key Vault must use the same subscription and be in the same region as the Azure HPC Cache. Make sure that the region you choose [supports the customer-managed keys feature](hpc-cache-overview.md#region-availability).
+> The Azure Key Vault must use the same subscription and be in the same region as the Azure HPC Cache. Make sure that the region you choose [supports both products](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=hpc-cache,key-vault).
## 2. Create the cache with customer-managed keys enabled You must specify the encryption key source when you create your Azure HPC Cache. Follow the instructions in [Create an Azure HPC Cache](hpc-cache-create.md), and specify the key vault and key in the **Disk encryption keys** page. You can create a new key vault and key during cache creation. > [!TIP]
-> If the **Disk encryption keys** page does not appear, make sure that your cache is in one of the supported regions.
+> If the **Disk encryption keys** page does not appear, make sure that your cache is in one of the [supported regions](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=hpc-cache,key-vault).
The user who creates the cache must have privileges equal to the [Key Vault contributor role](../role-based-access-control/built-in-roles.md#key-vault-contributor) or higher.
These articles explain more about using Azure Key Vault and customer-managed key
After you have created the Azure HPC Cache and authorized Key Vault-based encryption, continue to set up your cache by giving it access to your data sources.
-* [Add storage targets](hpc-cache-add-storage.md)
+* [Add storage targets](hpc-cache-add-storage.md)
hpc-cache Hpc Cache Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-add-storage.md
An NFS storage target has different settings from a Blob storage target. The usa
> Before you create an NFS storage target, make sure your storage system is accessible from the Azure HPC Cache and meets permission requirements. Storage target creation will fail if the cache can't access the storage system. Read [NFS storage requirements](hpc-cache-prerequisites.md#nfs-storage-requirements) and [Troubleshoot NAS configuration and NFS storage target issues](troubleshoot-nas.md) for details. ### Choose a usage model
-<!-- referenced from GUI - update aka.ms link to point at new article when published -->
+<!-- referenced from GUI by aka.ms link -->
When you create a storage target that uses NFS to reach its storage system, you need to choose a usage model for that target. This model determines how your data is cached.
This table summarizes the differences among all of the usage models:
[!INCLUDE [usage-models-table.md](includes/usage-models-table.md)]
-<!-- | Usage model | Caching mode | Back-end verification | Maximum write-back delay |
-|--|--|--|--|
-| Read heavy, infrequent writes | Read | Never | None |
-| Greater than 15% writes | Read/write | 8 hours | 20 minutes |
-| Clients bypass the cache | Read | 30 seconds | None |
-| Greater than 15% writes, frequent back-end checking (30 seconds) | Read/write | 30 seconds | 20 minutes |
-| Greater than 15% writes, frequent back-end checking (60 seconds) | Read/write | 60 seconds | 20 minutes |
-| Greater than 15% writes, frequent write-back | Read/write | 30 seconds | 30 seconds |
-| Read heavy, checking the backing server every 3 hours | Read | 3 hours | None | -->
- > [!NOTE] > The **Back-end verification** value shows when the cache automatically compares its files with source files in remote storage. However, you can trigger a comparison by sending a client request that includes a readdirplus operation on the back-end storage system. Readdirplus is a standard NFS API (also called extended read) that returns directory metadata, which causes the cache to compare and update files.
After creating storage targets, continue with these tasks to get your cache read
* [Mount the Azure HPC Cache](hpc-cache-mount.md) * [Move data to Azure Blob storage](hpc-cache-ingest.md)
-If you need to update any settings, you can [edit a storage target](hpc-cache-edit-storage.md).
+If you need to update any settings, you can [edit a storage target](hpc-cache-edit-storage.md).
hpc-cache Hpc Cache Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-create.md
The message includes some useful information, including these items:
After your cache appears in the **Resources** list, you can move to the next step. * [Define storage targets](hpc-cache-add-storage.md) to give your cache access to your data sources.
-* If you use customer-managed encryption keys, you need to [authorize Azure Key Vault encryption](customer-keys.md#3-authorize-azure-key-vault-encryption-from-the-cache) from the cache's overview page to complete your cache setup. You must do this step before you can add storage. Read [Use customer-managed encryption keys](customer-keys.md) for details.
+* If you use customer-managed encryption keys, you need to [authorize Azure Key Vault encryption](customer-keys.md#3-authorize-azure-key-vault-encryption-from-the-cache) from the cache's overview page to complete your cache setup. You must do this step before you can add storage. Read [Use customer-managed encryption keys](customer-keys.md) for details.
hpc-cache Hpc Cache Edit Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-edit-storage.md
Use the **Namespace** page for your Azure HPC Cache. The namespace page is descr
Click the name of the path that you want to change, and create the new path in the edit window that appears.
-![Screenshot of the namespace page after clicking on a Blob namespace path - the edit fields appear on a pane to the right](media/edit-namespace-blob.png)
+![Screenshot of the namespace page after clicking on a Blob namespace path - the edit fields appear on a pane to the right](media/update-namespace-blob.png)
After making changes, click **OK** to update the storage target, or click **Cancel** to discard changes.
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-prerequisites.md
More information is included in [Troubleshoot NAS configuration and NFS storage
* Check firewall settings to be sure that they allow traffic on all of these required ports. Be sure to check firewalls used in Azure as well as on-premises firewalls in your data center.
-* **Directory access:** Enable the `showmount` command on the storage system. Azure HPC Cache uses this command to check that your storage target configuration points to a valid export, and also to make sure that multiple mounts don't access the same subdirectories (a risk for file collision).
-
- > [!NOTE]
- > If your NFS storage system uses NetApp's ONTAP 9.2 operating system, **do not enable `showmount`**. [Contact Microsoft Service and Support](hpc-cache-support-ticket.md) for help.
-
- Learn more about directory listing access in the NFS storage target [troubleshooting article](troubleshoot-nas.md#enable-export-listing).
- * **Root access** (read/write): The cache connects to the back-end system as user ID 0. Check these settings on your storage system: * Enable `no_root_squash`. This option ensures that the remote root user can access files owned by root.
hpc-cache Troubleshoot Nas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/troubleshoot-nas.md
The back-end storage system keeps internal aliases for file handles, but Azure H
To avoid this possible file collision for files in multiple exports, Azure HPC Cache automatically mounts the shallowest available export in the path (``/ifs`` in the example) and uses the file handle given from that export. If multiple exports use the same base path, Azure HPC Cache needs root access to that path.
-## Enable export listing
-<!-- link in prereqs article -->
+<!-- ## Enable export listing
The NAS must list its exports when the Azure HPC Cache queries it.
On most NFS storage systems, you can test this by sending the following query fr
Use a Linux client from the same virtual network as your cache, if possible.
-If that command doesn't list the exports, the cache will have trouble connecting to your storage system. Work with your NAS vendor to enable export listing.
+If that command doesn't list the exports, the cache will have trouble connecting to your storage system. Work with your NAS vendor to enable export listing. -->
## Adjust VPN packet size restrictions <!-- link in prereqs article and configuration article -->
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
IoT Edge components can be installed or updated individually, and are backwards
| Release | Security daemon | Edge hub<br>Edge agent | Libiothsm | Moby | |--|--|--|--|--|
-| **1.1.0 LTS**<sup>1</sup> | 1.1.0 | 1.1.0 | 1.1.0 | |
+| **1.1 LTS**<sup>1</sup> | 1.1.0<br>1.1.1 | 1.1.0<br>1.1.1 | 1.1.0<br>1.1.1 | |
| **1.0.10** | 1.0.10<br>1.0.10.1<br>1.0.10.2<br><br>1.0.10.4 | 1.0.10<br>1.0.10.1<br>1.0.10.2<br>1.0.10.3<br>1.0.10.4 | 1.0.10<br>1.0.10.1<br>1.0.10.2<br><br>1.0.10.4 | |
-| **1.0.9** | 1.0.9.5<br>1.0.9.4<br>1.0.9.3<br>1.0.9.2<br>1.0.9.1<br>1.0.9 | 1.0.9.5<br>1.0.9.4<br>1.0.9.3<br>1.0.9.2<br>1.0.9.1<br>1.0.9 | 1.0.9.5<br>1.0.9.4<br>1.0.9.3<br>1.0.9.2<br>1.0.9.1<br>1.0.9 | |
-| **1.0.8** | 1.0.8 | 1.0.8.5<br>1.0.8.4<br>1.0.8.3<br>1.0.8.2<br>1.0.8.1<br>1.0.8 | 1.0.8 | 3.0.6 |
-| **1.0.7** | 1.0.7.1<br>1.0.7 | 1.0.7.1<br>1.0.7 | 1.0.7.1<br>1.0.7 | 3.0.5<br>3.0.4 (ARMv7hl, CentOS) |
-| **1.0.6** | 1.0.6.1<br>1.0.6 | 1.0.6.1<br>1.0.6 | 1.0.6.1<br>1.0.6 | |
+| **1.0.9** | 1.0.9<br>1.0.9.1<br>1.0.9.2<br>1.0.9.3<br>1.0.9.4<br>1.0.9.5 | 1.0.9<br>1.0.9.1<br>1.0.9.2<br>1.0.9.3<br>1.0.9.4<br>1.0.9.5 | 1.0.9<br>1.0.9.1<br>1.0.9.2<br>1.0.9.3<br>1.0.9.4<br>1.0.9.5 | |
+| **1.0.8** | 1.0.8 | 1.0.8<br>1.0.8.1<br>1.0.8.2<br>1.0.8.3<br>1.0.8.4<br>1.0.8.5 | 1.0.8 | 3.0.6 |
+| **1.0.7** | 1.0.7<br>1.0.7.1 | 1.0.7<br>1.0.7.1 | 1.0.7<br>1.0.7.1 | 3.0.4 (ARMv7hl, CentOS)<br>3.0.5 |
+| **1.0.6** | 1.0.6<br>1.0.6.1 | 1.0.6<br>1.0.6.1 | 1.0.6<br>1.0.6.1 | |
| **1.0.5** | 1.0.5 | 1.0.5 | 1.0.5 | 3.0.2 | <sup>1</sup>IoT Edge 1.1 is the first long-term support (LTS) release channel. This version introduced no new features, but will receive security updates and fixes to regressions. IoT Edge 1.1 LTS uses .NET Core 3.1, and will be supported until December 3, 2022 to match the [.NET Core and .NET 5 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
IoT Edge uses the Microsoft.Azure.Devices.Client SDK. For more information, see
| IoT Edge version | Microsoft.Azure.Devices.Client SDK version | ||--|
-| 1.1.0 (LTS) | 1.28.0 |
+| 1.1 (LTS) | 1.28.0 |
| 1.0.10 | 1.28.0 | | 1.0.9 | 1.21.1 | | 1.0.8 | 1.20.3 |
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-schema.md
If you want to import an update into Device Update for IoT Hub, be sure you've r
| UpdateId | `UpdateId` object | Update identity. | | UpdateType | string | Update type: <br/><br/> * Specify `microsoft/apt:1` when performing a package-based update using reference agent.<br/> * Specify `microsoft/swupdate:1` when performing an image-based update using reference agent.<br/> * Specify `microsoft/simulator:1` when using sample agent simulator.<br/> * Specify a custom type if developing a custom agent. | Format: <br/> `{provider}/{type}:{typeVersion}`<br/><br/> Maximum of 32 characters total | | InstalledCriteria | string | String interpreted by the agent to determine whether the update was applied successfully: <br/> * Specify **value** of SWVersion for update type `microsoft/swupdate:1`.<br/> * Specify `{name}-{version}` for update type `microsoft/apt:1`, of which name and version are obtained from the APT file.<br/> * Specify hash of the update file for update type `microsoft/simulator:1`.<br/> * Specify a custom string if developing a custom agent.<br/> | Maximum of 64 characters |
-| Compatibility | Array of `CompatibilityInfo` objects | Compatibility information of device compatible with this update. | Maximum of 10 items |
+| Compatibility | Array of `CompatibilityInfo` [objects](#compatibilityinfo-object) | Compatibility information of device compatible with this update. | Maximum of 10 items |
| CreatedDateTime | date/time | Date and time at which the update was created. | Delimited ISO 8601 date and time format, in UTC | | ManifestVersion | string | Import manifest schema version. Specify `2.0`, which will be compatible with `urn:azureiot:AzureDeviceUpdateCore:1` interface and `urn:azureiot:AzureDeviceUpdateCore:4` interface. | Must be `2.0` | | Files | Array of `File` objects | Update payload files | Maximum of 5 files |
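For orientation, here is an illustrative import manifest sketched as a Python dict; the top-level field names come from the table above, but the values and the shapes of the nested `CompatibilityInfo` and `File` objects are placeholders, so check the schema reference for the exact properties.

```python
# Illustrative only -- values and nested object shapes are placeholders.
import_manifest = {
    "UpdateId": {"Provider": "Contoso", "Name": "SampleDevice", "Version": "1.0.0.0"},
    "UpdateType": "microsoft/swupdate:1",
    "InstalledCriteria": "1.0.0.0",              # SWVersion value for microsoft/swupdate:1
    "Compatibility": [{"DeviceManufacturer": "Contoso", "DeviceModel": "SampleDevice"}],
    "CreatedDateTime": "2021-03-19T00:00:00Z",   # delimited ISO 8601 date and time, UTC
    "ManifestVersion": "2.0",
    "Files": [{"Filename": "image.swu", "SizeInBytes": 1048576, "Hashes": {"Sha256": "<base64-hash>"}}],
}
```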
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-understand-automated-ml.md
Automated ML doesn't differentiate between binary and multiclass metrics. The sa
For example, instead of calculating recall as `tp / (tp + fn)`, the multiclass averaged recall (`micro`, `macro`, or `weighted`) averages over both classes of a binary classification dataset. This is equivalent to calculating the recall for the `true` class and the `false` class separately, and then taking the average of the two.
+Automated ML doesn't calculate binary metrics, that is, metrics for binary classification datasets. However, these metrics can be manually calculated using the [confusion matrix](#confusion-matrix) that Automated ML generated for that particular run. For example, you can calculate precision, `tp / (tp + fp)`, from the true positive and false positive counts shown in a 2x2 confusion matrix chart.
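For instance, here's a quick sketch of deriving binary precision and recall from the four cells of a 2x2 confusion matrix (the counts below are made up).

```python
# Cells of a 2x2 confusion matrix (illustrative counts).
tn, fp = 50, 10
fn, tp = 5, 35

precision = tp / (tp + fp)  # 35 / 45 ~= 0.778
recall = tp / (tp + fn)     # 35 / 40 = 0.875

print(f"precision={precision:.3f}, recall={recall:.3f}")
```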
+ ## Confusion matrix Confusion matrices provide a visual for how a machine learning model is making systematic errors in its predictions for classification models. The word "confusion" in the name comes from a model "confusing" or mislabeling samples. A cell at row `i` and column `j` in a confusion matrix contains the number of samples in the evaluation dataset that belong to class `C_i` and were classified by the model as class `C_j`.
machine-learning Spark Advanced Data Exploration Modeling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/spark-advanced-data-exploration-modeling.md
A common way to perform hyperparameter optimization used here is a grid search,
The models we use include logistic and linear regression, random forests, and gradient boosted trees:
-* [Linear regression with SGD](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.regression.LinearRegressionWithSGD) is a linear regression model that uses a Stochastic Gradient Descent (SGD) method and for optimization and feature scaling to predict the tip amounts paid.
+* [Linear regression with SGD](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.mllib.regression.LinearRegressionWithSGD.html#pyspark.mllib.regression.LinearRegressionWithSGD) is a linear regression model that uses a Stochastic Gradient Descent (SGD) method for optimization and feature scaling to predict the tip amounts paid. A minimal training sketch appears after this list.
* [Logistic regression with LBFGS](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.classification.LogisticRegressionWithLBFGS) or "logit" regression, is a regression model that can be used when the dependent variable is categorical to do data classification. LBFGS is a quasi-Newton optimization algorithm that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm using a limited amount of computer memory and that is widely used in machine learning. * [Random forests](https://spark.apache.org/docs/latest/mllib-ensembles.html#Random-Forests) are ensembles of decision trees. They combine many decision trees to reduce the risk of overfitting. Random forests are used for regression and classification and can handle categorical features and can be extended to the multiclass classification setting. They do not require feature scaling and are able to capture non-linearities and feature interactions. Random forests are one of the most successful machine learning models for classification and regression. * [Gradient boosted trees](https://spark.apache.org/docs/latest/ml-classification-regression.html#gradient-boosted-trees-gbts) (GBTS) are ensembles of decision trees. GBTS train decision trees iteratively to minimize a loss function. GBTS is used for regression and classification and can handle categorical features, do not require feature scaling, and are able to capture non-linearities and feature interactions. They can also be used in a multiclass-classification setting.
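Here's a minimal PySpark sketch of training the first of these models; the RDD contents and hyperparameter values are illustrative, and it assumes an existing SparkContext `sc`.

```python
from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD

# Toy training data: label = tip amount, features = a few numeric columns (illustrative).
train_rdd = sc.parallelize([
    LabeledPoint(1.5, [2.0, 0.0, 5.1]),
    LabeledPoint(0.0, [1.0, 1.0, 2.3]),
    LabeledPoint(3.0, [4.0, 0.0, 9.8]),
])

# Train the SGD-based linear regression and score one feature vector.
model = LinearRegressionWithSGD.train(train_rdd, iterations=100, step=0.01)
print(model.predict([2.0, 0.0, 5.1]))
```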
BoostedTreeRegressionFileLoc = modelDir + "GradientBoostingTreeRegression_2016-0
## What's next? Now that you have created regression and classification models with the Spark MlLib, you are ready to learn how to score and evaluate these models.
-**Model consumption:** To learn how to score and evaluate the classification and regression models created in this topic, see [Score and evaluate Spark-built machine learning models](spark-model-consumption.md).
+**Model consumption:** To learn how to score and evaluate the classification and regression models created in this topic, see [Score and evaluate Spark-built machine learning models](spark-model-consumption.md).
machine-learning Spark Data Exploration Modeling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/spark-data-exploration-modeling.md
Time taken to execute above cell: 0.24 second
### Feature scaling
-Feature scaling, also known as data normalization, insures that features with widely disbursed values are not given excessive weigh in the objective function. The code for feature scaling uses the [StandardScaler](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.feature.StandardScaler) to scale the features to unit variance. It is provided by MLlib for use in linear regression with Stochastic Gradient Descent (SGD), a popular algorithm for training a wide range of other machine learning models such as regularized regressions or support vector machines (SVM).
+Feature scaling, also known as data normalization, ensures that features with widely dispersed values are not given excessive weight in the objective function. The code for feature scaling uses the [StandardScaler](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.mllib.feature.StandardScaler.html#pyspark.mllib.feature.StandardScaler) to scale the features to unit variance. It is provided by MLlib for use in linear regression with Stochastic Gradient Descent (SGD), a popular algorithm for training a wide range of other machine learning models such as regularized regressions or support vector machines (SVM). A short scaling sketch follows the note below.
> [!NOTE] > We have found the LinearRegressionWithSGD algorithm to be sensitive to feature scaling.
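As a minimal, hypothetical sketch (assuming an existing `SparkContext` named `sc` and made-up feature vectors rather than the taxi data used in this article), feature scaling with `StandardScaler` looks roughly like this:

```python
# Minimal sketch: scale feature vectors to unit variance with MLlib's StandardScaler.
# Assumes an existing SparkContext `sc`; the feature values are illustrative only.
from pyspark.mllib.feature import StandardScaler
from pyspark.mllib.linalg import Vectors

features = sc.parallelize([
    Vectors.dense([2.5, 10.0, 1.0]),
    Vectors.dense([5.0, 200.0, 0.0]),
    Vectors.dense([1.0, 30.0, 0.0]),
])

# withStd=True scales each feature to unit variance; withMean=False avoids densifying sparse data.
scaler = StandardScaler(withMean=False, withStd=True)
scaler_model = scaler.fit(features)
scaled_features = scaler_model.transform(features)

print(scaled_features.collect())
```

The scaled RDD can then be zipped back together with the labels to build `LabeledPoint` objects for training.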
Now that you have created regression and classification models with the Spark Ml
**Model consumption:** To learn how to score and evaluate the classification and regression models created in this topic, see [Score and evaluate Spark-built machine learning models](spark-model-consumption.md).
-**Cross-validation and hyperparameter sweeping**: See [Advanced data exploration and modeling with Spark](spark-advanced-data-exploration-modeling.md) on how models can be trained using cross-validation and hyper-parameter sweeping
+**Cross-validation and hyperparameter sweeping**: See [Advanced data exploration and modeling with Spark](spark-advanced-data-exploration-modeling.md) on how models can be trained using cross-validation and hyper-parameter sweeping
machine-learning Spark Model Consumption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/spark-model-consumption.md
Time taken to execute above cell: 5.37 seconds
### Create RDD objects with feature arrays for input into models This section contains code that shows how to index categorical text data as an RDD object and one-hot encode it so it can be used to train and test MLlib logistic regression and tree-based models. The indexed data is stored in [Resilient Distributed Dataset (RDD)](https://spark.apache.org/docs/latest/api/java/org/apache/spark/rdd/RDD.html) objects. The RDDs are the basic abstraction in Spark. An RDD object represents an immutable, partitioned collection of elements that can be operated on in parallel with Spark.
-It also contains code that shows how to scale data with the `StandardScalar` provided by MLlib for use in linear regression with Stochastic Gradient Descent (SGD), a popular algorithm for training a wide range of machine learning models. The [StandardScaler](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.feature.StandardScaler) is used to scale the features to unit variance. Feature scaling, also known as data normalization, insures that features with widely disbursed values are not given excessive weigh in the objective function.
+It also contains code that shows how to scale data with the `StandardScaler` provided by MLlib for use in linear regression with Stochastic Gradient Descent (SGD), a popular algorithm for training a wide range of machine learning models. The [StandardScaler](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.mllib.feature.StandardScaler.html#pyspark.mllib.feature.StandardScaler) is used to scale the features to unit variance. Feature scaling, also known as data normalization, ensures that features with widely dispersed values are not given excessive weight in the objective function.
```python
# CREATE RDD OBJECTS WITH FEATURE ARRAYS FOR INPUT INTO MODELS
marketplace Azure Partner Customer Usage Attribution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-partner-customer-usage-attribution.md
Previously updated : 03/09/2021 Last updated : 03/19/2021
There are secondary use cases for customer usage attribution outside of the comm
>[!IMPORTANT] >- Customer usage attribution is not intended to track the work of systems integrators, managed service providers, or tools designed primarily to deploy and manage Azure resources. >- Customer usage attribution is for new deployments and does not support tracking resources that have already been deployed.
->- Not all Azure services are compatible with customer usage attribution. Azure Kubernetes Services (AKS) and VM Scale Sets have known issues that cause under-reporting of usage.
+>- Not all Azure services are compatible with customer usage attribution. Azure Kubernetes Service (AKS), VM Scale Sets, and Azure Batch have known issues that cause under-reporting of usage.
## Commercial marketplace Azure apps
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/policy-reference.md
+
+ Title: Built-in policy definitions for Azure Migrate
+description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 03/17/2021++++++
+# Azure Policy built-in definitions for Azure Migrate
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure Migrate. For additional Azure Policy built-ins for other services, see
+[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Azure Migrate
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
postgresql Howto Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-double-encryption.md
Title: Infrastructure double encryption - Azure portal - Azure Database for PostgreSQL description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for PostgreSQL.--++ Previously updated : 06/30/2020 Last updated : 03/14/2021 # Infrastructure double encryption for Azure Database for PostgreSQL
Learn how to set up and manage Infrastructure double encryption for
## Create an Azure Database for PostgreSQL server with Infrastructure Double encryption - Portal
-Follow these steps to create an Azure Database for MySQL server with Infrastructure double encryption from Azure portal:
+Follow these steps to create an Azure Database for PostgreSQL server with Infrastructure double encryption from Azure portal:
1. Select **Create a resource** (+) in the upper-left corner of the portal.
Follow these steps to create an Azure Database for MySQL server with Infrastruct
## Create an Azure Database for PostgreSQL server with Infrastructure Double encryption - CLI
-Follow these steps to create an Azure Database for MySQL server with Infrastructure double encryption from CLI:
+Follow these steps to create an Azure Database for PostgreSQL server with Infrastructure double encryption from CLI:
This example creates a resource group named `myresourcegroup` in the `westus` location.
search Cognitive Search Quickstart Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-quickstart-blob.md
Previously updated : 01/12/2021 Last updated : 03/21/2021 # Quickstart: Create an Azure Cognitive Search cognitive skillset in the Azure portal
-A skillset is an AI-based feature that uses deep learning models to extract information and structure from large undifferentiated text or image files, making the content both indexable and searchable in Azure Cognitive Search.
+This quickstart demonstrates skillset support in the portal, showing how Optical Character Recognition (OCR) and entity recognition can be used to create searchable text content from images and application files.
-In this quickstart, you'll combine services and data in the Azure cloud to create the skillset. Once everything is in place, you'll run the **Import data** wizard in the Azure portal to pull it all together. The end result is a searchable index populated with data created by AI processing that you can query in the portal ([Search explorer](search-explorer.md)).
+To prepare, you'll create a few resources and upload sample images and application content files. Once everything is in place, you'll run the **Import data** wizard in the Azure portal to pull it all together. The end result is a searchable index populated with data created by AI processing that you can query in the portal ([Search explorer](search-explorer.md)).
+
+Prefer to start with code? See [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md) or [Tutorial: Use .NET and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob-dotnet.md) instead.
## Prerequisites
In the following steps, set up a blob container in Azure Storage to store hetero
+ Choose the same region as Azure Cognitive Search to avoid bandwidth charges.
- + Choose the StorageV2 (general purpose V2) account type if you want to try out the knowledge store feature later, in another walkthrough. Otherwise, choose any type.
+ + Choose the StorageV2 (general purpose V2) account type.
1. Open the Blob services pages and create a container. You can use the default public access level.
-1. In container, click **Upload** to upload the sample files you downloaded in the first step. Notice that you have a wide range of content types, including images and application files that are not full text searchable in their native formats.
+1. In Container, click **Upload** to upload the sample files you downloaded in the first step. Notice that you have a wide range of content types, including images and application files that are not full text searchable in their native formats.
:::image type="content" source="media/cognitive-search-quickstart-blob/sample-data.png" alt-text="Source files in Azure blob storage" border="false":::
search Cognitive Search Tutorial Blob Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-tutorial-blob-dotnet.md
Last updated 01/23/2021
-# Tutorial: AI-generated searchable content from Azure blobs using the .NET SDK
+# Tutorial: Use .NET and AI to generate searchable content from Azure blobs
If you have unstructured text or images in Azure Blob storage, an [AI enrichment pipeline](cognitive-search-concept-intro.md) can extract information and create new content for full-text search or knowledge mining scenarios.
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-ranking.md
Inputs to summarization are the long string from the preparation phase. From tha
Output is a [semantic caption](semantic-how-to-query-request.md), in plain text and with highlights. The caption is smaller than the long string, usually fewer than 200 words per document, and it's considered the most representative of the document.
-A [semantic answer](semantic-answers.md) will also be returned if you specified the "answers" parameter, if the query was posed as a question, and if a passage can be found in the long string that looks like a plausible answer to the question.
+A [semantic answer](semantic-answers.md) will also be returned if you specified the "answers" parameter, if the query was posed as a question, and if a passage can be found in the long string that is likely to provide an answer to the question.
## Scoring and ranking
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-search-overview.md
> [!IMPORTANT] > Semantic search is in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), and are not guaranteed to have the same implementation at general availability. These features are billable. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
-Semantic search is a collection of query-related features that support a higher-quality, more natural query experience.
+Semantic search is a collection of query-related capabilities that add semantic relevance and language understanding to search results. *Semantic ranking* looks for context and relatedness among terms, elevating matches that make more sense given the query. Language understanding finds *captions* and *answers* within your content that summarize the matching document or answer a question, which can then be rendered on a search results page for a more productive search experience.
-These capabilities include a semantic reranking of search results, as well as caption and answer extraction, with semantic highlighting over relevant terms and phrases. State-of-the-art pretrained models are used for extraction and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the [default similarity scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms). Using those results as the document corpus, semantic ranking re-scores those results based on the semantic strength of the match.
+State-of-the-art pretrained models are used for summarization and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the [default similarity scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms). Using those results as the document corpus, semantic ranking re-scores those results based on the semantic strength of the match.
The underlying technology is from Bing and Microsoft Research, and integrated into the Cognitive Search infrastructure as an add-on feature. For more information about the research and AI investments backing semantic search, see [How AI from Bing is powering Azure Cognitive Search (Microsoft Research Blog)](https://www.microsoft.com/research/blog/the-science-behind-semantic-search-how-ai-from-bing-is-powering-azure-cognitive-search/).
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 03/10/2021 Last updated : 03/18/2021
If you're looking for the latest release notes, you'll find them in the [What's
## Planned changes -- [Recommendations from AWS will be released for general availability (GA)](#recommendations-from-aws-will-be-released-for-general-availability-ga)-- [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated)-- [Enhancements to SQL data classification recommendation](#enhancements-to-sql-data-classification-recommendation)-- [Deprecation of 11 Azure Defender alerts](#deprecation-of-11-azure-defender-alerts)--
-### Recommendations from AWS will be released for general availability (GA)
-
-**Estimated date for change:** April 2021
-
-Azure Security Center protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
-
-The recommendations coming from AWS Security Hub have been in preview since the cloud connectors were introduced. Recommendations flagged as **Preview** aren't included in the calculations of your secure score, but should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score.
-
-With this change, two sets of AWS recommendations will move to GA:
--- [Security Hub's PCI DSS controls](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-pci-controls.html)-- [Security Hub's CIS AWS Foundations Benchmark controls](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls.html)-
-When these are GA and the assessments run on your AWS resources, the results will impact your combined secure score for all your multi and hybrid cloud resources.
+| Planned change | Estimated date for change |
+|--||
+| [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated) | March 2021 |
+| [Deprecation of 11 Azure Defender alerts](#deprecation-of-11-azure-defender-alerts) | March 2021 |
+| [21 recommendations moving between security controls](#21-recommendations-moving-between-security-controls) | April 2021 |
+| [Two further recommendations from "Apply system updates" security control being deprecated](#two-further-recommendations-from-apply-system-updates-security-control-being-deprecated) | April 2021 |
+| [Recommendations from AWS will be released for general availability (GA)](#recommendations-from-aws-will-be-released-for-general-availability-ga) | April 2021 |
+| [Enhancements to SQL data classification recommendation](#enhancements-to-sql-data-classification-recommendation) | Q2 2021 |
+| | |
### Two recommendations from "Apply system updates" security control being deprecated
We recommend checking your continuous export and workflow automation configurati
Learn more about these recommendations in the [security recommendations reference page](recommendations-reference.md). -
-### Enhancements to SQL data classification recommendation
-
-**Estimated date for change:** Q2 2021
-
-The recommendation **Sensitive data in your SQL databases should be classified** in the **Apply data classification** security control will be replaced with a new version that's better aligned with Microsoft's data classification strategy. As a result the recommendation's ID will also change (currently, it's b0df6f56-862d-4730-8597-38c0fd4ebd59).
-- ### Deprecation of 11 Azure Defender alerts **Estimated date for change:** March 2021
Next month, the eleven Azure Defender alerts listed below will be deprecated.
+
+### 21 recommendations moving between security controls
+
+**Estimated date for change:** April 2021
+
+The following recommendations are being moved to a different security control. Security controls are logical groups of related security recommendations that reflect your vulnerable attack surfaces. This move ensures that each of these recommendations is in the most appropriate control to meet its objective.
+
+To learn which recommendations are in each security control, see Security controls and their recommendations.
+
+|Recommendation |Change and impact |
+|||
+|Vulnerability assessment should be enabled on your SQL servers<br>Vulnerability assessment should be enabled on your SQL managed instances<br>Vulnerabilities on your SQL databases should be remediated new<br>Vulnerabilities on your SQL databases in VMs should be remediated |Moving from Remediate vulnerabilities (worth 6 points)<br>to Remediate security configurations (worth 4 points).<br>Depending on your environment, these recommendations will have a reduced impact on your score.|
+|There should be more than one owner assigned to your subscription<br>Automation account variables should be encrypted<br>IoT Devices - Auditd process stopped sending events<br>IoT Devices - Operating system baseline validation failure<br>IoT Devices - TLS cipher suite upgrade needed<br>IoT Devices - Open Ports On Device<br>IoT Devices - Permissive firewall policy in one of the chains was found<br>IoT Devices - Permissive firewall rule in the input chain was found<br>IoT Devices - Permissive firewall rule in the output chain was found<br>Diagnostic logs in IoT Hub should be enabled<br>IoT Devices - Agent sending underutilized messages<br>IoT Devices - Default IP Filter Policy should be Deny<br>IoT Devices - IP Filter rule large IP range<br>IoT Devices - Agent message intervals and size should be adjusted<br>IoT Devices - Identical Authentication Credentials<br>IoT Devices - Audited process stopped sending events<br>IoT Devices - Operating system (OS) baseline configuration should be fixed|Moving to **Implement security best practices**.<br>When a recommendation moves to the Implement security best practices security control, which is worth no points, the recommendation no longer affects your secure score.|
+|||
++
+### Two further recommendations from "Apply system updates" security control being deprecated
+
+**Estimated date for change:** April 2021
+
+The following two recommendations are being deprecated:
+
+- **OS version should be updated for your cloud service roles** - By default, Azure periodically updates your guest OS to the latest supported image within the OS family that you've specified in your service configuration (.cscfg), such as Windows Server 2016.
+- **Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version** - This recommendation's evaluations aren't as wide-ranging as we'd like them to be. The current version of this recommendation will eventually be replaced with an enhanced version that's better aligned with our customers' security needs.
++
+### Recommendations from AWS will be released for general availability (GA)
+
+**Estimated date for change:** April 2021
+
+Azure Security Center protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+
+The recommendations coming from AWS Security Hub have been in preview since the cloud connectors were introduced. Recommendations flagged as **Preview** aren't included in the calculations of your secure score, but should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score.
+
+With this change, two sets of AWS recommendations will move to GA:
+
+- [Security Hub's PCI DSS controls](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-pci-controls.html)
+- [Security Hub's CIS AWS Foundations Benchmark controls](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls.html)
+
+When these are GA and the assessments run on your AWS resources, the results will impact your combined secure score for all your multi and hybrid cloud resources.
+++
+### Enhancements to SQL data classification recommendation
+
+**Estimated date for change:** Q2 2021
+
+The recommendation **Sensitive data in your SQL databases should be classified** in the **Apply data classification** security control will be replaced with a new version that's better aligned with Microsoft's data classification strategy. As a result the recommendation's ID will also change (currently, it's b0df6f56-862d-4730-8597-38c0fd4ebd59).
+++ ## Next steps
-For all recent changes to the product, see [What's new in Azure Security Center?](release-notes.md).
+For all recent changes to the product, see [What's new in Azure Security Center?](release-notes.md).
sentinel Tutorial Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-built-in.md
ms.devlang: na
na Previously updated : 07/06/2020 Last updated : 03/19/2021
The following template types are available:
These templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type. > [!IMPORTANT]
- > The machine learning behavioral analytics rule templates are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ > - The machine learning behavioral analytics rule templates are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ >
+ > - By creating and enabling any rules based on the ML behavior analytics templates, **you give Microsoft permission to copy ingested data outside of your Azure Sentinel workspace's geography** as necessary for processing by the machine learning engines and models.
- **Scheduled**
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-access-control.md
To set file and directory level permissions, see any of the following articles:
|.NET |[Use .NET to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-dotnet.md)| |Java|[Use Java to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-java.md)| |Python|[Use Python to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-python.md)|
+|JavaScript (Node.js)|[Use the JavaScript SDK in Node.js to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-javascript.md)|
|PowerShell|[Use PowerShell to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-powershell.md)| |Azure CLI|[Use Azure CLI to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-cli.md)| |REST API |[Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update)|
storage Data Lake Storage Directory File Acl Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-directory-file-acl-javascript.md
Title: Use JavaScript to manage data in Azure Data Lake Storage Gen2
+ Title: Use JavaScript (Node.js) to manage data in Azure Data Lake Storage Gen2
description: Use Azure Storage Data Lake client library for JavaScript to manage directories and files in storage accounts that has hierarchical namespace enabled. Previously updated : 02/17/2021 Last updated : 03/19/2021
-# Use JavaScript to manage directories and files in Azure Data Lake Storage Gen2
+# Use JavaScript SDK in Node.js to manage directories and files in Azure Data Lake Storage Gen2
-This article shows you how to use JavaScript to create and manage directories and files in storage accounts that have a hierarchical namespace.
+This article shows you how to use Node.js to create and manage directories and files in storage accounts that have a hierarchical namespace.
-To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use JavaScript to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-javascript.md).
+To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-javascript.md).
[Package (Node Package Manager)](https://www.npmjs.com/package/@azure/storage-file-datalake) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-file-datalake/samples) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
npm install @azure/storage-file-datalake
Import the `storage-file-datalake` package by placing this statement at the top of your code file.

```javascript
-const AzureStorageDataLake = require("@azure/storage-file-datalake");
+const {
+  DataLakeServiceClient,
+  StorageSharedKeyCredential
+} = require("@azure/storage-file-datalake");
```

## Connect to the account
virtual-desktop Key Distribution Center Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/key-distribution-center-proxy.md
Title: Set up Kerberos Key Distribution Center proxy Windows Virtual Desktop - A
description: How to set up a Windows Virtual Desktop host pool to use a Kerberos Key Distribution Center proxy. Previously updated : 01/30/2021 Last updated : 03/20/2021
> This preview version is provided without a service level agreement, and we don't recommend using it for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article will show you how to configure a Kerberos Key Distribution Center (KDC) proxy (preview) for your host pool. This proxy lets organizations authenticate with Kerberos outside of their enterprise boundaries. For example, you can use the KDC proxy to enable Smartcard authentication for external clients.
+Security-conscious customers, such as financial or government organizations, often sign in using Smartcards. Smartcards make deployments more secure by requiring multifactor authentication (MFA). However, for the RDP portion of a Windows Virtual Desktop session, Smartcards require a direct connection, or "line of sight," with an Active Directory (AD) domain controller for Kerberos authentication. Without this direct connection, users can't automatically sign in to the organization's network from remote connections. Users in a Windows Virtual Desktop deployment can use the KDC proxy service to proxy this authentication traffic and sign in remotely. The KDC proxy allows for authentication for the Remote Desktop Protocol of a Windows Virtual Desktop session, letting the user sign in securely. This makes working from home much easier, and allows for certain disaster recovery scenarios to run more smoothly.
+
+However, setting up the KDC proxy typically involves assigning the Windows Server Gateway role in Windows Server 2016 or later. How do you use a Remote Desktop Services role to sign in to Windows Virtual Desktop? To answer that, let's take a quick look at the components.
+
+There are two components to the Windows Virtual Desktop service that need to be authenticated:
+
+- The feed in the Windows Virtual Desktop client that gives users a list of available desktops or applications they have access to. This authentication process happens in Azure Active Directory, which means this component isn't the focus of this article.
+- The RDP session that results from a user selecting one of those available resources. This component uses Kerberos authentication and requires a KDC proxy for remote users.
+
+This article will show you how to configure the feed in the Windows Virtual Desktop client in the Azure portal. If you want to learn how to configure the RD Gateway role, see [Deploy the RD Gateway role](/windows-server/remote/rd-gateway-role).
+
+## Requirements
+
+To configure a Windows Virtual Desktop session host with a KDC proxy, you'll need the following things:
+
+- Access to the Azure portal and an Azure administrator account.
+- The remote client machines must be running either Windows 10 or Windows 7 and have the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop) installed.
+- You must have a KDC proxy already installed on your machine. To learn how to do that, see [Set up the RD Gateway role for Windows Virtual Desktop](rd-gateway-role.md).
+- The machine's OS must be Windows Server 2016 or later.
+
+Once you've made sure you meet these requirements, you're ready to get started.
## How to configure the KDC proxy
To configure the KDC proxy:
3. Select the host pool you want to enable the KDC proxy for, then select **RDP Properties**. > [!div class="mx-imgBorder"]
- > ![A screenshot of the Azure portal page showing a user selecting Host pools, then the name of the example host pool, then RDP properties.](media/rdp-properties.png)
+ > ![A screenshot of the Azure portal page showing a user selecting host pools, then the name of the example host pool, then RDP properties.](media/rdp-properties.png)
4. Select the **Advanced** tab, then enter a value in the following format without spaces:
+
> kdcproxyname:s:\<fqdn\>
+
> [!div class="mx-imgBorder"] > ![A screenshot showing the Advanced tab selected, with the value entered as described in step 4.](media/advanced-tab-selected.png) 5. Select **Save**.
-6. The selected host pool should now begin to issue RDP connection files with the kdcproxyname field that you entered included.
+6. The selected host pool should now begin to issue RDP connection files that include the kdcproxyname value you entered in step 4.
## Next steps
-The RDGateway role in Remote Desktop Services includes a KDC proxy service. See [Deploy the RD Gateway role in Windows Virtual Desktop](rd-gateway-role.md) for how to set one up to be a target for Windows Virtual Desktop.
+To learn how to manage the Remote Desktop Services side of the KDC proxy and assign the RD Gateway role, see [Deploy the RD Gateway role](/windows-server/remote/rd-gateway-role).
+
+If you're interested in scaling your KDC proxy servers, learn how to set up high availability for KDC proxy at [Add high availability to the RD Web and Gateway web front](/windows-server/remote/remote-desktop-services/rds-rdweb-gateway-ha).
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
Title: What's new in Windows Virtual Desktop? - Azure
description: New features and product updates for Windows Virtual Desktop. Previously updated : 02/23/2021 Last updated : 03/20/2021
Check out these articles to learn about updates for our clients for Windows Virt
- [Android](/windows-server/remote/remote-desktop-services/clients/android-whatsnew) - [Web](/windows-server/remote/remote-desktop-services/clients/web-client-whatsnew)
+## Windows Virtual Desktop Agent updates
+
+The Windows Virtual Desktop agent updates at least once per month.
+
+Here's what's changed in Windows Virtual Desktop Agent:
+
+- Version 1.0.2800.2800: This update was released in March 2021 and fixed a reverse connection issue.
+- Version 1.0.2800.2700: This update was released in February 2021 and fixed an access denied orchestration issue.
+ ## FSLogix updates Curious about the latest updates for FSLogix? Check out [What's new at FSLogix](/fslogix/whats-new).
virtual-machine-scale-sets Virtual Machine Scale Sets Design Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md
A scale set configured with user-managed storage accounts is currently limited t
A scale set built on a custom image (one built by you) can have a capacity of up to 600 VMs when configured with Azure Managed disks. If the scale set is configured with user-managed storage accounts, it must create all OS disk VHDs within one storage account. As a result, the maximum recommended number of VMs in a scale set built on a custom image and user-managed storage is 20. If you turn off overprovisioning, you can go up to 40.
-For more VMs than these limits allow, you need to deploy multiple scale sets as shown in [this template](https://github.com/Azure/azure-quickstart-templates/tree/master/301-custom-images-at-scale).
+For more VMs than these limits allow, you need to deploy multiple scale sets as shown in [this template](https://azure.microsoft.com/resources/templates/301-custom-images-at-scale/).
virtual-machines H Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/h-series.md
Title: H-series - Azure Virtual Machines description: Specifications for the H-series VMs.-+
H-series VMs are optimized for applications driven by high CPU frequencies or la
<sup>1</sup> For MPI applications, dedicated RDMA backend network is enabled by FDR InfiniBand network. - > [!NOTE]
-> Among the [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), the H-series are not-SR-IOV enabled. Therefore, the supported [VM Images](./workloads/hpc/configure.md#vm-images), [InfiniBand driver](./workloads/hpc/enable-infiniband.md) requirements and supported [MPI libraries](./workloads/hpc/setup-mpi.md) are different from the SR-IOV enabled VMs.
+> Among the [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), the H-series are not SR-IOV enabled. Therefore, the supported [VM Images](./workloads/hpc/configure.md#vm-images), [InfiniBand driver](./workloads/hpc/enable-infiniband.md) requirements and supported [MPI libraries](./workloads/hpc/setup-mpi.md) are different from the SR-IOV enabled VMs.
+
+## Software specifications
+
+| Software Specifications | H-series VM |
+|--|--|
+| Max MPI Job Size | 4800 cores (300 VMs in a single virtual machine scale set with singlePlacementGroup=true) |
+| MPI Support | Intel MPI 5.x, MS-MPI |
+| OS Support for non-SRIOV RDMA | CentOS/RHEL 6.5 - 7.4, SLES 12 SP4+, WinServer 2012 - 2016 |
+| Orchestrator Support | CycleCloud, Batch, AKS |
++ ## Other sizes
H-series VMs are optimized for applications driven by high CPU frequencies or la
## Next steps -- Learn more about [configuring your VMs](./workloads/hpc/configure.md), [enabling InfiniBand](./workloads/hpc/enable-infiniband.md), [setting up MPI](./workloads/hpc/setup-mpi.md) and optimizing HPC applications for Azure at [HPC Workloads](./workloads/hpc/overview.md).-- Read about the latest announcements and some HPC examples and results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
+- Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/). - Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Hb Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hb-series.md
Previously updated : 03/08/2021 Last updated : 03/19/2021
HB-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. These VMs are connecte
| | | | | | | | | | | | | | | Standard_HB60rs | 60 | AMD EPYC 7551 | 228 | 263 | 2.0 | 2.55 | 2.55 | 100 | All | 700 | 4 | 8 |
-Learn more about the underlying [architecture](./workloads/hpc/hb-series-overview.md), and expected [performance](./workloads/hpc/hb-series-performance.md) of the HB-series VM.
+Learn more about the:
+- [architecture and VM topology](./workloads/hpc/hb-series-overview.md),
+- supported [software stack](./workloads/hpc/hb-series-overview.md#software-specifications) including supported OS, and
+- expected [performance](./workloads/hpc/hb-series-performance.md) of the HB-series VM.
[!INCLUDE [hpc-include.md](./workloads/hpc/includes/hpc-include.md)]
virtual-machines Hbv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hbv2-series.md
HBv2-series VMs feature 200 Gb/sec Mellanox HDR InfiniBand. These VMs are connec
| | | | | | | | | | | | | | | Standard_HB120rs_v2 | 120 | AMD EPYC 7V12 | 456 | 350 | 2.45 | 3.1 | 3.3 | 200 | All | 480 + 960 | 8 | 8 |
-Learn more about:
-- Underlying [architecture and VM topology](./workloads/hpc/hbv2-series-overview.md)-- [Supported software stack](./workloads/hpc/hbv2-series-overview.md#software-specifications) including supported OS-- Expected [performance](./workloads/hpc/hbv2-performance.md) of the HBv2-series VM.
+Learn more about the:
+- [architecture and VM topology](./workloads/hpc/hbv2-series-overview.md),
+- supported [software stack](./workloads/hpc/hbv2-series-overview.md#software-specifications) including supported OS, and
+- expected [performance](./workloads/hpc/hbv2-performance.md) of the HBv2-series VM.
[!INCLUDE [hpc-include](./workloads/hpc/includes/hpc-include.md)]
virtual-machines Hbv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hbv3-series.md
HBv3-series VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7003-series (Milan) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth, up to 32 MB of L3 cache per core, up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.675 GHz.
-All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. The HDR InfiniBand fabric also supports Adaptive Routing and the Dynamic Connected Transport (DCT, in additional to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage strongly recommended.
+All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. The HDR InfiniBand fabric also supports Adaptive Routing and the Dynamic Connected Transport (DCT, in addition to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage is strongly recommended.
[Premium Storage](premium-storage-performance.md): Supported<br> [Premium Storage caching](premium-storage-performance.md): Supported<br>
All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Coming soon<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>-
+<br>
|Size |vCPU |Processor |Memory (GiB) |Memory bandwidth GB/s |Base CPU frequency (GHz) |All-cores frequency (GHz, peak) |Single-core frequency (GHz, peak) |RDMA performance (Gb/s) |MPI support |Temp storage (GiB) |Max data disks |Max Ethernet vNICs | |-|-|-|-|-|-|-|-|-|-|-|-|-|
All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to
|Standard_HB120-32rs_v3 |32 |AMD EPYC 7V13 |448 |350 |2.45 |3.1 |3.675 |200 |All |2 * 960 |32 |8 | |Standard_HB120-16rs_v3 |16 |AMD EPYC 7V13 |448 |350 |2.45 |3.1 |3.675 |200 |All |2 * 960 |32 |8 | -
-Learn more about:
-- Underlying [architecture and VM topology](./workloads/hpc/hbv3-series-overview.md)-- [Supported software stack](./workloads/hpc/hbv3-series-overview.md#software-specifications) including supported OS-- Expected [performance](./workloads/hpc/hbv3-performance.md) of the HBv3-series VM.
+Learn more about the:
+- [architecture and VM topology](./workloads/hpc/hbv3-series-overview.md),
+- supported [software stack](./workloads/hpc/hbv3-series-overview.md#software-specifications) including supported OS, and
+- expected [performance](./workloads/hpc/hbv3-performance.md) of the HBv3-series VM.
[!INCLUDE [hpc-include](./workloads/hpc/includes/hpc-include.md)]
Learn more about:
- Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute). - For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).-- Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
+- Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Hc Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hc-series.md
HC-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. These VMs are connecte
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-hbv2-and-ndv2/ba-p/2067965) about performance and potential issues)<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>- <br> | Size | vCPU | Processor | Memory (GiB) | Memory bandwidth GB/s | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs | | | | | | | | | | | | | | | | Standard_HC44rs | 44 | Intel Xeon Platinum 8168 | 352 | 191 | 2.7 | 3.4 | 3.7 | 100 | All | 700 | 4 | 8 |
-Learn more about the underlying [architecture, VM topology](./workloads/hpc/hc-series-overview.md) and expected [performance](./workloads/hpc/hc-series-performance.md) of the HC-series VM.
+Learn more about the:
+- [architecture and VM topology](./workloads/hpc/hc-series-overview.md),
+- supported [software stack](./workloads/hpc/hc-series-overview.md#software-specifications) including supported OS, and
+- expected [performance](./workloads/hpc/hc-series-performance.md) of the HC-series VM.
[!INCLUDE [hpc-include](./workloads/hpc/includes/hpc-include.md)]
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-hpc.md
Previously updated : 12/09/2020 Last updated : 03/19/2021 # High performance computing VM sizes
-Azure H-series virtual machines (VMs) are designed to deliver leadership-class performance, scalability, and cost efficiency for a variety of real-world HPC workloads.
+Azure H-series virtual machines (VMs) are designed to deliver leadership-class performance, scalability, and cost efficiency for various real-world HPC workloads.
+
+[HBv3-series](hbv3-series.md) VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7003-series (Milan) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth, up to 32 MB of L3 cache per core, up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.675 GHz.
+
+All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. The HDR InfiniBand fabric also supports Adaptive Routing and the Dynamic Connected Transport (DCT, in addition to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage is strongly recommended.
[HBv2-series](hbv2-series.md) VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation. HBv2 VMs feature 120 AMD EPYC 7742 processor cores, 4 GB of RAM per CPU core, and no simultaneous multithreading. Each HBv2 VM provides up to 340 GB/sec of memory bandwidth, and up to 4 teraFLOPS of FP64 compute.
-HBv2 VMs feature 200 Gb/sec Mellanox HDR InfiniBand, while both HB and HC-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. Each of these VM types are connected in a non-blocking fat tree for optimized and consistent RDMA performance. HBv2 VMs support Adaptive Routing and the Dynamic Connected Transport (DCT, in additional to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and usage of them is strongly recommended.
+HBv2 VMs feature 200 Gb/sec Mellanox HDR InfiniBand, while both HB and HC-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. Each of these VM types is connected in a non-blocking fat tree for optimized and consistent RDMA performance. HBv2 VMs support Adaptive Routing and the Dynamic Connected Transport (DCT, in addition to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage is strongly recommended.
[HB-series](hb-series.md) VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics, explicit finite element analysis, and weather modeling. HB VMs feature 60 AMD EPYC 7551 processor cores, 4 GB of RAM per CPU core, and no hyperthreading. The AMD EPYC platform provides more than 260 GB/sec of memory bandwidth.
HBv2 VMs feature 200 Gb/sec Mellanox HDR InfiniBand, while both HB and HC-series
[H-series](h-series.md) VMs are optimized for applications driven by high CPU frequencies or large memory per core requirements. H-series VMs feature 8 or 16 Intel Xeon E5 2667 v3 processor cores, 7 or 14 GB of RAM per CPU core, and no hyperthreading. H-series features 56 Gb/sec Mellanox FDR InfiniBand in a non-blocking fat tree configuration for consistent RDMA performance. H-series VMs support Intel MPI 5.x and MS-MPI. > [!NOTE]
-> All HBv2, HB, and HC-series VMs have exclusive access to the physical servers. There is only 1 VM per physical server and there is no shared multi-tenancy with any other VMs for these VM sizes.
+> All HBv3, HBv2, HB, and HC-series VMs have exclusive access to the physical servers. There is only 1 VM per physical server and there is no shared multi-tenancy with any other VMs for these VM sizes.
> [!NOTE]
-> The [A8 ΓÇô A11 VMs](./sizes-previous-gen.md#a-seriescompute-intensive-instances) are planned for retirement on 3/2021. For more information, see [HPC Migration Guide](https://azure.microsoft.com/resources/hpc-migration-guide/).
+> The [A8 – A11 VMs](./sizes-previous-gen.md#a-seriescompute-intensive-instances) are retired as of 3/2021. New VM deployments of these sizes are no longer possible. If you have existing VMs, refer to the emailed notifications for next steps, including migrating to other VM sizes as described in the [HPC Migration Guide](https://azure.microsoft.com/resources/hpc-migration-guide/).
## RDMA-capable instances
-Most of the HPC VM sizes (HBv2, HB, HC, H16r, H16mr, A8 and A9) feature a network interface for remote direct memory access (RDMA) connectivity. Selected [N-series](./nc-series.md) sizes designated with 'r' (ND40rs_v2, ND24rs, NC24rs_v3, NC24rs_v2 and NC24r) are also RDMA-capable. This interface is in addition to the standard Azure Ethernet network interface available in the other VM sizes.
+Most of the HPC VM sizes feature a network interface for remote direct memory access (RDMA) connectivity. Selected [N-series](./nc-series.md) sizes designated with 'r' are also RDMA-capable. This interface is in addition to the standard Azure Ethernet network interface available in the other VM sizes.
-This interface allows the RDMA-capable instances to communicate over an InfiniBand (IB) network, operating at HDR rates for HBv2, EDR rates for HB, HC, NDv2, FDR rates for H16r, H16mr, and other RDMA-capable N-series virtual machines, and QDR rates for A8 and A9 VMs. These RDMA capabilities can boost the scalability and performance of certain Message Passing Interface (MPI) applications.
+This secondary interface allows the RDMA-capable instances to communicate over an InfiniBand (IB) network, operating at HDR rates for HBv3 and HBv2, EDR rates for HB, HC, and NDv2, and FDR rates for H16r, H16mr, and other RDMA-capable N-series virtual machines. These RDMA capabilities can boost the scalability and performance of Message Passing Interface (MPI) based applications.
> [!NOTE]
-> In Azure HPC, there are two classes of VMs depending on whether they are SR-IOV enabled for InfiniBand. Currently, almost all the newer generation, RDMA-capable or InfiniBand enabled VMs on Azure are SR-IOV enabled except for H16r, H16mr, NC24r, A8, A9.
+> **SR-IOV support**: In Azure HPC, currently there are two classes of VMs depending on whether they are SR-IOV enabled for InfiniBand. Currently, almost all the newer generation, RDMA-capable or InfiniBand enabled VMs on Azure are SR-IOV enabled except for H16r, H16mr, and NC24r.
> RDMA is only enabled over the InfiniBand (IB) network and is supported for all RDMA-capable VMs. > IP over IB is only supported on the SR-IOV enabled VMs. > RDMA is not enabled over the Ethernet network. -- **Operating System** - Linux is very well supported for HPC VMs; distros such as CentOS, RHEL, Ubuntu, SUSE are commonly used. Regarding Windows support, Windows Server 2016 and newer versions are supported on all the HPC series VMs. Windows Server 2012 R2, Windows Server 2012 are also supported on the non-SR-IOV enabled VMs (H16r, H16mr, A8 and A9). Note that [Windows Server 2012 R2 is not supported on HBv2 and other VMs with more than 64 (virtual or physical) cores](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows). See [VM Images](./workloads/hpc/configure.md) for a list of supported VM Images on the Marketplace and how they can be configured appropriately.--- **InfiniBand and Drivers** - On InfiniBand enabled VMs, the appropriate drivers are required to enable RDMA. On Linux, for both SR-IOV and non-SR-IOV enabled VMs, the CentOS-HPC VM images in the Marketplace come pre-configured with the appropriate drivers. The Ubuntu VM images can be configured with the right drivers using the [instructions here](https://techcommunity.microsoft.com/t5/azure-compute/configuring-infiniband-for-ubuntu-hpc-and-gpu-vms/ba-p/1221351). See [Configure and Optimize VMs for Linux OS](./workloads/hpc/configure.md) for more details on ready-to-use VM Linux OS images.-
- On Linux, the [InfiniBandDriverLinux VM extension](./extensions/hpc-compute-infiniband-linux.md) can be used to install the Mellanox OFED drivers and enable InfiniBand on the SR-IOV enabled H- and N-series VMs. Learn more about enabling InfiniBand on RDMA-capable VMs at [HPC Workloads](./workloads/hpc/enable-infiniband.md).
+- **Operating System** - Linux distributions such as CentOS, RHEL, Ubuntu, and SUSE are commonly used. Windows Server 2016 and newer versions are supported on all the HPC series VMs. Windows Server 2012 R2 and Windows Server 2012 are also supported on the non-SR-IOV enabled VMs. Note that [Windows Server 2012 R2 is not supported on HBv2 and later VM sizes with more than 64 (virtual or physical) cores](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows). See [VM Images](./workloads/hpc/configure.md) for a list of supported VM Images on the Marketplace and how they can be configured appropriately. The respective VM size pages also list the supported software stack.
- On Windows, the [InfiniBandDriverWindows VM extension](./extensions/hpc-compute-infiniband-windows.md) installs Windows Network Direct drivers (on non-SR-IOV VMs) or Mellanox OFED drivers (on SR-IOV VMs) for RDMA connectivity. In certain deployments of A8 and A9 instances, the HpcVmDrivers extension is added automatically. Note that the HpcVmDrivers VM extension is being deprecated; it will not be updated.
+- **InfiniBand and Drivers** - On InfiniBand enabled VMs, the appropriate drivers are required to enable RDMA. See [VM Images](./workloads/hpc/configure.md) for a list of supported VM Images on the Marketplace and how they can be configured appropriately. Also see [enabling InfiniBand](./workloads/hpc/enable-infiniband.md) to learn about VM extensions or manual installation of InfiniBand drivers.
- To add the VM extension to a VM, you can use [Azure PowerShell](/powershell/azure/) cmdlets. For more information, see [Virtual machine extensions and features](./extensions/overview.md). You can also work with extensions for VMs deployed in the [classic deployment model](/previous-versions/azure/virtual-machines/windows/classic/agents-and-extensions-classic).
+- **MPI** - The SR-IOV enabled VM sizes on Azure allow almost any flavor of MPI to be used with Mellanox OFED. On non-SR-IOV enabled VMs, supported MPI implementations use the Microsoft Network Direct (ND) interface to communicate between VMs. Hence, only Intel MPI 5.x and Microsoft MPI (MS-MPI) 2012 R2 or later versions are supported. Later versions of the Intel MPI runtime library may or may not be compatible with the Azure RDMA drivers. See [Setup MPI for HPC](./workloads/hpc/setup-mpi.md) for more details on setting up MPI on HPC VMs on Azure.
-- **MPI** - The SR-IOV enabled VM sizes on Azure allow almost any flavor of MPI to be used with Mellanox OFED. On non-SR-IOV enabled VMs, supported MPI implementations use the Microsoft Network Direct (ND) interface to communicate between VMs. Hence, only Microsoft MPI (MS-MPI) 2012 R2 or later and Intel MPI 5.x versions are supported. Later versions (2017, 2018) of the Intel MPI runtime library may or may not be compatible with the Azure RDMA drivers. See [Setup MPI for HPC](./workloads/hpc/setup-mpi.md) for more details on setting up MPI on HPC VMs on Azure.--- **RDMA network address space** - The RDMA network in Azure reserves the address space 172.16.0.0/16. To run MPI applications on instances deployed in an Azure virtual network, make sure that the virtual network address space does not overlap the RDMA network.
+ > [!NOTE]
+ > **RDMA network address space**: The RDMA network in Azure reserves the address space 172.16.0.0/16. To run MPI applications on instances deployed in an Azure virtual network, make sure that the virtual network address space does not overlap the RDMA network.
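Building on the VM extension guidance above, the following is a minimal Azure PowerShell sketch that adds the InfiniBand driver extension to an existing SR-IOV enabled Linux VM. The resource group name, VM name, and type handler version are illustrative placeholders; check the extension documentation for the current version before using it.

```powershell
# Minimal sketch: install the InfiniBand driver extension on an existing SR-IOV enabled Linux HPC VM.
# The resource group, VM name, and type handler version below are placeholders.
$rgName = "myHpcResourceGroup"
$vmName = "myHBv3Vm"
$location = (Get-AzVM -ResourceGroupName $rgName -Name $vmName).Location

Set-AzVMExtension -ResourceGroupName $rgName `
    -VMName $vmName `
    -Location $location `
    -Name "InfiniBandDriverLinux" `
    -Publisher "Microsoft.HpcCompute" `
    -ExtensionType "InfiniBandDriverLinux" `
    -TypeHandlerVersion "1.1"
```

On Windows VMs the same pattern applies with the InfiniBandDriverWindows extension type.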
## Cluster configuration options
-Azure provides several options to create clusters of Windows HPC VMs that can communicate using the RDMA network, including:
+Azure provides several options to create clusters of HPC VMs that can communicate using the RDMA network, including:
- **Virtual machines** - Deploy the RDMA-capable HPC VMs in the same scale set or availability set (when you use the Azure Resource Manager deployment model). If you use the classic deployment model, deploy the VMs in the same cloud service.
-- **Virtual machine scale sets** - In a virtual machine scale set, ensure that you limit the deployment to a single placement group for InfiniBand communication within the scale set. For example, in a Resource Manager template, set the `singlePlacementGroup` property to `true`. Note that the maximum scale set size that can be spun up with `singlePlacementGroup` property to `true` is capped at 100 VMs by default. If your HPC job scale needs are higher than 100 VMs in a single tenant, you may request an increase, [open an online customer support request](../azure-portal/supportability/how-to-create-azure-support-request.md) at no charge. The limit on the number of VMs in a single scale set can be increased to 300. Note that when deploying VMs using Availability Sets the maximum limit is at 200 VMs per Availability Set.
+- **Virtual machine scale sets** - In a virtual machine scale set, ensure that you limit the deployment to a single placement group for InfiniBand communication within the scale set. For example, in a Resource Manager template, set the `singlePlacementGroup` property to `true` (a quick verification sketch follows this list). Note that the maximum scale set size that can be spun up with `singlePlacementGroup=true` is capped at 100 VMs by default. If your HPC job scale needs are higher than 100 VMs in a single tenant, you may request an increase; [open an online customer support request](../azure-portal/supportability/how-to-create-azure-support-request.md) at no charge. The limit on the number of VMs in a single scale set can be increased to 300. Note that when deploying VMs using Availability Sets, the maximum limit is 200 VMs per Availability Set.
-- **MPI among virtual machines** - If RDMA (e.g. using MPI communication) is required between virtual machines (VMs), ensure that the VMs are in the same virtual machine scale set or availability set.
+ > [!NOTE]
+ > **MPI among virtual machines**: If RDMA (e.g. using MPI communication) is required between virtual machines (VMs), ensure that the VMs are in the same virtual machine scale set or availability set.
-- **Azure CycleCloud** - Create an HPC cluster in [Azure CycleCloud](/azure/cyclecloud/) to run MPI jobs.
+- **Azure CycleCloud** - Create an HPC cluster using [Azure CycleCloud](/azure/cyclecloud/) to run MPI jobs.
- **Azure Batch** - Create an [Azure Batch](../batch/index.yml) pool to run MPI workloads. To use compute-intensive instances when running MPI applications with Azure Batch, see [Use multi-instance tasks to run Message Passing Interface (MPI) applications in Azure Batch](../batch/batch-mpi.md).
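As noted in the scale set item above, the whole deployment has to sit in a single placement group for InfiniBand communication to work. A minimal sketch, assuming placeholder resource names, to verify the setting on an existing scale set:

```powershell
# Illustrative check only; the resource group and scale set names are placeholders.
$vmss = Get-AzVmss -ResourceGroupName "myHpcResourceGroup" -VMScaleSetName "myHpcScaleSet"

# This should report True for InfiniBand communication within the scale set.
$vmss.SinglePlacementGroup
```

If the property reports `False`, redeploy with `singlePlacementGroup` set to `true`; the setting generally cannot be flipped from `false` to `true` on an existing scale set.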
Azure provides several options to create clusters of Windows HPC VMs that can co
- **Azure subscription** – To deploy more than a few compute-intensive instances, consider a pay-as-you-go subscription or other purchase options. If you're using an [Azure free account](https://azure.microsoft.com/free/), you can use only a limited number of Azure compute cores.
-- **Pricing and availability** - These VM sizes are offered only in the Standard pricing tier. Check [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) for availability in Azure regions.
+- **Pricing and availability** - Check [VM pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) and [availability](https://azure.microsoft.com/global-infrastructure/services/) by Azure regions.
- **Cores quota** – You might need to increase the cores quota in your Azure subscription from the default value. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the H-series. To request a quota increase, [open an online customer support request](../azure-portal/supportability/how-to-create-azure-support-request.md) at no charge. (Default limits may vary depending on your subscription category.)
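Before filing a quota request, it can help to see how much of the regional and per-family core quota is already consumed. A minimal sketch, assuming an illustrative region and family name filter:

```powershell
# List current core usage against the subscription limits in a region.
# The region name and the family filter strings are illustrative; adjust them to your deployment.
Get-AzVMUsage -Location "westeurope" |
    Where-Object { $_.Name.LocalizedValue -like "*H*Family*" -or $_.Name.LocalizedValue -eq "Total Regional vCPUs" } |
    Select-Object @{ Name = "Quota"; Expression = { $_.Name.LocalizedValue } }, CurrentValue, Limit
```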
Azure provides several options to create clusters of Windows HPC VMs that can co
## Next steps
- Learn more about [configuring your VMs](./workloads/hpc/configure.md), [enabling InfiniBand](./workloads/hpc/enable-infiniband.md), [setting up MPI](./workloads/hpc/setup-mpi.md) and optimizing HPC applications for Azure at [HPC Workloads](./workloads/hpc/overview.md).
-- Read about the latest announcements and some HPC examples and results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
+- Review the [HBv3-series overview](./workloads/hpc/hbv3-series-overview.md) and [HC-series overview](./workloads/hpc/hc-series-overview.md).
+- Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-machines Hana Connect Vnet Express Route https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-connect-vnet-express-route.md
$myVNetName = "VNet01"
$myGWName = "VNet01GW" $myGWConfig = "VNet01GWConfig" $myGWPIPName = "VNet01GWPIP"
-$myGWSku = "HighPerformance" # Supported values for HANA large instances are: HighPerformance or UltraPerformance
+$myGWSku = "UltraPerformance" # Supported values for HANA large instances are: UltraPerformance
# These Commands create the Public IP and ExpressRoute Gateway
$vnet = Get-AzVirtualNetwork -Name $myVNetName -ResourceGroupName $myGroupName
New-AzVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName -Loc
-GatewaySku $myGWSku -VpnType PolicyBased -EnableBgp $true
```
-In this example, the HighPerformance gateway SKU was used. Your options are HighPerformance or UltraPerformance as the only gateway SKUs that are supported for SAP HANA on Azure (large instances).
-
-> [!IMPORTANT]
-> For HANA large instances of the Type II class SKU, you must use the UltraPerformance Gateway SKU.
+The only supported gateway SKU for SAP HANA on Azure (large instances) is **UltraPerformance**.
## Link virtual networks
virtual-machines Hana Li Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-li-portal.md
# Azure HANA Large Instances control through Azure portal

>[!NOTE]
->For Rev 4.2, follow the instructions in the [Manage BareMetal Instances through the Azure portal](../../../baremetal-infrastructure/workloads/sap/baremetal-infrastructure-portal.md) topic.
+>For Rev 4.2, follow the instructions in the [Manage BareMetal Instances through the Azure portal](../../../baremetal-infrastructure/connect-baremetal-infrastructure.md) topic.
This document covers how [HANA Large Instances](./hana-overview-architecture.md) are presented in the [Azure portal](https://portal.azure.com) and what activities can be conducted through the Azure portal with HANA Large Instance units that are deployed for you. Visibility of HANA Large Instances in the Azure portal is provided through an Azure resource provider for HANA Large Instances, which is currently in public preview.
vpn-gateway Troubleshoot Vpn With Azure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md
The following logs are available in Azure:
|***Name*** | ***Description*** |
|---|---|
-|**GatewayDiagnosticLog** | Contains diagnostic logs for gateway configuration events, primary changes and maintenance events |
-|**TunnelDiagnosticLog** | Contains tunnel state change events. Tunnel connect/disconnect events have a summarized reason for the state change if applicable |
-|**RouteDiagnosticLog** | Logs changes to static routes and BGP events that occur on the gateway |
-|**IKEDiagnosticLog** | Logs IKE control messages and events on the gateway |
-|**P2SDiagnosticLog** | Logs point-to-site control messages and events on the gateway |
+|**GatewayDiagnosticLog** | Contains diagnostic logs for gateway configuration events, primary changes, and maintenance events. |
+|**TunnelDiagnosticLog** | Contains tunnel state change events. Tunnel connect/disconnect events have a summarized reason for the state change if applicable. |
+|**RouteDiagnosticLog** | Logs changes to static routes and BGP events that occur on the gateway. |
+|**IKEDiagnosticLog** | Logs IKE control messages and events on the gateway. |
+|**P2SDiagnosticLog** | Logs point-to-site control messages and events on the gateway. |
-Notice that there are several columns available in these tables. In this article we are only presenting the most relevant ones for easier log consumption.
+Notice that there are several columns available in these tables. In this article, we are only presenting the most relevant ones for easier log consumption.
## <a name="setup"></a>Set up logging
To learn how set up diagnostic log events from Azure VPN Gateway using Azure Log
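As a rough illustration of routing these categories to a Log Analytics workspace from PowerShell, here is a minimal sketch. Both resource IDs are placeholders, and newer versions of the Az.Monitor module expose an equivalent `New-AzDiagnosticSetting` cmdlet instead.

```powershell
# Minimal sketch: send the VPN gateway diagnostic categories to a Log Analytics workspace.
# Both resource IDs below are placeholders for your own gateway and workspace.
$gatewayId   = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworkGateways/<gateway-name>"
$workspaceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

Set-AzDiagnosticSetting -ResourceId $gatewayId `
    -WorkspaceId $workspaceId `
    -Enabled $true `
    -Category GatewayDiagnosticLog, TunnelDiagnosticLog, RouteDiagnosticLog, IKEDiagnosticLog, P2SDiagnosticLog
```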
## <a name="GatewayDiagnosticLog"></a>GatewayDiagnosticLog
-Configuration changes are audited in the **GatewayDiagnosticLog** table. Note that it could take some minutes before changes you execute are reflected in the logs.
+Configuration changes are audited in the **GatewayDiagnosticLog** table. It could take some minutes before changes you execute are reflected in the logs.
Here is a sample query for reference.
This query on **GatewayDiagnosticLog** will show you multiple columns.
|***Name*** | ***Description*** |
|---|---|
-|**TimeGenerated** | the timestamp of each event, in UTC timezone|
-|**OperationName** |the event that happened. It can be either of *SetGatewayConfiguration, SetConnectionConfiguration, HostMaintenanceEvent, GatewayTenantPrimaryChanged, MigrateCustomerSubscription, GatewayResourceMove, ValidateGatewayConfiguration*|
+|**TimeGenerated** | the timestamp of each event, in UTC timezone.|
+|**OperationName** |the event that happened. It can be either of *SetGatewayConfiguration, SetConnectionConfiguration, HostMaintenanceEvent, GatewayTenantPrimaryChanged, MigrateCustomerSubscription, GatewayResourceMove, ValidateGatewayConfiguration*.|
|**Message** | the detail of what operation is happening, and lists successful/failure results.|

The example below shows the activity logged when a new configuration was applied:

Notice that a SetGatewayConfiguration will be logged every time some configuration is modified both on a VPN Gateway or a Local Network Gateway.
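A sketch of this kind of query, run from PowerShell against a Log Analytics workspace, could look like the following; the workspace GUID is a placeholder and the projected columns are the ones described in the table above.

```powershell
# Illustrative only: query the GatewayDiagnosticLog category from a Log Analytics workspace.
$query = @"
AzureDiagnostics
| where Category == "GatewayDiagnosticLog"
| project TimeGenerated, OperationName, Message, Resource, ResourceGroup
| sort by TimeGenerated asc
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query |
    Select-Object -ExpandProperty Results
```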
This query on **TunnelDiagnosticLog** will show you multiple columns.
|***Name*** | ***Description*** |
|---|---|
-|**TimeGenerated** | the timestamp of each event, in UTC timezone|
+|**TimeGenerated** | the timestamp of each event, in UTC timezone.|
|**OperationName** | the event that happened. It can be either *TunnelConnected* or *TunnelDisconnected*.|
-| **Instance\_s** | the gateway role instance that triggered the event. It can be either GatewayTenantWorker\_IN\_0 or GatewayTenantWorker\_IN\_1 which are the names of the two instances of the gateway.|
+| **Instance\_s** | the gateway role instance that triggered the event. It can be either GatewayTenantWorker\_IN\_0 or GatewayTenantWorker\_IN\_1, which are the names of the two instances of the gateway.|
| **Resource** | indicates the name of the VPN gateway. |
| **ResourceGroup** | indicates the resource group where the gateway is.|

Example output:

The **TunnelDiagnosticLog** is very useful to troubleshoot past events about unexpected VPN disconnections. Its lightweight nature offers the possibility to analyze large time ranges over several days with little effort.
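To hunt for disconnections over a longer window, a similar sketch (again with a placeholder workspace GUID) narrows the results to tunnel state changes in the last week:

```powershell
# Illustrative only: list tunnel state changes over the last seven days.
$query = @"
AzureDiagnostics
| where Category == "TunnelDiagnosticLog"
| where TimeGenerated > ago(7d)
| project TimeGenerated, OperationName, Instance_s, Resource, ResourceGroup
| sort by TimeGenerated asc
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query |
    Select-Object -ExpandProperty Results
```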
Only after you identify the timestamp of a disconnection, you can switch to the
Some troubleshooting tips:

- If you see a disconnection event on one gateway instance, followed by a connection event on the **different** gateway instance in a few seconds, you are looking at a gateway failover. This is usually an expected behavior due to maintenance on a gateway instance. To learn more about this behavior, see [About Azure VPN gateway redundancy](https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-highlyavailable#about-azure-vpn-gateway-redundancy).
-- The same behavior will be observed if you intentionally run a Gateway Reset on the Azure side - which causes a reboot of the active gateway instance. To learn more about this behavior, see [Reset a VPN Gateway](https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-resetgw-classic)
+- The same behavior will be observed if you intentionally run a Gateway Reset on the Azure side - which causes a reboot of the active gateway instance. To learn more about this behavior, see [Reset a VPN Gateway](https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-resetgw-classic).
- If you see a disconnection event on one gateway instance, followed by a connection event on the **same** gateway instance in a few seconds, you may be looking at a network glitch causing a DPD timeout, or a disconnection erroneously sent by the on-premises device.

## <a name="RouteDiagnosticLog"></a>RouteDiagnosticLog
This query on **RouteDiagnosticLog** will show you multiple columns.
|***Name*** | ***Description*** |
|---|---|
-|**TimeGenerated** | the timestamp of each event, in UTC timezone|
-|**OperationName** | the event that happened. Can be either of *StaticRouteUpdate, BgpRouteUpdate, BgpConnectedEvent, BgpDisconnectedEvent*|
+|**TimeGenerated** | the timestamp of each event, in UTC timezone.|
+|**OperationName** | the event that happened. Can be either of *StaticRouteUpdate, BgpRouteUpdate, BgpConnectedEvent, BgpDisconnectedEvent*.|
| **Message** | the detail of what operation is happening.|

The output will show useful information about BGP peers connected/disconnected and routes exchanged.
The output will show useful information about BGP peers connected/disconnected a
Example:

## <a name="IKEDiagnosticLog"></a>IKEDiagnosticLog
This query on **IKEDiagnosticLog** will show you multiple columns.
|***Name*** | ***Description*** |
|---|---|
-|**TimeGenerated** | the timestamp of each event, in UTC timezone|
-| **RemoteIP** | the IP address of the on-premises VPN device. In real world scenarios, it is useful to filter by the IP address of the relevant on-premises device shall there be more than one |
-|**LocalIP** | the IP address of the VPN Gateway we are troubleshooting. In real world scenarios, it is useful to filter by the IP address of the relevant VPN gateway shall there be more than one in your subscription |
-|**Event** | contains a diagnostic message useful for troubleshooting. They usually start with a keyword and refer to the actions performed by the Azure Gateway **\[SEND\]** indicates an event caused by an IPSec packet sent by the Azure Gateway **\[RECEIVED\]** indicates an event in consequence of a packet received from on-premises device **\[LOCAL\]** indicates an action taken locally by the Azure Gateway |
+|**TimeGenerated** | the timestamp of each event, in UTC timezone.|
+| **RemoteIP** | the IP address of the on-premises VPN device. In real world scenarios, it is useful to filter by the IP address of the relevant on-premises device shall there be more than one. |
+|**LocalIP** | the IP address of the VPN Gateway we are troubleshooting. In real world scenarios, it is useful to filter by the IP address of the relevant VPN gateway shall there be more than one in your subscription. |
+|**Event** | contains a diagnostic message useful for troubleshooting. They usually start with a keyword and refer to the actions performed by the Azure Gateway: **\[SEND\]** indicates an event caused by an IPSec packet sent by the Azure Gateway. **\[RECEIVED\]** indicates an event in consequence of a packet received from on-premises device. **\[LOCAL\]** indicates an action taken locally by the Azure Gateway. |
-Notice how RemoteIP, LocalIP and Event columns are not present in the original column list on AzureDiagnostics database, but are added to the query by parsing the output of the "Message" column to simplify its analysis.
+Notice how RemoteIP, LocalIP, and Event columns are not present in the original column list on AzureDiagnostics database, but are added to the query by parsing the output of the "Message" column to simplify its analysis.
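As a rough starting point, assuming a placeholder workspace GUID, the following pulls the IKE messages that mention the initial SA\_INIT exchange discussed in the tips below; the queries in this article go further and derive RemoteIP, LocalIP, and Event by parsing the Message column.

```powershell
# Illustrative only: surface IKE diagnostic messages around the initial SA_INIT exchange.
# Assumes the log messages contain the literal SA_INIT token.
$query = @"
AzureDiagnostics
| where Category == "IKEDiagnosticLog"
| where Message contains "SA_INIT"
| project TimeGenerated, Message
| sort by TimeGenerated asc
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query |
    Select-Object -ExpandProperty Results
```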
Troubleshooting tips:

- In order to identify the start of an IPSec negotiation, you need to find the initial SA\_INIT message. Such message could be sent by either side of the tunnel. Whoever sends the first packet is called "initiator" in IPsec terminology, while the other side becomes the "responder". The first SA\_INIT message is always the one where rCookie = 0.
-- If the IPsec tunnel fails to establish, Azure will keep retrying every few seconds. For this reason troubleshooting "VPN down" issues is very convenient on IKEdiagnosticLog because you do not have to wait for a specific time to reproduce the issue. Also, the failure will in theory always be the same every time we try so you could just zoom into one "sample" failing negotiation at any time.
+- If the IPsec tunnel fails to establish, Azure will keep retrying every few seconds. For this reason, troubleshooting "VPN down" issues is very convenient on IKEDiagnosticLog because you do not have to wait for a specific time to reproduce the issue. Also, the failure will in theory always be the same every time we try, so you could just zoom into one "sample" failing negotiation at any time.
- The SA\_INIT contains the IPSec parameters that the peer wants to use for this IPsec negotiation. The official document
-[Default IPsec/IKE parameters](https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-about-vpn-devices#ipsec) lists the IPsec parameters supported by the Azure Gateway with default settings
+[Default IPsec/IKE parameters](https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-about-vpn-devices#ipsec) lists the IPsec parameters supported by the Azure Gateway with default settings.
## <a name="P2SDiagnosticLog"></a>P2SDiagnosticLog
-The last available table for VPN diagnostics is **P2SDiagnosticLog**. This traces the activity for Point to Site.
+The last available table for VPN diagnostics is **P2SDiagnosticLog**. This table traces the activity for Point to Site.
Here is a sample query for reference.
This query on **P2SDiagnosticLog** will show you multiple columns.
|***Name*** | ***Description*** |
|---|---|
-|**TimeGenerated** | the timestamp of each event, in UTC timezone|
-|**OperationName** | the event that happened. Will be *P2SLogEvent*|
+|**TimeGenerated** | the timestamp of each event, in UTC timezone.|
+|**OperationName** | the event that happened. Will be *P2SLogEvent*.|
| **Message** | the detail of what operation is happening.|

The output will show all of the Point to Site settings that the gateway has applied, as well as the IPsec policies in place. Also, whenever a client connects via IKEv2 or OpenVPN Point to Site, the table will log packet activity, EAP/RADIUS conversations, and successful/failure results by user.

## Next Steps