Updates from: 01/25/2022 02:08:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
The SuccessFactors connector supports expansion of the position object. To expan
| positionNameFR | $.employmentNav.results[0].jobInfoNav.results[0].positionNav.externalName_fr_FR |
| positionNameDE | $.employmentNav.results[0].jobInfoNav.results[0].positionNav.externalName_de_DE |
+### Provisioning users in the Onboarding module
+Inbound user provisioning from SAP SuccessFactors to on-premises Active Directory and Azure AD now supports advance provisioning of pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. Upon encountering a new hire profile with a future start date, the Azure AD provisioning service queries SAP SuccessFactors to get new hires with one of the following status codes: `active`, `inactive`, `active_external`. The status code `active_external` corresponds to pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. For a description of these status codes, refer to [SAP support note 2736579](https://launchpad.support.sap.com/#/notes/0002736579).
+
+The default behavior of the provisioning service is to process pre-hires in the Onboarding module.
+
+If you want to exclude processing of pre-hires in the Onboarding module, update your provisioning job configuration as follows:
+1. Open the attribute-mapping blade of your SuccessFactors provisioning app.
+1. Under **Show advanced options**, edit the SuccessFactors attribute list to add a new attribute called `userStatus`.
+1. Set the JSONPath API expression for this attribute to `$.employmentNav.results[0].userNav.status`.
+1. Save the schema to return to the attribute-mapping blade.
+1. Edit the Source Object scope to apply the scoping filter `userStatus NOT EQUALS active_external`.
+1. Save the mapping and validate that the scoping filter works using provisioning on demand.
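For reference, the scoping JSONPath above resolves against the employment payload like this (a trimmed, illustrative sample; field values are hypothetical):

```json
{
  "employmentNav": {
    "results": [
      {
        "userNav": {
          "status": "active_external"
        }
      }
    ]
  }
}
```

With this payload, `$.employmentNav.results[0].userNav.status` evaluates to `active_external`, so the scoping filter above excludes the record.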
+
+## Writeback scenarios
This section covers different write-back scenarios. It recommends configuration approaches based on how email and phone number are set up in SuccessFactors.
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Administrators can add any Azure AD registered application to Conditional Access
- Applications that use [password based single sign-on](../manage-apps/configure-password-single-sign-on-non-gallery-applications.md)

> [!NOTE]
-> Since Conditional Access policy sets the requirements for accessing a service you are not able to apply it to a client (public/native) application. Other words the policy is not set directly on a client (public/native) application, but is applied when a client calls a service. For example, a policy set on SharePoint service applies to the clients calling SharePoint. A policy set on Exchange applies to the attempt to access the email using Outlook client. That is why client (public/native) applications are not available for selection in the Cloud Apps picker and Conditional Access option is not available in the application settings for the client (public/native) application registered in your tenant.
+> Since Conditional Access policy sets the requirements for accessing a service, you are not able to apply it to a client (public/native) application. In other words, the policy is not set directly on a client (public/native) application, but is applied when a client calls a service. For example, a policy set on the SharePoint service applies to the clients calling SharePoint. A policy set on Exchange applies to the attempt to access the email using the Outlook client. That is why client (public/native) applications are not available for selection in the Cloud Apps picker, and the Conditional Access option is not available in the application settings for the client (public/native) application registered in your tenant.
Some applications do not appear in the picker at all. The only way to include these applications in a Conditional Access policy is to include **All apps**.
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
Previously updated : 10/20/2021
Last updated : 01/24/2022
This type of authorization is common for daemons and service accounts that need
In order to enable this ACL-based authorization pattern, Azure AD doesn't require that applications be authorized to get tokens for another application. Thus, app-only tokens can be issued without a `roles` claim. Applications that expose APIs must implement permission checks in order to accept tokens.
-If you'd like to prevent applications from getting role-less app-only access tokens for your application, [ensure that user assignment requirements are enabled for your app](../manage-apps/assign-user-or-group-access-portal.md). This will block users and applications without assigned roles from being able to get a token for this application.
+If you'd like to prevent applications from getting role-less app-only access tokens for your application, [ensure that user assignment requirements are enabled for your app](../manage-apps/what-is-access-management.md#requiring-user-assignment-for-an-app). This will block users and applications without assigned roles from being able to get a token for this application.
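As an illustration of that check (hypothetical values), the decoded payload of an app-only token that was granted an app role includes a `roles` claim; a token obtained without any assigned role simply lacks it, and an API relying on application permissions should reject such a token:

```json
{
  "aud": "api://contoso-api",
  "iss": "https://login.microsoftonline.com/{tenant-id}/v2.0",
  "sub": "00000000-0000-0000-0000-000000000000",
  "roles": [ "Data.Read.All" ]
}
```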
### Application permissions
Instead of using ACLs, you can use APIs to expose a set of **application permiss
To use application permissions with your own API (as opposed to Microsoft Graph), you must first [expose the API](howto-add-app-roles-in-azure-ad-apps.md) by defining scopes in the API's app registration in the Azure portal. Then, [configure access to the API](howto-add-app-roles-in-azure-ad-apps.md#assign-app-roles-to-applications) by selecting those permissions in your client application's app registration. If you haven't exposed any scopes in your API's app registration, you won't be able to specify application permissions to that API in your client application's app registration in the Azure portal.
-When authenticating as an application (as opposed to with a user), you can't use *delegated permissions* - scopes that are granted by a user - because there is no user for you app to act on behalf of. You must use application permissions, also known as roles, that are granted by an admin for the application or via pre-authorization by the web API.
+When authenticating as an application (as opposed to with a user), you can't use *delegated permissions* - scopes that are granted by a user - because there is no user for your app to act on behalf of. You must use application permissions, also known as roles, that are granted by an admin for the application or via pre-authorization by the web API.
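For reference, a minimal client credentials token request against the v2.0 endpoint looks like the following (placeholder tenant and app values); the returned token's `roles` claim carries the application permissions granted to the app:

```bash
# Request an app-only token using the client credentials grant (v2.0 endpoint).
curl -X POST "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token" \
  -d "client_id={app-id}" \
  -d "scope=https://graph.microsoft.com/.default" \
  -d "client_secret={client-secret}" \
  -d "grant_type=client_credentials"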
For more information about application permissions, see [Permissions and consent](v2-permissions-and-consent.md#permission-types).
active-directory Azure Active Directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/azure-active-directory-b2c-deployment-plans.md
Consider the sample checklist to define the user experience (UX) requirements:
- If you expect high UX customization such as pixel to pixel, you may need a front-end developer to help you.
+- Azure AD B2C provides capabilities for customizing HTML and CSS; however, it has additional requirements for [JavaScript](../../active-directory-b2c/javascript-and-page-layout.md?pivots=b2c-custom-policy#guidelines-for-using-javascript).
+
+- An embedded experience can be implemented [using iframe support](../../active-directory-b2c/embedded-login.md?pivots=b2c-custom-policy). For a single-page application, you'll also need a second "sign-in" HTML page that loads into the `<iframe>` element.
+
+## Monitor an Azure AD B2C solution
This phase includes the following capabilities:
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/administrative-units.md
The following sections describe current support for administrative unit scenario
| Permissions | Graph/PowerShell | Azure portal | Microsoft 365 admin center |
| --- | --- | --- | --- |
-| Administrative unit-scoped management of user properties, passwords, and licenses | Supported | Supported | Supported |
+| Administrative unit-scoped management of user properties and passwords | Supported | Supported | Supported |
+| Administrative unit-scoped management of user licenses | Supported | Not Supported | Supported |
| Administrative unit-scoped blocking and unblocking of user sign-ins | Supported | Supported | Supported |
| Administrative unit-scoped management of user multifactor authentication credentials | Supported | Supported | Not supported |
aks Gpu Multi Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/gpu-multi-instance.md
+
+ Title: Multi-instance GPU Node pool (preview)
+description: Learn how to create a Multi-instance GPU Node pool and schedule tasks on it
+ Last updated : 1/24/2022
+# Multi-instance GPU Node pool (preview)
+
+Nvidia's A100 GPU can be divided into up to seven independent instances. Each instance has its own memory and Streaming Multiprocessor (SM). For more information on the Nvidia A100, see [Nvidia A100 GPU][Nvidia A100 GPU].
+
+This article walks you through how to create a multi-instance GPU node pool on Azure Kubernetes Service clusters and schedule tasks on it.
++
+## GPU Instance Profile
+
+GPU Instance Profiles define how a GPU is partitioned. The following table shows the available GPU Instance Profiles for `Standard_ND96asr_v4`, the only instance type that supports the A100 GPU at this time.
++
+| Profile Name | Fraction of SM |Fraction of Memory | Number of Instances created |
+|--|--|--|--|
+| MIG 1g.5gb | 1/7 | 1/8 | 7 |
+| MIG 2g.10gb | 2/7 | 2/8 | 3 |
+| MIG 3g.20gb | 3/7 | 4/8 | 2 |
+| MIG 4g.20gb | 4/7 | 4/8 | 1 |
+| MIG 7g.40gb | 7/7 | 8/8 | 1 |
+
+As an example, the GPU Instance Profile of `MIG 1g.5gb` indicates that each GPU instance will have 1g of SM (compute resource) and 5 GB of memory. In this case, the GPU is partitioned into seven instances.
+
+The GPU Instance Profiles available for this instance size are `MIG1g`, `MIG2g`, `MIG3g`, `MIG4g`, and `MIG7g`.
+
+> [!IMPORTANT]
+> The applied GPU Instance Profile cannot be changed after node pool creation.
++
+## Create an AKS cluster
+To get started, create a resource group and an AKS cluster. If you already have a cluster, you can skip this step. Follow the example below to create a resource group named `myresourcegroup` in the `southcentralus` region:
+
+```azurecli-interactive
+az group create --name myresourcegroup --location southcentralus
+```
+
+```azurecli-interactive
+az aks create \
+ --resource-group myresourcegroup \
+ --name migcluster \
+ --node-count 1
+```
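You'll also need the cluster credentials for the `kubectl` steps later in this article:

```azurecli-interactive
az aks get-credentials --resource-group myresourcegroup --name migcluster
```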
+
+## Create a multi-instance GPU node pool
+
+You can use either the `az` command line or an HTTP request to the ARM API to create the node pool.
+
+### Azure CLI
+If you're using the command line, use the `az aks nodepool add` command to create the node pool and specify the GPU instance profile through `--gpu-instance-profile`:
+```azurecli-interactive
+az aks nodepool add \
+ --name mignode \
+ --resource-group myresourcegroup \
+ --cluster-name migcluster \
+ --node-vm-size Standard_ND96asr_v4 \
+ --gpu-instance-profile MIG1g
+```
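To confirm the profile was applied, you can query the node pool; this assumes the property surfaces as `gpuInstanceProfile`, matching the ARM request body shown in the next section:

```azurecli-interactive
az aks nodepool show \
  --resource-group myresourcegroup \
  --cluster-name migcluster \
  --name mignode \
  --query gpuInstanceProfile
```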
+
+### HTTP request
+
+If you're using an HTTP request, you can place the GPU instance profile in the request body:
+```json
+{
+ "properties": {
+ "count": 1,
+ "vmSize": "Standard_ND96asr_v4",
+ "type": "VirtualMachineScaleSets",
+ "gpuInstanceProfile": "MIG1g"
+ }
+}
+```
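For instance (hypothetical subscription and resource names, and assuming the JSON body above is saved as `nodepool.json`), the request can be sent with `az rest`; note the `api-version`, which matters for multi-instance GPU support (see Troubleshooting):

```azurecli-interactive
# Create the node pool through the ARM REST API.
az rest --method put \
  --url "https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/myresourcegroup/providers/Microsoft.ContainerService/managedClusters/migcluster/agentPools/mignode?api-version=2021-08-01" \
  --body @nodepool.json
```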
++++
+## Run tasks using kubectl
+
+### MIG strategy
+Before you install the Nvidia plugins, you need to specify which strategy to use for GPU partitioning.
+
+The two strategies, "Single" and "Mixed", don't affect how you execute CPU workloads; they only affect how GPU resources are displayed.
+
+- Single Strategy
+
+ The single strategy treats every GPU instance as a GPU. If you're using this strategy, the GPU resources will be displayed as:
+
+ ```
+ nvidia.com/gpu: 1
+ ```
+
+- Mixed Strategy
+
+ The mixed strategy will expose the GPU instances and the GPU instance profile. If you use this strategy, the GPU resource will be displayed as:
+
+ ```
+ nvidia.com/mig-1g.5gb: 1
+ ```
+
+### Install the NVIDIA device plugin and GPU feature discovery
+
+Set your MIG strategy:
+```
+export MIG_STRATEGY=single
+```
+or
+```
+export MIG_STRATEGY=mixed
+```
+
+Install the Nvidia device plugin and GPU feature discovery using Helm:
+
+```
+helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
+helm repo add nvgfd https://nvidia.github.io/gpu-feature-discovery
+helm repo update # update the local chart index so the new repos are picked up
+```
+
+```
+helm install \
+--version=0.7.0 \
+--generate-name \
+--set migStrategy=${MIG_STRATEGY} \
+nvdp/nvidia-device-plugin
+```
+
+```
+helm install \
+--version=0.2.0 \
+--generate-name \
+--set migStrategy=${MIG_STRATEGY} \
+nvgfd/gpu-feature-discovery
+```
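Before moving on, you can verify that both components are running; pod names carry Helm-generated suffixes, and the namespace depends on the chart defaults:

```
kubectl get pods --all-namespaces | grep -iE 'device-plugin|gpu-feature-discovery'
```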
++
+### Confirm multi-instance GPU capability
+As an example, if you used MIG1g as the GPU instance profile, confirm the node has multi-instance GPU capability by running:
+```
+kubectl describe node mignode
+```
+If you're using single strategy, you'll see:
+```
+Allocatable:
+ nvidia.com/gpu: 56
+```
+If you're using mixed strategy, you'll see:
+```
+Allocatable:
+ nvidia.com/mig-1g.5gb: 56
+```
+
+### Schedule work
+Use the `kubectl run` command to schedule work using the single strategy:
+```
+kubectl run -it --rm \
+--image=nvidia/cuda:11.0-base \
+--restart=Never \
+--limits=nvidia.com/gpu=1 \
+single-strategy-example -- nvidia-smi -L
+```
+
+Use the `kubectl run` command to schedule work using the mixed strategy:
+```
+kubectl run -it --rm \
+--image=nvidia/cuda:11.0-base \
+--restart=Never \
+--limits=nvidia.com/mig-1g.5gb=1 \
+mixed-strategy-example -- nvidia-smi -L
+```
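If you prefer a manifest over `kubectl run`, a minimal pod sketch (hypothetical pod name, assuming the mixed strategy) requests the MIG resource the same way:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-example
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.0-base
    command: ["nvidia-smi", "-L"]
    resources:
      limits:
        # With the single strategy, request nvidia.com/gpu instead.
        nvidia.com/mig-1g.5gb: 1
```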
++
+## Troubleshooting
+- If you do not see multi-instance GPU capability after the node pool has been created, confirm that the API version is not older than 2021-08-01.
+
+<!-- LINKS - internal -->
++
+<!-- LINKS - external-->
+[Nvidia A100 GPU]:https://www.nvidia.com/en-us/data-center/a100/
+
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/limit-egress-traffic.md
The following FQDN / application rules are required for using Windows Server bas
## AKS addons and integrations
+### Microsoft Defender for Containers
+
+#### Required FQDN / application rules
+
+The following FQDN / application rules are required for AKS clusters that have Microsoft Defender for Containers enabled.
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`login.microsoftonline.com`** | **`HTTPS:443`** | Required for Active Directory Authentication. |
+| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | Required for Microsoft Defender to upload security events to the cloud.|
+| **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | Required to authenticate with Log Analytics workspaces.|
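As an illustrative sketch (assuming an Azure Firewall named `myfirewall` in `myresourcegroup` governs egress; adjust names, priority, and source addresses to your environment), these FQDNs could be allowed with an application rule such as:

```azurecli-interactive
az network firewall application-rule create \
  --firewall-name myfirewall \
  --resource-group myresourcegroup \
  --collection-name 'aksdefender' \
  --name 'defender' \
  --action Allow \
  --priority 105 \
  --source-addresses '*' \
  --protocols 'https=443' \
  --target-fqdns login.microsoftonline.com '*.ods.opinsights.azure.com' '*.oms.opinsights.azure.com'
```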
+
+### Azure Monitor for containers
There are two options to provide access to Azure Monitor for containers: you may allow the Azure Monitor [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) **or** provide access to the required FQDN/application rules.
The following FQDN / application rules are required for AKS clusters that have t
| FQDN | Port | Use |
|--|--|-|
-| dc.services.visualstudio.com | **`HTTPS:443`** | This endpoint is used for metrics and monitoring telemetry using Azure Monitor. |
-| *.ods.opinsights.azure.com | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. |
-| *.oms.opinsights.azure.com | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. |
-| *.monitoring.azure.com | **`HTTPS:443`** | This endpoint is used to send metrics data to Azure Monitor. |
+| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | This endpoint is used for metrics and monitoring telemetry using Azure Monitor. |
+| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. |
+| **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. |
+| **`*.monitoring.azure.com`** | **`HTTPS:443`** | This endpoint is used to send metrics data to Azure Monitor. |
### Azure Policy
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/configure-custom-domain.md
Choose the steps according to the [domain certificate](#domain-certificate-optio
1. Select **+Add**, or select an existing [endpoint](#endpoints-for-custom-domains) that you want to update.
1. In the window on the right, select the **Type** of endpoint for the custom domain.
1. In the **Hostname** field, specify the name you want to use. For example, `api.contoso.com`.
-1. Under **Certificate**, select **Managed** to enable a free certificate managed by API Management. Te managed certificate is available in preview for the Gateway endpoint only.
+1. Under **Certificate**, select **Managed** to enable a free certificate managed by API Management. The managed certificate is available in preview for the Gateway endpoint only.
1. Copy the following values and use them to [configure DNS](#dns-configuration):
    * **TXT record**
    * **CNAME record**
You can also get a domain ownership identifier by calling the [Get Domain Owners
## Next steps
-[Upgrade and scale your service](upgrade-and-scale.md)
+[Upgrade and scale your service](upgrade-and-scale.md)
application-gateway How To Troubleshoot Application Gateway Session Affinity Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/how-to-troubleshoot-application-gateway-session-affinity-issues.md
Title: Troubleshoot session affinity issues
description: This article provides information on how to troubleshoot session affinity issues in Azure Application Gateway
- Previously updated : 11/14/2019
+ Last updated : 01/24/2022
# Troubleshoot Azure Application Gateway session affinity issues
Sometimes the session affinity issues might occur when you forget to enable "Cookie based affinity".
![Screenshot shows SETTINGS with H T T P settings selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-1.png)
-4. Click **appGatewayBackendHttpSettings** on the right side to check whether you have selected **Enabled** for Cookie based affinity.
+4. Select the HTTP setting, and on the **Add HTTP setting** page, check if **Cookie based affinity** is enabled.
- ![Screenshot shows the gateway settings for an app gateway, inlcuidng whether Cookie based affinity is selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-2.jpg)
+ ![Screenshot shows the gateway settings for an app gateway, including whether Cookie based affinity is selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-2.png)
You can collect additional logs and analyze them to troubleshoot the issues rela
To collect the Application Gateway logs, follow the instructions:
-Enable logging through the Azure portal
+Enable logging using the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), find your resource and then click **Diagnostic logs**.
+1. In the [Azure portal](https://portal.azure.com/), find your resource and then select **Diagnostic setting**.
- For Application Gateway, three logs are available: Access log, Performance log, Firewall log
+ For Application Gateway, three logs are available: Access log, Performance log, and Firewall log.
-2. To start to collect data, click **Turn on diagnostics**.
+2. To start to collect data, select **Add diagnostic setting**.
- ![Screenshot shows an application gateway with Diagnostics logs selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-5.png)
+ ![Screenshot shows an application gateway with Diagnostics settings selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-5.png)
-3. The **Diagnostics settings** blade provides the settings for the diagnostic logs. In this example, Log Analytics stores the logs. Click **Configure** under **Log Analytics** to set your workspace. You can also use event hubs and a storage account to save the diagnostic logs.
+3. The **Diagnostic setting** page provides the settings for the diagnostic logs. In this example, Log Analytics stores the logs. You can also use event hubs and a storage account to save the diagnostic logs.
![Screenshot shows the Diagnostics settings pane with Log Analytics Configure selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-6.png)
-4. Confirm the settings and then click **Save**.
+4. Confirm the settings and then select **Save**.
- ![Screenshot shows the Diagnostics settings pane with Save selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-7.png)
-#### View and analyze the Application Gateway access logs
-1. In the Azure portal under the Application Gateway resource view, select **Diagnostics logs** in the **MONITORING** section.
-
- ![Screenshot shows MONITORING with Diagnostics logs selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-8.png)
-
-2. On the right side, select "**ApplicationGatewayAccessLog**" in the drop-down list under **Log categories**.
-
- ![Screenshot shows the Log categories dropdown list with ApplicationGatewayAccessLog selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-9.png)
-
-3. In the Application Gateway Access Log list, click the log you want to analyze and export, and then export the JSON file.
-
-4. Convert the JSON file that you exported in step 3 to CSV file and view them in Excel, Power BI, or any other data-visualization tool.
-
-5. Check the following data:
-- **ClientIP** - This is the client IP address from the connecting client.
-- **ClientPort** - This is the source port from the connecting client for the request.
-- **RequestQuery** - This indicates the destination server where the request is received.
-- **Server-Routed**: Back-end pool instance that received the request.
-- **X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the back-end servers. For example: X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.0.2.4.
-
- - **SERVER-STATUS**: HTTP response code that Application Gateway received from the back end.
-
- ![Screenshot shows server status in plain text, mostly obscured, with clientPort and SERVER-ROUTED highlighted.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-11.png)
-
-If you see two items coming from the same ClientIP and Client Port, and they are sent to the same back-end server, the Application Gateway is configured correctly.
-
-If you see two items coming from the same ClientIP and Client Port, and they are sent to different back-end servers, the request is bouncing between back-end servers. Select "**Application is using cookie-based affinity but requests still bouncing between back-end servers**" at the bottom to troubleshoot it.
### Use web debugger to capture and analyze the HTTP or HTTPS traffics
azure-arc Install Client Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/install-client-tools.md
The following table lists common tools required for creating and managing Azure
| Azure Data Studio | Yes | Rich experience tool for connecting to and querying a variety of databases including Azure SQL, SQL Server, PostgreSQL, and MySQL. Extensions to Azure Data Studio provide an administration experience for Azure Arc-enabled data services. | [Install](/sql/azure-data-studio/download-azure-data-studio) |
| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc-enabled data services.| Install from the extensions gallery in Azure Data Studio.|
| PostgreSQL extension in Azure Data Studio | No | PostgreSQL extension for Azure Data Studio that provides management capabilities for PostgreSQL. | <!--{need link} [Install](../azure-data-studio/data-virtualization-extension.md) --> Install from extensions gallery in Azure Data Studio.|
-| Kubernetes CLI (kubectl)<sup>2</sup> | Yes | Command-line tool for managing the Kubernetes cluster ([More info](https://kubernetes.io/docs/tasks/tools/install-kubectl/)). | [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-powershell-from-psgallery) \| [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management) |
+| Kubernetes CLI (kubectl)<sup>2</sup> | Yes | Command-line tool for managing the Kubernetes cluster ([More info](https://kubernetes.io/docs/tasks/tools/install-kubectl/)). | [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows) \| [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) \| [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/) |
| curl <sup>3</sup> | Required for some sample scripts. | Command-line tool for transferring data with URLs. | [Windows](https://curl.haxx.se/windows/) \| Linux: install curl package |
| oc | Required for Red Hat OpenShift and Azure Redhat OpenShift deployments. |`oc` is the Open Shift command line interface (CLI). | [Installing the CLI](https://docs.openshift.com/container-platform/4.6/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli)
azure-arc Postgresql Hyperscale Server Group Placement On Kubernetes Cluster Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/postgresql-hyperscale-server-group-placement-on-kubernetes-cluster-nodes.md
The architecture looks like:
:::image type="content" source="media/migrate-postgresql-data-into-postgresql-hyperscale-server-group/5_full_list_of_pods.png" alt-text="All pods in namespace on various nodes":::
-As described above, the coordinator nodes (Pod 1) of the Azure Arc-enabled Postgres Hyperscale server group shares the same physical resources as the third worker node (Pod 4) of the server group. That is acceptable because the coordinator node typically uses very few resources in comparison to what a worker node may be using. For this reason, carefully chose:
+As described above, the coordinator node (Pod 1) of the Azure Arc-enabled PostgreSQL Hyperscale server group shares the same physical resources as the third worker node (Pod 4) of the server group. That is acceptable because the coordinator node typically uses very few resources in comparison to what a worker node may be using. For this reason, carefully choose:
- the size of the Kubernetes cluster and the characteristics of each of its physical nodes (memory, vCore)
- the number of physical nodes inside the Kubernetes cluster
- the applications or workloads you host on the Kubernetes cluster.
To benefit the most from the scalability and the performance of scaling Azure Ar
- between all the PostgreSQL instances that constitute the Azure Arc-enabled PostgreSQL Hyperscale server group

You can achieve this in several ways:
-- Scale out both Kubernetes and Azure Arc-enabled Postgres Hyperscale: consider scaling horizontally the Kubernetes cluster the same way you are scaling the Azure Arc-enabled PostgreSQL Hyperscale server group. Add a physical node to the cluster for each worker you add to the server group.
-- Scale out Azure Arc-enabled Postgres Hyperscale without scaling out Kubernetes: by setting the right resource constraints (request and limits on memory and vCore) on the workloads hosted in Kubernetes (Azure Arc-enabled PostgreSQL Hyperscale included), you will enable the colocation of workloads on Kubernetes and reduce the risk of resource contention. You need to make sure that the physical characteristics of the physical nodes of the Kubernetes cluster can honor the resources constraints you define. You should also ensure that equilibrium remains as the workloads evolve over time or as more workloads are added in the Kubernetes cluster.
+- Scale out both Kubernetes and Azure Arc-enabled PostgreSQL Hyperscale: consider scaling horizontally the Kubernetes cluster the same way you are scaling the Azure Arc-enabled PostgreSQL Hyperscale server group. Add a physical node to the cluster for each worker you add to the server group.
+- Scale out Azure Arc-enabled PostgreSQL Hyperscale without scaling out Kubernetes: by setting the right resource constraints (request and limits on memory and vCore) on the workloads hosted in Kubernetes (Azure Arc-enabled PostgreSQL Hyperscale included), you will enable the colocation of workloads on Kubernetes and reduce the risk of resource contention. You need to make sure that the physical characteristics of the physical nodes of the Kubernetes cluster can honor the resources constraints you define. You should also ensure that equilibrium remains as the workloads evolve over time or as more workloads are added in the Kubernetes cluster.
- Use the Kubernetes mechanisms (pod selector, affinity, anti-affinity) to influence the placement of the pods.

## Next steps
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
Before you can proceed with the tasks in this article you need:
- To connect and authenticate to a Kubernetes cluster
- An existing Kubernetes context selected
-You need an an indirectly connected data controller with the `imageTag v1.0.0_2021-07-30` or greater.
+You need an indirectly connected data controller with the `imageTag v1.0.0_2021-07-30` or greater.
## Limitations
azure-arc Browse And Enable Vcenter Resources In Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md
Last updated 09/28/2021
# Enable your VMware vCenter resources in Azure
-After you've connected your VMware vCenter to Azure, you'll represent it in Azure. Representing your vCenter in Azure allows you to browse your vCenter inventory from the Azure portal.
+After you've connected your VMware vCenter to Azure, you can browse your vCenter inventory from the Azure portal.
:::image type="content" source="media/browse-vmware-inventory.png" alt-text="Screenshot of where to browse your VMware Inventory from the Azure portal." lightbox="media/browse-vmware-inventory.png":::
-You can visit the VMware vCenter blade in Azure arc to view all the connected vCenters. From here, you'll browse your virtual machines (VMs), resource pools, templates, and networks. From the inventory of your vCenter resources, you can select and enable one or more resources in Azure. When you enable a vCenter resource in Azure, it creates an Azure resource that represents your vCenter resource. You can use this Azure resource to assign permissions or conduct management operations.
+Visit the VMware vCenter blade in Azure Arc center to view all the connected vCenters. From there, you'll browse your virtual machines (VMs), resource pools, templates, and networks. From the inventory of your vCenter resources, you can select and enable one or more resources in Azure. When you enable a vCenter resource in Azure, it creates an Azure resource that represents your vCenter resource. You can use this Azure resource to assign permissions or conduct management operations.
> [!IMPORTANT]
> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
-## Create a representation of VMware resources in Azure
+## Enable resource pools, clusters, hosts, datastores, networks, and VM templates in Azure
-In this section, you'll enable resource pools, networks, and VM templates in Azure.
+In this section, you will enable resource pools, networks, and other non-VM resources in Azure.
>[!NOTE] >Enabling Azure Arc on a VMware vSphere resource is a read-only operation on vCenter. That is, it doesn't make changes to your resource in vCenter.
-1. From your browser, go to the [vCenters blade on Azure Arc Center](https://portal.azure.com/?microsoft_azure_hybridcompute_assettypeoptions=%7B%22VMwarevCenter%22%3A%7B%22options%22%3A%22%22%7D%7D&feature.customportal=false&feature.canmodifystamps=true&feature.azurestackhci=true&feature.scvmmdisktoc=true&feature.scvmmnettoc=true&feature.scvmmsizetoc=true&feature.scvmmvmnetworkingtab=true&feature.scvmmvmdiskstab=true&feature.vmwarearcvm=true&feature.vmwarevmnetworktab=true&feature.vmwarevmdiskstab=true&feature.appliances=true&feature.customlocations=true&feature.arcvmguestmanagement=true&feature.vmwareExtensionToc=true&feature.arcvmextensions=true&feature.vcenters=true&feature.vcenterguestmanagement=true&feature.hideassettypes=Microsoft_Azure_Compute_VirtualMachine&feature.showassettypes=Microsoft_Azure_Compute_AllVirtualMachine#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/vCenter) and navigate to your inventory resources blade.
+1. From your browser, go to the vCenters blade on [Azure Arc Center](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) and navigate to your inventory resources blade.
-1. Select the resource or resources you want to enable and then select **Enable in Azure**.
+2. Select the resource or resources you want to enable and then select **Enable in Azure**.
-1. Select your Azure Subscription and Resource Group and then select **Enable**.
+3. Select your Azure Subscription and Resource Group and then select **Enable**.
This starts a deployment and creates a resource in Azure, creating representations for your VMware vSphere resources. It allows you to manage who can access those resources through Azure role-based access control (RBAC) granularly.
-1. Repeat these steps for one or more network, resource pool, and VM template resources.
+4. Repeat these steps for one or more network, resource pool, and VM template resources.
## Enable existing virtual machines in Azure
-1. From your browser, go to the [vCenters blade on Azure Arc Center](https://portal.azure.com/?microsoft_azure_hybridcompute_assettypeoptions=%7B%22VMwarevCenter%22%3A%7B%22options%22%3A%22%22%7D%7D&feature.customportal=false&feature.canmodifystamps=true&feature.azurestackhci=true&feature.scvmmdisktoc=true&feature.scvmmnettoc=true&feature.scvmmsizetoc=true&feature.scvmmvmnetworkingtab=true&feature.scvmmvmdiskstab=true&feature.vmwarearcvm=true&feature.vmwarevmnetworktab=true&feature.vmwarevmdiskstab=true&feature.appliances=true&feature.customlocations=true&feature.arcvmguestmanagement=true&feature.vmwareExtensionToc=true&feature.arcvmextensions=true&feature.vcenters=true&feature.vcenterguestmanagement=true&feature.hideassettypes=Microsoft_Azure_Compute_VirtualMachine&feature.showassettypes=Microsoft_Azure_Compute_AllVirtualMachine#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/vCenter) and navigate to your vCenter.
+1. From your browser, go to the vCenters blade on [Azure Arc Center](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview) and navigate to your vCenter.
:::image type="content" source="media/enable-guest-management.png" alt-text="Screenshot of how to enable an existing virtual machine in the Azure portal." lightbox="media/enable-guest-management.png":::
In this section, you'll enable resource pools, networks, and VM templates in Azu
1. (Optional) Select **Install guest agent** and then provide the Administrator username and password of the guest operating system.
- The [guest agent](../servers/agent-overview.md) is the connected machine agent. You can install this agent later by selecting the VM in the virtual machine inventory resource blade on your vCenter and selecting **Enable guest management**. For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc enabled VMware vSphere](manage-vmware-vms-in-azure.md).
+ The guest agent is the [Azure Arc connected machine agent](../servers/agent-overview.md). You can install this agent later by selecting the VM in the VM inventory view on your vCenter and selecting **Enable guest management**. For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc enabled VMware vSphere](manage-vmware-vms-in-azure.md).
1. Select **Enable** to start the deployment of the VM represented in Azure.
For information on the capabilities enabled by a guest agent, see [Manage access
## Next steps
-[Manage access to VMware resources through Azure RBAC](manage-access-to-arc-vmware-resources.md).
+- [Manage access to VMware resources through Azure RBAC](manage-access-to-arc-vmware-resources.md).
azure-arc Manage Access To Arc Vmware Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/manage-access-to-arc-vmware-resources.md
Last updated 11/08/2021
# Manage access to VMware resources through Azure Role-Based Access Control
-Once your VMware vCenter resources have been enabled for access through Azure, the final step is setting up a self-service experience for your teams. It provides access to the compute, storage, networking, and other vCenter resources to deploy and manage virtual machines (VMs).
-
-This article describes how to use custom roles to manage granular access to VMware resources through Azure.
+Once your VMware vCenter resources have been enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure and allow your teams to deploy and manage VMs.
> [!IMPORTANT]
> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
-## Arc enabled VMware vSphere custom roles
+## Arc-enabled VMware vSphere built-in roles
-You can select from three custom roles to meet your RBAC needs. You can apply these roles to a whole subscription, resource group, or a single resource.
+There are three built-in roles to meet your access control requirements. You can apply these roles to a whole subscription, resource group, or a single resource.
- **Azure Arc VMware Administrator** role - is used by administrators
You can select from three custom roles to meet your RBAC needs. You can apply th
- **Azure Arc VMware VM Contributor** role - is used by anyone who needs to deploy and manage VMs
->[!NOTE]
->These roles will eventually be converted into built-in roles.
- ### Azure Arc VMware Administrator role
-The **Azure Arc VMware Administrator** role is a custom role that provides permissions to perform all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. Assign this role to users or groups that are administrators managing Azure Arc enabled VMware vSphere deployment.
-
-```json
-{
- "properties": {
- "roleName": "Azure Arc VMware Administrator",
- "description": "Azure Arc VMware Administrator has full permissions to connect new vCenter instances to Azure and decide which resource pools, networks and templates can be used by developers, and also create, update and delete VMs",
- "assignableScopes": [
- "/subscriptions/00000000-0000-0000-0000-000000000000"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.ConnectedVMwarevSphere/*",
- "Microsoft.Insights/AlertRules/*",
- "Microsoft.Insights/MetricAlerts/*",
- "Microsoft.Support/*",
- "Microsoft.Authorization/*/read",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Resources/subscriptions/read",
- "Microsoft.Resources/subscriptions/resourceGroups/read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
-}
-```
-
-Copy the above JSON into an empty file and save the file as `AzureArcVMwareAdministratorRole.json`. Make sure to replace the `00000000-0000-0000-0000-000000000000` with your subscription ID.
+The **Azure Arc VMware Administrator** role is a built-in role that provides permissions to perform all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. Assign this role to users or groups that are administrators managing Azure Arc enabled VMware vSphere deployment.
### Azure Arc VMware Private Cloud User role
-The **Azure Arc VMware Private Cloud User** role is a custom role that provides permissions to use the VMware vSphere resources made accessible through Azure. Assign this role to any users or groups that need to deploy, update, or delete VMs.
-
-We recommend assigning this role at the individual resource pool (or host or cluster), virtual network, or template that you want the user to deploy VMs using:
-
-```json
-{
- "properties": {
- "roleName": "Azure Arc VMware Private Cloud User",
- "description": "Azure Arc VMware Private Cloud User has permissions to use the VMware cloud resources to deploy VMs.",
- "assignableScopes": [
- "/subscriptions/00000000-0000-0000-0000-000000000000"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.Insights/AlertRules/*",
- "Microsoft.Insights/MetricAlerts/*",
- "Microsoft.Support/*",
- "Microsoft.Authorization/*/read",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Resources/subscriptions/read",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.ConnectedVMwarevSphere/virtualnetworks/join/action",
- "Microsoft.ConnectedVMwarevSphere/virtualnetworks/Read",
- "Microsoft.ConnectedVMwarevSphere/virtualmachinetemplates/clone/action",
- "Microsoft.ConnectedVMwarevSphere/virtualmachinetemplates/Read",
- "Microsoft.ConnectedVMwarevSphere/resourcepools/deploy/action",
- "Microsoft.ConnectedVMwarevSphere/resourcepools/Read",
- "Microsoft.ExtendedLocation/customLocations/Read"
- ],
- "notActions": [],
- "dataActions": [
- "Microsoft.ExtendedLocation/customLocations/deploy/action"
- ],
- "notDataActions": []
- }
- ]
- }
-}
-```
-
-Copy the above JSON into an empty file and save the file as `AzureArcVMwarePrivateCloudUserRole.json`. Make sure to replace the `00000000-0000-0000-0000-000000000000` with your subscription ID.
+The **Azure Arc VMware Private Cloud User** role is a built-in role that provides permissions to use the VMware vSphere resources made accessible through Azure. Assign this role to any users or groups that need to deploy, update, or delete VMs.
+
+We recommend assigning this role at the individual resource pool (or host or cluster), virtual network, or template that you want the user to deploy VMs using.
### Azure Arc VMware VM Contributor
-The **Azure Arc VMware VM Contributor** role is a custom role that provides permissions to conduct all VMware virtual machine operations. Assign this role to any users or groups that need to deploy, update, or delete VMs.
+The **Azure Arc VMware VM Contributor** role is a built-in role that provides permissions to conduct all VMware virtual machine operations. Assign this role to any users or groups that need to deploy, update, or delete VMs.
We recommend assigning this role at the subscription or resource group you want the user to deploy VMs using.
-```json
-{
- "properties": {
- "roleName": "Arc VMware VM Contributor",
- "description": "Arc VMware VM Contributor has permissions to perform all actions to update ",
- "assignableScopes": [
- "/subscriptions/00000000-0000-0000-0000-000000000000"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.Insights/AlertRules/*",
- "Microsoft.Insights/MetricAlerts/*",
- "Microsoft.Support/*",
- "Microsoft.Authorization/*/read",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Resources/subscriptions/read",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.ConnectedVMwarevSphere/virtualmachines/Delete",
- "Microsoft.ConnectedVMwarevSphere/virtualmachines/Write",
- "Microsoft.ConnectedVMwarevSphere/virtualmachines/Read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
-}
-```
-
-Copy the above JSON into an empty file and save the file as `AzureArcVMwareVMContributorRole.json`. Replace the `00000000-0000-0000-0000-000000000000` with your subscription ID.
-
-## Add custom roles to your subscription
-
-In this step, you'll add the custom roles to your subscription. Repeat these steps for each custom role and each subscription.
-
-1. From your browser, go to the [Azure portal](https://portal.azure.com) and select the subscription.
-
-1. Select **Access control (IAM)** > **Add** > **Add a custom role**.
-
-1. From the Baseline permissions field, select **Start from JSON** and then select the json file you saved earlier.
-
-1. Select **Review + Create** to review and then select **Create**.
-
-## Assign custom roles to users or groups
+## Assigning the roles to users/groups
-In this step, you'll add the custom roles to users or groups in the subscription, resource group, or a single resource. Repeat these steps for each scope and role.
+1. Go to the [Azure portal](https://portal.azure.com).
-1. From your browser, go to the [Azure portal](https://portal.azure.com) and select the subscription, resource group, or a single resource.
+2. Search for and navigate to the subscription, resource group, or resource at the scope where you want to provide this role.
-1. Locate the Arc enabled VMware vSphere resources. Navigate to the resource group and select the **Show hidden types** checkbox. Then search for **VMware**.
+3. To find the Arc-enabled VMware vSphere resources like resource pools, clusters, hosts, datastores, networks, or virtual machine templates:
+    1. Navigate to the resource group and select the **Show hidden types** checkbox.
+    2. Search for *"VMware"*.
-1. Select **Access control (IAM)** > **Add role assignments** > **Grant access to this resource**.
+4. Select **Access control (IAM)** in the table of contents on the left.
-1. Select the custom role you want to assign:
+5. Select **Add role assignments** under **Grant access to this resource**.
- - **Azure Arc VMware Administrator**
+6. Select the built-in role you want to assign (one of **Azure Arc VMware Administrator**, **Azure Arc VMware Private Cloud User**, or **Azure Arc VMware VM Contributor**).
- - **Azure Arc VMware Private Cloud User**
+7. Search for the Azure Active Directory (AAD) user or group that you want to assign this role to.
- - **Azure Arc VMware VM Contributor**
+8. Select the AAD user or group name. Repeat this for each user or group you want to grant this permission.
-1. Search for and select the Azure Active Directory (AAD) user or group. Repeat these steps for each user or group you want to grant permission.
+9. Repeat the above steps for each scope and role.
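If you prefer scripting the assignment, a minimal Azure CLI sketch (hypothetical user and scope values) would be:

```azurecli-interactive
az role assignment create \
  --assignee "alice@contoso.com" \
  --role "Azure Arc VMware VM Contributor" \
  --scope "/subscriptions/{subscription-id}/resourceGroups/{resource-group}"
```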
## Next steps
azure-arc Manage Vmware Vms In Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md
Last updated 11/10/2021
# Manage VMware VMs in Azure through Arc-enabled VMware vSphere
-In this article, you'll install extensions supported in Azure Arc-enabled servers. The extensions can use various Azure management services like Azure Policy, Azure Security Center, and Azure Monitor.
-
-You can do various operations on the VMware VMs that are enabled by Azure Arc, such as:
+In this article, you will learn how to perform various operations on the Azure Arc-enabled VMware vSphere (preview) VMs such as:
- Start, stop, and restart a VM
You can do various operations on the VMware VMs that are enabled by Azure Arc, s
:::image type="content" source="media/browse-virtual-machines.png" alt-text="Screenshot showing the VMware virtual machine operations." lightbox="media/manage-virtual-machines.png":::
-For more information, such as benefits and capabilities, see [VM extension management with Azure Arc-enabled servers](../servers/manage-vm-extensions.md).
+To perform guest OS operations on Arc-enabled VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM.
> [!IMPORTANT]
> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
For more information, such as benefits and capabilities, see [VM extension manag
Before you can install an extension, you must enable guest management on the VMware VM.
-1. Make sure your target machine is:
+1. Make sure your target machine:
+
+ - is running a [supported operating system](../servers/agent-overview.md#supported-operating-systems).
- - Running a [supported operating system](../servers/agent-overview.md#supported-operating-systems).
+ - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/agent-overview.md#networking-configuration) are not blocked.
- - Able to connect through the firewall to communicate over the internet and these [URLs](../servers/agent-overview.md#networking-configuration) aren't blocked.
+ - has VMware tools installed and running.
- - Communicating through a proxy server to the internet is not supported.
+ - is powered on and the resource bridge has network connectivity to the host running the VM.
>[!NOTE] >If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo` and add `<username> ALL=(ALL) NOPASSWD:ALL` to the end of the file. Make sure to replace `<username>`. > >If your VM template has these changes incorporated, you won't need to do this for the VM created from that template.
-1. From your browser, go to the [Azure portal](https://aka.ms/AzureArcVM).
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
2. Search for and select the VMware VM and select **Configuration**.
3. Select **Enable guest management** and provide the administrator username and password to enable guest management. Then select **Apply**.
- For Linux, use the root account, and for Windows, use an account that is a member of the Local Administrators group.
+ For Linux, use the root account, and for Windows, use an account that is a member of the Local Administrators group.
## Install the LogAnalytics extension
-1. From your browser, go to the [Azure portal](https://aka.ms/AzureArcVM).
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
1. Search for and select the VMware VM that you want to install the extension on.
The deployment starts the installation of the extension on the selected VM.
If you no longer need the VM, you can delete it.
-1. From your browser, go to the [Azure portal](https://aka.ms/AzureArcVM)
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
2. Search for and select the VM you want to delete.
If you no longer need the VM, you can delete it.
## Next steps
-[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md)
+[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere?
-description: Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms.
+ Title: What is Azure Arc-enabled VMware vSphere (preview)?
+description: Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms.
Last updated 11/10/2021
-# What is Azure Arc-enabled VMware vSphere?
+# What is Azure Arc-enabled VMware vSphere (preview)?
-Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure. It also delivers a consistent management experience across both platforms.
+Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure. With Azure Arc-enabled VMware vSphere, you get a consistent management experience across Azure and VMware vSphere infrastructure.
-Arc-enabled VMware vSphere allows you to:
+Arc-enabled VMware vSphere (preview) allows you to:
-- Conduct various VMware virtual machine (VM) lifecycle operations directly from Azure, such as create, start/stop, resize, and delete.
+- Perform various VMware virtual machine (VM) lifecycle operations directly from Azure, such as create, start/stop, resize, and delete.
- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control](../../role-based-access-control/overview.md) (RBAC).
To deliver this experience, you need to deploy the [Azure Arc resource bridge](.
## Supported VMware vSphere versions
-Azure Arc-enabled VMware vSphere currently works with VMware vSphere version 6.5 and above.
+Azure Arc-enabled VMware vSphere (preview) works with VMware vSphere version 6.7.
+
+> [!NOTE]
+> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 2500 VMs. If your vCenter has more than 2500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point.
## Supported scenarios
-The following scenarios are supported in Azure Arc-enabled VMware vSphere:
+The following scenarios are supported in Azure Arc-enabled VMware vSphere (preview):
-- Virtualized Infrastructure Administrators/Cloud Administrators can connect a vCenter instance to Azure and browse the VMware virtual machine inventory in Azure
+- Virtualized Infrastructure Administrators/Cloud Administrators can connect a vCenter instance to Azure and browse the VMware virtual machine inventory in Azure.
-- Administrators can use the Azure portal to browse VMware vSphere inventory and register virtual machines resource pools, networks, and templates into Azure. They can also bulk-enabled guest management on registered virtual machines.
+- Administrators can use the Azure portal to browse VMware vSphere inventory and register virtual machines, resource pools, networks, and templates into Azure. They can also enable guest management on many registered virtual machines at once.
- Administrators can provide app teams/developers fine-grained permissions on those VMware resources through Azure RBAC.

-- App teams can use Azure portal, CLI, or REST API to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart).
+- App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart).
-- App teams and administrators can install extensions, such as the Log Analytics agent, Custom Script Extension, and Dependency Agent, on the virtual machines and do operations supported by the extensions.
+- App teams and administrators can install extensions, such as the Log Analytics agent, Custom Script Extension, and Dependency Agent, on the virtual machines and do operations supported by the extensions.
## Supported regions
-Azure Arc-enabled VMware vSphere is currently supported in these regions:
+You can use Azure Arc-enabled VMware vSphere (preview) in these supported regions:
- East US
- West Europe
-### vCenter requirements
-- For the VMware vCenter Server Appliance, allow inbound connections on TCP port 443 to enable the Azure Arc resource bridge (preview) and VMware cluster extension to communicate with the appliance.
-
-- A resource pool with capacity to allocate 16 GB of RAM and 4 vCPUs.
-
-- A datastore with a minimum of 100 GB free disk space available through the resource pool.
-
-- An external virtual network/switch and internet access, direct or through a proxy server to support outbound communication from Arc resource bridge.
-
-### vSphere requirements
-
-A vSphere account that can read all inventory, deploy, and update VMs to all the resource pools (or clusters), networks, and virtual machine templates that you want to use with Azure Arc. The account is also used for ongoing operations of the Arc-enabled VMware vSphere, and deployment of the Azure Arc resource bridge (preview) VM.
-
-If you are using the [Azure VMware solution](../../azure-vmware/introduction.md), this account would be the **cloudadmin** account.
-
-## Deployment
-
-Deploying the Azure Arc resource bridge (preview) is accomplished using three configuration YAML files:
--- **Application.yaml** - This is the primary configuration file that provides a path to the provider configuration and resource configuration YAML files. The file also specifies the network configuration of the resource bridge, and includes generic cluster information that is not provider-specific.-- **Infra.yaml** - A configuration file that includes a set of configuration properties specific to your private cloud provider.-- **Resource.yaml** - A configuration file that contains all the information related to the Azure Resource Manager resource, such as the subscription name and resource group for the resource bridge in Azure.-
-In the current preview release, these configuration files are automatically created when you run the [Az arcappliance createconfig](/cli/azure/arcappliance/createconfig) command, where the command queries the environment and prompts you to make selections through an interactive experience. See the [how to connect your VMware vCenter to Azure Arc using a helper script](quick-start-connect-vcenter-to-arc-using-script.md).
-
-The `appliance.yaml` file is the main configuration file that specifies the path to two YAML files to deploy the Azure Arc resource bridge (preview) in your environment, and to register it in Azure. This file also includes the network configuration settings, specifically its IP address and optionally, a proxy server if direct network connection to the internet is not allowed in your environment.
-
-```bash
-# Relative or absolute path to the infra.yaml file
-infrastructureConfigPath: "VMware-infra.yaml"
-
-# Relative or absolute path to ARM resource configuration file
-applianceResourceFilePath: "VMware-resource.yaml"
-
-# IP address to be used for control plane/API server from the DHCP range available in the environment. This IP address must be reserved for this, and can't be changed. If it is changed, the resource bridge will not be reachable by all the other Arc agents and services.
-applianceClusterConfig:
- controlPlaneEndpoint: <ipAddress>
- networking:
- # Specify the proxy configuration.
- proxy:
- http: "<http://<proxyURL>:<proxyport>"
- https: "<https://<proxyURL>:<proxyport>"
- noproxy: "..."
- # Specify certificate if applicable
- certificateFilePath: "<certificatePath>"
-```
-
-The `infra.yaml` file includes specific information to enable deployment of the virtual machine within the vSphere infrastructure.
-
-```bash
-vsphereprovider:
-# vCenter credentials, which will be used to create the resource bridge.
-credentials:
- address: <vcenterAddress>
- username: <userName>
- password: <password>
-# Current deployment uses the template and snapshot to create the resource bridge VM.
-appliancevm:
- vmtemplate: <templateName>
- templatesnapshot: <snapshotName>
-# The datacenter where the resource bridge VM will be created on.
-datacenter: <datacenteName>
- # The datastore used by the resource bridge VM
-datastore: <datastoreName>
-# The network interface used by the resource bridge VM
-network: <networkInterfaceName>
-# The resource pool where the resource bridge VM will be created on.
-resourcepool: <resourcePoolName>
-# The folder where the resource bridge will be created under.
-folder: <folderName>
-```
-
-The `resource.yaml` file contains all the information related to the Azure Resource Manager resource definition, such as the subscription, resource group, resource name, and location for the resource bridge in Azure.
-
-```bash
-resource:
- resource_group: <resourceGroupName>
- name: <resourceName>
- location: <location>
- subscription: <subscription>
-```
- ## Next steps - [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md)
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Last updated 11/10/2021
# Quickstart: Connect your VMware vCenter to Azure Arc using the helper script
-Before using the Azure Arc-enabled VMware vSphere features, you'll need to connect your VMware vCenter Server to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server to Azure Arc using a helper script.
+To start using the Azure Arc-enabled VMware vSphere (preview) features, you'll need to connect your VMware vCenter Server to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server to Azure Arc using a helper script.
-First, the script deploys a lightweight Azure Arc appliance, called [Azure Arc resource bridge](../resource-bridge/overview.md) (preview), as a virtual machine running in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between your vCenter Server and Azure Arc.
+First, the script deploys a virtual appliance, called [Azure Arc resource bridge (preview)](../resource-bridge/overview.md), in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between your vCenter Server and Azure Arc.
> [!IMPORTANT] > In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
First, the script deploys a lightweight Azure Arc appliance, called [Azure Arc r
### vCenter Server -- vCenter Server running version 6.5 or later.
+- vCenter Server running version 6.7.
-- Allow inbound connections on TCP port 443 so that the Arc resource bridge and VMware cluster extension can communicate with the vCenter server.
+- Allow inbound connections on the TCP port used by vCenter Server (usually 443) so that the Arc resource bridge and VMware cluster extension can communicate with the vCenter Server.
- >[!NOTE]
- >In this release, only the default port of 443 is supported. If you use a different port, the resource bridge (preview) VM creation fails.
+- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs.
-- A resource pool with a minimum capacity of 16 GB of RAM, four vCPUs.--- A datastore with a minimum of 100 GB of free disk space available through the resource pool.
+- A datastore with a minimum of 100 GB of free disk space available through the resource pool or cluster.
- An external virtual network/switch and internet access, directly or through a proxy. ### vSphere accounts
-A vSphere account that can read all inventory, deploy, and update VMs to all the resource pools (or clusters), networks, and virtual machine templates that you want to use with Azure Arc. This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere and the Azure Arc resource bridge (preview) VM deployment.
+A vSphere account that can:
+- Read all inventory.
+- Deploy and update VMs to all the resource pools (or clusters), networks, and virtual machine templates that you want to use with Azure Arc.
->[!NOTE]
->If you are using the Azure VMware solution, this account would be the `cloudadmin` account.
+This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere (preview) and the Azure Arc resource bridge (preview) VM deployment.
### Workstation
-A Windows or Linux machine that can access both your vCenter Server and internet, directly or through proxy.
+A Windows or Linux machine that can access both your vCenter Server and internet, directly or through a proxy.
## Prepare vCenter Server
-1. Create a resource pool with a reservation of at least 16 GB of RAM and four vCPUs. It should also have at least 100 GB of disk space.
+1. Create a resource pool with a reservation of at least 16 GB of RAM and four vCPUs. It should also have access to a datastore with at least 100 GB of free disk space. (A PowerCLI sketch follows these steps.)
-1. Ensure the vSphere accounts have the appropriate permissions.
+2. Ensure the vSphere accounts have the appropriate permissions.
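For step 1, a minimal VMware PowerCLI sketch follows. It assumes PowerCLI is installed; the vCenter address, cluster name, and pool name are placeholders, and the CPU reservation in MHz only approximates four vCPUs, so adjust it for your hosts' clock speed.

```powershell
# Connect to vCenter Server; prompts for credentials.
Connect-VIServer -Server nyc-vcenter.contoso.com

# Create a resource pool with the minimum reservations for the Arc resource bridge VM.
New-ResourcePool -Location (Get-Cluster -Name 'Cluster-01') -Name 'arc-rb-pool' `
    -CpuReservationMhz 4000 -MemReservationGB 16
```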
-## Run the script
+## Download the onboarding script
- Refer to the table for the script parameters:
+1. Go to the Azure portal.
-| **Parameter** | **Details** |
-| | |
-| **Subscription** | Azure subscription name or ID where you'll create the Azure resources. |
-| **ResourceGroup** | Resource group where you'll create the Azure Arc resources. |
-| **AzLocation** | Azure location ([region](overview.md#supported-regions)) where the resource metadata would be stored. |
-| **ApplianceName** | You can provide the Azure Arc resource bridge (preview) a name of your choice, for example, *contoso-nyc-appliance*. |
-| **CustomLocationName** | Name for the custom location in Azure. |
-| **VcenterName** | Name for the vCenter Server in Azure, which is the name your teams see when deploying their VMs through Azure Arc. </br> Use the name of the data center or its physical location, for example, *contoso-nyc-dc*. |
+2. Search for **Azure Arc** and select it.
-### Windows
+3. On the **Overview** page, select **Add** under **Add your infrastructure for free**, or go to the **Infrastructure** tab.
-1. Open a PowerShell console and navigate to the folder where you want to keep the setup files.
+4. In the **Platform** section, select **Add** under **VMware**.
-2. Download the script by running the following command.
+ :::image type="content" source="media/add-vmware-vcenter.png" alt-text="Screenshot showing how to add a VMware vCenter through Azure Arc center":::
- ```powershell
- Invoke-WebRequest https://arcvmwaredl.blob.core.windows.net/arc-appliance/arcvmware-setup.ps1 -OutFile arcvmware-setup.ps1
+5. Choose **Create a new resource bridge**, and then select **Next**.
- ```
+6. Provide a name of your choice for the Arc resource bridge, for example, `contoso-nyc-resourcebridge`.
-3. Run the following command to allow the script to run as an unsigned script. If you close the session before completing all the steps, rerun this in a new session.
+7. Select the subscription and resource group where the resource bridge will be created.
- ```powershell
- Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
- ```
+8. Under **Region**, select an Azure location where the resource metadata will be stored. The currently supported regions are `East US` and `West Europe`.
-4. Execute the script by providing the parameters `Subscription`, `ResourceGroup`, `AzLocation`, `ApplianceName`, `CustomLocationName`, and `VcenterName` (refer to the table [above](#run-the-script) to understand the parameters).
+9. Provide a name for the custom location. This is the name your teams will see when they deploy VMs. Name it after the datacenter or the physical location of your datacenter, for example, `contoso-nyc-dc`.
- ```powershell
- ./arcvmware-setup.ps1 -Subscription <Subscription> -ResourceGroup <ResourceGroup> -AzLocation <AzLocation> -ApplianceName <ApplianceName> -CustomLocationName <CustomLocationName> -VcenterName <VcenterName>
- ```
+10. Leave the option for **Use the same subscription and resource group as your resource bridge** checked.
-### Linux
+11. Provide a name for your vCenter in Azure, for example, `contoso-nyc-vcenter`.
-1. Open a terminal and navigate to the folder where you want to keep the setup files.
+12. Select **Next: Download and run script >**.
-2. Run the following command to download the onboarding script.
+13. If your subscription isn't registered with all the required resource providers, a **Register** button will appear. Select the button before proceeding to the next step. (A CLI alternative is sketched after these steps.)
- ```bash
- wget https://arcvmwaredl.blob.core.windows.net/arc-appliance/arcvmware-setup.sh
- ```
+ :::image type="content" source="media/register-arc-vmware-providers.png" alt-text="Screenshot showing button to register required resource providers during vCenter onboarding to Arc":::
-3. Update the script with the parameters `ResourceGroup`,`AzLocation`, `ApplianceName`, `CustomerLocationName`, and `VcenterName` (refer to the table [above](#run-the-script) to understand the parameters).
+14. Based on the operating system of your workstation, download the PowerShell or Bash script and copy it to the [workstation](#prerequisites).
-4. Run the following command to execute the script.
+15. (Optional) Select **Next: Verification**. This page shows the status of your onboarding once you run the script on your workstation. Closing this page doesn't affect the onboarding.
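As a CLI alternative to the **Register** button in step 13, here's a sketch that assumes these are the resource providers the onboarding requires:

```azurecli
# Register the resource providers used by Arc-enabled VMware vSphere (assumed provider list).
az provider register --namespace Microsoft.ConnectedVMwarevsphere
az provider register --namespace Microsoft.ExtendedLocation
az provider register --namespace Microsoft.ResourceConnector
```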
- ```bash
- sudo bash arcvmware-setup.sh
+## Run the script
+
+### Windows
+
+Follow these instructions to run the script on a Windows machine:
+
+1. Open a PowerShell window and navigate to the folder where you downloaded the PowerShell script.
+
+2. Run the following command to allow the unsigned script to execute. (If you close the session before completing all the steps, run this command again in the new session.)
+
+ ``` powershell-interactive
+ Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
```
-## Script runtime
+3. Run the script:
-Script execution can take up to 30 minutes to complete. You're prompted to provide values for several parameters and you can refer to the following table for detailed information.
+ ``` powershell-interactive
+ ./resource-bridge-onboarding-script.ps1
+ ```
-| Requirement | Description |
-| | |
-| **Azure login** | You're prompted to sign into Azure by visiting the [device login](https://www.microsoft.com/devicelogin) site and then pasting the prompted code. |
-| **vCenter FQDN/Address** | Fully qualified domain name (FQDN) for vCenter Server (or an IP address). For example, *10.160.0.1* or *nyc-vcenter.contoso.com*. |
-| **vCenter Username** | Username for the vSphere account. For more information, see [vSphere accounts](#vsphere-accounts) above. |
-| **vCenter password** | Password for the vSphere account. |
-| **Data center selection** | Select the name of the data center (as shown in vSphere client) where the Azure Arc resource bridge (preview) VM should be deployed. |
-| **Network selection** | Select the name of the virtual network or segment to which VM must be connected. This network should allow the appliance to talk to the vCenter Server and the Azure endpoints (or internet). |
-| **Static IP / DHCP** | If you have a DHCP server on your network and want to use it, type **y** and then populate the following: <ul><li><b>Static IP address prefix</b>: Network address in CIDR notation, for example, 192.168.0.0/24.</li><li><b>Static gateway</b>: For example, 192.168.0.0.</li><li><b>DNS servers</b>: Comma-separated list of DNS servers.</li><li><b>Start range IP</b>: Minimum size of two available addresses is required for upgrade scenarios. Provide the start IP of that range.</li><li><b>End range IP</b>: Last IP of the IP range requested in the previous field.</li><li><b>VLAN ID</b> (optional)</li></ul> |
-| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge (preview) VM would be deployed. |
-| **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge (preview) VM. |
-| **Folder** | Select the name of the vSphere folder where the Azure Arc resource bridge (preview) VM should be deployed. |
-| **Vm template Name** | Provide a name for the VM template that is created in your vCenter based on the downloaded OVA. For example, `arc-appliance-template`. |
-| **Control Pane IP** | Provide a reserved IP address. A reserved IP address in your DHCP range or a static IP outside of DHCP range, but still available on the network. This IP address shouldn't be assigned to any other machine on the network. |
-| **Appliance proxy settings** | If you have a proxy in your appliance network, type **y** and then populate the following: <ul><li><b>Http</b>: Address of HTTP proxy server.</li><li><b>NoProxy</b>: addresses to be excluded from proxy.</li><li><b>CertificateFilePath</b>: for SSL-based proxies, path to certificate to be used.</li></ul> |
+### Linux
-Once the command execution completes, you can [try out the capabilities](browse-and-enable-vcenter-resources-in-azure.md) of Azure Arc- enabled VMware vSphere.
+Follow these instructions to run the script on a Linux machine:
-### Retry command - Windows
+1. Open the terminal and navigate to the folder where you downloaded the Bash script.
-If the appliance creation fails and you need to retry it. Run the command with `-Force` to clean up the previous deployment and onboard again.
+2. Execute the script using the following command:
-```powershell
-./arcvmware-setup.ps1 -Force -Subscription <Subscription> -ResourceGroup <ResourceGroup> -AzLocation <AzLocation> -ApplianceName <ApplianceName> -CustomLocationName <CustomLocationName> -VcenterName <VcenterName>
-```
+ ``` sh
+ bash resource-bridge-onboarding-script.sh
+ ```
-### Retry command - Linux
+## Inputs for the script
-If the appliance creation fails and you need to retry it, run the command with `--force` to clean up the previous deployment and onboard again.
+A typical onboarding using the script takes 30-60 minutes, and you're prompted for various details during the execution. Refer to the following table for information about them:
-```bash
-sudo bash arcvmware-setup.sh --force
-```
+| **Requirements** | **Details** |
+| | |
+| **Azure login** | Log in to Azure by visiting the [device login](https://www.microsoft.com/devicelogin) site and entering the code when prompted. |
+| **vCenter FQDN/Address** | FQDN for the vCenter Server (or an IP address). </br> For example: `10.160.0.1` or `nyc-vcenter.contoso.com` |
+| **vCenter Username** | Username for the vSphere account. The required permissions for the account are listed in the prerequisites above. |
+| **vCenter password** | Password for the vSphere account. |
+| **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Arc resource bridge VM should be deployed. |
+| **Network selection** | Select the name of the virtual network or segment to which the VM must be connected. This network should allow the appliance to talk to the vCenter Server and the Azure endpoints (or internet). |
+| **Static IP / DHCP** | If you have a DHCP server in your network and want to use it, type **y**; otherwise, type **n**. If you choose static IP configuration, you're asked for the following: </br> 1. `Static IP address prefix`: Network address in CIDR notation, for example, `192.168.0.0/24` </br> 2. `Static gateway`: For example, `192.168.0.0` </br> 3. `DNS servers`: Comma-separated list of DNS servers </br> 4. `Start range IP`: A minimum of 2 available addresses is required; one IP is for the VM, and the other is reserved for upgrade scenarios. Provide the start IP of that range </br> 5. `End range IP`: The last IP of the IP range requested in the previous field </br> 6. `VLAN ID` (optional) |
+| **Resource pool** | Select the name of the resource pool to which the Arc resource bridge VM will be deployed. |
+| **Data store** | Select the name of the datastore to be used for the Arc resource bridge VM. |
+| **Folder** | Select the name of the vSphere VM and Template folder where the Arc resource bridge VM should be deployed. |
+| **VM template Name** | Provide a name for the VM template that will be created in your vCenter based on the downloaded OVA, for example, `arc-appliance-template`. |
+| **Control Plane IP** | Provide a reserved IP address (a reserved IP address in your DHCP range or a static IP outside of DHCP range but still available on the network). Ensure this IP address isn't assigned to any other machine on the network. |
+| **Appliance proxy settings** | Type **y** if there is a proxy in your appliance network; otherwise, type **n**. </br> You need to populate the following when you have a proxy set up: </br> 1. `Http`: Address of the HTTP proxy server </br> 2. `Https`: Address of the HTTPS proxy server </br> 3. `NoProxy`: Addresses to be excluded from the proxy </br> 4. `CertificateFilePath`: For SSL-based proxies, the path to the certificate to be used |
+
+Once the script execution is complete, your setup is finished and you can try out the capabilities of Azure Arc-enabled VMware vSphere. Proceed to the [next steps](browse-and-enable-vcenter-resources-in-azure.md).
## Next steps
azure-arc Quick Start Create A Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md
Once your administrator has connected a VMware vCenter to Azure, represented VMw
- An Azure subscription and resource group where you have an Arc VMware VM contributor role. -- A resource pool on which you have Arc Private Cloud Resource User Role.
+- A resource pool/cluster/host on which you have Arc Private Cloud Resource User Role.
- A virtual machine template resource on which you have Arc Private Cloud Resource User Role. -- (Optional) A virtual network resource on which you have Arc Private Cloud Resource User Role.
+- A virtual network resource on which you have Arc Private Cloud Resource User Role.
## How to create a VM in the Azure portal
-1. From your browser, go to the [Azure portal](https://aka.ms/AzureArcVM). You'll see a unified browse experience for Azure and Arc virtual machines.
+1. From your browser, go to the [Azure portal](https://portal.azure.com) and navigate to the virtual machines browse view. You'll see a unified browse experience for Azure and Arc virtual machines.
:::image type="content" source="media/browse-virtual-machines.png" alt-text="Screenshot showing the unified browse experience for Azure and Arc virtual machines.":::
-1. Select **Add** and then select **Azure Arc machine** from the drop-down.
+2. Select **Add** and then select **Azure Arc machine** from the drop-down.
- :::image type="content" source="media/create-azure-arc-virtual-machine-2.png" alt-text="Screenshot showing the Basic tab for creating an Azure Arc virtual machine.":::
+ :::image type="content" source="media/create-azure-arc-virtual-machine-1.png" alt-text="Screenshot showing the Basic tab for creating an Azure Arc virtual machine.":::
-1. Select the **Subscription** and **Resource group** where you want to deploy the VM.
+3. Select the **Subscription** and **Resource group** where you want to deploy the VM.
-1. Provide the **Virtual machine name** and then select a **Custom location** that your administrator has shared with you.
+4. Provide the **Virtual machine name** and then select a **Custom location** that your administrator has shared with you.
If multiple kinds of VMs are supported, select **VMware** from the **Virtual machine kind** drop-down.
-1. Select the **Resource pool/cluster/host** into which the VM should be deployed.
+5. Select the **Resource pool/cluster/host** into which the VM should be deployed.
-1. Select the **Template** based on which the VM you'll create.
+6. Select the **datastore** that you want to use for storage.
+
+7. Select the **Template** based on which the VM you'll create.
>[!TIP] >You can override the template defaults for **CPU Cores** and **Memory**. If you selected a Windows template, provide a **Username**, **Password** for the **Administrator account**.
-1. (Optional) Change the disks configured in the template. For example, you can add more disks or update existing disks. These disks are created on the default datastore per the VMware vCenter storage policies.
+8. (Optional) Change the disks configured in the template. For example, you can add more disks or update existing disks. All the disks and the VM will be placed on the datastore selected in step 6.
-1. (Optional) Change the network interfaces configured in the template. For example, you can add network interface (NIC) cards or update existing NICs. You can also change the network to which this NIC will be attached, provided you have appropriate permissions to the network resource.
+9. (Optional) Change the network interfaces configured in the template. For example, you can add network interface (NIC) cards or update existing NICs. You can also change the network to which this NIC will be attached, provided you have appropriate permissions to the network resource.
-1. (Optional) Add tags to the VM resource if necessary.
+10. (Optional) Add tags to the VM resource if necessary.
-1. Select **Create** after reviewing all the properties. It should take a few minutes to provision the VM.
+11. Select **Create** after reviewing all the properties. It should take a few minutes to provision the VM.
## Next steps
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-creator-indoor-maps.md
Title: 'Tutorial: Use Microsoft Azure Maps Creator to create indoor maps'+ description: Tutorial on how to use Microsoft Azure Maps Creator to create indoor maps Previously updated : 10/28/2021 Last updated : 01/24/2022
This tutorial describes how to create indoor maps. In this tutorial, you'll lear
> * Create a dataset from your map data. > * Create a tileset from the data in your dataset. > * Query the Azure Maps Web Feature Service (WFS) API to learn about your map features.
-> * Create a feature stateset by using your map features and the data in your dataset.
-> * Update your feature stateset.
+> * Create a feature stateset that can be used to set the states of features in your dataset.
+> * Update the state of a given map feature.
## Prerequisites
This tutorial describes how to create indoor maps. In this tutorial, you'll lear
This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment. >[!IMPORTANT]
-> This tutorial uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). To view mappings of region to geographical location, [see Creator service geographic scope](creator-geographic-scope.md).
+> This tutorial uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services).
## Upload a Drawing package
To upload the Drawing package:
11. Select **Select File**, and then select a Drawing package.
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="Select a Drawing package.":::
+ :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="A screenshot of Postman showing the body tab in the POST window, with Select File highlighted, this is used to select the Drawing package to import into Creator.":::
12. Select **Send**. 13. In the response window, select the **Headers** tab.
-14. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the Drawing package upload.
+14. Copy the value of the **Operation-Location** key. The Operation-Location key is also known as the `status URL` and is required to check the status of the Drawing package upload, which is explained in the next section.
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-response-header.png" alt-text="Copy the status URL in the Location key.":::
+ :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-response-header.png" alt-text="A screenshot of Postman showing the header tab in the response window, with the Operation Location key highlighted.":::
### Check the Drawing package upload status
To check the status of the drawing package and retrieve its unique ID (`udid`):
4. Select the **GET** HTTP method.
-5. Enter the `status URL` you copied in [Upload a Drawing package](#upload-a-drawing-package). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `status URL` you copied in the last step of the previous section. Append your subscription key to the end of the URL in the form `&subscription-key={Your-Azure-Maps-Primary-Subscription-key}` (replace `{Your-Azure-Maps-Primary-Subscription-key}` with your Azure Maps primary subscription key). The request should look like the following URL:
```http https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
To retrieve content metadata:
4. Select the **GET** HTTP method.
-5. Enter the `resource Location URL` you copied in [Check Drawing package upload status](#check-the-drawing-package-upload-status). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `resource Location URL` you copied in the last step of the previous section. Append your subscription key to the end of the URL in the form `&subscription-key={Your-Azure-Maps-Primary-Subscription-key}` (replace `{Your-Azure-Maps-Primary-Subscription-key}` with your Azure Maps primary subscription key). The request should look like the following URL:
```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
To retrieve content metadata:
## Convert a Drawing package
-Now that the Drawing package is uploaded, we'll use the `udid` for the uploaded package to convert the package into map data. The Conversion API uses a long-running transaction that implements the pattern defined [here](creator-long-running-operation-v2.md).
+Now that the Drawing package is uploaded, we'll use the `udid` for the uploaded package to convert the package into map data. The Conversion API uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation](creator-long-running-operation-v2.md) article.
To convert a Drawing package:
To create a tileset:
### Check the tileset creation status
-To check the status of the dataset creation process and retrieve the `tilesetId`:
+To check the status of the tileset creation process and retrieve the `tilesetId`:
1. In the Postman app, select **New**.
To query the unit collection in your dataset:
6. Select **Send**.
-7. After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". In this tutorial, we'll use "UNIT26" as our feature `id` in the next section.
+7. After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". In this tutorial, you'll use "UNIT26" as your feature `id` when you [Update a feature state](#update-a-feature-state).
```json {
To create a stateset:
4. Select the **POST** HTTP method.
-5. Enter the following URL to the [Stateset API](/rest/api/maps/v2/feature-state/create-stateset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId`} with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status)):
+5. Enter the following URL to the [Stateset API](/rest/api/maps/v2/feature-state/create-stateset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId`} with the `datasetId` obtained in [Check the dataset creation status](#check-the-dataset-creation-status)):
```http https://us.atlas.microsoft.com/featurestatesets?api-version=2.0&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
To create a stateset:
6. Select the **Headers** tab.
-7. In the **KEY** field, select `Content-Type`.
+7. In the **KEY** field, select `Content-Type`.
8. In the **VALUE** field, select `application/json`.
- :::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png"alt-text="Header tab information for stateset creation.":::
+ :::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png"alt-text="A screenshot of Postman showing the Header tab of the POST request that shows the Content Type Key with a value of application forward slash json.":::
9. Select the **Body** tab.
azure-monitor It Service Management Connector Secure Webhook Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md
Title: IT Service Management Connector - Secure Export in Azure Monitor description: This article shows you how to connect your ITSM products/services with Secure Export in Azure Monitor to centrally monitor and manage ITSM work items. -- Last updated 09/08/2020
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Azure Configurations description: This article shows you how to configure Azure in order to connect your ITSM products/services with Secure Export in Azure Monitor to centrally monitor and manage ITSM work items. -- Last updated 01/03/2021
azure-monitor Itsmc Connections Cherwell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections-cherwell.md
Title: Connect Cherwell with IT Service Management Connector description: This article provides information about how to connect Cherwell with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. -- Last updated 12/21/2020
azure-monitor Itsmc Connections Provance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections-provance.md
Title: Connect Provance with IT Service Management Connector description: This article provides information about how to connect Provance with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. -- Last updated 12/21/2020
azure-monitor Itsmc Connections Scsm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections-scsm.md
Title: Connect SCSM with IT Service Management Connector description: This article provides information about how to connect SCSM with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. -- Last updated 12/21/2020
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
Title: Connect ServiceNow with IT Service Management Connector description: Learn how to connect ServiceNow with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage ITSM work items. -- Last updated 12/21/2020
azure-monitor Itsmc Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections.md
Title: IT Service Management Connector in Azure Monitor description: This article provides information about how to connect your ITSM products/services with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. -- Last updated 05/12/2020
azure-monitor Itsmc Connector Deletion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connector-deletion.md
Title: Delete unused ITSM connectors description: This article provides an explanation of how to delete ITSM connectors and the action groups that are associated with it. -- Last updated 12/29/2020
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-definition.md
Title: IT Service Management Connector in Log Analytics description: This article provides an overview of IT Service Management Connector (ITSMC) and information about using it to monitor and manage ITSM work items in Log Analytics and resolve problems quickly. -- Last updated 05/24/2018
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-overview.md
Title: IT Service Management Connector overview description: This article provides an overview of IT Service Management Connector (ITSMC). -- Last updated 12/16/2020
azure-monitor Itsmc Secure Webhook Connections Bmc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-bmc.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Configuration with BMC description: This article shows you how to connect your ITSM products/services with BMC on Secure Export in Azure Monitor. -- Last updated 12/31/2020
azure-monitor Itsmc Secure Webhook Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-servicenow.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Configuration with ServiceNow description: This article shows you how to connect your ITSM products/services with ServiceNow on Secure Export in Azure Monitor. -- Last updated 12/31/2020
azure-monitor Itsmc Service Manager Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-service-manager-script.md
Title: Create web app for Service Management Connector description: Create a Service Manager Web app using an automated script to connect with IT Service Management Connector in Azure, and centrally monitor and manage the ITSM work items. -- Last updated 12/06/2021
azure-monitor Itsmc Synced Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-synced-data.md
Title: Data synced from your ITSM product to LA Workspace description: This article provides an overview of Data synced from your ITSM product to LA Workspace. -- Last updated 12/29/2020
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/availability-overview.md
There are four types of availability tests:
* [Custom TrackAvailability test](availability-azure-functions.md): If you decide to create a custom application to run availability tests, you can use the [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) method to send the results to Application Insights. > [!IMPORTANT]
-> Both the [URL ping test](monitor-web-app-availability.md) and the [multi-step web test](availability-multistep.md) rely on the DNS infrastructure of the public internet to resolve the domain names of the tested endpoints. If you're using private DNS, you must ensure that the public domain name servers can remove every domain name of your test. When that's not possible, you can use [custom TrackAvailability tests](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) instead.
+> Both the [URL ping test](monitor-web-app-availability.md) and the [multi-step web test](availability-multistep.md) rely on the DNS infrastructure of the public internet to resolve the domain names of the tested endpoints. If you're using private DNS, you must ensure that the public domain name servers can resolve every domain name of your test. When that's not possible, you can use [custom TrackAvailability tests](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) instead.
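If you go the custom route, here's a minimal C# sketch of reporting one availability result with `TrackAvailability()`. It assumes the Microsoft.ApplicationInsights NuGet package; the test name, run location, and connection string are placeholders.

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Build a client; replace the connection string with your resource's value.
var config = TelemetryConfiguration.CreateDefault();
config.ConnectionString = "<your-connection-string>";
var client = new TelemetryClient(config);

// Report one availability result from your own test runner.
client.TrackAvailability(
    name: "private-endpoint-probe",            // hypothetical test name
    timeStamp: DateTimeOffset.UtcNow,
    duration: TimeSpan.FromMilliseconds(125),  // how long the probe took
    runLocation: "onprem-agent-01",            // hypothetical location label
    success: true);

client.Flush();
```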
You can create up to 100 availability tests per Application Insights resource.
See the dedicated [troubleshooting article](troubleshoot-availability.md).
* [Multi-step web tests](availability-multistep.md) * [URL tests](monitor-web-app-availability.md) * [Create and run custom availability tests using Azure Functions](availability-azure-functions.md)
-* [Web Tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
+* [Web Tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azur
> [!IMPORTANT] > The following versions of ASP.NET Core are supported for auto-instrumentation on Windows: ASP.NET Core 3.1, 5.0 and 6.0. Versions 2.0, 2.1, 2.2, and 3.0 have been retired and are no longer supported. Please upgrade to a [supported version](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core for auto-instrumentation to work.
+> [!NOTE]
+> Auto-instrumentation used to be known as "codeless attach" before October 2021.
+ [Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead. See the [enable monitoring section](#enable-monitoring ) below to begin setting up Application Insights with your App Service resource.
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/codeless-overview.md
Auto-instrumentation allows you to enable application monitoring with Applicatio
Application Insights is integrated with various resource providers and works on different environments. In essence, all you have to do is enable and - in some cases - configure the agent, which will collect the telemetry automatically. In no time, you'll see the metrics, requests, and dependencies in your Application Insights resource, which will allow you to spot the source of potential problems before they occur, and analyze the root cause with end-to-end transaction view.
+> [!NOTE]
+> Auto-instrumentation used to be known as "codeless attach" before October 2021.
++ ## Supported environments, languages, and resource providers As we're adding new integrations, the auto-instrumentation capability matrix becomes complex. The table below shows you the current state of the matter as far as support for various resource providers, languages, and environments go.
For [Python](./opencensus-python.md), use the SDK.
## Azure Functions
-The basic monitoring for Azure Functions is enabled by default to collects log, performance, error data, and HTTP requests. For Java applications, you can enable richer monitoring with distributed tracing and get the end-to-end transaction details. This functionality for Java is in public preview for Windows and you can [enable it in Azure portal](./monitor-functions.md).
+The basic monitoring for Azure Functions is enabled by default to collect logs, performance and error data, and HTTP requests. For Java applications, you can enable richer monitoring with distributed tracing and get the end-to-end transaction details. This functionality for Java is in public preview for Windows and you can [enable it in Azure portal](./monitor-functions.md).
## Azure Spring Cloud
azure-netapp-files Monitor Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/monitor-azure-netapp-files.md
na Previously updated : 01/06/2022 Last updated : 01/24/2022 # Ways to monitor Azure NetApp Files
For Activity log warnings for Azure NetApp Files volumes, see [Activity log warn
## Azure NetApp Files metrics
-Azure NetApp Files provides metrics on allocated storage, actual storage usage, volume IOPS, and latency. By analyzing these metrics, you can gain a better understanding on the usage pattern and volume performance of your NetApp accounts.
+Azure NetApp Files provides metrics on allocated storage, actual storage usage, volume IOPS, and latency. With these metrics, you can gain a better understanding on the usage pattern and volume performance of your NetApp accounts.
You can find metrics for a capacity pool or volume by selecting the **capacity pool** or **volume**. Then click **Metric** to view the available metrics. For more information about Azure NetApp Files metrics, see [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md).
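If you prefer scripting over the portal, a minimal Azure CLI sketch follows; the resource ID is a placeholder, and `VolumeLogicalSize` is assumed as the metric of interest.

```azurecli
# Query the average logical size of a volume over the default time window.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>" \
  --metric VolumeLogicalSize \
  --aggregation Average
```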
+## Azure Service Health
+
+The [Azure Service Health dashboard](https://azure.microsoft.com/features/service-health) keeps you informed about the health of your environment. It provides a personalized view of the status of your Azure services in the regions where they are used. The dashboard provides upcoming planned maintenance and relevant health advisories while allowing you to manage service health alerts.
+
+For more information, see [Azure Service Health dashboard](../service-health/service-health-overview.md) documentation.
+ ## Capacity utilization monitoring
-It's important to monitor capacity regularly. You can monitor capacity utilization at the VM level. You can check the used and available capacity of a volume by using Windows or Linux clients. You can also configure alerts by using `ANFCapacityManager`.
+It is important to monitor capacity regularly. You can monitor capacity utilization at the VM level. You can check the used and available capacity of a volume by using Windows or Linux clients. You can also configure alerts by using `ANFCapacityManager`.
For more information, see [Monitor capacity utilization](volume-hard-quota-guidelines.md#how-to-operationalize-the-volume-hard-quota-change).
azure-resource-manager Bicep Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-array.md
The output from the preceding example with the default values is:
`union(arg1, arg2, arg3, ...)`
-Returns a single array or object with all elements from the parameters. Duplicate values or keys are only included once.
+Returns a single array or object with all elements from the parameters. For arrays, duplicate values are included once. For objects, duplicate property names are only included once.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
An array or object.
+### Remarks
+
+The union function uses the sequence of the parameters to determine the order and values of the result.
+
+For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any additional parameters. If a value is already present, its earlier placement in the array is preserved.
+
+For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
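For instance, a minimal sketch of the overwrite behavior for objects (the names here are arbitrary):

```bicep
var first = {
  a: 'b'
}
var second = {
  a: 'c'
  d: 'e'
}

// The later parameter wins on the duplicate name, so merged is { a: 'c', d: 'e' }.
output merged object = union(first, second)
```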
+ ### Example The following example shows how to use union with arrays and objects:
param firstArray array = [
param secondArray array = [ 'three' 'four'
+ 'two'
] output objectOutput object = union(firstObject, secondObject)
azure-resource-manager Bicep Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-object.md
The output from the preceding example with the default values is:
`union(arg1, arg2, arg3, ...)`
-Returns a single array or object with all elements from the parameters. Duplicate values or keys are only included once.
+Returns a single array or object with all elements from the parameters. For arrays, duplicate values are included once. For objects, duplicate property names are only included once.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
An array or object.
+### Remarks
+
+The union function uses the sequence of the parameters to determine the order and values of the result.
+
+For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any additional parameters. If a value is already present, its earlier placement in the array is preserved.
+
+For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
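To illustrate, a minimal sketch of the overwrite behavior for objects (the names here are arbitrary):

```bicep
var first = {
  a: 'b'
}
var second = {
  a: 'c'
  d: 'e'
}

// The later parameter wins on the duplicate name, so merged is { a: 'c', d: 'e' }.
output merged object = union(first, second)
```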
+ ### Example The following example shows how to use union with arrays and objects:
param firstArray array = [
param secondArray array = [ 'three' 'four'
+ 'two'
] output objectOutput object = union(firstObject, secondObject)
azure-resource-manager Template Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-array.md
The output from the preceding example with the default values is:
`union(arg1, arg2, arg3, ...)`
-Returns a single array or object with all elements from the parameters. Duplicate values or keys are only included once.
+Returns a single array or object with all elements from the parameters. For arrays, duplicate values are included once. For objects, duplicate property names are only included once.
### Parameters
Returns a single array or object with all elements from the parameters. Duplicat
An array or object.
+### Remarks
+
+The union function uses the sequence of the parameters to determine the order and values of the result.
+
+For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any additional parameters. If a value is already present, its earlier placement in the array is preserved.
+
+For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
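For instance, a minimal sketch of the overwrite behavior for objects, using `json()` to build inline objects (the names are arbitrary):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "merged": {
      "type": "object",
      "value": "[union(json('{\"a\":\"b\"}'), json('{\"a\":\"c\",\"d\":\"e\"}'))]"
    }
  }
}
```

The `merged` output is `{"a": "c", "d": "e"}` because the later parameter's value for `a` overwrites the earlier one.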
+ ### Example The following example shows how to use union with arrays and objects.
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-object.md
The output from the preceding example is:
`union(arg1, arg2, arg3, ...)`
-Returns a single array or object with all elements from the parameters. Duplicate values or keys are only included once.
+Returns a single array or object with all elements from the parameters. For arrays, duplicate values are included once. For objects, duplicate property names are only included once.
### Parameters
Returns a single array or object with all elements from the parameters. Duplicat
An array or object.
+### Remarks
+
+The union function uses the sequence of the parameters to determine the order and values of the result.
+
+For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any additional parameters. If a value is already present, its earlier placement in the array is preserved.
+
+For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
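To illustrate, a minimal sketch of the overwrite behavior for objects, using `json()` to build inline objects (the names are arbitrary):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "merged": {
      "type": "object",
      "value": "[union(json('{\"a\":\"b\"}'), json('{\"a\":\"c\",\"d\":\"e\"}'))]"
    }
  }
}
```

The `merged` output is `{"a": "c", "d": "e"}` because the later parameter's value for `a` overwrites the earlier one.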
+ ### Example The following example shows how to use union with arrays and objects:
azure-sql-edge Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/backup-restore.md
keywords:
--++ Last updated 05/19/2020
azure-sql-edge Configure Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/configure-replication.md
keywords:
--++ Last updated 05/19/2020
azure-sql-edge Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/configure.md
keywords:
--++ Last updated 09/22/2020
azure-sql-edge Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/connect.md
keywords:
--++ Last updated 07/25/2020
azure-sql-edge Create External Stream Transact Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/create-external-stream-transact-sql.md
keywords:
--++ Last updated 07/27/2020
azure-sql-edge Create Stream Analytics Job https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/create-stream-analytics-job.md
keywords:
--++ Last updated 07/27/2020
azure-sql-edge Data Retention Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/data-retention-cleanup.md
keywords: SQL Edge, data retention
--++ Last updated 09/04/2020
azure-sql-edge Data Retention Enable Disable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/data-retention-enable-disable.md
keywords: SQL Edge, data retention
--++ Last updated 09/04/2020
azure-sql-edge Data Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/data-retention-overview.md
keywords: SQL Edge, data retention
--++ Last updated 09/04/2020
azure-sql-edge Date Bucket Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/date-bucket-tsql.md
keywords: Date_Bucket, SQL Edge
--++ Last updated 09/03/2020
azure-sql-edge Deploy Dacpac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-dacpac.md
keywords: SQL Edge, sqlpackage
--++ Last updated 09/03/2020
azure-sql-edge Deploy Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-kubernetes.md
keywords: SQL Edge, container, kubernetes
--++ Last updated 09/22/2020
azure-sql-edge Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-portal.md
keywords: deploy SQL Edge
--++ Last updated 09/22/2020
azure-sql-edge Disconnected Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/disconnected-deployment.md
keywords: SQL Edge, container, docker
--++ Last updated 09/22/2020
azure-sql-edge Drop External Stream Transact Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/drop-external-stream-transact-sql.md
keywords:
--++ Last updated 05/19/2020
azure-sql-edge Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/features.md
keywords: introduction to SQL Edge, what is SQL Edge, SQL Edge overview
--++ Last updated 09/03/2020
azure-sql-edge High Availability Sql Edge Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/high-availability-sql-edge-containers.md
keywords: SQL Edge, containers, high availability
--++ Last updated 09/22/2020
azure-sql-edge Imputing Missing Values https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/imputing-missing-values.md
keywords: SQL Edge, timeseries
--++ Last updated 09/22/2020
azure-sql-edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/overview.md
keywords: introduction to SQL Edge,what is SQL Edge, SQL Edge overview
--++ Last updated 05/19/2020
azure-sql-edge Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/performance-best-practices.md
keywords: SQL Edge, data retention
--++ Last updated 09/22/2020
azure-sql-edge Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/security-overview.md
keywords: SQL Edge, security
--++ Last updated 09/22/2020
azure-sql-edge Stream Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/stream-data.md
keywords:
--++ Last updated 05/19/2020
azure-sql-edge Streaming Catalog Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/streaming-catalog-views.md
keywords: sys.external_streams, SQL Edge
--++ Last updated 05/19/2019
azure-sql-edge Sys External Job Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-external-job-streams.md
keywords: sys.external_job_streams, SQL Edge
--++ Last updated 05/19/2019
azure-sql-edge Sys External Streaming Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-external-streaming-jobs.md
keywords: sys.external_streaming_jobs, SQL Edge
--++ Last updated 05/19/2019
azure-sql-edge Sys External Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-external-streams.md
keywords: sys.external_streams, SQL Edge
--++ Last updated 05/19/2019
azure-sql-edge Sys Sp Cleanup Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-sp-cleanup-data-retention.md
keywords: sys.sp_cleanup_data_retention (Transact-SQL), SQL Edge
--++ Last updated 09/22/2020
azure-sql-edge Track Data Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/track-data-changes.md
keywords:
--++ Last updated 05/19/2020
azure-sql-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/troubleshoot.md
keywords: SQL Edge, troubleshooting, deployment errors
--++ Last updated 09/22/2020
azure-sql-edge Tutorial Sync Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-sync-data-factory.md
keywords: SQL Edge,sync data from SQL Edge, SQL Edge data factory
--++ Last updated 05/19/2020
azure-sql-edge Tutorial Sync Data Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-sync-data-sync.md
keywords: SQL Edge,sync data from SQL Edge, SQL Edge data sync
--++ Last updated 05/19/2020
azure-sql-edge Usage And Diagnostics Data Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/usage-and-diagnostics-data-configuration.md
description: Learn how to configure usage and diagnostics data in Azure SQL Edge
--++ Last updated 08/04/2020
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
--++ Last updated 01/10/2022
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
ms.devlang: --++ Last updated 10/29/2020
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-backup-retention-configure.md
ms.devlang: --++ Last updated 12/16/2020
azure-sql Long Term Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-retention-overview.md
ms.devlang: --++ Last updated 07/13/2021
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/recovery-using-backups.md
ms.devlang: --++ Last updated 01/10/2022
azure-sql Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/backup-database-cli.md
ms.devlang: azurecli --++ Last updated 01/17/2022
azure-sql Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-cli.md
ms.devlang: azurecli --++ Last updated 01/18/2022
azure-sql Import From Bacpac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-powershell.md
ms.devlang: PowerShell --++ Last updated 05/24/2019
azure-sql Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-cli.md
ms.devlang: azurecli --++ Last updated 01/18/2022
azure-sql Restore Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-powershell.md
ms.devlang: PowerShell --++ Last updated 03/27/2019
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/long-term-backup-retention-configure.md
ms.devlang: --++ Last updated 09/12/2021

# Manage Azure SQL Managed Instance long-term backup retention

[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-In Azure SQL Managed Instance, you can configure a [long-term backup retention](../database/long-term-retention-overview.md) policy (LTR) as a public preview feature. This allows you to automatically retain database backups in separate Azure Blob storage containers for up to 10 years. You can then recover a database using these backups with the Azure portal and PowerShell.
+In Azure SQL Managed Instance, you can configure a [long-term backup retention](../database/long-term-retention-overview.md) policy (LTR). This allows you to automatically retain database backups in separate Azure Blob storage containers for up to 10 years. You can then recover a database using these backups with the Azure portal and PowerShell.
The following sections show you how to use the Azure portal, PowerShell, and Azure CLI to configure the long-term backup retention, view backups in Azure SQL storage, and restore from a backup in Azure SQL storage.
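For the PowerShell path, a minimal sketch of setting and then reading back an LTR policy is shown below. It assumes the Az.Sql module and a signed-in session; the resource group, instance, and database names are hypothetical placeholders.

```powershell
# Hedged sketch: set a long-term retention policy on a managed instance database.
# Assumes Az.Sql is installed and you've run Connect-AzAccount; names are placeholders.
$policy = @{
    ResourceGroupName = "myResourceGroup"
    InstanceName      = "myManagedInstance"
    DatabaseName      = "myDatabase"
    WeeklyRetention   = "P12W"   # keep one weekly backup for 12 weeks
    MonthlyRetention  = "P12M"   # keep one monthly backup for 12 months
    YearlyRetention   = "P5Y"    # keep one yearly backup for 5 years
    WeekOfYear        = 16       # which week's full backup is kept each year
}
Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy @policy

# Read the policy back to confirm it was applied.
Get-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -ResourceGroupName "myResourceGroup" `
    -InstanceName "myManagedInstance" -DatabaseName "myDatabase"
```

The retention values are ISO 8601 durations, which is why weeks, months, and years are written as `P12W`, `P12M`, and `P5Y`.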
azure-sql Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/point-in-time-restore.md
The following table shows point-in-time restore scenarios for SQL Managed Instan
| |Restore existing DB to the same instance of SQL Managed Instance| Restore existing DB to another SQL Managed Instance|Restore dropped DB to same SQL Managed Instance|Restore dropped DB to another SQL Managed Instance|
|:-|:-|:-|:-|:-|
-|**Azure portal**| Yes|No |Yes|No|
+|**Azure portal**| Yes|Yes|Yes|Yes|
|**Azure CLI**|Yes |Yes |No|No|
|**PowerShell**| Yes|Yes |Yes|Yes|
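To make the PowerShell column concrete, here's a hedged sketch of a cross-instance point-in-time restore (Az.Sql module; all names and the timestamp are hypothetical placeholders):

```powershell
# Hedged sketch: restore an existing database to a point in time on another managed
# instance. Assumes Az.Sql and a signed-in session; names are placeholders.
$pointInTime = (Get-Date).AddHours(-2).ToUniversalTime()   # any time within the retention window

Restore-AzSqlInstanceDatabase -FromPointInTimeBackup `
    -ResourceGroupName "sourceRG" -InstanceName "sourceManagedInstance" -Name "myDatabase" `
    -PointInTime $pointInTime `
    -TargetResourceGroupName "targetRG" -TargetInstanceName "targetManagedInstance" `
    -TargetInstanceDatabaseName "myDatabase_restored"
```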
azure-sql Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/restore-geo-backup-cli.md
ms.devlang: azurecli --++ Last updated 01/18/2022
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-overview.md
To perform this migration, you must be added as a coadministrator for the subscr
5. Check the status of your registration. Registration can take a few minutes to complete.

```powershell
- Get-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute
+ Get-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate
```

## How is migration for Cloud Services (classic) different from Virtual Machines (classic)?
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Overview.md
keywords: facial recognition, facial recognition software, facial analysis, face
The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy.
-Identity Verification: Verifies someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeatedly as someone accesses a digital or physical service.
-
-Touchless Access Control: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, or buildings as well as reception kiosks at offices, hospitals, gyms, clubs, or schools.
-
-Face Redaction: Redact or blur detected faces of people recorded in a video to protect their privacy.
-
This documentation contains the following types of articles:
* The [quickstarts](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
* The [how-to guides](./Face-API-How-to-Topics/HowtoDetectFacesinImage.md) contain instructions for using the service in more specific or customized ways.
* The [conceptual articles](./concepts/face-detection.md) provide in-depth explanations of the service's functionality and features.
* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
+## Example use cases
+
+Identity verification: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
+
+Touchless access control: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools.
+
+Face redaction: Redact or blur detected faces of people recorded in a video to protect their privacy.
++
## Face detection and analysis
-Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data, which is used in later operations to identify or verify faces.
+Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data. This is used in later operations to identify or verify faces.
-Optionally, face detection can also extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service (for example, your application could advise users to take off their sunglasses if the user is wearing sunglasses).
+Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service. For example, your application could advise users to take off their sunglasses if they're wearing sunglasses.
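To make the shape of a Detect call concrete, here's a hedged PowerShell sketch against the Face REST API v1.0; the resource endpoint, key, and image URL are placeholders for values from your own resource.

```powershell
# Hedged sketch of a Face - Detect call; endpoint, key, and image URL are placeholders.
$endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
$key      = "<your-face-key>"

$body = @{ url = "https://example.com/photo.jpg" } | ConvertTo-Json

# Request the face rectangle, a reusable face ID, and a couple of optional attributes.
$faces = Invoke-RestMethod -Method Post `
    -Uri "$endpoint/face/v1.0/detect?returnFaceId=true&returnFaceAttributes=headPose,glasses" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/json" -Body $body

# Each result carries faceId, faceRectangle (top/left/width/height), and faceAttributes.
$faces | ForEach-Object { $_.faceId, $_.faceRectangle }
```

The returned `faceId` is the handle that later Identify and Verify calls consume.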
> [!NOTE]
> The face detection feature is also available through the [Computer Vision service](../computer-vision/overview.md). However, if you want to use other Face operations like Identify, Verify, Find Similar, or Face grouping, you should use this service instead.
For more information on face detection and analysis, see the [Face detection](co
## Identity verification
-Modern enterprises and apps can use the the Face identification and Face verification operations to verify that a user is who they claim to be.
+Modern enterprises and apps can use the Face identification and Face verification operations to verify that a user is who they claim to be.
### Identification
After you create and train a group, you can do identification against the group
The verification operation answers the question, "Do these two faces belong to the same person?".
-Verification is also a "one-to-one" matching of a face in an image to a single face from a secure repository or photo to verify they are the same individual. Verification can be used for Identity Verification, such as a banking app that enables users to open a credit account remotely by taking a selfie and taking a picture of a photo ID to verify their identity.
+Verification is also a "one-to-one" matching of a face in an image to a single face from a secure repository or photo to verify that they're the same individual. Verification can be used for identity verification, such as a banking app that enables users to open a credit account remotely by taking a new picture of themselves and sending it with a picture of their photo ID.
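A hedged sketch of that flow, reusing the placeholder `$endpoint` and `$key` from the detection example; the two face IDs would come from earlier Detect calls:

```powershell
# Hedged sketch of a Face - Verify call; the face IDs are placeholders from prior Detect calls.
$body = @{
    faceId1 = "<face-id-from-selfie>"
    faceId2 = "<face-id-from-photo-id>"
} | ConvertTo-Json

$result = Invoke-RestMethod -Method Post -Uri "$endpoint/face/v1.0/verify" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/json" -Body $body

# Returns isIdentical (a boolean) plus a confidence score between 0 and 1.
"{0} (confidence {1})" -f $result.isIdentical, $result.confidence
```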
For more information about identity verification, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
The Group operation divides a set of unknown faces into several smaller groups b
All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression. For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
-## Sample app
+## Sample apps
The following sample applications show a few ways to use the Face service:
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
Title: "Text-to-speech quickstart - Speech service"
-description: Learn how to use the Speech SDK to convert text-to-speech. In this quickstart, you learn about object construction and design patterns, supported audio output formats, the Speech CLI, and custom configuration options for speech synthesis.
+description: Learn how to use the Speech SDK to convert text to speech, including object construction and design patterns, supported audio output formats, the Speech CLI, and custom configuration options for speech synthesis.
zone_pivot_groups: programming-languages-set-twenty-four
keywords: text to speech
-# Get started with Text-to-Speech
+# Get started with text-to-speech
::: zone pivot="programming-language-csharp"

[!INCLUDE [C# Basics include](includes/how-to/text-to-speech-basics/text-to-speech-basics-csharp.md)]
keywords: text to speech
## Get position information
-Your project may need to know when a word is spoken by Text-to-Speech so that it can take specific action based on that timing.
-As an example, if you wanted to highlight words as they were spoken, you would need to know what to highlight, when to highlight it, and for how long to highlight it.
+Your project might need to know when a word is spoken by text-to-speech so that it can take specific action based on that timing. For example, if you want to highlight words as they're spoken, you need to know what to highlight, when to highlight it, and for how long to highlight it.
-You can accomplish this using the `WordBoundary` event available within `SpeechSynthesizer`.
-This event is raised at the beginning of each new spoken word and will provide a time offset within the spoken stream and a text offset within the input prompt.
+You can accomplish this by using the `WordBoundary` event within `SpeechSynthesizer`. This event is raised at the beginning of each new spoken word. It provides a time offset within the spoken stream and a text offset within the input prompt:
-* `AudioOffset` reports the output audio's elapsed time between the beginning of synthesis and the start of the next word. This is measured in hundred-nanosecond units (HNS) with 10,000 HNS equivalent to 1 millisecond.
+* `AudioOffset` reports the output audio's elapsed time between the beginning of synthesis and the start of the next word. This is measured in hundred-nanosecond units (HNS), with 10,000 HNS equivalent to 1 millisecond.
* `WordOffset` reports the character position in the input string (original text or [SSML](speech-synthesis-markup.md)) immediately before the word that's about to be spoken.

> [!NOTE]
-> `WordBoundary` events are raised as the output audio data becomes available, which will be faster than playback to an output device. Appropriately synchronizing stream timing to "real time" must be accomplished by the caller.
+> `WordBoundary` events are raised as the output audio data becomes available, which will be faster than playback to an output device. The caller must appropriately synchronize stream timing to "real time."
-You can find examples of using `WordBoundary` in the [Text-to-Speech samples](https://aka.ms/csspeech/samples) on GitHub.
+You can find examples of using `WordBoundary` in the [text-to-speech samples](https://aka.ms/csspeech/samples) on GitHub.
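Since both offsets are plain numbers, the unit conversion described above is simple arithmetic; a quick sketch with a hypothetical offset value:

```powershell
# Convert a hypothetical WordBoundary audio offset from 100-ns ticks (HNS) to milliseconds.
$audioOffsetHns = 1250000
$audioOffsetMs  = $audioOffsetHns / 10000   # 10,000 HNS = 1 ms
"Word starts $audioOffsetMs ms into the output audio."   # 125 ms
```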
## Next steps
-* [Get started with custom neural voice](how-to-custom-voice.md)
+* [Get started with Custom Neural Voice](how-to-custom-voice.md)
* [Improve synthesis with SSML](speech-synthesis-markup.md)
* Learn how to use the [Long Audio API](long-audio-api.md) for large text samples like books and news articles
* See the [quickstart samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart) on GitHub
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/overview.md
# What is the Speech service?
-The Speech service is the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It's easy to speech enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Studio](speech-studio-overview.md), or [REST APIs](#reference-docs).
+The Speech service is the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. It's easy to speech enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Studio](speech-studio-overview.md), or [REST APIs](#reference-docs).
> [!IMPORTANT]
-> The Speech service has replaced Bing Speech API and Translator Speech. See the _Migration_ section for migration instructions.
+> The Speech service has replaced the Bing Speech API and Translator Speech. For migration instructions, see the _Migration_ section.
-The following features are part of the Speech service. Use the links in this table to learn more about common use-cases for each feature, or browse the API reference.
+The following features are part of the Speech service. Use the links in this table to learn more about common use-cases for each feature. You can also browse the API reference.
| Service | Feature | Description | SDK | REST |
|---|---|---|---|---|
-| [Speech-to-Text](speech-to-text.md) | Real-time Speech-to-text | Speech-to-text transcribes or translates audio streams or local files to text in real time that your applications, tools, or devices can consume or display. Use speech-to-text with [Language Understanding (LUIS)](../luis/index.yml) to derive user intents from transcribed speech and act on voice commands. | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
-| | [Batch Speech-to-Text](batch-transcription.md) | Batch Speech-to-text enables asynchronous speech-to-text transcription of large volumes of speech audio data stored in Azure Blob Storage. In addition to converting speech audio to text, Batch Speech-to-text also allows for diarization and sentiment-analysis. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
-| | [Multi-device Conversation](multi-device-conversation.md) | Connect multiple devices or clients in a conversation to send speech- or text-based messages, with easy support for transcription and translation| Yes | No |
-| | [Conversation Transcription](./conversation-transcription.md) | Enables real-time speech recognition, speaker identification, and diarization. It's perfect for transcribing in-person meetings with the ability to distinguish speakers. | Yes | No |
-| | [Create Custom Speech Models](#customize-your-speech-experience) | If you are using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
-| | [Pronunciation Assessment](./how-to-pronunciation-assessment.md) | Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. | [Yes](./how-to-pronunciation-assessment.md) | [Yes](./rest-speech-to-text.md#pronunciation-assessment-parameters) |
-| [Text-to-Speech](text-to-speech.md) | Prebuilt neural voices | Text-to-Speech converts input text into human-like synthesized speech using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Use neural voices, which are human-like voices powered by deep neural networks. See [Language support](language-support.md). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
+| [Speech-to-text](speech-to-text.md) | Real-time speech-to-text | Speech-to-text transcribes or translates audio streams or local files to text in real time that your applications, tools, or devices can consume or display. Use speech-to-text with [Language Understanding (LUIS)](../luis/index.yml) to derive user intents from transcribed speech and act on voice commands. | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
+| | [Batch speech-to-text](batch-transcription.md) | Batch speech-to-text enables asynchronous speech-to-text transcription of large volumes of speech audio data stored in Azure Blob Storage. In addition to converting speech audio to text, batch speech-to-text allows for diarization and sentiment analysis. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
+| | [Multidevice conversation](multi-device-conversation.md) | Connect multiple devices or clients in a conversation to send speech- or text-based messages, with easy support for transcription and translation.| Yes | No |
+| | [Conversation transcription](./conversation-transcription.md) | Enables real-time speech recognition, speaker identification, and diarization. It's perfect for transcribing in-person meetings with the ability to distinguish speakers. | Yes | No |
+| | [Create custom speech models](#customize-your-speech-experience) | If you're using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
+| | [Pronunciation assessment](./how-to-pronunciation-assessment.md) | Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. | [Yes](./how-to-pronunciation-assessment.md) | [Yes](./rest-speech-to-text.md#pronunciation-assessment-parameters) |
+| [Text-to-speech](text-to-speech.md) | Prebuilt neural voices | Text-to-speech converts input text into humanlike synthesized speech by using the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Use neural voices, which are humanlike voices powered by deep neural networks. See [Language support](language-support.md). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
| | [Custom neural voices](#customize-your-speech-experience) | Create custom neural voice fonts unique to your brand or product. | No | [Yes](#reference-docs) |
-| [Speech Translation](speech-translation.md) | Speech translation | Speech translation enables real-time, multi-language translation of speech to your applications, tools, and devices. Use this service for speech-to-speech and speech-to-text translation. | [Yes](./speech-sdk.md) | No |
-| [Voice assistants](voice-assistants.md) | Voice assistants | Voice assistants using the Speech service empower developers to create natural, human-like conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses the Bot Framework's Direct Line Speech channel or the integrated Custom Commands service for task completion. | [Yes](voice-assistants.md) | No |
-| [Speaker Recognition](speaker-recognition-overview.md) | Speaker verification & identification | The Speaker Recognition service provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker Recognition is used to answer the question "who is speaking?". | Yes | [Yes](/rest/api/speakerrecognition/) |
+| [Speech translation](speech-translation.md) | Speech translation | Speech translation enables real-time, multilanguage translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech-to-text translation. | [Yes](./speech-sdk.md) | No |
+| [Voice assistants](voice-assistants.md) | Voice assistants | Voice assistants using the Speech service empower developers to create natural, humanlike conversational interfaces for their applications and experiences. The voice assistant feature provides fast, reliable interaction between a device and an assistant implementation that uses the Bot Framework's Direct Line Speech channel or the integrated custom commands service for task completion. | [Yes](voice-assistants.md) | No |
+| [Speaker recognition](speaker-recognition-overview.md) | Speaker verification and identification | Speaker recognition provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker recognition is used to answer the question, "Who is speaking?". | Yes | [Yes](/rest/api/speakerrecognition/) |
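As a taste of the REST surface listed in the table, here's a hedged PowerShell sketch that sends a short WAV file to the speech-to-text endpoint. The region, key, and file path are placeholders, and the audio is assumed to be 16-kHz, 16-bit mono PCM.

```powershell
# Hedged sketch: transcribe a short WAV file with the speech-to-text REST API
# for short audio. Region, key, and file path are placeholders.
$region = "westus"
$key    = "<your-speech-key>"

$result = Invoke-RestMethod -Method Post `
    -Uri "https://$region.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "audio/wav; codecs=audio/pcm; samplerate=16000" `
    -InFile "hello.wav"

$result.DisplayText   # the recognized text, for example "Hello, world."
```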
## Try the Speech service for free
-For the following steps, you need both a Microsoft account and an Azure account. If you do not have a Microsoft account, you can sign up for one free of charge at the [Microsoft account portal](https://account.microsoft.com/account). Select **Sign in with Microsoft** and then, when asked to sign in, select **Create a Microsoft account**. Follow the steps to create and verify your new Microsoft account.
+For the following steps, you need a Microsoft account and an Azure account. If you don't have a Microsoft account, you can sign up for one free of charge at the [Microsoft account portal](https://account.microsoft.com/account). Select **Sign in with Microsoft**. When you're asked to sign in, select **Create a Microsoft account**. Follow the steps to create and verify your new Microsoft account.
-Once you have a Microsoft account, go to the [Azure sign-up page](https://azure.microsoft.com/free/ai/), select **Start free**, and create a new Azure account using a Microsoft account. Here is a video of [how to sign up for Azure free account](https://www.youtube.com/watch?v=GWT2R1C_uUU).
+After you have a Microsoft account, go to the [Azure sign-up page](https://azure.microsoft.com/free/ai/) and select **Start free**. Create a new Azure account by using a Microsoft account. Here's a video of [how to sign up for an Azure free account](https://www.youtube.com/watch?v=GWT2R1C_uUU).
> [!NOTE]
-> When you sign up for a free Azure account, it comes with $200 in service credit that you can apply toward a paid Speech service subscription, valid for up to 30 days. Your Azure services are disabled when your credit runs out or expires at the end of the 30 days. To continue using Azure services, you must upgrade your account. For more information, see [How to upgrade your Azure free account](../../cost-management-billing/manage/upgrade-azure-subscription.md).
+> When you sign up for a free Azure account, it comes with $200 in service credit that you can apply toward a paid Speech service subscription, valid for up to 30 days. Your Azure services are disabled when your credit runs out or expires at the end of the 30 days. To continue using Azure services, you must upgrade your account. For more information, see [Upgrade your Azure free account](../../cost-management-billing/manage/upgrade-azure-subscription.md).
>
-> The Speech service has two service tiers: free(f0) and subscription(s0), which have different limitations and benefits. If you use the free, low-volume Speech service tier you can keep this free subscription even after your free trial or service credit expires. For more information, see [Cognitive Services pricing - Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> The Speech service has two service tiers, free (f0) and subscription (s0), which have different limitations and benefits. If you use the free, low-volume Speech service tier, you can keep this free subscription even after your free trial or service credit expires. For more information, see [Cognitive Services pricing - Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
### Create the Azure resource
-To add a Speech service resource (free or paid tier) to your Azure account:
+To add a Speech service resource to your Azure account by using the free or paid tier:
-1. Sign in to the [Azure portal](https://portal.azure.com/) using your Microsoft account.
+1. Sign in to the [Azure portal](https://portal.azure.com/) by using your Microsoft account.
-1. Select **Create a resource** at the top left of the portal. If you do not see **Create a resource**, you can always find it by selecting the collapsed menu in the upper left corner of the screen.
+1. Select **Create a resource** at the top left of the portal. If you don't see **Create a resource**, you can always find it by selecting the collapsed menu in the upper-left corner of the screen.
-1. In the **New** window, type "speech" in the search box and press ENTER.
+1. In the **New** window, enter **speech** in the search box and select **Enter**.
1. In the search results, select **Speech**.
- :::image type="content" source="media/index/speech-search.png" alt-text="Create Speech resource in Azure portal.":::
+ :::image type="content" source="media/index/speech-search.png" alt-text="Screenshot that shows creating a Speech resource in the Azure portal.":::
-1. Select **Create**, then:
+1. Select **Create** and then:
- - Give a unique name for your new resource. The name helps you distinguish among multiple subscriptions tied to the same service.
- - Choose the Azure subscription that the new resource is associated with to determine how the fees are billed. Here is the introduction for [how to create an Azure subscription](../../cost-management-billing/manage/create-subscription.md#create-a-subscription-in-the-azure-portal) in the Azure portal.
- Choose the [region](regions.md) where the resource will be used. Azure is a global cloud platform that is generally available in many regions worldwide. To get the best performance, select a region that's closest to you or where your application runs. The Speech service availabilities vary from different regions. Make sure that you create your resource in a supported region. See [region support for Speech services](./regions.md#speech-to-text-text-to-speech-and-translation).
- - Choose either a free (F0) or paid (S0) pricing tier. For complete information about pricing and usage quotas for each tier, select **View full pricing details** or see [speech services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). For limits on resources, see [Azure Cognitive Services Limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-cognitive-services-limits).
- - Create a new resource group for this Speech subscription or assign the subscription to an existing resource group. Resource groups help you keep your various Azure subscriptions organized.
- - Select **Create**. This will take you to the deployment overview and display deployment progress messages.
+ 1. Give a unique name for your new resource. The name helps you distinguish among multiple subscriptions tied to the same service.
+ 1. Choose the Azure subscription that the new resource is associated with to determine how the fees are billed. For an introduction, see [how to create an Azure subscription](../../cost-management-billing/manage/create-subscription.md#create-a-subscription-in-the-azure-portal) in the Azure portal.
+ 1. Choose the [region](regions.md) where the resource will be used. Azure is a global cloud platform that's generally available in many regions worldwide. To get the best performance, select a region that's closest to you or where your application runs. Speech service availability varies among regions, so make sure that you create your resource in a supported region. For more information, see [region support for Speech services](./regions.md#speech-to-text-text-to-speech-and-translation).
+ 1. Choose either a free (F0) or paid (S0) pricing tier. For complete information about pricing and usage quotas for each tier, select **View full pricing details** or see [Speech services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). For limits on resources, see [Azure Cognitive Services limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-cognitive-services-limits).
+ 1. Create a new resource group for this Speech subscription or assign the subscription to an existing resource group. Resource groups help you keep your various Azure subscriptions organized.
+ 1. Select **Create**. This action takes you to the deployment overview and displays deployment progress messages.
-It takes a few moments to deploy your new Speech resource.
+It takes a few moments to deploy your new Speech resource.
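If you'd rather script these steps, a hedged PowerShell equivalent is sketched below (Az.CognitiveServices module; the names, region, and tier are placeholders):

```powershell
# Hedged sketch: create a Speech resource from PowerShell instead of the portal.
# Assumes Az.CognitiveServices and a signed-in session; names are placeholders.
New-AzResourceGroup -Name "mySpeechRG" -Location "westus"

New-AzCognitiveServicesAccount `
    -ResourceGroupName "mySpeechRG" `
    -Name "mySpeechResource" `
    -Type "SpeechServices" `
    -SkuName "F0" `
    -Location "westus"
```

Choosing `F0` here mirrors the free tier discussed above; swap in `S0` for the paid tier.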
### Find keys and location/region
-To find the keys and location/region of a completed deployment, follow these steps:
+To find the keys and location/region of a completed deployment:
-1. Sign in to the [Azure portal](https://portal.azure.com/) using your Microsoft account.
+1. Sign in to the [Azure portal](https://portal.azure.com/) by using your Microsoft account.
-2. Select **All resources**, and select the name of your Cognitive Services resource.
+1. Select **All resources**, and select the name of your Cognitive Services resource.
-3. On the left pane, under **RESOURCE MANAGEMENT**, select **Keys and Endpoint**.
+1. On the left pane, under **RESOURCE MANAGEMENT**, select **Keys and Endpoint**.
-Each subscription has two keys; you can use either key in your application. To copy/paste a key to your code editor or other location, select the copy button next to each key, switch windows to paste the clipboard contents to the desired location.
+ 1. Each subscription has two keys. You can use either key in your application. To copy and paste a key to your code editor or other location, select the copy button next to each key and switch windows to paste the clipboard contents to the desired location.
-Additionally, copy the `LOCATION` value, which is your region ID (ex. `westus`, `westeurope`) for SDK calls.
+ 1. Copy the `LOCATION` value, which is your region ID, for example, `westus` or `westeurope`, for SDK calls.
> [!IMPORTANT]
-> These subscription keys are used to access your Cognitive Service API. Do not share your keys. Store them securely - for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
+> These subscription keys are used to access your Cognitive Services API. Don't share your keys. Store them securely. For example, use Azure Key Vault. We also recommend that you regenerate these keys regularly. Only one key is necessary to make an API call. When you regenerate the first key, you can use the second key for continued access to the service.
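The same values can also be read from PowerShell, which is handy for scripts that shouldn't hard-code keys; a hedged sketch using the placeholder names from the creation step:

```powershell
# Hedged sketch: read the keys and region from PowerShell instead of the portal.
# The resource names are the placeholders used when the resource was created.
$keys = Get-AzCognitiveServicesAccountKey -ResourceGroupName "mySpeechRG" -Name "mySpeechResource"
$keys.Key1   # either key works for API calls

# The resource's location is the region ID to pass to SDK or REST calls.
(Get-AzCognitiveServicesAccount -ResourceGroupName "mySpeechRG" -Name "mySpeechResource").Location
```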
## Complete a quickstart
-We offer quickstarts in most popular programming languages, each designed to teach you basic design patterns, and have you running code in less than 10 minutes. See the following list for the quickstart for each feature.
+We offer quickstarts in most popular programming languages. Each quickstart is designed to teach you basic design patterns and have you running code in less than 10 minutes. See the following list for the quickstart for each feature:
-* [Speech-to-Text quickstart](get-started-speech-to-text.md)
-* [Text-to-Speech quickstart](get-started-text-to-speech.md)
+* [Speech-to-text quickstart](get-started-speech-to-text.md)
+* [Text-to-speech quickstart](get-started-text-to-speech.md)
* [Speech translation quickstart](./get-started-speech-translation.md)
* [Intent recognition quickstart](./get-started-intent-recognition.md)
* [Speaker recognition quickstart](./get-started-speaker-recognition.md)
-After you've had a chance to get started with the Speech service, try our tutorials that show you how to solve various scenarios.
+After you've had a chance to get started with the Speech service, try our tutorials that show you how to solve various scenarios:
- [Tutorial: Recognize intents from speech with the Speech SDK and LUIS, C#](how-to-recognize-intents-from-speech-csharp.md)
- [Tutorial: Voice enable your bot with the Speech SDK, C#](tutorial-voice-enable-your-bot-speech-sdk.md)
After you've had a chance to get started with the Speech service, try our tutori
Sample code is available on GitHub for the Speech service. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition, and working with custom models. Use these links to view SDK and REST samples:

-- [Speech-to-Text, Text-to-Speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
+- [Speech-to-text, text-to-speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
-- [Text-to-Speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
+- [Text-to-speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
- [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples)

## Customize your speech experience
-The Speech service works well with built-in models, however, you may want to further customize and tune the experience for your product or environment. Customization options range from acoustic model tuning to unique voice fonts for your brand.
+The Speech service works well with built-in models. But you might want to further customize and tune the experience for your product or environment. Customization options range from acoustic model tuning to unique voice fonts for your brand.
-Other products offer speech models tuned for specific purposes like healthcare or insurance, but are available to everyone equally. Customization in Azure Speech becomes part of *your unique* competitive advantage that is unavailable to any other user or customer. In other words, your models are private and custom-tuned for your use-case only.
+Other products offer speech models tuned for specific purposes, like healthcare or insurance, but are available to everyone equally. Customization in Azure Speech becomes part of *your unique* competitive advantage that's unavailable to any other user or customer. In other words, your models are private and custom-tuned for your use case only.
-| Speech Service | Platform | Description |
+| Speech service | Platform | Description |
| -- | -- | -- |
-| Speech-to-Text | [Custom Speech](./custom-speech-overview.md) | Customize speech recognition models to your needs and available data. Overcome speech recognition barriers such as speaking style, vocabulary and background noise. |
-| Text-to-Speech | [Custom Voice](https://aka.ms/customvoice) | Build a recognizable, one-of-a-kind neural voice for your Text-to-Speech apps with your speaking data available. You can further fine-tune the neural voice outputs by adjusting a set of neural voice parameters. |
+| Speech-to-text | [Custom Speech](./custom-speech-overview.md) | Customize speech recognition models to your needs and available data. Overcome speech recognition barriers such as speaking style, vocabulary, and background noise. |
+| Text-to-speech | [Custom Voice](https://aka.ms/customvoice) | Build a recognizable, one-of-a-kind neural voice for your text-to-speech apps with your speaking data available. You can further fine-tune the neural voice outputs by adjusting a set of neural voice parameters. |
-## Deploy on premises using Docker containers
+## Deploy on-premises by using Docker containers
-[Use Speech service containers](speech-container-howto.md) to deploy API features on-premises. These Docker containers enable you to bring the service closer to your data for compliance, security or other operational reasons. The Speech service offers the following containers:
+[Use Speech service containers](speech-container-howto.md) to deploy API features on-premises. By using these Docker containers, you can bring the service closer to your data for compliance, security, or other operational reasons. The Speech service offers the following containers:
* Standard Speech-to-Text
* Custom Speech-to-Text
Other products offer speech models tuned for specific purposes like healthcare o
- [REST API: Text-to-speech](rest-text-to-speech.md)
- [REST API: Batch transcription and customization](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
-
## Next steps

> [!div class="nextstepaction"]
-> [Get started with Speech-to-Text](./get-started-speech-to-text.md)
-> [Get started with Text-to-Speech](get-started-text-to-speech.md)
+> * [Get started with speech-to-text](./get-started-speech-to-text.md)
+> * [Get started with text-to-speech](get-started-text-to-speech.md)
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk.md
# About the Speech SDK
-The Speech software development kit (SDK) exposes many of the Speech service capabilities, to empower you to develop speech-enabled applications. The Speech SDK is available in many programming languages and across all platforms.
+The Speech software development kit (SDK) exposes many of the Speech service capabilities you can use to develop speech-enabled applications. The Speech SDK is available in many programming languages and across all platforms.
[!INCLUDE [Speech SDK Platforms](../../../includes/cognitive-services-speech-service-speech-sdk-platforms.md)]

## Scenario capabilities
-The Speech SDK exposes many features from the Speech service, but not all of them. The capabilities of the Speech SDK are often associated with scenarios. The Speech SDK is ideal for both real-time and non-real-time scenarios, using local devices, files, Azure blob storage, and even input and output streams. When a scenario is not achievable with the Speech SDK, look for a REST API alternative.
+The Speech SDK exposes many features from the Speech service, but not all of them. The capabilities of the Speech SDK are often associated with scenarios. The Speech SDK is ideal for both real-time and non-real-time scenarios, by using local devices, files, Azure Blob Storage, and even input and output streams. When a scenario can't be achieved with the Speech SDK, look for a REST API alternative.
### Speech-to-text
-[Speech-to-text](speech-to-text.md) (also known as *speech recognition*) transcribes audio streams to text that your applications, tools, or devices can consume or display. Use speech-to-text with [Language Understanding (LUIS)](../luis/index.yml) to derive user intents from transcribed speech and act on voice commands. Use [Speech Translation](speech-translation.md) to translate speech input to a different language with a single call. For more information, see [Speech-to-text basics](./get-started-speech-to-text.md).
+[Speech-to-text](speech-to-text.md) transcribes audio streams to text that your applications, tools, or devices can consume or display. Speech-to-text is also known as *speech recognition*. Use speech-to-text with [Language Understanding (LUIS)](../luis/index.yml) to derive user intents from transcribed speech and act on voice commands. Use [speech translation](speech-translation.md) to translate speech input to a different language with a single call. For more information, see [Speech-to-text basics](./get-started-speech-to-text.md).
-**Speech-Recognition (SR), Phrase List, Intent, Translation, and On-premises containers** are available on the following platforms:
+**Speech recognition, phrase list, intent, translation, and on-premises containers** are available on the following platforms:
- - C++/Windows & Linux & macOS
- - C# (Framework & .NET Core)/Windows & UWP & Unity & Xamarin & Linux & macOS
+ - C++/Windows, Linux, and macOS
+ - C# (Framework and .NET Core)/Windows, UWP, Unity, Xamarin, Linux, and macOS
- Java (Jre and Android)
- - JavaScript (Browser and NodeJS)
+ - JavaScript (browser and NodeJS)
- Python - Swift
- - Objective-C
- - Go (SR only)
+ - Objective-C
+ - Go (speech recognition only)
### Text-to-speech
-[Text-to-speech](text-to-speech.md) (also known as *speech synthesis*) converts text into human-like synthesized speech. The input text is either string literals or using the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). For more information on standard or neural voices, see [Text-to-speech language and voice support](language-support.md#text-to-speech).
+[Text-to-speech](text-to-speech.md) converts text into humanlike synthesized speech. Text-to-speech is also known as *speech synthesis*. The input text is either string literals or uses the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). For more information on standard or neural voices, see [Text-to-speech language and voice support](language-support.md#text-to-speech).
-**Text-to-speech (TTS)** is available on the following platforms:
+**Text-to-speech** is available on the following platforms:
- - C++/Windows & Linux & macOS
- - C# (Framework & .NET Core)/Windows & UWP & Unity & Xamarin & Linux & macOS
+ - C++/Windows, Linux, and macOS
+ - C# (Framework and .NET Core)/Windows, UWP, Unity, Xamarin, Linux, and macOS
- Java (Jre and Android)
- - JavaScript (Browser and NodeJS)
+ - JavaScript (browser and NodeJS)
- Python - Swift - Objective-C - Go
- - TTS REST API can be used in every other situation.
+ - Text-to-speech REST API can be used in every other situation
### Voice assistants
-[Voice assistants](voice-assistants.md) using the Speech SDK enable you to create natural, human-like conversational interfaces for your applications and experiences. The Speech SDK provides fast, reliable interaction that includes speech-to-text, text-to-speech, and conversational data on a single connection. Your implementation can use the Bot Framework's Direct Line Speech channel or the integrated Custom Commands service for task completion. Additionally, voice assistants can use custom voices created in the [Custom Voice Portal](https://aka.ms/customvoice) to add a unique voice output experience.
+[Voice assistants](voice-assistants.md) using the Speech SDK enable you to create natural, humanlike conversational interfaces for your applications and experiences. The Speech SDK provides fast, reliable interaction that includes speech-to-text, text-to-speech, and conversational data on a single connection. Your implementation can use the Bot Framework's Direct Line Speech channel or the integrated Custom Commands service for task completion. Also, voice assistants can use custom voices created in the [Custom Voice portal](https://aka.ms/customvoice) to add a unique voice output experience.
**Voice assistant** support is available on the following platforms:
- - C++/Windows & Linux & macOS
+ - C++/Windows, Linux, and macOS
- C#/Windows
- - Java/Windows & Linux & macOS & Android (Speech Devices SDK)
+ - Java/Windows, Linux, macOS, and Android (Speech Devices SDK)
- Go

#### Keyword recognition
The concept of [keyword recognition](custom-keyword-basics.md) is supported in t
**Keyword recognition** is available on the following platforms:
- - C++/Windows & Linux
- - C#/Windows & Linux
- - Python/Windows & Linux
- - Java/Windows & Linux & Android
+ - C++/Windows and Linux
+ - C#/Windows and Linux
+ - Python/Windows and Linux
+ - Java/Windows, Linux, and Android
### Meeting scenarios
-The Speech SDK is perfect for transcribing meeting scenarios, whether from a single device or multi-device conversation.
+The Speech SDK is perfect for transcribing meeting scenarios, whether from a single device or multidevice conversation.
-#### Conversation Transcription
+#### Conversation transcription
-[Conversation Transcription](conversation-transcription.md) enables real-time (and asynchronous) speech recognition, speaker identification, and sentence attribution to each speaker (also known as *diarization*). It's perfect for transcribing in-person meetings with the ability to distinguish speakers.
+[Conversation transcription](conversation-transcription.md) enables real-time and asynchronous speech recognition, speaker identification, and sentence attribution to each speaker. This process is also known as *diarization*. It's perfect for transcribing in-person meetings with the ability to distinguish speakers.
-**Conversation Transcription** is available on the following platforms:
+**Conversation transcription** is available on the following platforms:
- - C++/Windows & Linux
- - C# (Framework & .NET Core)/Windows & UWP & Linux
- - Java/Windows & Linux & Android
+ - C++/Windows and Linux
+ - C# (Framework and .NET Core)/Windows, UWP, and Linux
+ - Java/Windows, Linux, and Android
-#### Multi-device Conversation
+#### Multidevice conversation
-With [Multi-device Conversation](multi-device-conversation.md), connect multiple devices or clients in a conversation to send speech-based or text-based messages, with easy support for transcription and translation.
+With [multidevice conversation](multi-device-conversation.md), you can connect multiple devices or clients in a conversation to send speech-based or text-based messages, with easy support for transcription and translation.
-**Multi-device Conversation** is available on the following platforms:
+**Multidevice conversation** is available on the following platforms:
- C++/Windows
- - C# (Framework & .NET Core)/Windows
+ - C# (Framework and .NET Core)/Windows
-### Custom / agent scenarios
+### Custom/agent scenarios
The Speech SDK can be used for transcribing call center scenarios, where telephony data is generated.
-#### Call Center Transcription
+#### Call center transcription
-[Call Center Transcription](call-center-transcription.md) is common scenario for speech-to-text for transcribing large volumes of telephony data that may come from various systems, such as Interactive Voice Response (IVR). The latest speech recognition models from the Speech service excel at transcribing this telephony data, even in cases when the data is difficult for a human to understand.
+[Call center transcription](call-center-transcription.md) is a common scenario for speech-to-text for transcribing large volumes of telephony data that might come from various systems, such as interactive voice response. The latest speech recognition models from the Speech service excel at transcribing this telephony data, even in cases when the data is difficult for a human to understand.
-**Call Center Transcription** is available through the Batch Speech Service via its REST API and can be used in any situation.
+**Call center transcription** is available through the batch Speech service via its REST API and can be used in any situation.
-### Codec compressed audio input
+### Codec-compressed audio input
-Several of the Speech SDK programming languages support codec compressed audio input streams. For more information, see <a href="/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams" target="_blank">use compressed audio input formats </a>.
+Several of the Speech SDK programming languages support codec-compressed audio input streams. For more information, see <a href="/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams" target="_blank">Use compressed audio input formats</a>.
-**Codec compressed audio input** is available on the following platforms:
+**Codec-compressed audio input** is available on the following platforms:
- C++/Linux
- C#/Linux
Several of the Speech SDK programming languages support codec compressed audio i
## REST API
-While the Speech SDK covers many feature capabilities of the Speech Service, for some scenarios you might want to use the REST API.
+The Speech SDK covers many feature capabilities of the Speech service, but for some scenarios you might want to use the REST API.
### Batch transcription
-[Batch transcription](batch-transcription.md) enables asynchronous speech-to-text transcription of large volumes of data. Batch transcription is only possible from the REST API. In addition to converting speech audio to text, batch speech-to-text also allows for diarization and sentiment-analysis.
+[Batch transcription](batch-transcription.md) enables asynchronous speech-to-text transcription of large volumes of data. Batch transcription is only possible from the REST API. In addition to converting speech audio to text, batch speech-to-text also allows for diarization and sentiment analysis.
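For a feel of the API shape, here's a hedged PowerShell sketch that submits a batch transcription job to the v3.0 endpoint; the region, key, and audio URL are placeholders:

```powershell
# Hedged sketch: submit a batch transcription job via the v3.0 REST API.
# Region, key, and the audio URL are placeholders.
$region = "westus"
$key    = "<your-speech-key>"

$job = @{
    displayName = "sample batch job"
    locale      = "en-US"
    contentUrls = @("https://<storage-account>.blob.core.windows.net/audio/meeting.wav")
} | ConvertTo-Json

$response = Invoke-RestMethod -Method Post `
    -Uri "https://$region.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/json" -Body $job

$response.self   # URL to poll for job status and, later, the result files
```

Polling the returned `self` URL with a GET reports the job status and, once it succeeds, links to the result files.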
## Customization
-The Speech Service delivers great functionality with its default models across speech-to-text, text-to-speech, and speech-translation. Sometimes you may want to increase the baseline performance to work even better with your unique use case. The Speech Service has a variety of no-code customization tools that make it easy, and allow you to create a competitive advantage with custom models based on your own data. These models will only be available to you and your organization.
+The Speech service delivers great functionality with its default models across speech-to-text, text-to-speech, and speech translation. Sometimes you might want to increase the baseline performance to work even better with your unique use case. The Speech service has various no-code customization tools that make this easy. You can use them to create a competitive advantage with custom models based on your own data. These models will only be available to you and your organization.
-### Custom Speech-to-text
+### Custom speech-to-text
-When using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. The creation and management of no-code Custom Speech models is available through the [Custom Speech Portal](./custom-speech-overview.md). Once the Custom Speech model is published, it can be consumed by the Speech SDK.
+When you use speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. The creation and management of no-code Custom Speech models is available through the [Custom Speech portal](./custom-speech-overview.md). After the Custom Speech model is published, it can be consumed by the Speech SDK.
-### Custom Text-to-speech
+### Custom text-to-speech
-Custom text-to-speech, also known as Custom Voice is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. The creation and management of no-code Custom Voice models is available through the [Custom Voice Portal](https://aka.ms/customvoice). Once the Custom Voice model is published, it can be consumed by the Speech SDK.
+Custom text-to-speech, also known as Custom Voice, is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. The creation and management of no-code Custom Voice models is available through the [Custom Voice portal](https://aka.ms/customvoice). After the Custom Voice model is published, it can be consumed by the Speech SDK.
## Get the Speech SDK
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Title: Speech Synthesis Markup Language (SSML) - Speech service
-description: Using the Speech Synthesis Markup Language to control pronunciation and prosody in text-to-speech.
+description: Use the Speech Synthesis Markup Language to control pronunciation and prosody in text-to-speech.
# Improve synthesis with Speech Synthesis Markup Language (SSML)
-Speech Synthesis Markup Language (SSML) is an XML-based markup language that lets developers specify how input text is converted into synthesized speech using the Text-to-Speech service. Compared to plain text, SSML allows developers to fine-tune the pitch, pronunciation, speaking rate, volume, and more of the Text-to-Speech output. Normal punctuation, such as pausing after a period, or using the correct intonation when a sentence ends with a question mark are automatically handled.
+Speech Synthesis Markup Language (SSML) is an XML-based markup language that lets developers specify how input text is converted into synthesized speech by using text-to-speech. Compared to plain text, SSML allows developers to fine-tune the pitch, pronunciation, speaking rate, volume, and more of the text-to-speech output. Normal punctuation, such as pausing after a period, or using the correct intonation when a sentence ends with a question mark are automatically handled.
-The Speech service implementation of SSML is based on World Wide Web Consortium's [Speech Synthesis Markup Language Version 1.0](https://www.w3.org/TR/2004/REC-speech-synthesis-20040907/).
+The Speech service implementation of SSML is based on the World Wide Web Consortium's [Speech Synthesis Markup Language Version 1.0](https://www.w3.org/TR/2004/REC-speech-synthesis-20040907/).
> [!IMPORTANT]
-> Each Chinese characters are counted as two characters for billing, including Kanji used in Japanese, Hanja used in Korean, or Hanzi used in other languages. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> Chinese characters are counted as two characters for billing, including Kanji used in Japanese, Hanja used in Korean, or Hanzi used in other languages. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-## Prebuilt neural voice and custom neural voice
-
-Use a human-like neural voice, or create your own custom neural voice unique to your product or brand. For a complete list of supported languages, locales, and voices, see [language support](language-support.md). To learn more about prebuilt neural voice and custom neural voice, see [Text-to-Speech overview](text-to-speech.md).
+## Prebuilt neural voices and custom neural voices
+Use a humanlike neural voice or create your own custom neural voice unique to your product or brand. For a complete list of supported languages, locales, and voices, see [Language support](language-support.md). To learn more about using a prebuilt neural voice and a custom neural voice, see [Text-to-speech overview](text-to-speech.md).
> [!NOTE]
-> You can hear voices in different styles and pitches reading example text using [the Text to Speech page](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
-
+> You can hear voices in different styles and pitches reading example text by using this [text-to-speech website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
## Special characters
-While using SSML, keep in mind that special characters, such as quotation marks, apostrophes, and brackets must be escaped. For more information, see [Extensible Markup Language (XML) 1.0: Appendix D](https://www.w3.org/TR/xml/#sec-entexpand).
+When you use SSML, keep in mind that special characters, such as quotation marks, apostrophes, and brackets, must be escaped. For more information, see [Extensible Markup Language (XML) 1.0: Appendix D](https://www.w3.org/TR/xml/#sec-entexpand).
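For instance, here's a minimal sketch of escaped text inside SSML (the voice name is illustrative):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <!-- The ampersand and quotation marks are escaped as &amp; and &quot; -->
        She said &quot;hello&quot; at the AT&amp;T store.
    </voice>
</speak>
```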
## Supported SSML elements
-Each SSML document is created with SSML elements (or tags). These elements are used to adjust pitch, prosody, volume, and more. The following sections detail how each element is used, and when an element is required or optional.
+Each SSML document is created with SSML elements (or tags). These elements are used to adjust pitch, prosody, volume, and more. The following sections detail how each element is used and when an element is required or optional.
> [!IMPORTANT]
-> Don't forget to use double quotes around attribute values. Standards for well-formed, valid XML requires attribute values to be enclosed in double quotation marks. For example, `<prosody volume="90">` is a well-formed, valid element, but `<prosody volume=90>` is not. SSML might not recognize attribute values that are not in quotes.
+> Don't forget to use double quotation marks around attribute values. The standard for well-formed, valid XML requires attribute values to be enclosed in double quotation marks. For example, `<prosody volume="90">` is a well-formed, valid element, but `<prosody volume=90>` is not. SSML might not recognize attribute values that aren't in double quotation marks.
## Create an SSML document
-`speak` is the root element, and is **required** for all SSML documents. The `speak` element contains important information, such as version, language, and the markup vocabulary definition.
+The `speak` element is the root element. It's *required* for all SSML documents. The `speak` element contains important information, such as version, language, and the markup vocabulary definition.
**Syntax**
Each SSML document is created with SSML elements (or tags). These elements are u
**Attributes**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
| `version` | Indicates the version of the SSML specification used to interpret the document markup. The current version is 1.0. | Required |
-| `xml:lang` | Specifies the language of the root document. The value can contain a lowercase, two-letter language code (for example, `en`), or the language code and uppercase country/region (for example, `en-US`). | Required |
+| `xml:lang` | Specifies the language of the root document. The value can contain a lowercase, two-letter language code, for example, `en`. Or the value can contain the language code and uppercase country/region, for example, `en-US`. | Required |
| `xmlns` | Specifies the URI to the document that defines the markup vocabulary (the element types and attribute names) of the SSML document. The current URI is http://www.w3.org/2001/10/synthesis. | Required |
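For orientation, a minimal well-formed SSML document that supplies all three required attributes might look like this sketch (the voice name is illustrative):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        This text is converted to synthesized speech.
    </voice>
</speak>
```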
-## Choose a voice for Text-to-Speech
+## Choose a voice for text-to-speech
-The `voice` element is required. It's used to specify the voice that is used for Text-to-Speech.
+The `voice` element is required. It's used to specify the voice that's used for text-to-speech.
**Syntax**
The `voice` element is required. It's used to specify the voice that is used for
</voice>
```
-**Attributes**
+**Attribute**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `name` | Identifies the voice used for Text-to-Speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
+| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
**Example**
The `voice` element is required. It's used to specify the voice that is used for
## Use multiple voices
-Within the `speak` element, you can specify multiple voices for Text-to-Speech output. These voices can be in different languages. For each voice, the text must be wrapped in a `voice` element.
+Within the `speak` element, you can specify multiple voices for text-to-speech output. These voices can be in different languages. For each voice, the text must be wrapped in a `voice` element.
-**Attributes**
+**Attribute**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `name` | Identifies the voice used for Text-to-Speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
+| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
**Example**
Within the `speak` element, you can specify multiple voices for Text-to-Speech o
## Adjust speaking styles
-By default, the Text-to-Speech service synthesizes text using a neutral speaking style for neural voices. You can adjust the speaking style, style degree, and role at the sentence level.
+By default, text-to-speech synthesizes text by using a neutral speaking style for neural voices. You can adjust the speaking style, style degree, and role at the sentence level.
-Styles, style degree, and roles are supported for a subset of neural voices. If a style or role isn't supported, the service will use the default neutral speech. There are multiple ways to determine what styles and roles are supported for each voice.
-- The [Voice styles and roles](language-support.md#voice-styles-and-roles) table
-- The [voice list API](rest-text-to-speech.md#get-a-list-of-voices)
-- The code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) portal
+Styles, style degree, and roles are supported for a subset of neural voices. If a style or role isn't supported, the service uses the default neutral speech. To determine what styles and roles are supported for each voice, use:
-| Attribute | Description | Required / Optional |
-|--|-||
-| `style` | Specifies the speaking style. Speaking styles are voice-specific. | Required if adjusting the speaking style for a neural voice. If using `mstts:express-as`, then style must be provided. If an invalid value is provided, this element will be ignored. |
-| `styledegree` | Specifies the intensity of speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional. If you don't set the `style` attribute the `styledegree` will be ignored. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.|
-| `role` | Specifies the speaking role-play. The voice will act as a different age and gender, but the voice name won't be changed. | Optional. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices: `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural` and `zh-CN-YunyeNeural`.|
+- The [Voice styles and roles](language-support.md#voice-styles-and-roles) table.
+- The [Voice List API](rest-text-to-speech.md#get-a-list-of-voices).
+- The code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) portal.
+| Attribute | Description | Required or optional |
+|--|-||
+| `style` | Specifies the speaking style. Speaking styles are voice specific. | Required if adjusting the speaking style for a neural voice. If you're using `mstts:express-as`, the style must be provided. If an invalid value is provided, this element is ignored. |
+| `styledegree` | Specifies the intensity of the speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional. If you don't set the `style` attribute, the `styledegree` attribute is ignored. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.|
+| `role` | Specifies the speaking role-play. The voice acts as a different age and gender, but the voice name isn't changed. | Optional. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices: `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural`, and `zh-CN-YunyeNeural`.|
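As a sketch of how these attributes fit together, the following snippet applies a style, a style degree, and a role to one sentence. The voice and attribute values are illustrative; confirm that the voice you use supports them.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
    <voice name="zh-CN-XiaomoNeural">
        <!-- A stronger-than-default cheerful style, spoken as a girl's voice -->
        <mstts:express-as style="cheerful" styledegree="1.5" role="Girl">
            那真是太好了！
        </mstts:express-as>
    </voice>
</speak>
```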
### Style
-You use the `mstts:express-as` element to express different emotions like cheerfulness, empathy, and calm, or optimize the voice for different scenarios like customer service, newscast, and voice assistant.
+You use the `mstts:express-as` element to express emotions like cheerfulness, empathy, and calm. You can also optimize the voice for different scenarios like customer service, newscast, and voice assistant.
**Syntax**
This SSML snippet illustrates how the `<mstts:express-as>` element is used to ch
</speak>
```
-The table below has descriptions of each supported style.
+The following table has descriptions of each supported style.
|Style|Description|
|--|-|
|`style="affectionate"`|Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The personality of the speaker is often endearing in nature.|
|`style="angry"`|Expresses an angry and annoyed tone.|
|`style="assistant"`|Expresses a warm and relaxed tone for digital assistants.|
-|`style="calm"`|Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech.|
+|`style="calm"`|Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, and prosody are more uniform compared to other types of speech.|
|`style="chat"`|Expresses a casual and relaxed tone.| |`style="cheerful"`|Expresses a positive and happy tone.| |`style="customerservice"`|Expresses a friendly and helpful tone for customer support.|
The table below has descriptions of each supported style.
|`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.| |`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.| |`style="empathetic"`|Expresses a sense of caring and understanding.|
-|`style="fearful"`|Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness.|
+|`style="fearful"`|Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tension and unease.|
|`style="gentle"`|Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy.| |`style="lyrical"`|Expresses emotions in a melodic and sentimental way.| |`style="narration-professional"`|Expresses a professional, objective tone for content reading.|
The table below has descriptions of each supported style.
|`style="sad"`|Expresses a sorrowful tone.| |`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.| - ### Style degree
-The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with `styledegree` to make the speech more expressive or subdued. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.
+The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with the `styledegree` attribute to make the speech more expressive or subdued. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.
**Syntax**
This SSML snippet illustrates how the `styledegree` attribute is used to change
### Role
-Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice will imitate a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
+Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
* `zh-CN-XiaomoNeural`
* `zh-CN-XiaoxuanNeural`
This SSML snippet illustrates how the `role` attribute is used to change the rol
</speak>
```
-The table below has descriptions of each supported role.
+The following table has descriptions of each supported role.
|Role | Description |
|-|-|
The table below has descriptions of each supported role.
|`role="SeniorFemale"` | The voice imitates to a senior female.| |`role="SeniorMale"` | The voice imitates to a senior male.| - ## Adjust speaking languages You can adjust speaking languages for neural voices at the sentence level and word level.
-Enable one voice to speak different languages fluently (like English, Spanish, and Chinese) using the `<lang xml:lang>` element. This is an optional element unique to the Speech service. Without this element, the voice will speak its primary language.
+Enable one voice to speak different languages fluently (like English, Spanish, and Chinese) by using the `<lang xml:lang>` element. This optional element is unique to the Speech service. Without this element, the voice speaks its primary language.
-Speaking language adjustments are only supported for the `en-US-JennyMultilingualNeural` neural voice. Above changes are applied at the sentence level and word level. If a language isn't supported, the service will return no audio stream.
+Speaking language adjustments are only supported for the `en-US-JennyMultilingualNeural` neural voice. The preceding changes are applied at the sentence level and word level. If a language isn't supported, the service won't return an audio stream.
> [!NOTE]
-> The `<lang xml:lang>` element is incompatible with `prosody` and `break` element, you cannot adjust pause and prosody like pitch, contour, rate, or volume in this element.
+> The `<lang xml:lang>` element is incompatible with the `prosody` and `break` elements. You can't adjust pause and prosody like pitch, contour, rate, or volume in this element.
**Syntax**
Speaking language adjustments are only supported for the `en-US-JennyMultilingua
<lang xml:lang="string"></lang>
```
-**Attributes**
+**Attribute**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `lang` | Specifies the speaking languages. Speaking different languages are voice-specific. | Required if adjusting the speaking language for a neural voice. If using `lang xml:lang`, then locale must be provided. |
+| `lang` | Specifies the speaking language. The languages that each voice can speak are voice specific. | Required if adjusting the speaking language for a neural voice. If you're using `lang xml:lang`, the locale must be provided. |
-Use this table to determine which speaking languages are supported for each neural voice. If a language isn't supported, the service will return no audio stream.
+Use this table to determine which speaking languages are supported for each neural voice. If a language isn't supported, the service won't return an audio stream.
| Voice | Locale language | Description |
|-||-|
This SSML snippet shows how to use `<lang xml:lang>` to change the speaking lang
</speak>
```
-## Add or remove a break/pause
+## Add or remove a break or pause
-Use the `break` element to insert pauses (or breaks) between words, or prevent pauses automatically added by the Text-to-Speech service.
+Use the `break` element to insert pauses or breaks between words. You can also use it to prevent pauses that are automatically added by text-to-speech.
> [!NOTE]
-> Use this element to override the default behavior of Text-to-Speech (TTS) for a word or phrase if the synthesized speech for that word or phrase sounds unnatural. Set `strength` to `none` to prevent a prosodic break, which is automatically inserted by the Text-to-Speech service.
+> Use this element to override the default behavior of text-to-speech for a word or phrase if the synthesized speech for that word or phrase sounds unnatural. Set `strength` to `none` to prevent a prosodic break, which is automatically inserted by text-to-speech.
**Syntax**
Use the `break` element to insert pauses (or breaks) between words, or prevent p
**Attributes**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `strength` | Specifies the relative duration of a pause using one of the following values:<ul><li>none</li><li>x-weak</li><li>weak</li><li>medium (default)</li><li>strong</li><li>x-strong</li></ul> | Optional |
-| `time` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set less than 5000 ms. Examples of valid values are `2s` and `500ms` | Optional |
+| `strength` | Specifies the relative duration of a pause by using one of the following values:<ul><li>none</li><li>x-weak</li><li>weak</li><li>medium (default)</li><li>strong</li><li>x-strong</li></ul> | Optional |
+| `time` | Specifies the absolute duration of a pause in seconds or milliseconds (ms). This value should be less than 5,000 ms. Examples of valid values are `2s` and `500ms`. | Optional |
| Strength | Description |
|-|-|
Use the `break` element to insert pauses (or breaks) between words, or prevent p
| X-weak | 250 ms |
| Weak | 500 ms |
| Medium | 750 ms |
-| Strong | 1000 ms |
-| X-strong | 1250 ms |
+| Strong | 1,000 ms |
+| X-strong | 1,250 ms |
**Example**
Use the `break` element to insert pauses (or breaks) between words, or prevent p
</voice>
</speak>
```

## Add silence
-Use the `mstts:silence` element to insert pauses before or after text, or between the 2 adjacent sentences.
+Use the `mstts:silence` element to insert pauses before or after text, or between two adjacent sentences.
> [!NOTE]
->The difference between `mstts:silence` and `break` is that `break` can be added to any place in the text, but silence only works at the beginning or end of input text, or at the boundary of 2 adjacent sentences.
+>The difference between `mstts:silence` and `break` is that `break` can be added any place in the text. Silence only works at the beginning or end of input text or at the boundary of two adjacent sentences.
**Syntax**
Use the `mstts:silence` element to insert pauses before or after text, or betwee
**Attributes**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `type` | Specifies the location of silence be added: <ul><li>`Leading` – at the beginning of text </li><li>`Tailing` – in the end of text </li><li>`Sentenceboundary` – between adjacent sentences </li></ul> | Required |
-| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set less than 5000 ms. Examples of valid values are `2s` and `500ms` | Required |
+| `type` | Specifies the location of silence to be added: <ul><li>`Leading` – At the beginning of text </li><li>`Tailing` – At the end of text </li><li>`Sentenceboundary` – Between adjacent sentences </li></ul> | Required |
+| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be less than 5,000 ms. Examples of valid values are `2s` and `500ms`. | Required |
**Example**

In this example, `mstts:silence` is used to add 200 ms of silence between two sentences.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
A good place to start is by trying out the slew of educational apps that are hel
## Specify paragraphs and sentences
-`p` and `s` elements are used to denote paragraphs and sentences, respectively. In the absence of these elements, the Text-to-Speech service automatically determines the structure of the SSML document.
+The `p` and `s` elements are used to denote paragraphs and sentences, respectively. In the absence of these elements, text-to-speech automatically determines the structure of the SSML document.
The `p` element can contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `sub`, `mstts:express-as`, and `s`.
The `s` element can contain text and the following elements: `audio`, `break`, `
## Use phonemes to improve pronunciation
-The `ph` element is used to for phonetic pronunciation in SSML documents. The `ph` element can only contain text, no other elements. Always provide human-readable speech as a fallback.
+The `ph` element is used for phonetic pronunciation in SSML documents. The `ph` element can contain only text but no other elements. Always provide human-readable speech as a fallback.
-Phonetic alphabets are composed of phones, which are made up of letters, numbers, or characters, sometimes in combination. Each phone describes a unique sound of speech. This is in contrast to the Latin alphabet, where any letter might represent multiple spoken sounds. Consider the different pronunciations of the letter "c" in the words "candy" and "cease", or the different pronunciations of the letter combination "th" in the words "thing" and "those".
+Phonetic alphabets are composed of phones, which are made up of letters, numbers, or characters, sometimes in combination. Each phone describes a unique sound of speech. This is in contrast to the Latin alphabet, where any letter might represent multiple spoken sounds. Consider the different pronunciations of the letter "c" in the words "candy" and "cease" or the different pronunciations of the letter combination "th" in the words "thing" and "those."
> [!NOTE]
-> Phonemes tag is not supported for these 5 voices (et-EE-AnuNeural, ga-IE-OrlaNeural, lt-LT-OnaNeural, lv-LV-EveritaNeural and mt-MT-GarceNeural) at the moment.
+> At this time, the phonemes tag isn't supported for five voices: et-EE-AnuNeural, ga-IE-OrlaNeural, lt-LT-OnaNeural, lv-LV-EveritaNeural, and mt-MT-GarceNeural.
**Syntax**
Phonetic alphabets are composed of phones, which are made up of letters, numbers
**Attributes**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `alphabet` | Specifies the phonetic alphabet to use when synthesizing the pronunciation of the string in the `ph` attribute. The string specifying the alphabet must be specified in lowercase letters. The following are the possible alphabets that you can specify.<ul><li>`ipa` &ndash; [International Phonetic Alphabet](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`sapi` &ndash; [Speech service phonetic alphabet](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`ups` &ndash; [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
-| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, the Text-to-Speech (TTS) service rejects the entire SSML document and produces none of the speech output specified in the document. | Required if using phonemes. |
+| `alphabet` | Specifies the phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` &ndash; [International Phonetic Alphabet (IPA)](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`sapi` &ndash; [Speech service phonetic alphabet](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`ups` &ndash; [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
+| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text-to-speech rejects the entire SSML document and produces none of the speech output specified in the document. | Required if using phonemes |
**Examples**
Phonetic alphabets are composed of phones, which are made up of letters, numbers
## Use custom lexicon to improve pronunciation
-Sometimes the Text-to-Speech service can't accurately pronounce a word. For example, the name of a company, a medical term or an emoji. Developers can define how single entities are read in SSML using the `phoneme` and `sub` tags. However, if you need to define how multiple entities are read, you can create a custom lexicon using the `lexicon` tag.
+Sometimes text-to-speech can't accurately pronounce a word. Examples might be the name of a company, a medical term, or an emoji. You can define how single entities are read in SSML by using the `phoneme` and `sub` tags. If you need to define how multiple entities are read, you can create a custom lexicon by using the `lexicon` tag.
-> [!NOTE]
-> Custom lexicon currently supports UTF-8 encoding.
+The custom lexicon currently supports UTF-8 encoding.
> [!NOTE]
-> Custom lexicon is not supported for these 5 voices (et-EE-AnuNeural, ga-IE-OrlaNeural, lt-LT-OnaNeural, lv-LV-EveritaNeural and mt-MT-GarceNeural) at the moment.
+> At this time, the custom lexicon isn't supported for five voices: et-EE-AnuNeural, ga-IE-OrlaNeural, lt-LT-OnaNeural, lv-LV-EveritaNeural, and mt-MT-GarceNeural.
**Syntax**
Sometimes the Text-to-Speech service can't accurately pronounce a word. For exam
<lexicon uri="string"/>
```
-**Attributes**
+**Attribute**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `uri` | The address of the external PLS document. | Required. |
+| `uri` | The address of the external PLS document | Required |
**Usage**
-To define how multiple entities are read, you can create a custom lexicon, which is stored as an .xml or .pls file. Below is a sample .xml file.
+To define how multiple entities are read, you can create a custom lexicon, which is stored as an .xml or .pls file. The following code is a sample .xml file.
```xml
<?xml version="1.0" encoding="UTF-8"?>
To define how multiple entities are read, you can create a custom lexicon, which
</lexicon>
```
-The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text describing the [orthography](https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography). The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text describing how the `lexeme` is pronounced. When `alias` and `phoneme` element are provided with the same `grapheme` element, `alias` has higher priority.
+The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text that describes the [orthography](https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography). The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text that describes how the `lexeme` is pronounced. When the `alias` and `phoneme` elements are provided with the same `grapheme` element, `alias` has higher priority.
> [!IMPORTANT]
-> The `lexeme` element is case sensitive in custom lexicon. For example, if you only provide a phoneme for `lexeme` 'Hello', it will not work for `lexeme` 'hello'.
+> The `lexeme` element is case sensitive in the custom lexicon. For example, if you only provide a phoneme for the `lexeme` "Hello," it won't work for the `lexeme` "hello."
-Lexicon contains necessary `xml:lang` attribute to indicate which locale it should be applied for. One custom lexicon is limited to one locale by design, so apply it for a different locale it won't work.
+The lexicon contains the necessary `xml:lang` attribute to indicate which locale it should be applied for. One custom lexicon is limited to one locale by design, so if you apply it for a different locale, it won't work.
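For example, the root element of a custom lexicon scoped to US English might be declared as in this sketch:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
    xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
    alphabet="ipa" xml:lang="en-US">
    <!-- lexeme entries here apply only to the en-US locale -->
</lexicon>
```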
-It's important to note, that you can't directly set the pronunciation of a phrase using the custom lexicon. If you need to set the pronunciation for an acronym or an abbreviated term, first provide an `alias`, then associate the `phoneme` with that `alias`. For example:
+You can't directly set the pronunciation of a phrase by using the custom lexicon. If you need to set the pronunciation for an acronym or an abbreviated term, first provide an `alias`, and then associate the `phoneme` with that `alias`. For example:
```xml
<lexeme>
It's important to note, that you can't directly set the pronunciation of a phras
```

> [!NOTE]
-> The syllable boundary is '.' in the International Phonetic Alphabet.
+> The syllable boundary is '.' in the IPA.
You could also directly provide your expected `alias` for the acronym or abbreviated term. For example:

```xml
<lexeme>
    <grapheme>Scotland MV</grapheme>
You could also directly provide your expected `alias` for the acronym or abbrevi
```

> [!IMPORTANT]
-> The `phoneme` element cannot contain white spaces when using IPA.
+> The `phoneme` element can't contain white spaces when you use the IPA.
-For more information about custom lexicon file, see [Pronunciation Lexicon Specification (PLS) Version 1.0](https://www.w3.org/TR/pronunciation-lexicon/).
+For more information about the custom lexicon file, see [Pronunciation Lexicon Specification (PLS) Version 1.0](https://www.w3.org/TR/pronunciation-lexicon/).
-Next, publish your custom lexicon file. While we don't have restrictions on where this file can be stored, we do recommend using [Azure Blob Storage](../../storage/blobs/storage-quickstart-blobs-portal.md).
+Next, publish your custom lexicon file. We don't have restrictions on where this file can be stored, but we recommend that you use [Azure Blob Storage](../../storage/blobs/storage-quickstart-blobs-portal.md).
After you've published your custom lexicon, you can reference it from your SSML.
</speak>
```
-When using this custom lexicon, "BTW" will be read as "By the way". "Benigni" will be read with the provided IPA "bɛˈniːnji".
+When you use this custom lexicon, "BTW" is read as "By the way." "Benigni" is read with the provided IPA "bɛˈniːnji."
-Since it's easy to make mistakes in custom lexicon, Microsoft has provided [validation tool for custom lexicon](https://github.com/jiajzhan/Custom-Lexicon-Validation). It provides detailed error messages that help you find errors. Before you send SSML with custom lexicon to the Speech service, you should check your custom lexicon with this tool.
+It's easy to make mistakes in the custom lexicon, so Microsoft provides a [validation tool for the custom lexicon](https://github.com/jiajzhan/Custom-Lexicon-Validation). It provides detailed error messages that help you find errors. Before you send SSML with the custom lexicon to the Speech service, check your custom lexicon with this tool.
**Limitations**

-- File size: custom lexicon file size maximum limit is 100 KB, if beyond this size, synthesis request will fail.
-- Lexicon cache refresh: custom lexicon will be cached with URI as key on TTS Service when it's first loaded. Lexicon with same URI won't be reloaded within 15 mins, so custom lexicon change needs to wait at most 15 mins to take effect.
+- **File size**: The custom lexicon file size maximum limit is 100 KB. If a file is beyond this size, the synthesis request fails.
+- **Lexicon cache refresh**: The custom lexicon is cached with the URI as the key on text-to-speech when it's first loaded. The lexicon with the same URI won't be reloaded within 15 minutes, so the custom lexicon change needs to wait 15 minutes at the most to take effect.
**Speech service phonetic sets**
-In the sample above, we're using the International Phonetic Alphabet, also known as the IPA phone set. We suggest developers use the IPA, because it's the international standard. For some IPA characters, they've the 'precomposed' and 'decomposed' version when being represented with Unicode. Custom lexicon only supports the decomposed Unicode.
+In the preceding sample, we're using the IPA, which is also known as the IPA phone set. We suggest that you use the IPA because it's the international standard. Some IPA characters have "precomposed" and "decomposed" versions when they're represented with Unicode. The custom lexicon only supports the decomposed Unicode.
-Considering that the IPA isn't easy to remember, the Speech service defines a phonetic set for seven languages (`en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`).
+The IPA isn't easy to remember, so the Speech service defines a phonetic set for seven languages: `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`.
-You can use the `x-microsoft-sapi` as the value for the `alphabet` attribute with custom lexicons as demonstrated below:
+You can use the `x-microsoft-sapi` as the value for the `alphabet` attribute with custom lexicons as demonstrated here:
```xml
<?xml version="1.0" encoding="UTF-8"?>
For more information on the detailed Speech service phonetic alphabet, see the [
## Adjust prosody
-The `prosody` element is used to specify changes to pitch, contour, range, rate, and volume for the Text-to-Speech output. The `prosody` element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
+The `prosody` element is used to specify changes to pitch, contour, range, rate, and volume for the text-to-speech output. The `prosody` element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
-Because prosodic attribute values can vary over a wide range, the speech recognizer interprets the assigned values as a suggestion of what the actual prosodic values of the selected voice should be. The Text-to-Speech service limits or substitutes values that aren't supported. Examples of unsupported values are a pitch of 1 MHz or a volume of 120.
+Because prosodic attribute values can vary over a wide range, the speech synthesis engine interprets the assigned values as a suggestion of what the actual prosodic values of the selected voice should be. Text-to-speech limits or substitutes values that aren't supported. Examples of unsupported values are a pitch of 1 MHz or a volume of 120.
**Syntax**
Because prosodic attribute values can vary over a wide range, the speech recogni
**Attributes**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `pitch` | Indicates the baseline pitch for the text. You can express the pitch as:<ul><li>An absolute value, expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value, expressed as a number preceded by "+" or "-" and followed by "Hz" or "st", that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.</li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
-| `contour` |Contour now supports neural voice. Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Each target is defined by sets of parameter pairs. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch, using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
-| `range` | A value that represents the range of pitch for the text. You can express `range` using the same absolute values, relative values, or enumeration values used to describe `pitch`. | Optional |
+| `pitch` | Indicates the baseline pitch for the text. You can express the pitch as:<ul><li>An absolute value, expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value, expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.</li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
+| `contour` |Contour now supports neural voice. Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Each target is defined by sets of parameter pairs. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch by using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
+| `range` | A value that represents the range of pitch for the text. You can express `range` by using the same absolute values, relative values, or enumeration values used to describe `pitch`. | Optional |
| `rate` | Indicates the speaking rate of the text. You can express `rate` as:<ul><li>A relative value, expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the rate. A value of *0.5* results in a halving of the rate. A value of *3* results in a tripling of the rate.</li><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
-| `volume` | Indicates the volume level of the speaking voice. You can express the volume as:<ul><li>An absolute value, expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. For example, 75. The default is 100.0.</li><li>A relative value, expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. For example, +10 or -5.5.</li><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
+| `volume` | Indicates the volume level of the speaking voice. You can express the volume as:<ul><li>An absolute value, expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. An example is 75. The default is 100.0.</li><li>A relative value, expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are +10 or -5.5.</li><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
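As an illustration of how these attributes combine, the following sketch slows the rate, lowers the pitch by two semitones, and reduces the volume for a single sentence (the voice name and values are illustrative):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <prosody rate="0.9" pitch="-2st" volume="80">
            This sentence is spoken slightly slower, lower, and quieter.
        </prosody>
    </voice>
</speak>
```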
### Change speaking rate
-Speaking rate can be applied at the word or sentence-level.
+Speaking rate can be applied at the word or sentence level.
**Example**
Pitch changes can be applied at the sentence level.
```

## say-as element
-`say-as` is an optional element that indicates the content type (such as number or date) of the element's text. This provides guidance to the speech synthesis engine about how to pronounce the text.
+The `say-as` element is optional. It indicates the content type, such as number or date, of the element's text. This element provides guidance to the speech synthesis engine about how to pronounce the text.
**Syntax**
Pitch changes can be applied at the sentence level.
**Attributes**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|-||
-| `interpret-as` | Indicates the content type of element's text. For a list of types, see the table below. | Required |
-| `format` | Provides additional information about the precise formatting of the element's text for content types that might have ambiguous formats. SSML defines formats for content types that use them (see table below). | Optional |
+| `interpret-as` | Indicates the content type of an element's text. For a list of types, see the following table. | Required |
+| `format` | Provides additional information about the precise formatting of the element's text for content types that might have ambiguous formats. SSML defines formats for content types that use them. See the following table. | Optional |
| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional |
-The following are the supported content types for the `interpret-as` and `format` attributes. Include the `format` attribute only if `interpret-as` is set to date and time.
+The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if `interpret-as` is set to date and time.
| interpret-as | format | Interpretation |
|--|--|-|
-| `address` | | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th court north east redmond washington." |
+| `address` | | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th Court Northeast Redmond Washington." |
| `cardinal`, `number` | | The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">3</say-as> alternatives`<br /><br />As "There are three alternatives." |
| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." |
| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." |
| `digits`, `number_digit` | | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
| `fraction` | | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." |
-| `ordinal` | | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option". |
-| `telephone` | | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. For example, "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format`. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." |
-| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. The following are valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
-| `name` | | The text is spoken as a person name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />as [æd]. <br />In Chinese names, some characters pronounce differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> as [qiú] instead of [chóu]. |
+| `ordinal` | | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option." |
+| `telephone` | | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. Examples are "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format` attribute. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." |
+| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
+| `name` | | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters pronounce differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
**Usage**
The speech synthesis engine speaks the following example as "Your first request
## Add recorded audio
-`audio` is an optional element that allows you to insert pre-recorded audio into an SSML document. The body of the audio element can contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. Additionally, the `audio` element can contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`.
+The `audio` element is optional. You can use it to insert prerecorded audio into an SSML document. The body of the audio element can contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. The `audio` element can also contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`.
Any audio included in the SSML document must meet these requirements:
-* The audio must be hosted on an Internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the file must present a valid, trusted TLS/SSL certificate. We recommend putting the audio file into a Blob Storage in the same Azure region as the TTS (Text-to-Speech) endpoint for minimizing the latency.
+* The audio must be hosted on an internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the file must present a valid, trusted TLS/SSL certificate. We recommend that you put the audio file into Blob Storage in the same Azure region as the text-to-speech endpoint to minimize the latency.
* The audio file must be a valid *.mp3, *.wav, *.opus, *.ogg, *.flac, or *.wma file.
-* The combined total time for all text and audio files in a single response cannot exceed 600 seconds.
+* The combined total time for all text and audio files in a single response can't exceed 600 seconds.
* The audio must not contain any customer-specific or other sensitive information.

**Syntax**
Any audio included in the SSML document must meet these requirements:
<audio src="string"/></audio>
```
-**Attributes**
+**Attribute**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|--||
| `src` | Specifies the location/URL of the audio file. | Required if using the audio element in your SSML document. |
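Here's a sketch of the `audio` element with fallback text. The URL is illustrative; it must point to an internet-accessible HTTPS endpoint that hosts a supported audio format.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <p>
            <audio src="https://contoso.com/sounds/welcome.mp3">
                <!-- Spoken only if the audio file is unavailable or unplayable -->
                Welcome to our service.
            </audio>
        </p>
    </voice>
</speak>
```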
Any audio included in the SSML document must meet these requirements:
## Add background audio
-The `mstts:backgroundaudio` element allows you to add background audio to your SSML documents (or mix an audio file with Text-to-Speech). With `mstts:backgroundaudio` you can loop an audio file in the background, fade in at the beginning of Text-to-Speech, and fade out at the end of Text-to-Speech.
+You can use the `mstts:backgroundaudio` element to add background audio to your SSML documents or mix an audio file with text-to-speech. With `mstts:backgroundaudio`, you can loop an audio file in the background, fade in at the beginning of text-to-speech, and fade out at the end of text-to-speech.
-If the background audio provided is shorter than the Text-to-Speech or the fade out, it will loop. If it is longer than the Text-to-Speech, it will stop when the fade out has finished.
+If the background audio provided is shorter than the text-to-speech or the fade out, it loops. If it's longer than the text-to-speech, it stops when the fade out has finished.
-Only one background audio file is allowed per SSML document. However, you can intersperse `audio` tags within the `voice` element to add additional audio to your SSML document.
+Only one background audio file is allowed per SSML document. You can intersperse `audio` tags within the `voice` element to add more audio to your SSML document.
**Syntax**
Only one background audio file is allowed per SSML document. However, you can in
**Attributes**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|--|--|
-| `src` | Specifies the location/URL of the background audio file. | Required if using background audio in your SSML document. |
+| `src` | Specifies the location/URL of the background audio file. | Required if using background audio in your SSML document |
| `volume` | Specifies the volume of the background audio file. **Accepted values**: `0` to `100` inclusive. The default value is `1`. | Optional |
-| `fadein` | Specifies the duration of the background audio "fade in" as milliseconds. The default value is `0`, which is the equivalent to no fade in. **Accepted values**: `0` to `10000` inclusive. | Optional |
-| `fadeout` | Specifies the duration of the background audio fade out in milliseconds. The default value is `0`, which is the equivalent to no fade out. **Accepted values**: `0` to `10000` inclusive. | Optional |
+| `fadein` | Specifies the duration of the background audio fade-in in milliseconds. The default value is `0`, which is the equivalent of no fade-in. **Accepted values**: `0` to `10000` inclusive. | Optional |
+| `fadeout` | Specifies the duration of the background audio fade-out in milliseconds. The default value is `0`, which is the equivalent of no fade-out. **Accepted values**: `0` to `10000` inclusive. | Optional |
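+
+To make the attribute values concrete, here's a hedged SSML sketch that can be submitted with `SpeakSsmlAsync` as in the earlier C# example; the `mstts` namespace URI, file URL, and attribute values are illustrative:
+
+```csharp
+// Background track at half volume with a 2-second fade-in and 3-second
+// fade-out; <mstts:backgroundaudio> sits outside the <voice> element.
+string ssmlWithBackground = @"
+<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
+       xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'>
+  <mstts:backgroundaudio src='https://contoso.example.com/ambience.wav'
+                         volume='50' fadein='2000' fadeout='3000'/>
+  <voice name='en-US-JennyNeural'>
+    The background track fades in, loops under this sentence, and fades out.
+  </voice>
+</speak>";
+```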
**Example**
Only one background audio file is allowed per SSML document. However, you can in
## Bookmark element
-The bookmark element allows you to insert custom markers in SSML to get the offset of each marker in the audio stream. We will not read out the bookmark elements. The bookmark element can be used to reference a specific location in the text or tag sequence. Bookmark is available for all languages and voices.
+You can use the `bookmark` element to insert custom markers in SSML to get the offset of each marker in the audio stream. The `bookmark` elements aren't read out loud. The `bookmark` element can be used to reference a specific location in the text or tag sequence. The `bookmark` element is available for all languages and voices.
**Syntax**
The bookmark element allows you to insert custom markers in SSML to get the offs
```xml
<bookmark mark="string"/>
```
-**Attributes**
+**Attribute**
-| Attribute | Description | Required / Optional |
+| Attribute | Description | Required or optional |
|--|--|--|
-| `mark` | Specifies the reference text of the `bookmark` element. | Required. |
+| `mark` | Specifies the reference text of the `bookmark` element. | Required |
**Example**
-As an example, you might want to know the time offset of each flower word as following
+As an example, you might want to know the time offset of each flower word in the following snippet:
```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
As an example, you might want to know the time offset of each flower word as fol
</speak>
```
-### Get bookmark using Speech SDK
+### Get a bookmark by using the Speech SDK
-You can subscribe to the `BookmarkReached` event in Speech SDK to get the bookmark offsets.
+You can subscribe to the `BookmarkReached` event in the Speech SDK to get the bookmark offsets.
> [!NOTE]
-> `BookmarkReached` event is only available since Speech SDK version 1.16.
+> The `BookmarkReached` event is available starting with the Speech SDK version 1.16.
-`BookmarkReached` events are raised as the output audio data becomes available, which will be faster than playback to an output device.
+The `BookmarkReached` events are raised as the output audio data becomes available, which will be faster than playback to an output device.
-* `AudioOffset` reports the output audio's elapsed time between the beginning of synthesis and the bookmark element. This is measured in hundred-nanosecond units (HNS) with 10,000 HNS equivalent to 1 millisecond.
-* `Text` is the reference text of the bookmark element, which is the string you set in the `mark` attribute.
+* `AudioOffset` reports the output audio's elapsed time between the beginning of synthesis and the `bookmark` element. The time is measured in hundred-nanosecond units (HNS) with 10,000 HNS equivalent to 1 millisecond.
+* `Text` is the reference text of the `bookmark` element, which is the string you set in the `mark` attribute.
# [C#](#tab/csharp)
```csharp
synthesizer.BookmarkReached += (s, e) =>
{
    // AudioOffset is in hundred-nanosecond units (HNS); divide by 10,000.0 to get milliseconds
    Console.WriteLine($"Bookmark reached. Audio offset: {e.AudioOffset / 10000.0}ms, bookmark text: {e.Text}.");
};
```
-For the example SSML above, the `BookmarkReached` event will be triggered twice, and the console output will be
+For the preceding example SSML, the `BookmarkReached` event will be triggered twice, and the console output will be:
+
+```text
+Bookmark reached. Audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
+```
synthesizer->BookmarkReached += [](const SpeechSynthesisBookmarkEventArgs& e)
};
```
-For the example SSML above, the `BookmarkReached` event will be triggered twice, and the console output will be
+For the preceding example SSML, the `BookmarkReached` event will be triggered twice, and the console output will be:
+
+```text
+Bookmark reached. Audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
+```
synthesizer.BookmarkReached.addEventListener((o, e) -> {
});
```
-For the example SSML above, the `BookmarkReached` event will be triggered twice, and the console output will be
+For the preceding example SSML, the `BookmarkReached` event will be triggered twice, and the console output will be:
+
+```text
+Bookmark reached. Audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
+```
speech_synthesizer.bookmark_reached.connect(lambda evt: print(
"Bookmark reached: {}, audio offset: {}ms, bookmark text: {}.".format(evt, evt.audio_offset / 10000, evt.text))) ```
-For the example SSML above, the `bookmark_reached` event will be triggered twice, and the console output will be
+For the preceding example SSML, the `bookmark_reached` event will be triggered twice, and the console output will be:
+
+```text
+Bookmark reached, audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached, audio offset: 1462.5ms, bookmark text: flower_2.
+```
synthesizer.bookmarkReached = function (s, e) {
}
```
-For the example SSML above, the `bookmarkReached` event will be triggered twice, and the console output will be
+For the preceding example SSML, the `bookmarkReached` event will be triggered twice, and the console output will be:
+
+```text
+(Bookmark reached), Audio offset: 825ms, bookmark text: flower_1.
+(Bookmark reached), Audio offset: 1462.5ms, bookmark text: flower_2.
+```
For more information, see [`addBookmarkReachedEventHandler`](/objectivec/cogniti
}];
```
-For the example SSML above, the `BookmarkReached` event will be triggered twice, and the console output will be
+For the preceding example SSML, the `BookmarkReached` event will be triggered twice, and the console output will be:
+
+```text
+Bookmark reached. Audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
+```
For more information, see [`addBookmarkReachedEventHandler`](/objectivec/cogniti
## Next steps
-* [Language support: voices, locales, languages](language-support.md)
+[Language support: Voices, locales, languages](language-support.md)
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-overview.md
Use the Speech SDK when:
* Speech recognition - Convert speech-to-text either from audio files or directly from a microphone, or transcribe a recorded conversation.
-* Speech synthesis - Convert text-to-speech using either input from text files, or input directly from the command line. Customize speech output characteristics using [SSML configurations](speech-synthesis-markup.md), and [neural voices](speech-synthesis-markup.md#prebuilt-neural-voice-and-custom-neural-voice).
+* Speech synthesis - Convert text to speech by using either input from text files or input directly from the command line. Customize speech output characteristics by using [SSML configurations](speech-synthesis-markup.md) and [neural voices](speech-synthesis-markup.md#prebuilt-neural-voices-and-custom-neural-voices).
* Speech translation - Translate audio in a source language to text or audio in a target language.
cognitive-services Document Format Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/reference/document-format-guidelines.md
+
+ Title: Import document format guidelines - question answering
+description: Use these guidelines for importing documents to get the best results for your content with question answering.
+Last updated: 01/23/2022
+# Format guidelines for question answering
+
+Review these formatting guidelines to get the best results for your content.
+
+## Formatting considerations
+
+After importing a file or URL, question answering converts and stores your content in the [markdown format](https://en.wikipedia.org/wiki/Markdown). The conversion process adds new lines in the text, such as `\n\n`. Knowledge of the markdown format helps you understand the converted content and manage your knowledge base content.
+
+If you add or edit your content directly in your knowledge base, use **markdown formatting** to create rich text content or change the markdown format content that's already in the answer. Question answering supports much of the markdown format to bring rich text capabilities to your content. However, the client application, such as a chat bot, may not support the same set of markdown formats. It's important to test the client application's display of answers.
+
+## Basic document formatting
+
+Question answering identifies sections and subsections and relationships in the file based on visual clues like:
+
+* font size
+* font style
+* numbering
+* colors
+
+> [!NOTE]
+> We don't support extraction of images from uploaded documents currently.
+
+### Product manuals
+
+A manual is typically guidance material that accompanies a product. It helps the user to set up, use, maintain, and troubleshoot the product. When question answering processes a manual, it extracts the headings and subheadings as questions and the subsequent content as answers. See an example [here](https://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf).
+
+Below is an example of a manual with an index page and hierarchical content:
+
+> [!div class="mx-imgBorder"]
+> ![Product Manual example for a knowledge base](../../../qnamaker/media/qnamaker-concepts-datasources/product-manual.png)
+
+> [!NOTE]
+> Extraction works best on manuals that have a table of contents and/or an index page, and a clear structure with hierarchical headings.
+
+### Brochures, guidelines, papers, and other files
+
+Many other types of documents can also be processed to generate question answer pairs, provided they have a clear structure and layout. These include brochures, guidelines, reports, white papers, scientific papers, policies, and books. See an example [here](https://qnamakerstore.blob.core.windows.net/qnamakerdata/docs/Manage%20Azure%20Blob%20Storage.docx).
+
+Below is an example of a semi-structured doc without an index:
+
+> [!div class="mx-imgBorder"]
+> ![Azure Blob storage semi-structured Doc](../../../qnamaker/media/qnamaker-concepts-datasources/semi-structured-doc.png)
+
+### Unstructured document support
+
+Custom question answering now supports unstructured documents. A document that doesn't have its content organized in a well-defined hierarchical manner, is missing a set structure, or has free-flowing content can be considered an unstructured document.
+
+Below is an example of an unstructured PDF document:
+
+> [!div class="mx-imgBorder"]
+> ![Unstructured document example for a knowledge base](../../../qnamaker/media/qnamaker-concepts-datasources/unstructured-qna-pdf.png)
+
+Currently, this functionality is available only via document upload and only for PDF and DOC file formats.
+
+> [!IMPORTANT]
+> Support for unstructured file/content is available only in question answering.
+
+### Structured question answering document
+
+The format for structured question-answers in DOC files is alternating questions and answers: one question per line, followed by its answer on the following line, as shown below:
+
+```text
+Question1
+
+Answer1
+
+Question2
+
+Answer2
+```
+
+Below is an example of a structured question answering Word document:
+
+> [!div class="mx-imgBorder"]
+> ![Structured question answering document example for a knowledge base](../../../qnamaker/media/qnamaker-concepts-datasources/structured-qna-doc.png)
+
+### Structured *TXT*, *TSV*, and *XLS* files
+
+Question answer pairs in the form of structured *.txt*, *.tsv*, or *.xls* files can also be uploaded to question answering to create or augment a knowledge base. These can either be plain text, or can have content in RTF or HTML. Question answer pairs have an optional metadata field that can be used to group question answer pairs into categories.
+
+| Question | Answer | Metadata (1 key: 1 value) |
+|--|--|--|
+| Question1 | Answer1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
+| Question2 | Answer2 | `Key:Value` |
+
+Any additional columns in the source file are ignored.
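+
+For instance, a hypothetical *.tsv* source matching the layout above might look like the following, where the three tab-separated columns map to Question, Answer, and Metadata, and all values are invented for illustration:
+
+```text
+How do I reset my password?	Select the Forgot password link on the sign-in page.	category:account|priority:high
+Which file formats can I upload?	PDF and DOC files are supported for unstructured content.	category:files
+```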
+
+### Structured data format through import
+
+Importing a knowledge base replaces the content of the existing knowledge base. Import requires a structured .tsv file that contains data source information. This information helps group the question-answer pairs and attribute them to a particular data source. Question answer pairs have an optional metadata field that can be used to group question answer pairs into categories.
+
+| Question | Answer | Source| Metadata (1 key: 1 value) |
+|--|--|--|--|
+| Question1 | Answer1 | Url1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
+| Question2 | Answer2 | Editorial| `Key:Value` |
+
+
+### Multi-turn document formatting
+
+* Use headings and subheadings to denote hierarchy. For example, you can use h1 to denote the parent question answer and h2 to denote the question answer that should be taken as a prompt. Use smaller heading sizes to denote subsequent hierarchy, as shown in the sketch after this list. Don't use style, color, or some other mechanism to imply structure in your document; question answering won't extract the multi-turn prompts.
+* The first character of a heading must be capitalized.
+* Do not end a heading with a question mark, `?`.
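+
+As a hedged illustration, the outline below (heading levels shown in brackets, content invented) would yield one parent question answer whose two child headings are extracted as follow-up prompts:
+
+```text
+[H1] Set up your Surface
+Answer describing the overall setup process.
+
+[H2] Charge your Surface
+Answer covering only the charging step.
+
+[H2] Connect to a wireless network
+Answer covering only the network step.
+```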
+
+**Sample documents**:<br>[Surface Pro (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)<br>[Contoso Benefits (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.docx)<br>[Contoso Benefits (pdf)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.pdf)
+
+## FAQ URLs
+
+Question answering can support FAQ web pages in three different forms:
+
+* Plain FAQ pages
+* FAQ pages with links
+* FAQ pages with a Topics Homepage
+
+### Plain FAQ pages
+
+This is the most common type of FAQ page, in which the answers immediately follow the questions on the same page.
+
+### FAQ pages with links
+
+In this type of FAQ page, questions are aggregated together and are linked to answers that are either in different sections of the same page, or in different pages.
+
+Below is an example of an FAQ page with links in sections that are on the same page:
+
+> [!div class="mx-imgBorder"]
+> ![Section Link FAQ page example for a knowledge base](../../../qnamaker/media/qnamaker-concepts-datasources/sectionlink-faq.png)
+
+### Parent Topics page links to child answers pages
+
+This type of FAQ has a Topics page where each topic is linked to a corresponding set of questions and answers on a different page. Question answering crawls all the linked pages to extract the corresponding questions and answers.
+
+Below is an example of a Topics page with links to FAQ sections on different pages:
+
+> [!div class="mx-imgBorder"]
+> ![Deep link FAQ page example for a knowledge base](../../../qnamaker/media/qnamaker-concepts-datasources/topics-faq.png)
+
+### Support URLs
+
+Question answering can process semi-structured support web pages, such as web articles that describe how to perform a given task, how to diagnose and resolve a given problem, and what the best practices are for a given process. Extraction works best on content that has a clear structure with hierarchical headings.
+
+> [!NOTE]
+> Extraction for support articles is a new feature and is in early stages. It works best for simple pages that are well structured and don't contain complex headers or footers.
+
+## Import and export knowledge base
+
+**TSV and XLS files** from exported knowledge bases can only be used by importing the files from the **Settings** page in Language Studio. They can't be used as data sources during knowledge base creation or from the **+ Add file** or **+ Add URL** feature on the **Settings** page.
+
+When you import the knowledge base through these **TSV and XLS files**, the question answer pairs get added to the editorial source, not to the sources from which the questions and answers were extracted in the exported knowledge base.
+
+## Next steps
+
+* [Tutorial: Create an FAQ bot](../tutorials/bot-service.md)
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/what-are-cognitive-services.md
Azure Cognitive Services are cloud-based services with REST APIs and client libr
## Categories of Cognitive Services
-The catalog of cognitive services that provide cognitive understanding is categorized into five main pillars:
+The catalog of cognitive services that provide cognitive understanding is categorized into four main pillars:
* Vision
* Speech
* Language
* Decision
-The following sections in this article provide a list of services that are part of these five pillars.
+The following sections in this article provide a list of services that are part of these four pillars.
## Vision APIs
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-native-http.md
For *stateless* workflows in single-tenant Azure Logic Apps, HTTP-based actions
## Disable asynchronous operations
-Sometimes, you might want to the HTTP action's asynchronous behavior in specific scenarios, for example, when you want to:
+Sometimes, you might want to disable the HTTP action's asynchronous behavior in specific scenarios, for example, when you want to:
* [Avoid HTTP timeouts for long-running tasks](#avoid-http-timeouts)
* [Disable checking location headers](#disable-location-header-check)
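+
+As a hedged sketch, the designer setting corresponds to an operation option set per action in the underlying workflow definition; the action name and URI below are illustrative:
+
+```json
+"HTTP": {
+   "type": "Http",
+   "inputs": {
+      "method": "GET",
+      "uri": "https://example.com/long-running-status"
+   },
+   "operationOptions": "DisableAsyncPattern"
+}
+```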
cosmos-db Cassandra Adoption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/cassandra-adoption.md
Title: From Apache Cassandra to Cassandra API
-description: Learn best practices and ways to adopt Azure Cosmos DB Cassandra API successfully.
+description: Learn best practices and ways to successfully use the Azure Cosmos DB Cassandra API with Apache Cassandra applications.
Last updated 11/30/2021
# From Apache Cassandra to Cassandra API
-Azure Cosmos DB Cassandra API provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility makes it possible to run applications designed to connect to Apache Cassandra with Cassandra API, with minimal changes. However, there are some important differences between Apache Cassandra and Azure Cosmos DB.
-This article is aimed at users who are familiar with native [Apache Cassandra](https://cassandra.apache.org/), and are considering moving to Azure Cosmos DB Cassandra API. Consider this article a checklist to help you adopt Cassandra API successfully.
+The Azure Cosmos DB Cassandra API provides wire protocol compatibility with existing Cassandra SDKs and tools. You can run applications that are designed to connect to Apache Cassandra by using the Cassandra API with minimal changes.
+When you use the Cassandra API, it's important to be aware of differences between Apache Cassandra and Azure Cosmos DB. This article is a checklist to help users who are familiar with native [Apache Cassandra](https://cassandra.apache.org/) successfully begin to use the Azure Cosmos DB Cassandra API.
## Feature support
-While Cassandra API supports a large surface area of Apache Cassandra features, there are some features which are not supported (or have limitations). Review our article [features supported by Azure Cosmos DB Cassandra API](cassandra-support.md) to ensure the features you need are supported.
+The Cassandra API supports a large number of Apache Cassandra features, but some features aren't supported, or they have limitations. Before you migrate, be sure that the [Azure Cosmos DB Cassandra API features](cassandra-support.md) you need are supported.
+
+## Replication
-## Replication (migration)
+When you plan for replication, it's important to look at both migration and consistency.
-Although you can communicate with Cassandra API through the CQL Binary Protocol v4 wire protocol, Cosmos DB implements its own internal replication protocol. This means that live migration/replication cannot be achieved through the Cassandra gossip protocol. Review our article on how to [live migrate from Apache Cassandra to Cassandra API using dual-writes](migrate-data-dual-write-proxy.md). For offline migration, review our article: [Migrate data from Cassandra to an Azure Cosmos DB Cassandra API account by using Azure Databricks](migrate-data-databricks.md).
+### Migration
-## Replication (consistency)
+Although you can communicate with the Cassandra API through the Cassandra Query Language (CQL) binary protocol v4 wire protocol, Azure Cosmos DB implements its own internal replication protocol. You can't use the Cassandra gossip protocol for live migration or replication. For more information, see [Live-migrate from Apache Cassandra to the Cassandra API by using dual writes](migrate-data-dual-write-proxy.md).
- Although there are many similarities between Apache Cassandra's approach to replication consistency, there are also important differences. We have provided a [mapping document](apache-cassandra-consistency-mapping.md), which attempts to draw analogs between the two. However, we highly recommend that you take time to review and understand Azure Cosmos DB consistency settings in our [documentation](../consistency-levels.md) from scratch, or watch this short [video](https://www.youtube.com/watch?v=t1--kZjrG-o) guide to understanding consistency settings in the Azure Cosmos DB platform.
+For information about offline migration, see [Migrate data from Cassandra to an Azure Cosmos DB Cassandra API account by using Azure Databricks](migrate-data-databricks.md).
+### Consistency
+
+Although the approaches to replication consistency in Apache Cassandra and Azure Cosmos DB are similar, it's important to understand how they are different. A [mapping document](apache-cassandra-consistency-mapping.md) compares Apache Cassandra and Azure Cosmos DB approaches to replication consistency. However, we highly recommend that you specifically review [Azure Cosmos DB consistency settings](../consistency-levels.md) or watch a brief [video guide to understanding consistency settings in the Azure Cosmos DB platform](https://www.youtube.com/watch?v=t1--kZjrG-o).
## Recommended client configurations
-While you should not need to make any substantial code changes to existing apps using Apache Cassandra, there are some approaches and configuration settings that we recommend for Cassandra API in Cosmos DB that may improve the experience. We highly recommend reviewing our blog post [Cassandra API Recommendations for Java](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/) for more details.
+When you use the Cassandra API, you don't need to make substantial code changes to existing applications that run Apache Cassandra. For the best experience, we recommend some approaches and configuration settings for the Cassandra API in Azure Cosmos DB. For details, review the blog post [Cassandra API recommendations for Java](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/).
## Code samples
-Your existing application code should work with Cassandra API. However, if you encounter any connectivity related errors, we highly recommend referring to our [Quick Start samples](manage-data-java-v4-sdk.md) as a starting point to determine any minor differences in setup with your existing code. In addition, we have more in-depth samples for [Java v3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [Java v4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4) drivers. These code samples implement custom [extensions](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.0), which in turn implement the recommended client configurations mentioned above. We also have samples for Java [Spring Boot (v3 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v3) and [Spring Boot (v4 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v4.git).
+The Cassandra API is designed to work with your existing application code. However, if you encounter any connectivity-related errors, use the [quickstart samples](manage-data-java-v4-sdk.md) as a starting point to discover any minor setup changes you might need to make in your existing code. We also have more in-depth samples for [Java v3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [Java v4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4) drivers. These code samples implement custom [extensions](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.0), which in turn implement recommended client configurations.
+You can also use samples for [Java Spring Boot (v3 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v3) and [Java Spring Boot (v4 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v4.git).
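+
+As a hedged starting point for .NET clients, a minimal connection sketch with the DataStax C# driver looks like the following; the contact point, credentials, and keyspace are placeholders, and port 10350 is the Cassandra API's CQL endpoint:
+
+```csharp
+using System.Security.Authentication;
+using Cassandra;
+
+// Placeholders: use your account's Cassandra endpoint, account name, and key.
+var sslOptions = new SSLOptions(SslProtocols.Tls12, true, null);
+Cluster cluster = Cluster.Builder()
+    .AddContactPoint("your-account.cassandra.cosmos.azure.com")
+    .WithPort(10350)                         // Cassandra API CQL port
+    .WithCredentials("your-account", "your-account-key")
+    .WithSSL(sslOptions)                     // TLS is required
+    .Build();
+
+ISession session = cluster.Connect("mykeyspace"); // assumes this keyspace exists
+```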
## Storage
-Cassandra API is ultimately backed by Azure Cosmos DB, which is a document-oriented NoSQL engine. Cosmos DB maintains metadata, which may result in a difference between the amount of physical storage for a given workload between native Apache Cassandra and Cassandra API. The difference is most noticeable in the case of small row sizes. In some cases, this may be offset by the fact that Cosmos DB does not implement compaction or tombstones. However, this will depend significantly on the workload. We recommend carrying out a POC if you are uncertain about storage requirements.
+The Cassandra API ultimately is backed by Azure Cosmos DB, which is a document-oriented NoSQL database engine. Azure Cosmos DB maintains metadata, which might result in a change in the amount of physical storage required for a specific workload.
-## Multi-region deployments
+The difference in storage requirements between native Apache Cassandra and Azure Cosmos DB is most noticeable in small row sizes. In some cases, the difference might be offset because Azure Cosmos DB doesn't implement compaction or tombstones. However, this factor depends significantly on the workload. If you're uncertain about storage requirements, we recommend that you first create a proof of concept.
-Native Apache Cassandra is a multi-master system by default, and does not provide an option for single-master with multi-region replication for reads only. The concept of application-level failover to another region for writes is therefore redundant in Apache Cassandra as all nodes are independent and there is no single point of failure. However, Azure Cosmos DB provides the out-of-box ability to configure either single master, or multi-master regions for writes. One of the advantages of having a single master region for writes is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions, while still maintaining a level of high availability.
+## Multi-region deployments
-> [!NOTE]
-> Strong consistency across regions (RPO of zero) is not possible for native Apache Cassandra as all nodes are capable of serving writes. Cosmos DB can be configured for strong consistency across regions in *single write region* configuration. However, as with native Apache Cassandra, Cosmos DB accounts configured with multiple write regions cannot be configured for strong consistency as it is not possible for a distributed system to provide an RPO of zero and an RTO of zero.
+Native Apache Cassandra is a multi-master system by default. Apache Cassandra doesn't have an option for single-master with multi-region replication for reads only. The concept of application-level failover to another region for writes is redundant in Apache Cassandra. All nodes are independent, and there's no single point of failure. However, Azure Cosmos DB provides the out-of-the-box ability to configure either single-master or multi-master regions for writes.
-We recommend reviewing the [Load balancing policy section](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/#load-balancing-policy) from our blog [Cassandra API Recommendations for Java](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java), and [failover scenarios](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4#failover-scenarios) in our official [code sample for the Cassandra Java v4 driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4), for more detail.
+An advantage of having a single-master region for writes is avoiding cross-region conflict scenarios. It gives you the option to maintain strong consistency across multiple regions while maintaining a level of high availability.
+> [!NOTE]
+> Strong consistency across regions and a Recovery Point Objective (RPO) of zero isn't possible for native Apache Cassandra because all nodes are capable of serving writes. You can configure Azure Cosmos DB for strong consistency across regions in a *single write region* configuration. However, like with native Apache Cassandra, you can't configure an Azure Cosmos DB account that's configured with multiple write regions for strong consistency. A distributed system can't provide an RPO of zero *and* a Recovery Time Objective (RTO) of zero.
+For more information, we recommend that you review [Load balancing policy](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/#load-balancing-policy) in our [Cassandra API recommendations for Java blog](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java) and [Failover scenarios](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4#failover-scenarios) in our official [code sample for the Cassandra Java v4 driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4).
-## Request Units
+## Request units
-One of the major differences between running a native Apache Cassandra cluster, and provisioning an Azure Cosmos DB account, is the way in which database capacity is provisioned. In traditional databases, capacity is expressed in terms of CPU cores, RAM, and IOPs. However, Azure Cosmos DB is a multi-tenant platform-as-a-service database. Capacity is expressed using a single normalized metric known as [request units](../request-units.md) (RU/s). Every request sent to the database has an "RU cost", and each request can be profiled to determine its cost.
+One of the major differences between running a native Apache Cassandra cluster and provisioning an Azure Cosmos DB account is how database capacity is provisioned. In traditional databases, capacity is expressed in terms of CPU cores, RAM, and IOPS. However, Azure Cosmos DB is a multi-tenant platform-as-a-service database. Capacity is expressed by using a single normalized metric called [request units](../request-units.md). Every request sent to the database has a request unit cost (RU cost), and each request can be profiled to determine its cost.
-The benefit of this is that database capacity can be provisioned deterministically for highly predictable performance and efficiency. Request units make it possible to associate the capacity you need to provision directly with the number of requests sent to the database (once you have profiled the cost of each request). The challenge with this way of provisioning capacity is that, in order to maximize the extent to which you can benefit from it, you need to have a more solid understanding of the throughput characteristics of your workload than you may have been used to.
+The benefit of using request units as a metric is that database capacity can be provisioned deterministically for highly predictable performance and efficiency. After you profile the cost of each request, you can use request units to directly associate the number of requests sent to the database with the capacity you need to provision. The challenge with this way of provisioning capacity is that, to maximize the benefit, you need a solid understanding of the throughput characteristics of your workload, perhaps more than you're used to.
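+
+For example, with the DataStax C# driver, the charge comes back in a custom payload on each response. The following is a hedged sketch: the `RequestCharge` payload key and byte order follow the linked RU-charge article, and the keyspace, table, and `session` variable are assumptions:
+
+```csharp
+using System;
+using System.Linq;
+using Cassandra;
+
+// Assumes an existing connected ISession named session.
+RowSet rowSet = session.Execute("SELECT * FROM mykeyspace.mytable LIMIT 10");
+
+// The RU cost arrives as a big-endian double in the response payload.
+byte[] chargeBytes = rowSet.Info.IncomingPayload["RequestCharge"];
+double requestCharge = BitConverter.ToDouble(chargeBytes.Reverse().ToArray(), 0);
+Console.WriteLine($"Request charge: {requestCharge} RU");
+```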
-We highly recommend profiling your requests and using this information to help you to accurately estimate the number of request units you will need to provision. Here are some useful articles to help:
+We highly recommend that you profile your requests and use the information you gain to help you accurately estimate the number of request units you'll need to provision. Here are some articles that might help you make the estimate:
-- [Request Units in Azure Cosmos DB](../request-units.md)-- [Find the request unit charge for operations executed in Azure Cosmos DB Cassandra API](find-request-unit-charge-cassandra.md).
+- [Request units in Azure Cosmos DB](../request-units.md)
+- [Find the request unit charge for operations executed in the Azure Cosmos DB Cassandra API](find-request-unit-charge-cassandra.md)
- [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)

## Capacity provisioning models
-Traditional database provisioning is based on a fixed capacity that has to be provisioned up front in order to cope with the anticipated throughput. Cosmos DB offers a capacity-based model known as [provisioned throughput](../set-throughput.md). However, as a multi-tenant service, it is also able to offer *consumption-based* models, in the form of [autoscale](../provision-throughput-autoscale.md) and [serverless](../serverless.md). The extent to which your workload will benefit from each type depends on the predictability of throughput.
+In traditional database provisioning, a fixed capacity is provisioned up front to handle the anticipated throughput. Azure Cosmos DB offers a capacity-based model called [provisioned throughput](../set-throughput.md). As a multi-tenant service, Azure Cosmos DB also offers *consumption-based* models in [autoscale](../provision-throughput-autoscale.md) mode and [serverless](../serverless.md) mode. The extent to which a workload might benefit from either of these consumption-based provisioning models depends on the predictability of throughput for the workload.
-Generally speaking, workloads with large periods of dormancy will benefit from serverless. Steady state workloads with predictable throughput benefit most from provisioned throughput. Workloads, which have a continuous level of minimal throughput, but with unpredictable spikes, will benefit most from autoscale. We recommend reviewing the links below to help you understand the best capacity model for your throughput needs:
+In general, steady-state workloads that have predictable throughput benefit most from provisioned throughput. Workloads that have large periods of dormancy benefit from serverless mode. Workloads that have a continuous level of minimal throughput, but with unpredictable spikes, benefit most from autoscale mode. We recommend that you review the following articles for a clear understanding of the best capacity model for your throughput needs:
- [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md) - [Create Azure Cosmos containers and databases with autoscale throughput](../provision-throughput-autoscale.md)
Generally speaking, workloads with large periods of dormancy will benefit from s
## Partitioning
-Partitioning in Cosmos DB functions in a very similar way to Apache Cassandra. One of the main differences is that Cosmos DB is more optimized for *horizontal scale*. As such, there are limits placed on the amount of *vertical throughput* capacity available in any given *physical partition*. The effect of this is most noticeable where there is significant throughput skew in an existing data model.
+Partitioning in Azure Cosmos DB is similar to partitioning in Apache Cassandra. One of the main differences is that Azure Cosmos DB is more optimized for *horizontal scale*. In Azure Cosmos DB, limits are placed on the amount of *vertical throughput* capacity that's available in any physical partition. The effect of this optimization is most noticeable when an existing data model has significant throughput skew.
-Take steps to ensure that your partition key design will result in a relatively uniform distribution of requests. We also recommend that you review our article on [Partitioning in Azure Cosmos DB Cassandra API](cassandra-partitioning.md) for more information on how logical and physical partitioning works, and limits on throughput capacity (request units) per partition.
+Take steps to ensure that your partition key design will result in a relatively uniform distribution of requests. For more information about how logical and physical partitioning work and limits on throughput capacity (request units per second, measured as *RU/s*) per partition, see [Partitioning in the Azure Cosmos DB Cassandra API](cassandra-partitioning.md).
## Scaling
-In native Apache Cassandra, increasing capacity and scale involves adding new nodes to a cluster and ensuring they are properly added to the Cassandra ring. In Cosmos DB, this is completely transparent and automatic, and scaling is a function of how many [request units](../request-units.md) are provisioned for your keyspace or table. As implied in partitioning above, the scaling of physical machines occurs when either physical storage or required throughput reaches the limits allowed for a logical/physical partition. Review our article on [Partitioning in Azure Cosmos DB Cassandra API](cassandra-partitioning.md) for more information.
+In native Apache Cassandra, increasing capacity and scale involves adding new nodes to a cluster and ensuring that the nodes are properly added to the Cassandra ring. In Azure Cosmos DB, adding nodes is transparent and automatic. Scaling is a function of how many [request units](../request-units.md) are provisioned for your keyspace or table. Scaling of physical machines occurs when either physical storage or required throughput reaches limits allowed for a logical or a physical partition. For more information, see [Partitioning in the Azure Cosmos DB Cassandra API](cassandra-partitioning.md).
## Rate limiting
-One of the challenges of provisioning [request units](../request-units.md), particularly if [provisioned throughput](../set-throughput.md) is chosen, can be rate limiting. Azure Cosmos DB will return rate-limited (429) errors if clients consume more resources (RU/s) than the amount that you have provisioned. The Cassandra API in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. Review our article [Prevent rate-limiting errors for Azure Cosmos DB API for Cassandra operations](prevent-rate-limiting-errors.md) for information on how to avoid rate limiting in your application.
-
-## Using Apache Spark
+A challenge of provisioning [request units](../request-units.md), particularly if you're using [provisioned throughput](../set-throughput.md), is rate limiting. Azure Cosmos DB returns rate-limited (429) errors if clients consume more resources (RU/s) than the amount you provisioned. The Cassandra API in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. For information about how to avoid rate limiting in your application, see [Prevent rate-limiting errors for Azure Cosmos DB API for Cassandra operations](prevent-rate-limiting-errors.md).
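+
+As a hedged sketch of client-side handling with the DataStax C# driver, where `OverloadedException` is the driver's surfaced error type and the backoff values are illustrative:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Cassandra;
+
+// Retries a statement with exponential backoff when Cosmos DB rate limiting
+// (429) surfaces as a Cassandra overloaded error.
+static async Task<RowSet> ExecuteWithBackoffAsync(ISession session, IStatement statement, int maxRetries = 5)
+{
+    for (int attempt = 0; ; attempt++)
+    {
+        try
+        {
+            return await session.ExecuteAsync(statement);
+        }
+        catch (OverloadedException) when (attempt < maxRetries)
+        {
+            await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
+        }
+    }
+}
+```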
-Many Apache Cassandra users also use the Apache Spark Cassandra connector to query their data for analytical and data movement needs. You can connect to Cassandra API in the same way, using the same connector. However, we highly recommend reviewing our article on how to [Connect to Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md), and in particular the section for [Optimizing Spark connector throughput configuration](connect-spark-configuration.md#optimizing-spark-connector-throughput-configuration), before doing so.
+## Apache Spark connector
-## Troubleshooting common issues
+Many Apache Cassandra users use the Apache Spark Cassandra connector to query their data for analytical and data movement needs. You can connect to the Cassandra API the same way and by using the same connector. Before you connect to the Cassandra API, we recommend that you review [Connect to the Azure Cosmos DB Cassandra API from Spark](connect-spark-configuration.md). In particular, see the section [Optimize Spark connector throughput configuration](connect-spark-configuration.md#optimizing-spark-connector-throughput-configuration).
-Review our [trouble shooting](troubleshoot-common-issues.md) article, which documents solutions to common problems faced with the service.
+## Troubleshoot common issues
+For solutions to common issues, see [Troubleshoot common issues in the Azure Cosmos DB Cassandra API](troubleshoot-common-issues.md).
## Next steps
-* Learn about [partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
-* Learn about [provisioned throughput in Azure Cosmos DB](../request-units.md).
-* Learn about [global distribution in Azure Cosmos DB](../distribute-data-globally.md).
+- Learn about [partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
+- Learn about [provisioned throughput in Azure Cosmos DB](../request-units.md).
+- Learn about [global distribution in Azure Cosmos DB](../distribute-data-globally.md).
cosmos-db Partners Migration Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partners-migration-cosmosdb.md
From NoSQL migration to application development, you can choose from a variety o
|[Tallan](https://www.tallan.com/) | App development | USA |
| [TCS](https://www.tcs.com/) | App development | USA, UK, France, Malaysia, Denmark, Norway, Sweden|
|[VTeamLabs](https://www.vteamlabs.com/) | Personalization, Retail (inventory), IoT, Gaming, Operational Analytics (Spark), Serverless architecture, NoSQL Migration, App development | USA |
-| [White Duck GmbH](https://whiteducksoftware.com/) |New app development, App Backend, Storage for document-based data| Germany |
+| [White Duck GmbH](https://whiteduck.de/en/) |New app development, App Backend, Storage for document-based data| Germany |
| [Xpand IT](https://www.xpand-it.com/) | New app development | Portugal, UK|
| [Hanu](https://hanu.com/) | IoT, App development | USA|
| [Incycle Software](https://www.incyclesoftware.com/) | NoSQL migration, Serverless architecture, App development| USA|
cosmos-db Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/performance-tips.md
Previously updated : 07/08/2021
Last updated : 01/24/2022
ms.devlang: csharp
If you're testing at high throughput levels (more than 50,000 RU/s), the client
> [!NOTE]
> High CPU usage can cause increased latency and request timeout exceptions.
+## <a id="logging-and-tracing"></a> Logging and tracing
+
+Some environments have the [.NET DefaultTraceListener](/dotnet/api/system.diagnostics.defaulttracelistener) enabled. The DefaultTraceListener poses performance issues in production environments, causing high CPU and I/O bottlenecks. Check and make sure that the DefaultTraceListener is disabled for your application by removing it from the [TraceListeners](/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners) in production environments.
+
+The latest SDK versions (greater than 2.16.2) automatically remove it when they detect it. With older versions, you can remove it as follows:
+
+# [.NET 6 / .NET Core](#tab/trace-net-core)
+
+```csharp
+if (!Debugger.IsAttached)
+{
+    // Use reflection to get the SDK's internal static DefaultTrace.TraceSource
+    Type defaultTrace = Type.GetType("Microsoft.Azure.Documents.DefaultTrace,Microsoft.Azure.DocumentDB.Core");
+    TraceSource traceSource = (TraceSource)defaultTrace.GetProperty("TraceSource").GetValue(null);
+    // Remove the DefaultTraceListener, which is registered under the name "Default"
+    traceSource.Listeners.Remove("Default");
+ // Add your own trace listeners
+}
+```
+
+# [.NET Framework](#tab/trace-net-fx)
+
+Edit your `app.config` or `web.config` files:
+
+```xml
+<configuration>
+ <system.diagnostics>
+ <sources>
+ <source name="DocDBTrace" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >
+ <listeners>
+ <remove name="Default" />
+ <!--Add your own trace listeners-->
+ <add name="myListener" ... />
+ </listeners>
+ </source>
+ </sources>
+ </system.diagnostics>
+</configuration>
+```
+
## <a id="networking"></a> Networking

**Connection policy: Use direct connection mode**
data-catalog Data Catalog Migration To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-catalog/data-catalog-migration-to-azure-purview.md
+
+ Title: Migrate from Azure Data Catalog to Azure Purview
+description: Steps to migrate from Azure Data Catalog to Microsoft's unified data governance service--Azure Purview.
+Last updated: 01/24/2022
+#Customer intent: As an Azure Data Catalog user, I want to know why and how to migrate to Azure Purview so that I can use the best tools to manage my data.
+
+# Migrate from Azure Data Catalog to Azure Purview
+
+Microsoft launched a unified data governance service to help manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Azure Purview creates a map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Azure Purview enables data curators to manage and secure their data estate and empowers data consumers to find valuable, trustworthy data.
+
+This document shows you how to migrate from Azure Data Catalog to Azure Purview.
+
+## Recommended approach
+
+To migrate from Azure Data Catalog to Azure Purview, we recommend the following approach:
+
+:heavy_check_mark: Step 1: [Assess readiness](#assess-readiness)
+
+:heavy_check_mark: Step 2: [Prepare to migrate](#prepare-to-migrate)
+
+:heavy_check_mark: Step 3: [Migrate to Azure Purview](#migrate-to-azure-purview)
+
+:heavy_check_mark: Step 4: [Cutover from Azure Data Catalog to Azure Purview](#cutover-from-azure-data-catalog-to-azure-purview)
+
+> [!NOTE]
+> Azure Data Catalog and Azure Purview are different services, so there is no in-place upgrade experience. Intentional migration effort is required.
+
+## Assess readiness
+
+Look at [Azure Purview](https://azure.microsoft.com/services/purview/) and understand the key differences between Azure Data Catalog and Azure Purview.
+
+||Azure Data Catalog |Azure Purview |
+|--|--|--|
+|**Pricing** |[User-based model](https://azure.microsoft.com/pricing/details/data-catalog/) |[Pay-as-you-go model](https://azure.microsoft.com/pricing/details/azure-purview/) |
+|**Platform** |[Data catalog](overview.md) |[Unified governance platform for data discoverability, classification, lineage, and governance.](../purview/purview-connector-overview.md) |
+|**Extensibility** |N/A |[Extensible on Apache Atlas](../purview/tutorial-purview-tools.md)|
+|**SDK/PowerShell support** |N/A |[Supports REST APIs](/rest/api/purview/) |
+
+Compare [Azure Data Catalog supported sources](data-catalog-dsr.md) and [Azure Purview supported sources](../purview/purview-connector-overview.md) to confirm you can support your data landscape.
+
+## Prepare to migrate
+
+1. Identify data sources that you'll migrate.
+ Take this opportunity to identify logical and business connections between your data sources and assets. Azure Purview will allow you to create a map of your data landscape that reflects how your data is used and discovered in your organization.
+1. Review [Azure Purview best practices for deployment and architecture](../purview/deployment-best-practices.md) to develop a deployment strategy for Azure Purview.
+1. Determine the impact that a migration will have on your business.
+ For example: how will Azure Data Catalog be used until the transition is complete?
+1. Create a migration plan.
+
+## Migrate to Azure Purview
+
+Manually migrate your data from Azure Data Catalog to Azure Purview.
+
+[Create an Azure Purview account](../purview/create-catalog-portal.md), [create collections](../purview/create-catalog-portal.md) in your data map, set up [permissions for your users](../purview/catalog-permissions.md), and onboard your data sources.
+
+We suggest you review the Azure Purview best practices documentation before deploying your Azure Purview account, so you can deploy the best environment for your data landscape.
+Here's a selection of articles that may help you get started:
+- [Azure Purview security best practices](../purview/concept-best-practices-security.md)
+- [Accounts architecture best practices](../purview/concept-best-practices-accounts.md)
+- [Collections architectures best practices](../purview/concept-best-practices-collections.md)
+- [Create a collection](../purview/quickstart-create-collection.md)
+- [Import Azure sources to Azure Purview at scale](../purview/tutorial-data-sources-readiness.md)
+- [Tutorial: Onboard an on-premises SQL Server instance](../purview/tutorial-register-scan-on-premises-sql-server.md)
+
+## Cutover from Azure Data Catalog to Azure Purview
+
+After the business has begun to use Azure Purview, cut over from Azure Data Catalog by deleting the Azure Data Catalog.
+
+## Next steps
+- Learn how [Azure Purview's data insights](../purview/concept-insights.md) can provide you with up-to-date information on your data landscape.
+- Learn how [Azure Purview integrates with Azure security products](../purview/how-to-integrate-with-azure-security-products.md) to bring even more security to your data landscape.
+- Discover how [sensitivity labels in Azure Purview](../purview/create-sensitivity-label.md) help detect and protect your sensitive information.
data-factory Concepts Data Flow Performance Sinks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance-sinks.md
With Azure SQL Database, the default partitioning should work in most cases. The
### Best practice for deleting rows in sink based on missing rows in source
-Here is a video walk through of how to use data flows with exits, alter row, and sink transformations to achieve this common pattern: > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWMLr5]
+Here is a video walk through of how to use data flows with exits, alter row, and sink transformations to achieve this common pattern:
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWMLr5]
### Impact of error row handling to performance
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Previously updated : 01/12/2022
Last updated : 01/22/2022

# Data transformation expressions in mapping data flow
In Data Factory and Synapse pipelines, use the expression language of the mappin
| [addMonths](data-flow-expression-functions.md#addMonths) | Add months to a date or timestamp. You can optionally pass a timezone. |
| [and](data-flow-expression-functions.md#and) | Logical AND operator. Same as &&. |
| [asin](data-flow-expression-functions.md#asin) | Calculates an inverse sine value. |
+| [assertErrorMessages](data-flow-expression-functions.md#assertErrorMessages) | Returns map of all assert messages. |
| [atan](data-flow-expression-functions.md#atan) | Calculates an inverse tangent value. |
| [atan2](data-flow-expression-functions.md#atan2) | Returns the angle in radians between the positive x-axis of a plane and the point given by the coordinates. |
| [between](data-flow-expression-functions.md#between) | Checks if the first value is in between two other values inclusively. Numeric, string, and datetime values can be compared. |
In Data Factory and Synapse pipelines, use the expression language of the mappin
| [greatest](data-flow-expression-functions.md#greatest) | Returns the greatest value among the list of values as input skipping null values. Returns null if all inputs are null. |
| [hasColumn](data-flow-expression-functions.md#hasColumn) | Checks for a column value by name in the stream. You can pass an optional stream name as the second argument. Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions. |
| [hour](data-flow-expression-functions.md#hour) | Gets the hour value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [hasError](data-flow-expression-functions.md#hasError) | Checks if the assert with provided ID is marked as error. |
| [hours](data-flow-expression-functions.md#hours) | Duration in milliseconds for number of hours. |
| [iif](data-flow-expression-functions.md#iif) | Based on a condition applies one value or the other. If other is unspecified it is considered NULL. Both the values must be compatible (numeric, string...). |
| [iifNull](data-flow-expression-functions.md#iifNull) | Checks if the first parameter is null. If not null, the first parameter is returned. If null, the second parameter is returned. If three parameters are specified, the behavior is the same as iif(isNull(value1), value2, value3) and the third parameter is returned if the first value is not null. |
Creates an array of items. All items should be of the same type. If no items are
* ``'Washington'``

___
+<a name="assertErrorMessages" ></a>
+
+### <code>assertErrorMessages</code>
+<code><b>assertErrorMessages() => map</b></code><br/><br/>
+Returns a map of all error messages for the row with assert ID as the key.
+
+Examples
+* ``assertErrorMessages() => ['assert1': 'This row failed on assert1.', 'assert2': 'This row failed on assert2.']``. In this example, ``at(assertErrorMessages(), 'assert1')`` would return 'This row failed on assert1.'
+
+___
+ <a name="asin" ></a>
Checks for a column value by name in the stream. You can pass an optional stream
___
+<a name="hasError" ></a>
+
+### <code>hasError</code>
+<code><b>hasError([<i>&lt;value1&gt;</i> : string]) => boolean</b></code><br/><br/>
+Checks if the assert with provided ID is marked as error.
+
+Examples
+* ``hasError('assert1')``
+* ``hasError('assert2')``
+
+___
+
<a name="hasPath" ></a>

### <code>hasPath</code>
Checks if the string value is a double value given an optional format according
* ``isDouble('icecream') -> false``

___

<a name="isError" ></a>
Checks if the row is marked as error. For transformations taking more than one i
* ``isError()``
* ``isError(1)``

___
+
<a name="isFloat" ></a> ### <code>isFloat</code>
___
Left pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it is trimmed to the length.

* ``lpad('dumbo', 10, '-') -> '-----dumbo'``
* ``lpad('dumbo', 4, '-') -> 'dumb'``
-* ``lpad('dumbo', 8, '<>') -> '<><dumbo'``
+ ___
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-connector-format.md
Title: Troubleshoot connector and format issues in mapping data flows
description: Learn how to troubleshoot data flow problems related to connector and format in Azure Data Factory.
Previously updated : 12/06/2021
Last updated : 01/21/2022
You use the Azure Blob Storage as the staging linked service to link to a storag
#### Recommendation
Create an Azure Data Lake Gen2 linked service for the storage, and select the Gen2 storage as the staging linked service in data flow activities.
+### Failed with an error: "shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: User does not have permission to perform this action."
+
+#### Symptoms
+
+When you use Azure Synapse Analytics as a source/sink and use PolyBase staging in data flows, you encounter the following error: <br/>
+
+`shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: User does not have permission to perform this action.`
+
+#### Cause
+
+PolyBase requires certain permissions in your Synapse SQL server to work.
+
+#### Recommendation
+
+Grant the permissions below in your Synapse SQL server when you use PolyBase:
+
+**ALTER ANY SCHEMA**<br/>
+**ALTER ANY EXTERNAL DATA SOURCE**<br/>
+**ALTER ANY EXTERNAL FILE FORMAT**<br/>
+**CONTROL DATABASE**<br/>
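+
+The following T-SQL is a sketch only: `dataflowuser` is a hypothetical placeholder for the identity that your Synapse linked service authenticates as.
+
+```sql
+-- Run in the Synapse dedicated SQL database used for staging.
+-- 'dataflowuser' is a hypothetical placeholder for the linked service identity.
+GRANT ALTER ANY SCHEMA TO dataflowuser;
+GRANT ALTER ANY EXTERNAL DATA SOURCE TO dataflowuser;
+GRANT ALTER ANY EXTERNAL FILE FORMAT TO dataflowuser;
+GRANT CONTROL TO dataflowuser;  -- CONTROL at database scope
+```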
+ ## Common Data Model format
### Model.json files with special characters
databox-online Azure Stack Edge Gpu Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-data-residency.md
The single region data residency option is available only for Southeast Asia (Si
The data residency posture of the Azure Stack Edge services can be summarized for the following aspects of the service: - Existing Azure Stack Edge ordering and management service.-- New Azure Edge Hardware Center (Preview) that will be used for new orders going forward.
+- New Azure Edge Hardware Center that will be used for new orders going forward.
<!-- Telemetry for the device and the service. - Proactive Support log collection where any logs that the service generates are stored in a single region and are not replicated to the paired region.-->
Azure Stack Edge service also integrates with the following dependent services a
- Azure IoT Hub and Azure IoT Edge <!-- Azure Key Vault -->
+> [!NOTE]
+> - If you provide a support package with a crash dump for the Azure Stack Edge device, it can contain End User Identifiable Information (EUII) or End User Pseudonymous Information (EUPI), which will be processed and stored outside Southeast Asia.
## Azure Stack Edge classic ordering and management resource
If you are creating a new Azure Stack Edge resource, you have the option to enab
## Azure Edge Hardware Center ordering and management resource
-The new Azure Edge Hardware Center service (Preview) is now available and allows you to create and manage Azure Stack Edge resources. When placing an order in Southeast Asia region, you can select the option to have your data resides only within Singapore and not be replicated.
+The new Azure Edge Hardware Center service is now available and allows you to create and manage Azure Stack Edge resources. When placing an order in the Southeast Asia region, you can select the option to have your data reside only within Singapore and not be replicated.
In the event of region-wide outages, you won't be able to access the order resources. You will not be able to return, cancel, or delete the resources. If you request updates on your order status or need to initiate a device return urgently during the service outage, Microsoft Support will handle those requests.
If you choose to store and process the data only in the Singapore region, then the s
## Next steps -- Learn more about [Azure data residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/).
+- Learn more about [Azure data residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/).
databox Data Box Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-limits.md
Previously updated : 01/13/2022 Last updated : 01/21/2022 # Azure Data Box limits
ddos-protection Inline Protection Glb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/inline-protection-glb.md
Enabling Azure DDoS Protection Standard on the VNet of the Standard Public Load
5. Azure DDoS Protection Standard on the game servers Load Balancer protects against L3/4 DDoS attacks, and the DDoS protection policies are automatically tuned for the game servers' traffic profile and application scale. ## Next steps-- Learn more about [inline L7 DDoS protection partners](https://aka.ms/inlineddospartners)
+- Learn more about our launch partner [A10 Networks](https://www.a10networks.com/blog/introducing-l3-7-ddos-protection-for-microsoft-azure-tenants/)
- Learn more about [Azure DDoS Protection Standard](./ddos-protection-overview.md) - Learn more about [Gateway Load Balancer](../load-balancer/gateway-overview.md)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 01/13/2022 Last updated : 01/24/2022 # Security alerts - a reference guide
At the bottom of this page, there's a table describing the Microsoft Defender fo
| **Antimalware Action Taken** | Microsoft Antimalware for Azure has taken an action to protect this machine from malware or other potentially unwanted software. | - | Medium | | **Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | Files exclusion from antimalware extension with broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such exclusion practically disables the antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium | | **Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | - | High |
-| **Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
+| **Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmFileExclusionAndCodeExecution) | File excluded from your antimalware scanner at the same time as code was executed via a custom script extension on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion, Execution | High | | **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmTempFileExclusionAndCodeExecution) | Temporary file exclusion from antimalware extension in parallel to execution of code via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion, Execution | High | | **Antimalware file exclusion in your virtual machine**<br>(VM_AmTempFileExclusion) | File excluded from your antimalware scanner on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion | Medium |
At the bottom of this page, there's a table describing the Microsoft Defender fo
| **Antimalware real-time protection was disabled temporarily in your virtual machine**<br>(VM_AmTempRealtimeProtectionDisablement) | Real-time protection temporary disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | | **Antimalware real-time protection was disabled temporarily while code was executed in your virtual machine**<br>(VM_AmRealtimeProtectionDisablementAndCodeExec) | Real-time protection temporary disablement of the antimalware extension in parallel to code execution via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | - | High | | **Antimalware scans blocked for files potentially related to malware campaigns on your virtual machine (Preview)**<br>(VM_AmMalwareCampaignRelatedExclusion) | An exclusion rule was detected in your virtual machine to prevent your antimalware extension scanning certain files that are suspected of being related to a malware campaign. The rule was detected by analyzing the Azure Resource Manager operations in your subscription. Attackers might exclude files from antimalware scans to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
-| **Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium |
+| **Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium |
| **Antimalware unusual file exclusion in your virtual machine**<br>(VM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | | **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium | | **Custom script extension with suspicious command in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousCmd) | Custom script extension with suspicious command was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extension to execute a malicious code on your virtual machine via the Azure Resource Manager. | Execution | Medium |
At the bottom of this page, there's a table describing the Microsoft Defender fo
|**Access of htaccess file detected**<br>(VM_SuspectHtaccessFileAccess)|Analysis of host data on %{Compromised Host} detected possible manipulation of a htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running the Apache Web software including basic redirect functionality, or for more advanced functions such as basic password protection. Attackers will often modify htaccess files on machines they have compromised to gain persistence.|Persistence, Defense Evasion, Execution|Medium| |**Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | Files exclusion from antimalware extension with broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such exclusion practically disables the antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium | |**Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | - | High |
-|**Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
+|**Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
|**Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmFileExclusionAndCodeExecution) | File excluded from your antimalware scanner at the same time as code was executed via a custom script extension on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion, Execution | High | |**Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmTempFileExclusionAndCodeExecution) | Temporary file exclusion from antimalware extension in parallel to execution of code via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion, Execution | High | |**Antimalware file exclusion in your virtual machine**<br>(VM_AmTempFileExclusion) | File excluded from your antimalware scanner on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion | Medium |
At the bottom of this page, there's a table describing the Microsoft Defender fo
|**Antimalware real-time protection was disabled temporarily in your virtual machine**<br>(VM_AmTempRealtimeProtectionDisablement) | Real-time protection temporary disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | |**Antimalware real-time protection was disabled temporarily while code was executed in your virtual machine**<br>(VM_AmRealtimeProtectionDisablementAndCodeExec) | Real-time protection temporary disablement of the antimalware extension in parallel to code execution via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | - | High | |**Antimalware scans blocked for files potentially related to malware campaigns on your virtual machine (Preview)**<br>(VM_AmMalwareCampaignRelatedExclusion) | An exclusion rule was detected in your virtual machine to prevent your antimalware extension scanning certain files that are suspected of being related to a malware campaign. The rule was detected by analyzing the Azure Resource Manager operations in your subscription. Attackers might exclude files from antimalware scans to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
-|**Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium |
+|**Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium |
|**Antimalware unusual file exclusion in your virtual machine**<br>(VM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | |**Attempt to stop apt-daily-upgrade.timer service detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected an attempt to stop apt-daily-upgrade.timer service. In some recent attacks, attackers have been observed stopping this service, to download malicious files and granting execution privileges for their attack. This behavior was seen [x] times today on the following machines: [Machine names]|-|Low| |**Attempt to stop apt-daily-upgrade.timer service detected**<br>(VM_TimerServiceDisabled)|Analysis of host data on %{Compromised Host} detected an attempt to stop apt-daily-upgrade.timer service. In some recent attacks, attackers have been observed stopping this service, to download malicious files and granting execution privileges for their attack.|Defense Evasion|Low|
At the bottom of this page, there's a table describing the Microsoft Defender fo
|**Behavior similar to ransomware detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of files that have resemblance of known ransomware that can prevent users from accessing their system or personal files, and demands ransom payment in order to regain access. This behavior was seen [x] times today on the following machines: [Machine names]|-|High| |**Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium | |**Container with a miner image detected**<br>(VM_MinerInContainerImage) | Machine logs indicate execution of a Docker container that runs an image associated with digital currency mining. | Execution | High |
+|**Crypto coin miner execution** <br> (VM_CryptoCoinMinerExecution) | Analysis of host/device data detected a process being started in a way very similar to a coin mining process. | Execution | Medium |
|**Custom script extension with suspicious command in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousCmd) | Custom script extension with suspicious command was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extension to execute a malicious code on your virtual machine via the Azure Resource Manager. | Execution | Medium | |**Custom script extension with suspicious entry-point in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousEntryPoint) | Custom script extension with a suspicious entry-point was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. The entry-point refers to a suspicious GitHub repository.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | |**Custom script extension with suspicious payload in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
At the bottom of this page, there's a table describing the Microsoft Defender fo
|**Shellcode detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected shellcode being generated from the command line. This process could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**SSH server is running inside a container**<br>(VM_ContainerSSH)| Machine logs indicate that an SSH server is running inside a Docker container. While this behavior can be intentional, it frequently indicates that a container is misconfigured or breached.|Execution|Medium| |**Successful SSH brute force attack**<br>(VM_SshBruteForceSuccess)|Analysis of host data has detected a successful brute force attack. The IP %{Attacker source IP} was seen making multiple login attempts. Successful logins were made from that IP with the following user(s): %{Accounts used to successfully sign in to host}. This means that the host may be compromised and controlled by a malicious actor.|Exploitation|High|
+|**Suspect Password File Access** <br> (VM_SuspectPasswordFileAccess) | Analysis of host data has detected suspicious access to encrypted user passwords. | Persistence | Informational |
|**Suspicious Account Creation Detected**|Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name} : this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator.|-|Medium| |**Suspicious compilation detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they have compromised to escalate privileges. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Suspicious compilation detected**<br>(VM_SuspectCompilation)|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they have compromised to escalate privileges.|Privilege Escalation, Exploitation|Medium|
+|**Suspicious DNS Over Https** <br> (VM_SuspiciousDNSOverHttps) | Analysis of host data indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
|**Suspicious failed execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousFailure) | Suspicious failure of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Such failures may be associated with malicious scripts run by this extension. | Execution | Medium | |**Suspicious kernel module detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a shared object file being loaded as a kernel module. This could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Suspicious password access [seen multiple times]**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}. This behavior was seen [x] times today on the following machines: [Machine names]|-|Informational|
At the bottom of this page, there's a table describing the Microsoft Defender fo
|**Suspicious PHP execution detected**<br>(VM_SuspectPhp)|Machine logs indicate that a suspicious PHP process is running. The action included an attempt to run OS commands or PHP code from the command line using the PHP process. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells.|Execution|Medium| |**Suspicious request to Kubernetes API**<br>(VM_KubernetesAPI)|Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container.|Execution|Medium| |**Suspicious request to the Kubernetes Dashboard**<br>(VM_KubernetesDashboard) | Machine logs indicate that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. | Lateral movement | Medium |
+|**Threat Intel Command Line Suspect Domain** <br> (VM_ThreatIntelCommandLineSuspectDomain) | The process 'PROCESSNAME' on 'HOST' connected to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred.| Initial Access | Medium |
|**Unusual config reset in your virtual machine**<br>(VM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium | |**Unusual deletion of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualDeletion) | Unusual deletion of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | |**Unusual execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualExecution) | Unusual execution of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
[Further details and notes](defender-for-kubernetes-introduction.md)
-|Alert (alert type)|Description|MITRE tactics<br>([Learn more](#intentions))|Severity|
-|-|-|:-:|--|
-| **A file was downloaded and executed (Preview)**<br>(K8S.NODE_LinuxSuspiciousActivity) | Analysis of processes running within a container indicates that a file has been downloaded to the container, given execution privileges and then executed. | Execution | Medium |
-| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container indicates that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
-| **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
-| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
-| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of host/device data detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
-| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container detected execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
-| **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
-| **Command within a container running with high privileges (Preview)**<br>(K8S.NODE_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low |
-| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low |
-| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium |
-| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
-| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
-| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container detected download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
-| **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium |
-| **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container detected suspicious download of a remote file. | Persistence | Low |
-| **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container detected suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
-| **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container detected suspicious use of the useradd command. | Persistence | Medium |
-| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
-| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of host data detected the execution of a process or command normally associated with digital currency mining. | Execution | High |
-| **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container indicates a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
-| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
-| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | Execution | Medium |
-| **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
-| **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port. | Execution, Exploitation | Medium |
-| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
-| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
-| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
-| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low |
-| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
-| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
-| **Kubernetes events deleted**<br>(K8S_DeleteEvents) | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Medium |
-| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
-| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium |
-| **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
-| **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
-| **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container indicate that a suspicious process was running. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
-| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
-| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
-| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container indicates a suspicious tool ran. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
-| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
-| **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
-| **Possible credential access tool detected (Preview)**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) | Analysis of processes running within a container indicates a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
-| **Possible Cryptocoinminer download detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerDownload) | Analysis of processes running within a container detected the download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
-| **Possible data exfiltration detected (Preview)**<br>(K8S.NODE_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
-| **Possible Log Tampering Activity Detected (Preview)**<br>(K8S.NODE_SystemLogRemoval) | Analysis of processes running within a container detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
-| **Possible password change using crypt-method detected (Preview)**<br>(K8S.NODE_SuspectPasswordChange) | Analysis of processes running within a container detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
-| **Potential overriding of common files (Preview)**<br>(K8S.NODE_OverridingCommonFiles) | Analysis of processes running within a container detected common files as a way to obfuscate their actions or for persistence. | Persistence | Medium |
-| **Potential port forwarding to external IP address (Preview)**<br>(K8S.NODE_SuspectPortForwarding) | Analysis of processes running within a container detected the initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
-| **Potential reverse shell detected (Preview)**<br>(K8S.NODE_ReverseShell) | Analysis of processes running within a container detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
-| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
-| **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
-| **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
-| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
-| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of host/device data detected the use of a screen capture tool. Attackers may use these tools to access private data. | Collection | Low |
-| **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium |
-| **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container detected attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
-| **SSH server is running inside a container (Preview) (Preview)**<br>(K8S.NODE_ContainerSSH) | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium |
-| **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container detected suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium |
-| **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of host/device data detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
-| **Suspicious request to Kubernetes API (Preview)**<br>(K8S.NODE_KubernetesAPI) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
-| **Suspicious request to the Kubernetes Dashboard (Preview)**<br>(K8S.NODE_KubernetesDashboard) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
-| | | | |
+| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+|--|--|:-:|--|
+| **A file was downloaded and executed (Preview)**<br>(K8S.NODE_LinuxSuspiciousActivity) | Analysis of processes running within a container indicates that a file has been downloaded to the container, given execution privileges and then executed. | Execution | Medium |
+| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container indicates that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
+| **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
+| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
+| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of host/device data detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
+| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container detected execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
+| **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
+| **Command within a container running with high privileges (Preview)**<br>(K8S.NODE_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low |
+| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low |
+| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium |
+| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
+| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
+| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container detected download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
+| **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium |
+| **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container detected suspicious download of a remote file. | Persistence | Low |
+| **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container detected suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
+| **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container detected suspicious use of the useradd command. | Persistence | Medium |
+| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
+| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of host data detected the execution of a process or command normally associated with digital currency mining. | Execution | High |
+| **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container indicates a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
+| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
+| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could be either legitimate activity or an indication of a compromised host. | Execution | Medium |
+| **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
+| **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, the Docker configuration does not use encryption or authentication when a TCP socket is enabled, which allows full access to the Docker daemon by anyone with access to the relevant port. | Execution, Exploitation | Medium |
+| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
+| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
+| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
+| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low |
+| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
+| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes that contain information about changes in the cluster. Attackers might delete those events to hide their operations in the cluster. | Defense Evasion | Medium |
+| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of a Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
+| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium |
+| **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
+| **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
+| **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container indicates that a suspicious process was running. This is often associated with the MITRE 54ndc47 agent, which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
+| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container indicates a suspicious tool ran. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
+| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
+| **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
+| **Possible credential access tool detected (Preview)**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) | Analysis of processes running within a container indicates a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
+| **Possible Cryptocoinminer download detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerDownload) | Analysis of processes running within a container detected the download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
+| **Possible data exfiltration detected (Preview)**<br>(K8S.NODE_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
+| **Possible Log Tampering Activity Detected (Preview)**<br>(K8S.NODE_SystemLogRemoval) | Analysis of processes running within a container detected possible removal of files that track user activity. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
+| **Possible password change using crypt-method detected (Preview)**<br>(K8S.NODE_SuspectPasswordChange) | Analysis of processes running within a container detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
+| **Potential overriding of common files (Preview)**<br>(K8S.NODE_OverridingCommonFiles) | Analysis of processes running within a container detected the overriding of common files, possibly as a way to obfuscate actions or to maintain persistence. | Persistence | Medium |
+| **Potential port forwarding to external IP address (Preview)**<br>(K8S.NODE_SuspectPortForwarding) | Analysis of processes running within a container detected the initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
+| **Potential reverse shell detected (Preview)**<br>(K8S.NODE_ReverseShell) | Analysis of processes running within a container detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
+| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
+| **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
+| **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a manner similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
+| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of host/device data detected the use of a screen capture tool. Attackers may use these tools to access private data. | Collection | Low |
+| **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium |
+| **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
+| **SSH server is running inside a container (Preview)**<br>(K8S.NODE_ContainerSSH) | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium |
+| **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container detected suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium |
+| **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of host/device data detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
+| **Suspicious request to Kubernetes API (Preview)**<br>(K8S.NODE_KubernetesAPI) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
+| **Suspicious request to the Kubernetes Dashboard (Preview)**<br>(K8S.NODE_KubernetesDashboard) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
+| **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
+| **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container detected suspicious access to encrypted user passwords. | Persistence | Informational |
+| **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
+| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
## <a name="alerts-sql-db-and-warehouse"></a>Alerts for SQL Database and Azure Synapse Analytics
digital-twins How To Send Twin To Twin Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-send-twin-to-twin-events.md
To subscribe your Azure function, you'll create an **Event Grid subscription** t
Use the following CLI command, filling in placeholders for your subscription ID, resource group, function app, and function name. ```azurecli-interactive
-az eventgrid event-subscription create --name <name-for-your-event-subscription> --source-resource-id /subscriptions/<subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.EventGrid/topics/<your-event-grid-topic> \ --endpoint-type azurefunction --endpoint /subscriptions/<subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.Web/sites/<your-function-app-name>/functions/<function-name>
+az eventgrid event-subscription create --name <name-for-your-event-subscription> --source-resource-id /subscriptions/<subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.EventGrid/topics/<your-event-grid-topic> --endpoint-type azurefunction --endpoint /subscriptions/<subscription-ID>/resourceGroups/<your-resource-group>/providers/Microsoft.Web/sites/<your-function-app-name>/functions/<function-name>
``` Now, your function can receive events through your Event Grid topic. The data flow setup is complete.
education-hub Create Lab Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/create-lab-education-hub.md
Last updated 12/21/2021
-<!-- 1. H1
-Required. Start your H1 with a verb. Pick an H1 that clearly conveys the task the
-user will complete.
--> # Create a lab in Azure Education Hub through REST APIs. This article will walk you through how to create a lab, add students to that lab, and verify that the lab has been created.
education-hub Delete Lab Education Hub Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/delete-lab-education-hub-apis.md
+
+ Title: Delete a lab in Azure Education Hub through REST APIs
+description: Learn how to delete a lab in Azure Education Hub using REST APIs
++++ Last updated : 1/24/2022+++
+# Delete a lab in Education Hub through REST APIs
+
+This article will walk you through how to delete a lab that has been created in Education Hub by using REST APIs. Note that all students must be deleted from the lab before the lab itself can be deleted.
+
+## Prerequisites
+
+- Know your billing account ID, billing profile ID, and invoice section ID
+- Have an Edu-approved Azure account
+- Have a lab already created in Education Hub
+
+## Delete students from a lab
+
+As mentioned previously, you must delete every student in the lab before you can delete the lab itself.
+
+To find all of the students in a lab, call the API below. Replace the placeholder text surrounded by <>.
+
+```json
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students?includeDeleted=false&api-version=2021-12-01-preview
+```
+
+This call returns information about every student in the specified lab. Be sure to note the ID of every student in the lab, because those IDs are what you will use to delete the students.
+
+```json
+{
+ "value": [
+ {
+ "id": "string",
+ "name": "string",
+ "type": "string",
+ "systemData": {
+ "createdBy": "string",
+ "createdByType": "User",
+ "createdAt": "2021-12-22T17:17:07.542Z",
+ "lastModifiedBy": "string",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2021-12-22T17:17:07.542Z"
+ },
+ "properties": {
+ "firstName": "string",
+ "lastName": "string",
+ "email": "string",
+ "role": "Student",
+ "budget": {
+ "currency": "string",
+ "value": 0
+ },
+ "subscriptionId": "string",
+ "expirationDate": "2021-12-22T17:17:07.542Z",
+ "status": "Active",
+ "effectiveDate": "2021-12-22T17:17:07.542Z",
+ "subscriptionAlias": "string",
+ "subscriptionInviteLastSentDate": "string"
+ }
+ }
+ ],
+ "nextLink": "string"
+}
+```
+
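+If you prefer to script this step, the following is a minimal Python sketch (not part of the original article) that calls the same endpoint and collects the student IDs. It assumes the `requests` and `azure-identity` packages are installed, that your signed-in identity has permission to read the lab, and that the billing IDs below are placeholders you replace with your own values.
+
+```python
+# Minimal sketch, assuming `pip install requests azure-identity` and that the
+# placeholder billing IDs below are replaced with your own values.
+import requests
+from azure.identity import DefaultAzureCredential
+
+BILLING_ACCOUNT = "<BillingAccountID>"   # placeholder
+BILLING_PROFILE = "<BillingProfileID>"   # placeholder
+INVOICE_SECTION = "<InvoiceSectionID>"   # placeholder
+
+BASE = (
+    "https://management.azure.com/providers/Microsoft.Billing"
+    f"/billingAccounts/{BILLING_ACCOUNT}"
+    f"/billingProfiles/{BILLING_PROFILE}"
+    f"/invoiceSections/{INVOICE_SECTION}"
+    "/providers/Microsoft.Education/labs/default"
+)
+
+# Acquire an ARM token for the signed-in identity.
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+headers = {"Authorization": f"Bearer {token}"}
+
+resp = requests.get(
+    f"{BASE}/students",
+    params={"includeDeleted": "false", "api-version": "2021-12-01-preview"},
+    headers=headers,
+)
+resp.raise_for_status()
+
+# Each entry's "id" is a full resource ID; the student ID is assumed here to
+# be its trailing segment.
+student_ids = [s["id"].split("/")[-1] for s in resp.json()["value"]]
+print(student_ids)
+```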
+After you have the student IDs, you can begin deleting students from the lab. Replace <StudentID> in the API call below with a student ID obtained in the previous step.
+
+```json
+DELETE https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students/<StudentID>?api-version=2021-12-01-preview
+```
+
+The API will respond that the student has been deleted:
+
+```json
+student deleted
+```
+
+## Delete the lab
+
+After all of the students have been deleted from the lab, you can delete the lab itself.
+
+Call the endpoint below, making sure to replace the sections surrounded by <>.
+
+```json
+DELETE https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default?api-version=2021-12-01-preview
+```
+
+The API will respond that the Lab has been deleted:
+
+```json
+Lab deleted
+```
+
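+For completeness, here's a hedged end-to-end sketch in Python that deletes every student and then the lab, mirroring the calls above. It makes the same assumptions as the earlier sketch: `requests` and `azure-identity` installed, placeholder billing IDs, and a student ID taken from the trailing segment of each returned resource ID.
+
+```python
+# End-to-end teardown sketch: delete every student, then delete the lab.
+# Assumes `pip install requests azure-identity`; replace the placeholder
+# billing IDs with your own values.
+import requests
+from azure.identity import DefaultAzureCredential
+
+BILLING_ACCOUNT = "<BillingAccountID>"   # placeholder
+BILLING_PROFILE = "<BillingProfileID>"   # placeholder
+INVOICE_SECTION = "<InvoiceSectionID>"   # placeholder
+
+BASE = (
+    "https://management.azure.com/providers/Microsoft.Billing"
+    f"/billingAccounts/{BILLING_ACCOUNT}"
+    f"/billingProfiles/{BILLING_PROFILE}"
+    f"/invoiceSections/{INVOICE_SECTION}"
+    "/providers/Microsoft.Education/labs/default"
+)
+PARAMS = {"api-version": "2021-12-01-preview"}
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+headers = {"Authorization": f"Bearer {token}"}
+
+# List the students still in the lab.
+resp = requests.get(f"{BASE}/students", params={**PARAMS, "includeDeleted": "false"}, headers=headers)
+resp.raise_for_status()
+
+# Delete each student; the student ID is assumed to be the trailing segment
+# of the returned resource ID.
+for student in resp.json()["value"]:
+    student_id = student["id"].split("/")[-1]
+    requests.delete(f"{BASE}/students/{student_id}", params=PARAMS, headers=headers).raise_for_status()
+
+# With the lab now empty, delete the lab itself.
+requests.delete(BASE, params=PARAMS, headers=headers).raise_for_status()
+print("lab deleted")
+```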
+## Next steps
+In this article, you learned how to delete students from a lab and then delete the lab itself. Use the links below if you wish to create a new lab or read more documentation.
+
+- [Create a lab using REST APIs](create-lab-education-hub.md)
+
+- [Support options](educator-service-desk.md)
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
Previously updated : 12/07/2021 Last updated : 01/24/2022 ++ # ExpressRoute partners and peering locations
> * [Providers By Location](expressroute-locations-providers.md)
-The tables in this article provide information on ExpressRoute geographical coverage and locations, ExpressRoute connectivity providers,and ExpressRoute System Integrators (SIs).
+The tables in this article provide information on ExpressRoute geographical coverage and locations, ExpressRoute connectivity providers, and ExpressRoute System Integrators (SIs).
> [!Note] > Azure regions and ExpressRoute locations are two distinct and different concepts, understanding the difference between the two is critical to exploring Azure hybrid networking connectivity.
The tables in this article provide information on ExpressRoute geographical cove
> ## Azure regions
-Azure regions are global datacenters where Azure compute, networking and storage resources are located. When creating an Azure resource, a customer needs to select a resource location. The resource location determines which Azure datacenter (or availability zone) the resource is created in.
+Azure regions are global datacenters where Azure compute, networking, and storage resources are located. When creating an Azure resource, a customer needs to select a resource location. The resource location determines which Azure datacenter (or availability zone) the resource is created in.
## ExpressRoute locations ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are co-location facilities where Microsoft Enterprise edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location does not need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
The following table shows connectivity locations and the service providers for e
| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | 10G | Devoli, Kordia, Megaport, REANNZ, Spark NZ, Vocus Group NZ | | **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | 10G | AIS, National Telecom UIH | | **Berlin** | [NTT GDC](https://www.e-shelter.de/en/location/berlin-1-data-center) | 1 | Germany North | 10G | Colt, Equinix, NTT Global DataCenters EMEA|
-| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | 10G | Equinix |
+| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | 10G | CenturyLink Cloud Connect, Equinix |
| **Busan** | [LG CNS](https://www.lgcns.com/En/Service/DataCenter) | 2 | Korea South | n/a | LG CNS | | **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | 10G, 100G | Ascenty | | **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | 10G, 100G | CDC | | **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| 10G, 100G | CDC, Equinix | | **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco, Vodacom |
-| **Chennai** | Tata Communications | 2 | South India | 10G | BSNL, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea |
+| **Chennai** | Tata Communications | 2 | South India | 10G | BSNL, DE-CIX, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea |
| **Chennai2** | Airtel | 2 | South India | 10G | Airtel | | **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo | | **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | 10G, 100G | CoreSite | | **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | 10G | Interxion |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | 10G, 100G | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Internet2, Level 3 Communications, Megaport, Neutrona Networks, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo|
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | 10G, 100G | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo|
| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | 10G, 100G | CoreSite, Megaport, PacketFabric, Zayo | | **Dubai** | [PCCS](https://www.pacificcontrols.net/cloudservices/https://docsupdatetracker.net/index.html) | 3 | UAE North | n/a | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom |
-| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport |
+| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo|
| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | 10G, 100G | Interxion |
-| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems |
-| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Deutsche Telekom AG, Equinix |
+| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems |
+| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | DE-CIX, Deutsche Telekom AG, Equinix |
| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Colt, Equinix, InterCloud, Megaport, Swisscom |
-| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
+| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon, Zayo |
| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel | | **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | 10G | NTT Communications, Telin, XL Axiata | | **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco, Vodacom | | **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom | | **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Megaport, PacketFabric |
-| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
-| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, GTT, IX Reach, Equinix, JISC, Megaport, NTT Global DataCenters EMEA, SES, Sohonet, Telehouse - KDDI |
+| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, Intelsat, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
+| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, GTT, Interxion, IX Reach, JISC, Megaport, NTT Global DataCenters EMEA, Orange, SES, Sohonet, Telehouse - KDDI, Zayo |
| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | 10G, 100G | CoreSite, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* | | **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | 10G, 100G | Equinix |
-| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | Interxion, Megaport |
+| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | DE-CIX, Interxion, Megaport, Telefonica |
| **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt, DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect |
-| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | 10G, 100G | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Telstra Corporation, TPG Telecom |
+| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | 10G, 100G | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Orange, Telstra Corporation, TPG Telecom |
| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | 10G, 100G | Claro, C3ntro, Equinix, Megaport, Neutrona Networks | | **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | 10G | Colt, Equinix, Fastweb, IRIDEOS, Retelit | | **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) | 1 | n/a | 10G, 100G | Cologix, Megaport |
-| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | 10G, 100G | Bell Canada, Cologix, Fibrenoire, Megaport, Telus, Zayo |
+| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | 10G, 100G | Bell Canada, CenturyLink Cloud Connect, Cologix, Fibrenoire, Megaport, Telus, Zayo |
| **Mumbai** | Tata Communications | 2 | West India | 10G | BSNL, DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon |
-| **Mumbai2** | Airtel | 2 | West India | 10G | Airtel, Sify, Vodafone Idea |
+| **Mumbai2** | Airtel | 2 | West India | 10G | Airtel, Sify, Orange, Vodafone Idea |
| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | 10G | Colt, DE-CIX, Megaport |
-| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Colt, Coresite, DE-CIX, Equinix, InterCloud, Megaport, Packet, Zayo |
+| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Coresite, DE-CIX, Equinix, InterCloud, Megaport, Packet, Zayo |
| **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | 10G, 100G | British Telecom, Colt, Jisc, Level 3 Communications, Next Generation Data | | **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | 10G, 100G | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications | | **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | 10G, 100G | GlobalConnect, Megaport, Telenor, Telia Carrier | | **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo | | **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | 10G | Megaport, NextDC |
-| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | 10G, 100G | Cox Business Cloud Port, Megaport, Zayo |
+| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | 10G, 100G | Cox Business Cloud Port, CenturyLink Cloud Connect, Megaport, Zayo |
| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| 10G | Tata Communications | | **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | 10G, 100G | Bell Canada, Equinix, Megaport, Telus | | **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | 10G | Transtelco| | **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | 10G, 100G | | | **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | 10G | Equinix |
-| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | 10G, 100G | CenturyLink Cloud Connect, Megaport |
+| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | 10G, 100G | CenturyLink Cloud Connect, Megaport, Zayo |
| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | 10G, 100G | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, InterCloud, Level 3 Communications, Neutrona Networks, Orange, Tata Communications, Telefonica, UOLDIVEO |
-| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | 10G, 100G | |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | 10G, 100G | Aryaka Networks, Equinix, Level 3 Communications, Megaport, Telus, Zayo |
+| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | 10G, 100G | Ascenty Data Centers |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | 10G, 100G | Aryaka Networks, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Telus, Zayo |
| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | 10G, 100G | KINX, KT, LG CNS, LGUplus, Equinix, Sejong Telecom, SK Telecom |
-| **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | |
+| **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT |
| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, InterCloud, Internet2, IX Reach, Packet, PacketFabric, Level 3 Communications, Megaport, Orange, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | 10G, 100G | Colt, Coresite | | **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone |
-| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | 10G, 100G | CenturyLink Cloud Connect, China Unicom Global, Colt, Epsilon Global Communications, Equinix, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |
+| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | 10G, 100G | CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Epsilon Global Communications, Equinix, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |
| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | 10G, 100G |GlobalConnect, Megaport | | **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | n/a | 10G | Equinix, Megaport, Telia Carrier | | **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | 10G, 100G | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ | | **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | 10G, 100G | Megaport, NextDC | | **Taipei** | Chief Telecom | 2 | n/a | 10G | Chief Telecom, Chunghwa Telecom, FarEasTone | | **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | N/A | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon </br></br> **We are currently unable to support new ExpressRoute circuits in Tokyo. Please create new circuits in Tokyo2 or Osaka.* |
-| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | 10G, 100G | AT TOKYO, China Unicom Global, Megaport, Tokai Communications |
+| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | 10G, 100G | AT TOKYO, China Unicom Global, Colt, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |
| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | 10G, 100G | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Telus, Verizon, Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | 10G, 100G | |
-| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | 10G | Bell Canada, Cologix, Megaport, Telus |
+| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | 10G | Bell Canada, Cologix, Megaport, Telus, Zayo |
| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/), [Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US, East US 2 | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US, East US 2 | 10G, 100G | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo |
-| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | 10G, 100G | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom |
+| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | 10G, 100G | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom, Zayo |
**+** denotes coming soon ### National cloud environments
-Azure national clouds are isolated from each other and from global commercial Azure. ExpressRoute for one Azure cloud can't connect to the Azure regions in the others.
+Azure national clouds are isolated from each other and from global commercial Azure. ExpressRoute for one Azure cloud cannot connect to the Azure regions in the others.
### US Government cloud | **Location** | **Address** | **Local Azure regions**| **ER Direct** | **Service providers** |
If your connectivity provider is not listed in previous sections, you can still
* Follow steps in [Create an ExpressRoute circuit](expressroute-howto-circuit-classic.md) to set up connectivity. ## Connectivity through satellite operators
-If you are remote and don't have fiber connectivity or you want to explore other connectivity options you can check the following satellite operators.
+If you are remote and do not have fiber connectivity or want to explore other connectivity options, you can check the following satellite operators.
* Intelsat * [SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
Title: 'Connectivity providers and locations: Azure ExpressRoute | Microsoft Doc
description: This article provides a detailed overview of locations where services are offered and how to connect to Azure regions. Sorted by connectivity provider. - Previously updated : 08/26/2021 Last updated : 01/24/2022 + # ExpressRoute connectivity partners and peering locations
> * [Providers By Location](expressroute-locations-providers.md)
-The tables in this article provide information on ExpressRoute geographical coverage and locations, ExpressRoute connectivity providers,and ExpressRoute System Integrators (SIs).
+The tables in this article provide information on ExpressRoute geographical coverage and locations, ExpressRoute connectivity providers, and ExpressRoute System Integrators (SIs).
> [!Note] > Azure regions and ExpressRoute locations are two distinct and different concepts, understanding the difference between the two is critical to exploring Azure hybrid networking connectivity.
The tables in this article provide information on ExpressRoute geographical cove
> ## Azure regions
-Azure regions are global datacenters where Azure compute, networking and storage resources are located. When creating an Azure resource, a customer needs to select a resource location. The resource location determines which Azure datacenter (or availability zone) the resource is created in.
+Azure regions are global datacenters where Azure compute, networking, and storage resources are located. When creating an Azure resource, a customer needs to select a resource location. The resource location determines which Azure datacenter (or availability zone) the resource is created in.
## ExpressRoute locations ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are co-location facilities where Microsoft Enterprise Edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location does not need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
The following table shows locations by service provider. If you want to view ava
| **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2, Mumbai2 | | **[AIS](https://business.ais.co.th/solution/en/azure-expressroute.html)** | Supported | Supported | Bangkok | | **[Aryaka Networks](https://www.aryaka.com/)** |Supported |Supported |Amsterdam, Chicago, Dallas, Hong Kong SAR, Sao Paulo, Seattle, Silicon Valley, Singapore, Tokyo, Washington DC |
-| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** |Supported |Supported | Campinas, Sao Paulo |
+| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** |Supported |Supported | Campinas, Sao Paulo, Sao Paulo2 |
| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Amsterdam, Chicago, Dallas, Frankfurt, London, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC | | **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | Supported | Supported | Osaka, Tokyo2 | | **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/bics-cloud-connect-an-official-microsoft-azure-technology-partner/)** | Supported | Supported | Amsterdam2, London2 |
The following table shows locations by service provider. If you want to view ava
| **[BSNL](https://www.bsnl.co.in/opencms/bsnl/BSNL/services/enterprises/cloudway.html)** |Supported |Supported |Chennai, Mumbai | | **[C3ntro](https://www.c3ntro.com/)** |Supported |Supported |Miami | | **CDC** | Supported | Supported | Canberra, Canberra2 |
-| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, New York, Paris, San Antonio, Silicon Valley, Singapore2, Tokyo, Toronto, Washington DC, Washington DC2 |
-| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported |Hong Kong, Taipei |
+| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Bogota, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Singapore2, Tokyo, Toronto, Washington DC, Washington DC2 |
+| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported | Hong Kong, Taipei |
| **China Mobile International** |Supported |Supported | Hong Kong, Hong Kong2, Singapore |
-| **China Telecom Global** |Supported |Supported |Hong Kong, Hong Kong2 |
-| **[China Unicom Global](https://cloudbond.chinaunicom.cn/home-en)** |Supported |Supported | Hong Kong, Singapore2, Tokyo2 |
-| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported |Taipei |
-| **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported |Miami |
-| **[Cologix](https://www.cologix.com/hyperscale/microsoft-azure/)** |Supported |Supported |Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
-| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, New York, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Washington DC, Zurich |
-| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported |Chicago, Silicon Valley, Washington DC |
-| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported |Chicago, Chicago2, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 |
+| **China Telecom Global** |Supported |Supported | Hong Kong, Hong Kong2 |
+| **[China Unicom Global](https://cloudbond.chinaunicom.cn/home-en)** |Supported |Supported | Frankfurt, Hong Kong, Singapore2, Tokyo2 |
+| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported | Taipei |
+| **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami |
+| **[Cologix](https://www.cologix.com/hyperscale/microsoft-azure/)** |Supported |Supported | Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
+| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Tokyo2, Washington DC, Zurich |
+| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported | Chicago, Silicon Valley, Washington DC |
+| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported | Chicago, Chicago2, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 |
| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** |Supported |Supported | Dallas, Phoenix, Silicon Valley, Washington DC |
-| **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported |Amsterdam2, Dubai2, Frankfurt, Marseille, Mumbai, Munich, New York |
+| **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported | Amsterdam2, Chennai, Dubai2, Frankfurt, Frankfurt2, Madrid, Marseille, Mumbai, Munich, New York, Singapore2 |
| **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland, Melbourne, Sydney |
-| **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported |Frankfurt |
-| **[Deutsche Telekom AG](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported |Frankfurt2 |
+| **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt |
+| **[Deutsche Telekom AG](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt2 |
| **du datamena** |Supported |Supported | Dubai2 | | **eir** |Supported |Supported |Dublin|
-| **[Epsilon Global Communications](https://www.epsilontel.com/solutions/direct-cloud-connect)** |Supported |Supported |Singapore, Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
+| **[Epsilon Global Communications](https://www.epsilontel.com/solutions/direct-cloud-connect)** |Supported |Supported | Singapore, Singapore2 |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported |Dubai|
-| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported |Amsterdam, Amsterdam2, Dublin, Frankfurt, London |
-| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported |Taipei|
-| **[Fastweb](https://www.fastweb.it/grandi-aziende/cloud/scheda-prodotto/fastcloud-interconnect/)** | Supported |Supported |Milan|
-| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** |Supported |Supported |Montreal|
-| **[GBI](https://www.gbiinc.com/microsoft-azure/)** |Supported |Supported |Dubai2, Frankfurt|
-| **[GÉANT](https://www.geant.org/Networks)** |Supported |Supported |Amsterdam, Amsterdam2, Dublin, Frankfurt, Marseille |
+| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London |
+| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported | Taipei |
+| **[Fastweb](https://www.fastweb.it/grandi-aziende/cloud/scheda-prodotto/fastcloud-interconnect/)** | Supported |Supported | Milan |
+| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** |Supported |Supported | Montreal, Toronto2 |
+| **[GBI](https://www.gbiinc.com/microsoft-azure/)** |Supported |Supported | Dubai2, Frankfurt |
+| **[GÉANT](https://www.geant.org/Networks)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, Marseille |
| **[GlobalConnect](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported |Supported | Oslo, Stavanger |
| **GTT** |Supported |Supported | London2 |
| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai, Mumbai |
| **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 |
-| **Intelsat** | Supported | Supported | Washington DC2 |
-| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported |Amsterdam, Chicago, Frankfurt, Geneva, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
+| **Intelsat** | Supported | Supported | London2, Washington DC2 |
+| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Frankfurt, Geneva, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported | Chicago, Dallas, Silicon Valley, Washington DC |
| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported | Osaka, Tokyo |
| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported | Cape Town, Johannesburg, London |
-| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Copenhagen, Dublin, Dublin2, Frankfurt, London, Madrid, Marseille, Paris, Zurich |
-| **[IRIDEOS](https://irideos.it/)** |Supported |Supported |Milan |
-| **Iron Mountain** | Supported |Supported |Washington DC |
-| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**|Supported |Supported | Amsterdam, London2, Silicon Valley, Toronto, Washington DC |
+| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Copenhagen, Dublin, Dublin2, Frankfurt, London, London2, Madrid, Marseille, Paris, Zurich |
+| **[IRIDEOS](https://irideos.it/)** |Supported |Supported | Milan |
+| **Iron Mountain** | Supported |Supported | Washington DC |
+| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**|Supported |Supported | Amsterdam, London2, Silicon Valley, Tokyo2, Toronto, Washington DC |
| **Jaguar Network** |Supported |Supported |Marseille, Paris |
-| **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** |Supported |Supported |London, Newport(Wales) |
-| **[KINX](https://www.kinx.net/service/cloudhub/ms-expressroute/?lang=en)** |Supported |Supported |Seoul |
-| **[Kordia](https://www.kordia.co.nz/cloudconnect)** | Supported |Supported |Auckland, Sydney |
+| **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** |Supported |Supported | London, Newport(Wales) |
+| **[KINX](https://www.kinx.net/service/cloudhub/ms-expressroute/?lang=en)** |Supported |Supported | Seoul |
+| **[Kordia](https://www.kordia.co.nz/cloudconnect)** | Supported |Supported | Auckland, Sydney |
| **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | Supported | Supported | Amsterdam |
-| **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul |
-| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported |Amsterdam, Chicago, Dallas, London, Newport (Wales), Sao Paulo, Seattle, Silicon Valley, Singapore, Washington DC |
-| **LG CNS** |Supported |Supported |Busan, Seoul |
-| **[Liquid Telecom](https://www.liquidtelecom.com/products-and-services/cloud.html)** |Supported |Supported |Cape Town, Johannesburg |
-| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported |Seoul |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported |Amsterdam, Atlanta, Auckland, Chennai, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2 Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
+| **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul, Seoul2 |
+| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported | Amsterdam, Chicago, Dallas, London, Newport (Wales), Sao Paulo, Seattle, Silicon Valley, Singapore, Washington DC |
+| **LG CNS** |Supported |Supported | Busan, Seoul |
+| **[Liquid Telecom](https://www.liquidtelecom.com/products-and-services/cloud.html)** |Supported |Supported | Cape Town, Johannesburg |
+| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported | Amsterdam, Atlanta, Auckland, Chennai, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2, Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported |London |
-| **MTN Global Connect** |Supported |Supported |Cape Town,Johannesburg|
+| **MTN Global Connect** |Supported |Supported |Cape Town, Johannesburg|
| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** |Supported |Supported | Bangkok |
| **[Neutrona Networks](https://www.neutrona.com/index.php/azure-expressroute/)** |Supported |Supported | Dallas, Los Angeles, Miami, Sao Paulo, Washington DC |
-| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** |Supported |Supported |Newport(Wales) |
-| **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** |Supported |Supported |Melbourne, Perth, Sydney, Sydney2 |
-| **NL-IX** |Supported |Supported |Amsterdam2 |
-| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** |Supported |Supported |Amsterdam2 |
-| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** |Supported |Supported |Amsterdam, Hong Kong SAR, Jakarta, London, Los Angeles, Osaka, Singapore, Sydney, Tokyo, Washington DC |
-| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported |Tokyo |
-| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported |Amsterdam2, Berlin, Frankfurt, London2 |
-| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported |Osaka |
-| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported |Marseille |
-| **[Optus](https://www.optus.com.au/enterprise/)** |Supported |Supported |Melbourne, Sydney |
-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported |Amsterdam, Amsterdam2, Dubai2, Frankfurt, Hong Kong SAR, Johannesburg, London, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
+| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** |Supported |Supported | Newport(Wales) |
+| **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** |Supported |Supported | Melbourne, Perth, Sydney, Sydney2 |
+| **NL-IX** |Supported |Supported | Amsterdam2 |
+| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** |Supported |Supported | Amsterdam2 |
+| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** |Supported |Supported | Amsterdam, Hong Kong SAR, Jakarta, London, Los Angeles, Osaka, Singapore, Sydney, Tokyo, Washington DC |
+| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported | Tokyo |
+| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported | Amsterdam2, Berlin, Frankfurt, London2 |
+| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported | Osaka |
+| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Marseille |
+| **[Optus](https://www.optus.com.au/enterprise/)** |Supported |Supported | Melbourne, Sydney |
+| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam, Amsterdam2, Dallas, Dubai2, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Mumbai2, Melbourne, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
| **[Orixcom](https://www.orixcom.com/cloud-solutions/)** | Supported | Supported | Dubai2 |
-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported |Chicago, Dallas, Denver, Las Vegas, Silicon Valley, Washington DC |
-| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported |Chicago, Hong Kong, Hong Kong2, London, Singapore2 |
+| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Chicago, Dallas, Denver, Las Vegas, Silicon Valley, Washington DC |
+| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago, Hong Kong, Hong Kong2, London, Singapore2, Tokyo2 |
| **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland |
| **[Reliance Jio](https://www.jio.com/business/jio-cloud-connect)** | Supported | Supported | Mumbai |
| **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan |
| **[Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms)** |Supported |Supported | Seoul |
| **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported |Supported | London2, Washington DC |
| **[SIFY](http://telecom.sify.com/azure-expressroute.html)** |Supported |Supported | Chennai, Mumbai2 |
-| **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported |Hong Kong2, Singapore, Singapore2 |
-| **[SK Telecom](http://b2b.tworld.co.kr/bizts/solution/solutionTemplate.bs?solutionId=0085)** |Supported |Supported |Seoul |
-| **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |Supported |Supported |Osaka, Tokyo |
+| **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported | Hong Kong2, Singapore, Singapore2 |
+| **[SK Telecom](http://b2b.tworld.co.kr/bizts/solution/solutionTemplate.bs?solutionId=0085)** |Supported |Supported | Seoul |
+| **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |Supported |Supported | Osaka, Tokyo |
| **[Sohonet](https://www.sohonet.com/fastlane/)** |Supported |Supported | London2 |
| **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** |Supported |Supported | Auckland, Sydney |
| **[Sprint](https://business.sprint.com/solutions/cloud-networking/)** |Supported |Supported | Chicago, Silicon Valley, Washington DC |
| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva, Zurich |
-| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** |Supported |Supported |Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Pune, Sao Paulo, Silicon Valley, Singapore, Washington DC |
-| **[Telefonica](https://www.telefonica.com/es/home)** |Supported |Supported |Amsterdam, Sao Paulo |
-| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported |London, London2, Singapore2, Tokyo |
-| **Telenor** |Supported |Supported |Amsterdam, London, Oslo |
-| **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported |Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Silicon Valley, Stockholm, Washington DC |
+| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** |Supported |Supported | Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Pune, Sao Paulo, Silicon Valley, Singapore, Washington DC |
+| **[Telefonica](https://www.telefonica.com/es/home)** |Supported |Supported | Amsterdam, Sao Paulo, Madrid |
+| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported | London, London2, Singapore2, Tokyo |
+| **Telenor** |Supported |Supported | Amsterdam, London, Oslo |
+| **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported | Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Silicon Valley, Stockholm, Washington DC |
| **[Telin](https://www.telin.net/product/data-connectivity/telin-cloud-exchange)** | Supported | Supported | Jakarta |
| **Telmex Uninet** | Supported | Supported | Dallas |
-| **[Telstra Corporation](https://www.telstra.com.au/business-enterprise/network-services/networks/cloud-direct-connect/)** |Supported |Supported |Melbourne, Singapore, Sydney |
-| **[Telus](https://www.telus.com)** |Supported |Supported |Montreal, Seattle, Quebec City, Toronto, Vancouver |
-| **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** |Supported |Supported |Cape Town, Johannesburg |
+| **[Telstra Corporation](https://www.telstra.com.au/business-enterprise/network-services/networks/cloud-direct-connect/)** |Supported |Supported | Melbourne, Singapore, Sydney |
+| **[Telus](https://www.telus.com)** |Supported |Supported | Montreal, Seattle, Quebec City, Toronto, Vancouver |
+| **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** |Supported |Supported | Cape Town, Johannesburg |
| **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur |
| **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka, Tokyo2 |
-| **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported |Dallas, Queretaro(Mexico)|
-| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported |Frankfurt|
+| **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported | Dallas, Queretaro(Mexico)|
+| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported | Frankfurt |
| **[UOLDIVEO](https://www.uoldiveo.com.br/)** |Supported |Supported | Sao Paulo |
| **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok |
-| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported |Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
+| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
| **[Viasat](http://www.directcloud.viasatbusiness.com/)** | Supported | Supported | Washington DC2 |
| **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland, Sydney |
| **Vodacom** |Supported |Supported | Cape Town, Johannesburg |
-| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported |Amsterdam2, London, Singapore |
+| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported | Amsterdam2, London, Singapore |
| **[Vodafone Idea](https://www.vodafone.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Mumbai2 |
| **XL Axiata** | Supported | Supported | Jakarta |
-| **[Zayo](https://www.zayo.com/solutions/industries/cloud-connectivity/microsoft-expressroute)** |Supported |Supported |Amsterdam, Chicago, Dallas, Denver, London, Los Angeles, Montreal, New York, Paris, Phoenix, Seattle, Silicon Valley, Toronto, Washington DC, Washington DC2 |
+| **[Zayo](https://www.zayo.com/solutions/industries/cloud-connectivity/microsoft-expressroute)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Dublin, Hong Kong, London, London2, Los Angeles, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Toronto, Vancouver, Washington DC, Washington DC2, Zurich|
**+** denotes coming soon

### National cloud environment
-Azure national clouds are isolated from each other and from global commercial Azure. ExpressRoute for one Azure cloud can't connect to the Azure regions in the others.
+Azure national clouds are isolated from each other and from global commercial Azure. ExpressRoute for one Azure cloud cannot connect to the Azure regions in the others.
### US Government cloud
If your connectivity provider is not listed in previous sections, you can still
* Follow steps in [Create an ExpressRoute circuit](expressroute-howto-circuit-classic.md) to set up connectivity.

## Connectivity through satellite operators
-If you are remote and don't have fiber connectivity or you want to explore other connectivity options you can check the following satellite operators.
+If you are remote and do not have fiber connectivity, or if you want to explore other connectivity options, you can check the following satellite operators.
* Intelsat
* [SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)
firewall Firewall Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-performance.md
Previously updated : 01/11/2022 Last updated : 01/24/2022
Azure Firewall has two versions: Standard and Premium.
- Azure Firewall Standard
- Azure Firewall Standard has been generally available since September 2018. It's cloud native, highly available, with built-in auto scaling firewall-as-a-service. You can centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level-filtering rules, and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains.
+ Azure Firewall Standard has been generally available since September 2018. It is a cloud-native, highly available firewall-as-a-service with built-in auto scaling. You can centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network-level filtering rules, and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains.
- Azure Firewall Premium
- Azure Firewall Premium is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. The features that might affect the performance of the Firewall are TLS inspection and IDPS (Intrusion Detection and Prevention).
+ Azure Firewall Premium is a next-generation firewall. It has capabilities that are required for highly sensitive and regulated environments. The features that might affect the performance of the firewall are TLS (Transport Layer Security) inspection and IDPS (Intrusion Detection and Prevention).
For more information about Azure Firewall, see [What is Azure Firewall?](overview.md)

## Performance testing
-Before deploying Azure Firewall, the performance needs to be tested and evaluated to ensure it meets your expectations. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. It's recommended to evaluate on a test network and not in a production environment. The testing should attempt to replicate the production environment as close as possible. This includes the network topology, and emulating the actual characteristics of the expected traffic through the firewall.
+Before deploying Azure Firewall, the performance needs to be tested and evaluated to ensure it meets your expectations. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. It is recommended to evaluate on a test network and not in a production environment. The testing should attempt to replicate the production environment as closely as possible. This includes the network topology and emulating the actual characteristics of the expected traffic through the firewall.
## Performance data
The following set of performance results demonstrates the maximal Azure Firewall
> [!NOTE]
> IPS (Intrusion Prevention System) takes place when one or more signatures are configured to *Alert and Deny* mode.
-Azure Firewall Premium's new performance boost functionality is now in public preview and provides you with enhancements to the overall firewall performance as shown below:
+Azure Firewall Premium's new performance boost functionality is now in public preview and provides you with the following enhancements to the overall firewall performance:
|Firewall use case |Without performance boost (Gbps) |With performance boost (Gbps) |
Azure Firewall Premium's new performance boost functionality is now in public
Performance values are calculated with Azure Firewall at full scale and with Premium performance boost enabled. Actual performance may vary depending on your rule complexity and network configuration. These metrics are updated periodically as performance continuously evolves with each release.
-## How to configure Premium performance boost (preview)
-
-As more applications are moved to the cloud, the network element performance becomes a bottleneck. As a result, Premium performance boost (preview) for Azure Firewall Premium is available to allow more scalability for those deployments.
-
-To enable the Azure Firewall Premium performance boost, run the following Azure PowerShell commands. This feature is applied at the **subscription** level for all Firewalls (VNet Firewalls and SecureHub Firewalls). Currently, Azure Firewall Premium Performance boost is not recommended SecureHub Firewalls. Check back here for the latest updates as we work to change this recommendation. Also, this setting does not have any effect on standard Firewalls.
-
-After you run the Azure PowerShell commands, an update operation needs to be run on the Azure Firewall for the feature to immediately take effect. This update operation can be a rule change (least intrusive), a setting configuration, or a Stop/Start operation. Otherwise, the firewall/s will update with the feature within several days.
-
-Run the following Azure PowerShell to configure the Azure Firewall Premium performance boost:
-
-```azurepowershell
-Connect-AzAccount
-
-Select-AzSubscription -Subscription "subscription_id or subscription_name"
-
-Register-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
-```
-
-Run the following Azure PowerShell to turn it off:
-
-```azurepowershell
-Unregister-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
-```
-
+To enable the Azure Firewall Premium performance boost, see [Azure Firewall preview features](firewall-preview.md#azure-firewall-premium-performance-boost-preview).
## Next steps
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-preview.md
Previously updated : 01/21/2022 Last updated : 01/24/2022
The following Azure Firewall preview features are available publicly for you to deploy and test. Some of the preview features are available on the Azure portal, and some are only visible using a feature flag.
+> [!IMPORTANT]
+> These features are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Feature flags

As new features are released to preview, some of them will be behind a feature flag. To enable the functionality in your environment, you must enable the feature flag on your subscription. These features are applied at the subscription level for all firewalls (VNet firewalls and SecureHub firewalls). A sketch of doing this from the command line follows.
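As a minimal sketch, here's how the `AFWEnableAccelnet` flag shown earlier in this digest could be registered with the Azure CLI; this is an assumption-laden example (it presumes you're logged in and set to the intended subscription) rather than the article's own walkthrough, which uses Azure PowerShell:

```azurecli
# Register the preview feature flag on the current subscription.
# AFWEnableAccelnet is the Premium performance boost flag shown earlier in this digest.
az feature register --namespace Microsoft.Network --name AFWEnableAccelnet

# Check the registration state; it can take a while to move from "Registering" to "Registered".
az feature show --namespace Microsoft.Network --name AFWEnableAccelnet --query properties.state
```

The earlier performance-boost section notes that an update operation on the firewall (for example, a rule change) makes such a flag take effect immediately; otherwise it's applied within several days.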
frontdoor Front Door Ddos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-ddos.md
Azure Front Door has several features and characteristics that can help to prevent distributed denial of service (DDoS) attacks. These features can prevent attackers from reaching your application and affecting your application's availability and performance.
-## Integration with Azure DDoS Protection Basic
+## Infrastructure DDoS protection
-Front Door is protected by Azure DDoS Protection Basic. It is integrated into the Front Door platform by default and at no additional cost. The full scale and capacity of Front Door's globally deployed network provides defense against common network layer attacks through always-on traffic monitoring and real-time mitigation. Basic DDoS protection also defends against the most common, frequently occurring layer 7 DNS query floods and layer 3 and 4 volumetric attacks that target public endpoints. This service also has a proven track record in protecting Microsoft's enterprise and consumer services from large-scale attacks. For more information, see [Azure DDoS Protection](../security/fundamentals/ddos-best-practices.md).
+Front Door is protected by the default Azure infrastructure DDoS protection. The full scale and capacity of Front Door's globally deployed network provides defense against common network layer attacks through always-on traffic monitoring and real-time mitigation. This infrastructure DDoS protection has a proven track record in protecting Microsoft's enterprise and consumer services from large-scale attacks.
## Protocol blocking
If you require further protection, then you can enable [Azure DDoS Protection St
- Learn how to configure a [WAF profile on Front Door](front-door-waf.md).
- Learn how to [create a Front Door](quickstart-create-front-door.md).
-- Learn [how Front Door works](front-door-routing-architecture.md).
+- Learn [how Front Door works](front-door-routing-architecture.md).
healthcare-apis Access Healthcare Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/access-healthcare-apis.md
+
+ Title: Access Azure Healthcare APIs
+description: This article describes the different ways for accessing the services in your applications using tools and programming languages.
+++++ Last updated : 01/06/2022+++
+# Access Healthcare APIs
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this article, you'll learn about the different ways to access the services in your applications. After you've provisioned a FHIR service, DICOM service, or IoT connector, you can access them using tools such as Postman, cURL, and the REST Client extension in Visual Studio Code, and with programming languages such as Python and C#.
+
+## Access the FHIR service
+
+- [Access the FHIR service using Postman](././fhir/use-postman.md)
+- [Access the FHIR service using cURL](././fhir/using-curl.md)
+- [Access the FHIR service using REST Client](././fhir/using-rest-client.md)
+
+## Access the DICOM service
+
+- [Access the DICOM service using Python](dicom/dicomweb-standard-apis-python.md)
+- [Access the DICOM service using cURL](dicom/dicomweb-standard-apis-curl.md)
+- [Access the DICOM service using C#](dicom/dicomweb-standard-apis-c-sharp.md)
+
+## Access IoT connector
+
+The IoT connector works with IoT Hub and Event Hubs in your subscription to receive message data, and with the FHIR service to persist the data.
+
+- [Receive device data through Azure IoT Hub](iot/device-data-through-iot-hub.md)
+- [Access the FHIR service using Postman](fhir/use-postman.md)
+- [Access the FHIR service using cURL](fhir/using-curl.md)
+- [Access the FHIR service using REST Client](fhir/using-rest-client.md)
++
+## Next steps
+
+In this document, you learned about the tools and programming languages that you can use to access the services in your applications. To learn how to deploy an instance of the Healthcare APIs service using the Azure portal, see
+
+>[!div class="nextstepaction"]
+>[Deploy Healthcare APIs (preview) workspace using Azure portal](healthcare-apis-quickstart.md)
+++
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/device-data-through-iot-hub.md
Previously updated : 09/10/2021 Last updated : 01/06/2022
Use your device (real or simulated) to send the sample heart rate message shown
## View device data in Azure API for FHIR
-You can view the FHIR Observation resource(s) created by Azure IoT Connector for FHIR using Postman. For more information, see [Access the FHIR service using Postman](./../use-postman.md), and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with the heart rate value submitted in the above sample message.
+You can view the FHIR Observation resource(s) created by Azure IoT Connector for FHIR using Postman. For more information, see [Access the FHIR service using Postman](./../fhir/use-postman.md), and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with the heart rate value submitted in the above sample message.
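If you'd rather check from a terminal than from Postman, the same query can be sketched with cURL. This is a hypothetical example, assuming `$token` holds a valid Azure AD access token and `your-fhir-server-url` is replaced with your FHIR server's URL:

```
# Query heart-rate (LOINC 8867-4) Observation resources; quote the URL so the shell keeps the | character
curl -X GET --header "Authorization: Bearer $token" "https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4"
```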
> [!TIP]
> Ensure that your user has appropriate access to Azure API for FHIR data plane. Use [Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign required data plane roles.
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
Previously updated : 09/10/2021 Last updated : 01/06/2022
curl -X GET --header "Authorization: Bearer $token" https://<FHIR ACCOUNT NAME>.
In this article, you've learned how to obtain an access token for the Azure API for FHIR using the Azure CLI. To learn how to access the FHIR API using Postman, proceed to the Postman tutorial.

>[!div class="nextstepaction"]
->[Access the FHIR service using Postman](./../use-postman.md)
+>[Access the FHIR service using Postman](./../fhir/use-postman.md)
healthcare-apis Iot Azure Resource Manager Template Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/iot-azure-resource-manager-template-quickstart.md
Previously updated : 09/10/2021 Last updated : 01/06/2022
Once you've deployed your IoT Central application, your two out-of-the-box simul
## View device data in Azure API for FHIR
-You can view the FHIR-based Observation resource(s) created by Azure IoT Connector for FHIR on your FHIR service using Postman. For information, see [Access the FHIR service using Postman](./../use-postman.md) and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value.
+You can view the FHIR-based Observation resource(s) created by Azure IoT Connector for FHIR on your FHIR service using Postman. For information, see [Access the FHIR service using Postman](./../fhir/use-postman.md) and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value.
> [!TIP]
> Ensure that your user has appropriate access to Azure API for FHIR data plane. Use [Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign required data plane roles.
healthcare-apis Iot Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/iot-fhir-portal-quickstart.md
Previously updated : 09/10/2021 Last updated : 01/06/2022
Create a new data export:
## View device data in Azure API for FHIR
-You can view the FHIR-based Observation resource(s) created by Azure IoT Connector for FHIR on Azure API for FHIR using Postman. For information, see [Access the FHIR service using Postman](./../use-postman.md) and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value.
+You can view the FHIR-based Observation resource(s) created by Azure IoT Connector for FHIR on Azure API for FHIR using Postman. For information, see [Access the FHIR service using Postman](./../fhir/use-postman.md) and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value.
> [!TIP]
> Ensure that your user has appropriate access to Azure API for FHIR data plane. Use [Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign required data plane roles.
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/register-confidential-azure-ad-client-app.md
Previously updated : 09/10/2021 Last updated : 01/06/2022
Permissions for Azure API for FHIR are managed through RBAC. For more details, v
In this article, you were guided through the steps of how to register a confidential client application in Azure AD. You were also guided through the steps of how to add API permissions to the Azure Healthcare APIs. Lastly, you were shown how to create an application secret. Furthermore, you can learn how to access your FHIR server using Postman.

>[!div class="nextstepaction"]
->[Access the FHIR service using Postman](./../use-postman.md)
+>[Access the FHIR service using Postman](./../fhir/use-postman.md)
healthcare-apis Register Public Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app.md
Previously updated : 09/10/2021 Last updated : 01/06/2022
If you configure your client application in a different Azure AD tenant from you
In this article, you've learned how to register a public client application in Azure Active Directory. Next, test access to your FHIR server using Postman.

>[!div class="nextstepaction"]
->[Access the FHIR service using Postman](./../use-postman.md)
+>[Access the FHIR service using Postman](./../fhir/use-postman.md)
healthcare-apis Register Service Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/register-service-azure-ad-client-app.md
Previously updated : 09/10/2021 Last updated : 01/06/2022
The service client needs a secret (password) to obtain a token.
In this article, you've learned how to register a service client application in Azure Active Directory. Next, test access to your FHIR server using Postman.

>[!div class="nextstepaction"]
->[Access the FHIR service using Postman](./../use-postman.md)
+>[Access the FHIR service using Postman](./../fhir/use-postman.md)
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
Previously updated : 09/10/2019 Last updated : 01/06/2022

# Tutorial: Azure Active Directory SMART on FHIR proxy
Add the reply URL to the public client application that you created earlier for
## Get a test patient
-To test the Azure API for FHIR and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../use-postman.md) to load a patient. Make a note of the ID of a specific patient.
+To test the Azure API for FHIR and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
## Download the SMART on FHIR app launcher
healthcare-apis Configure Azure Rbac Using Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/configure-azure-rbac-using-scripts.md
Previously updated : 12/10/2021 Last updated : 01/06/2022
az role assignment create --assignee-object-id $spid --assignee-principal-type S
Alternatively, you can send a PUT request to the role assignment REST API directly. For more information, see [Assign Azure roles using the REST API](./../role-based-access-control/role-assignments-rest.md).

>[!Note]
->The REST API scripts in this article are based on the [REST Client](using-rest-client.md) extension. You'll need to revise the variables if you are in a different environment.
+>The REST API scripts in this article are based on the [REST Client](./fhir/using-rest-client.md) extension. You'll need to revise the variables if you are in a different environment.
The API requires the following values:
Now that you've granted proper permissions to the client application, you can ac
In this article, you learned how to grant permissions to client applications using Azure CLI and REST API. For information on how to access Healthcare APIs, see

>[!div class="nextstepaction"]
->[Access using Rest Client](using-rest-client.md)
+>[Access using Rest Client](./fhir/using-rest-client.md)
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/configure-azure-rbac.md
description: This article describes how to configure Azure RBAC for FHIR.
Previously updated : 12/08/2021 Last updated : 01/06/2022
In the **Select** box, search for a user, service principal, or group that you w
In this article, you've learned how to assign Azure roles for the FHIR service and DICOM service. To learn how to access the Healthcare APIs using Postman, see

-- [Access using Postman](use-postman.md)
-- [Access using the REST Client](using-rest-client.md)
-- [Access using cURL](using-curl.md)
+- [Access using Postman](./fhir/use-postman.md)
+- [Access using the REST Client](./fhir/using-rest-client.md)
+- [Access using cURL](./fhir/using-curl.md)
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/dicom/get-started-with-dicom.md
Previously updated : 11/24/2021 Last updated : 01/06/2022
You can obtain an Azure AD access token using PowerShell, Azure CLI, REST CLI, o
#### Access using existing tools

-- [Postman](../use-postman.md)
-- [REST Client](../using-rest-client.md)
+- [Postman](../fhir/use-postman.md)
+- [REST Client](../fhir/using-rest-client.md)
- [.NET C#](dicomweb-standard-apis-c-sharp.md)
- [cURL](dicomweb-standard-apis-curl.md)
- [Python](dicomweb-standard-apis-python.md)
healthcare-apis Bulk Importing Fhir Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/bulk-importing-fhir-data.md
Previously updated : 12/16/2021 Last updated : 01/06/2022
In this article, you'll learn how to bulk import data into the FHIR service in Healthcare APIs. The tools described in this article are freely available at GitHub and can be modified to meet your business needs. Technical support for the tools is available through GitHub and the open-source community.
-While tools such as [Postman](../use-postman.md), [cURL](../using-curl.md), and [REST Client](../using-rest-client.md) to ingest data to the FHIR service, they're not typically used to bulk load FHIR data.
+While tools such as [Postman](../fhir/use-postman.md), [cURL](../fhir/using-curl.md), and [REST Client](../fhir/using-rest-client.md) can be used to ingest data into the FHIR service, they're not typically used to bulk load FHIR data.
>[!Note]
>The [bulk import](https://github.com/microsoft/fhir-server/blob/main/docs/BulkImport.md) feature is currently available in the open source FHIR server. It's not available in Healthcare APIs yet.
healthcare-apis Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-portal-quickstart.md
description: This article teaches users how to deploy a FHIR service in the Azur
Previously updated : 09/10/2021 Last updated : 01/06/2022
To validate that the new FHIR API account is provisioned, fetch a capability sta
## Next steps

>[!div class="nextstepaction"]
->[Access the FHIR service using Postman](../use-postman.md)
+>[Access the FHIR service using Postman](../fhir/use-postman.md)
healthcare-apis Get Started With Fhir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/get-started-with-fhir.md
Previously updated : 11/24/2021 Last updated : 01/06/2022
You can obtain an Azure AD access token using PowerShell, Azure CLI, REST CCI, o
#### Access using existing tools

-- [Postman](../use-postman.md)
-- [Rest Client](../using-rest-client.md)
-- [cURL](../using-curl.md)
+- [Postman](../fhir/use-postman.md)
+- [Rest Client](../fhir/using-rest-client.md)
+- [cURL](../fhir/using-curl.md)
#### Load data
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/use-postman.md
+
+ Title: Access the Azure Healthcare APIs FHIR service using Postman
+description: This article describes how to access the Azure Healthcare APIs FHIR service with Postman.
++++ Last updated : 01/18/2022+++
+# Access using Postman
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this article, we will walk through the steps of accessing the Healthcare APIs FHIR service (hereafter called the FHIR service) with [Postman](https://www.getpostman.com/).
+
+## Prerequisites
+
+* The FHIR service deployed in Azure. For information about how to deploy the FHIR service, see [Deploy a FHIR service](fhir-portal-quickstart.md).
+* A registered client application to access the FHIR service. For information about how to register a client application, see [Register a service client application in Azure Active Directory](./../register-application.md).
+* Permissions granted to the client application and your user account, for example, "FHIR Data Contributor", to access the FHIR service.
+* Postman installed locally. For more information about Postman, see [Get Started with Postman](https://www.getpostman.com/).
+
+## Using Postman: create workspace, collection, and environment
+
+If you are new to Postman, follow the steps below. Otherwise, you can skip this step.
+
+Postman introduces the workspace concept to enable you and your team to share APIs, collections, environments, and other components. You can use the default "My workspace" or "Team workspace", or create a new workspace for you or your team.
+
+[ ![Screenshot of create a new workspace in Postman.](media/postman/postman-create-new-workspace.png) ](media/postman/postman-create-new-workspace.png#lightbox)
+
+Next, create a new collection where you can group all related REST API requests. In the workspace, select **Create Collections**. You can keep the default name **New collection** or rename it. The change is saved automatically.
+
+[ ![Screenshot of create a new collection.](media/postman/postman-create-a-new-collection.png) ](media/postman/postman-create-a-new-collection.png#lightbox)
+
+You can also import and export Postman collections. For more information, see [the Postman documentation](https://learning.postman.com/docs/getting-started/importing-and-exporting-data/).
+
+[ ![Screenshot of import data.](media/postman/postman-import-data.png) ](media/postman/postman-import-data.png#lightbox)
+
+## Create or update environment variables
+
+While you can use the full URL in the request, it is recommended that you store the URL and other data in variables and use them.
+
+To access the FHIR service, we'll need to create or update the following variables.
+
+* **tenantid** - The Azure tenant where the FHIR service is deployed. You can find it on the **Application registration overview** menu option.
+* **subid** - The Azure subscription where the FHIR service is deployed. You can find it on the **FHIR service overview** menu option.
+* **clientid** - The application client registration ID.
+* **clientsecret** - The application client registration secret.
+* **fhirurl** - The full URL of the FHIR service, for example, `https://xxx.azurehealthcareapis.com`. You can find it on the **FHIR service overview** menu option.
+* **bearerToken** - The variable that stores the Azure Active Directory (Azure AD) access token in the script. Leave it blank.
+
+> [!NOTE]
+> Ensure that you've configured the redirect URL, `https://www.getpostman.com/oauth2/callback`, in the client application registration.
+
+[ ![Screenshot of environments variable.](media/postman/postman-environments-variable.png) ](media/postman/postman-environments-variable.png#lightbox)
+
+## Connect to the FHIR server
+
+Open Postman, select the **workspace**, **collection**, and **environment** you want to use. Select the `+` icon to create a new request.
+
+[ ![Screenshot of create a new request.](media/postman/postman-create-new-request.png) ](media/postman/postman-create-new-request.png#lightbox)
+
+## Get capability statement
+
+Enter `{{fhirurl}}/metadata` in the `GET` request, and select **Send**. You should see the capability statement of the FHIR service.
+
+[ ![Screenshot of capability statement parameters.](media/postman/postman-capability-statement.png) ](media/postman/postman-capability-statement.png#lightbox)
+
+[ ![Screenshot of save request.](media/postman/postman-save-request.png) ](media/postman/postman-save-request.png#lightbox)
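Outside Postman, the same sanity check can be sketched with cURL. This is a minimal example under the assumption that you substitute your own service name; the `/metadata` endpoint typically returns the capability statement without requiring a token:

```
curl "https://<fhirservice>.fhir.azurehealthcareapis.com/metadata"
```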
+
+## Get Azure AD access token
+
+The FHIR service is secured by Azure AD. The default authentication can't be disabled. To access the FHIR service, you must get an Azure AD access token first. For more information, see [Microsoft identity platform access tokens](../../active-directory/develop/access-tokens.md).
+
+Create a new `POST` request:
+
+1. Enter the request URL:
+ `https://login.microsoftonline.com/{{tenantid}}/oauth2/token`
+
+2. Select the **Body** tab and select **x-www-form-urlencoded**. Enter the following values in the key and value section:
+ - **grant_type**: `client_credentials`
+ - **client_id**: `{{clientid}}`
+ - **client_secret**: `{{clientsecret}}`
+ - **resource**: `{{fhirurl}}`
+
+3. Select the **Tests** tab and enter the following in the text section: `pm.environment.set("bearerToken", pm.response.json().access_token);` To make the value available to the collection instead, use the `pm.collectionVariables.set` method. For more information on the `set` method and its scope level, see [Using variables in scripts](https://learning.postman.com/docs/sending-requests/variables/#defining-variables-in-scripts).
+4. Select **Save** to save the settings.
+5. Select **Send**. You should see a response with the Azure AD access token, which is saved to the variable `bearerToken` automatically. You can then use it in all FHIR service API requests.
+
+ [ ![Screenshot of send button.](media/postman/postman-send-button.png) ](media/postman/postman-send-button.png#lightbox)
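For reference, here's a sketch of the same token request made with cURL instead of Postman. The placeholders (`<tenant-id>`, `<client-id>`, `<client-secret>`, `<fhirservice>`) are assumptions to be replaced with the values from your own registration:

```
# POST form-urlencoded credentials to the Azure AD token endpoint
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=<client-id>" \
  --data-urlencode "client_secret=<client-secret>" \
  --data-urlencode "resource=https://<fhirservice>.fhir.azurehealthcareapis.com"

# The JSON response contains an access_token field; that value is what Postman stores in bearerToken
```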
+
+You can examine the access token using online tools such as [https://jwt.ms](https://jwt.ms). Select the **Claims** tab to see detailed descriptions for each claim in the token.
+
+[ ![Screenshot of access token claims.](media/postman/postman-access-token-claims.png) ](media/postman/postman-access-token-claims.png#lightbox)
+
+## Get FHIR resource
+
+After you've obtained an Azure AD access token, you can access the FHIR data. In a new `GET` request, enter `{{fhirurl}}/Patient`.
+
+Select **Bearer Token** as authorization type. Enter `{{bearerToken}}` in the **Token** section. Select **Send**. As a response, you should see a list of patients in your FHIR resource.
+
+[ ![Screenshot of select bearer token.](media/postman/postman-select-bearer-token.png) ](media/postman/postman-select-bearer-token.png#lightbox)
+
+## Create or update your FHIR resource
+
+After you've obtained an Azure AD access token, you can create or update the FHIR data. For example, you can create a new patient or update an existing patient.
+
+Create a new request, change the method to **POST**, and enter the request URL:
+
+`{{fhirurl}}/Patient`
+
+Select **Bearer Token** as the authorization type. Enter `{{bearerToken}}` in the **Token** section. Select the **Body** tab. Select the **raw** option and **JSON** as body text format. Copy and paste the text to the body section.
++
+```
+{
+ "resourceType": "Patient",
+ "active": true,
+ "name": [
+ {
+ "use": "official",
+ "family": "Kirk",
+ "given": [
+ "James",
+ "Tiberious"
+ ]
+ },
+ {
+ "use": "usual",
+ "given": [
+ "Jim"
+ ]
+ }
+ ],
+ "gender": "male",
+ "birthDate": "1960-12-25"
+}
+```
+Select **Send**. You should see a new patient in the JSON response.
+
+[ ![Screenshot of send button to create a new patient.](media/postman/postman-send-create-new-patient.png) ](media/postman/postman-send-create-new-patient.png#lightbox)
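The same create request can be sketched with cURL. This is a hypothetical example: it assumes the JSON body above is saved locally as `patient.json` and that `$token` and `$fhirservice` are set as in the cURL article later in this digest:

```
# POST a new Patient resource; FHIR servers expect the application/fhir+json content type
curl -X POST --header "Authorization: Bearer $token" \
  --header "Content-Type: application/fhir+json" \
  --data @patient.json "$fhirservice/Patient"
```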
+
+## Export FHIR data
+
+After you've obtained an Azure AD access token, you can export FHIR data to an Azure storage account.
+
+Create a new `GET` request: `{{fhirurl}}/$export?_container=export`
+
+Select **Bearer Token** as authorization type. Enter `{{bearerToken}}` in the **Token** section. Select **Headers** to add two new headers:
+
+- **Accept**: `application/fhir+json`
+- **Prefer**: `respond-async`
+
+Select **Send**. You should see a `202 Accepted` response. Select the **Headers** tab of the response and make a note of the value in **Content-Location**. You can use that value to query the export job status.
+
+[ ![Screenshot of post to create a new patient 202 accepted response.](media/postman/postman-202-accepted-response.png) ](media/postman/postman-202-accepted-response.png#lightbox)
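As a rough cURL equivalent (again a sketch, assuming `$token` and `$fhirservice` are set as before; note the backslash before `$export` so the shell doesn't treat it as a variable):

```
# Kick off the export; --include prints the response headers so you can read Content-Location
curl --include --header "Authorization: Bearer $token" \
  --header "Accept: application/fhir+json" \
  --header "Prefer: respond-async" \
  "$fhirservice/\$export?_container=export"

# Poll the export job status using the Content-Location URL from the 202 response
curl --header "Authorization: Bearer $token" "<content-location-url>"
```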
+
+## Next steps
+
+In this article, you learned how to access the FHIR service in Azure Healthcare APIs with Postman. For information about the FHIR service in Azure Healthcare APIs, see
+
+>[!div class="nextstepaction"]
+>[What is FHIR service?](overview.md)
healthcare-apis Using Curl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/using-curl.md
+
+ Title: Access the Azure Healthcare APIs with cURL
+description: This article explains how to access the Healthcare APIs with cURL
++++ Last updated : 01/06/2022+++
+# Access the Healthcare APIs (preview) with cURL
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this article, you will learn how to access the Azure Healthcare APIs with cURL.
+
+## Prerequisites
+
+### PowerShell
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally, install [PowerShell](/powershell/module/powershellget/) and [Azure Az PowerShell](/powershell/azure/install-az-ps).
+* Optionally, you can run the scripts in Visual Studio Code with the REST Client extension. For more information, see [Accessing the Healthcare APIs using the REST Client extension](using-rest-client.md).
+* Download and install [cURL](https://curl.se/download.html).
+
+### CLI
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally, install [Azure CLI](/cli/azure/install-azure-cli).
+* Optionally, install a Bash shell, such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org/).
+* Optionally, run the scripts in Visual Studio Code with the REST Client extension. For more information, see [Accessing the Healthcare APIs using the REST Client extension](using-rest-client.md).
+* Download and install [cURL](https://curl.se/download.html).
+
+## Obtain Azure Access Token
+
+Before accessing the Healthcare APIs, you must grant the user or client app the proper permissions. For more information on how to grant permissions, see [Healthcare APIs authorization](../authentication-authorization.md).
+
+There are several different ways to obtain an Azure access token for the Healthcare APIs.
+
+> [!NOTE]
+> Make sure that you have logged into Azure and that you are in the Azure subscription and tenant where you have deployed the Healthcare APIs instance.
+
+# [PowerShell](#tab/PowerShell)
+
+```powershell-interactive
+### check Azure environment and PowerShell versions
+Get-AzContext
+Set-AzContext -Subscription <subscriptionid>
+$PSVersionTable.PSVersion
+Get-InstalledModule -Name Az -AllVersions
+curl --version
+
+### get access token for the FHIR service
+$fhirservice="https://<fhirservice>.fhir.azurehealthcareapis.com"
+$token=(Get-AzAccessToken -ResourceUrl $fhirservice).Token
+
+### Get access token for the DICOM service
+$dicomtokenurl= "https://dicom.healthcareapis.azure.com/"
+$token=(Get-AzAccessToken -ResourceUrl $dicomtokenurl).Token
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+### check Azure environment and CLI versions
+az account show --output table
+az account set --subscription <subscriptionid>
+az --version
+curl --version
+
+### get access token for the FHIR service
+fhirservice="https://<fhirservice>.fhir.azurehealthcareapis.com"
+token=$(az account get-access-token --resource=$fhirservice --query accessToken --output tsv)
+
+### get access token for the DICOM service
+dicomtokenurl="https://dicom.healthcareapis.azure.com/"
+token=$(az account get-access-token --resource=$dicomtokenurl --query accessToken --output tsv)
+```
+++
+## Access data in the FHIR service
+
+# [PowerShell](#tab/PowerShell)
+
+```powershell-interactive
+$fhirservice="https://<fhirservice>.fhir.azurehealthcareapis.com"
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+fhirservice="https://<fhirservice>.fhir.azurehealthcareapis.com"
+```
+++
+`curl -X GET --header "Authorization: Bearer $token" $fhirservice/Patient`
+
+[ ![Access data in the FHIR service with curl script.](media/curl-fhir.png) ](media/curl-fhir.png#lightbox)
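+
+You can append FHIR search parameters to the same request. For example, here's a hypothetical search for patients by family name; the URL is quoted so the shell doesn't interpret special characters like `&`:
+
+`curl -X GET --header "Authorization: Bearer $token" "$fhirservice/Patient?family=Doe&_count=10"`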
+
+## Access data in the DICOM service
+
+# [PowerShell](#tab/PowerShell)
+
+```powershell-interactive
+$dicomservice="https://<dicomservice>.dicom.azurehealthcareapis.com"
+```
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+dicomservice="https://<dicomservice>.dicom.azurehealthcareapis.com"
+```
++
+`curl -X GET --header "Authorization: Bearer $token" "$dicomservice/changefeed?includemetadata=false"`
+
+[ ![Access data in the DICOM service with curl script.](media/curl-dicom.png) ](media/curl-dicom.png#lightbox)
+
+## Next steps
+
+In this article, you learned how to access the Healthcare APIs data using cURL.
+
+To learn about how to access the Healthcare APIs data using REST Client extension in Visual Studio Code, see
+
+>[!div class="nextstepaction"]
+>[Access the Healthcare APIs using REST Client](using-rest-client.md)
healthcare-apis Using Rest Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/using-rest-client.md
+
+ Title: Access the Azure Healthcare APIs using REST Client
+description: This article explains how to access the Healthcare APIs using the REST Client extension in VSCode
++++ Last updated : 01/06/2022+++
+# Accessing the Healthcare APIs (preview) using the REST Client Extension in Visual Studio Code
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this article, you will learn how to access the Healthcare APIs using [REST Client extension in Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).
+
+## Install REST Client extension
+
+Select the Extensions icon on the left side panel of Visual Studio Code, and search for "REST Client". Find the [REST Client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) and install it.
+
+[ ![REST Client VSCode extension](media/rest-install.png) ](media/rest-install.png#lightbox)
+
+## Create a `.http` file and define variables
+
+Create a new file in Visual Studio Code. Enter a `GET` request command line in the file, and save it as `test.http`. The file suffix `.http` automatically activates the REST Client environment. Select `Send Request` to get the metadata.
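+
+For example, the file might contain nothing more than a request for the server's capability statement, which doesn't require a token (a sketch; replace the URL with your own FHIR service URL):
+
+```
+### Get the FHIR server capability statement
+GET https://xxx.azurehealthcareapis.com/metadata
+```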
+
+[ ![Send Request](media/rest-send-request.png) ](media/rest-send-request.png#lightbox)
+
+## Get client application values
+
+> [!Important]
+> Before calling the FHIR server REST API (other than getting the metadata), you must complete [application registration](../register-application.md). Make a note of your Azure **tenant ID**, **client ID**, **client secret**, and the **service URL**.
+
+While you can use values such as the client ID directly in calls to the REST API, it's a good practice to define a few variables for these values and use the variables instead.
+
+In your `test.http` file, include the following information obtained from registering your application:
+
+```
+### REST Client
+@fhirurl = https://xxx.azurehealthcareapis.com
+@clientid = xxx....
+@clientsecret = xxx....
+@tenantid = xxx....
+```
+
+## Get Azure AD Access Token
+
+Add the following information to your `test.http` file, and then select `Send Request`. You'll see an HTTP response that contains your access token.
+
+The line starting with `@name` names the request so that its HTTP response can be referenced later. The `@token` variable extracts and stores the access token from that response.
+
+>[!Note]
+>The `grant_type` of `client_credentials` is used to obtain an access token.
+
+```
+### Get access token
+@name getAADToken
+POST https://login.microsoftonline.com/{{tenantid}}/oauth2/token
+Content-Type: application/x-www-form-urlencoded
+
+grant_type=client_credentials
+&resource={{fhirurl}}
+&client_id={{clientid}}
+&client_secret={{clientsecret}}
+
+### Extract access token from getAADToken request
+@token = {{getAADToken.response.body.access_token}}
+```
+
+[ ![Get access token](media/rest-config.png) ](media/rest-config.png#lightbox)
+
+## `GET` FHIR Patient data
+
+You can now get a list of patients or a specific patient with a `GET` request. The `Authorization` line supplies the bearer token header for the request. You can also send `PUT` or `POST` requests to create or update FHIR resources.
+
+```
+### GET Patient
+GET {{fhirurl}}/Patient/<patientid>
+Authorization: Bearer {{token}}
+```
+
+[ ![GET Patient](media/rest-patient.png) ](media/rest-patient.png#lightbox)
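+
+As a sketch of a write operation, the following `POST` request creates a minimal Patient resource. The resource body shown here is an illustrative example, not a required shape:
+
+```
+### POST (create) a Patient resource
+POST {{fhirurl}}/Patient
+Authorization: Bearer {{token}}
+Content-Type: application/fhir+json
+
+{
+  "resourceType": "Patient",
+  "name": [ { "family": "Doe", "given": [ "Jane" ] } ]
+}
+```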
+
+## Run PowerShell or CLI
+
+You can run PowerShell or CLI scripts within Visual Studio Code. Press `CTRL` and the `~` key, and then select PowerShell or Bash. For more information, see [Integrated Terminal](https://code.visualstudio.com/docs/editor/integrated-terminal).
+
+### PowerShell in Visual Studio Code
+[ ![running PowerShell](media/rest-powershell.png) ](media/rest-powershell.png#lightbox)
+
+### CLI in Visual Studio Code
+[ ![running CLI](media/rest-cli.png) ](media/rest-cli.png#lightbox)
+
+## Troubleshooting
+
+If you are unable to get the metadata, which doesn't require an access token per the HL7 specification, check that your FHIR server is running properly.
+
+If you are unable to get an access token, make sure that the client application is registered properly and that you are using the correct values from the application registration step.
+
+If you are unable to get data from the FHIR server, make sure that the client application (or the service principal) has been granted access permissions such as "FHIR Data Contributor" to the FHIR server.
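+
+For example, you can grant that role with the Azure CLI. The following is a minimal sketch; the client ID and the FHIR service resource ID are placeholders you'd replace with your own values, and you need sufficient permissions to create role assignments:
+
+```azurecli-interactive
+az role assignment create \
+  --assignee <clientid> \
+  --role "FHIR Data Contributor" \
+  --scope <fhir-service-resource-id>
+```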
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/get-access-token.md
Previously updated : 12/14/2021 Last updated : 01/06/2022
Invoke-WebRequest -Method GET -Headers $headers -Uri 'https://<workspacename-dic
In this article, you learned how to obtain an access token for the FHIR service and DICOM service using CLI and Azure PowerShell. For more details about accessing the FHIR service and DICOM service, see >[!div class="nextstepaction"]
->[Access FHIR service using Postman](use-postman.md)
+>[Access FHIR service using Postman](./fhir/use-postman.md)
>[!div class="nextstepaction"]
->[Access FHIR service using Rest Client](using-rest-client.md)
+>[Access FHIR service using Rest Client](./fhir/using-rest-client.md)
>[!div class="nextstepaction"] >[Access DICOM service using cURL](dicom/dicomweb-standard-apis-curl.md)
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/iot/device-data-through-iot-hub.md
This message will get routed to IoT connector, where the message will be transfo
## View device data in FHIR service
-You can view the FHIR Observation resource(s) created by IoT connector on the FHIR service using Postman. For information, see [Access the FHIR service using Postman](./../use-postman.md), and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value submitted in the above sample message.
+You can view the FHIR Observation resource(s) created by IoT connector on the FHIR service using Postman. For information, see [Access the FHIR service using Postman](./../fhir/use-postman.md), and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value submitted in the above sample message.
> [!TIP] > Ensure that your user has appropriate access to FHIR service data plane. Use [Azure role-based access control (Azure RBAC)](../azure-api-for-fhir/configure-azure-rbac.md) to assign required data plane roles.
healthcare-apis Register Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/register-application.md
Previously updated : 11/17/2021 Last updated : 01/06/2022
The following steps are required for the DICOM service, but optional for the FHI
[ ![Select permissions scopes.](dicom/media/dicom-select-scopes.png) ](dicom/media/dicom-select-scopes.png#lightbox) >[!NOTE]
->Use grant_type of client_credentials when trying to otain an access token for the FHIR service using tools such as Postman or Rest Client. For more details, visit [Access using Postman](use-postman.md) and [Accessing the Healthcare APIs using the REST Client Extension in Visual Studio Code](using-rest-client.md).
+>Use grant_type of client_credentials when trying to obtain an access token for the FHIR service using tools such as Postman or REST Client. For more details, visit [Access using Postman](./fhir/use-postman.md) and [Accessing the Healthcare APIs using the REST Client Extension in Visual Studio Code](./fhir/using-rest-client.md).
>>Use grant_type of client_credentials or authorization_code when trying to obtain an access token for the DICOM service. For more details, visit [Using DICOM with cURL](dicom/dicomweb-standard-apis-curl.md). Your application registration is now complete.
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-iot-edge.md
In the previous screenshot you can see:
The deployment manifest doesn't include information about the telemetry the **SimulatedTemperatureSensor** module sends or the commands it responds to. Add these definitions to the device template manually before you publish it.
-To learn more, see [Tutorial: Add an Azure IoT Edge device to your Azure IoT Central application](tutorial-add-edge-as-leaf-device.md).
+To learn more, see [Tutorial: Add an Azure IoT Edge device to your Azure IoT Central application](/learn/modules/connect-iot-edge-device-to-iot-central/).
### Update a deployment manifest
iot-central Howto Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-rules.md
The following table summarizes the information sent to the supported action type
| Email | Standard IoT Central email template | | SMS | Azure IoT Central alert: ${applicationName} - "${ruleName}" triggered on "${deviceName}" at ${triggerDate} ${triggerTime} | | Voice | Azure I.O.T Central alert: rule "${ruleName}" triggered on device "${deviceName}" at ${triggerDate} ${triggerTime}, in application ${applicationName} |
-| Webhook | { "schemaId" : "AzureIoTCentralRuleWebhook", "data": {[regular webhook payload](howto-create-webhooks.md#payload)}} |
+| Webhook | { "schemaId" : "AzureIoTCentralRuleWebhook", "data": {[regular webhook payload](howto-configure-rules.md#payload)}} |
The following text is an example SMS message from an action group:
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-organizations.md
When you reassign a device to another organization, the device's data stays with
Devices can self-register with your IoT Central application without first being added to the device list. In this case, IoT Central adds the device to the root organization in the hierarchy. You can then reassign the device to a different organization.
-Instead, you can use the CSV import feature to bulk register devices with your application and assign them to organizations. To learn more, see [Import devices](howto-manage-devices.md#import-devices).
+Instead, you can use the CSV import feature to bulk register devices with your application and assign them to organizations. To learn more, see [Import devices](howto-manage-devices-in-bulk.md#import-devices).
### Gateways
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-admin.md
To learn more, see [Create an IoT Central organization](howto-create-organizatio
Devices that connect to your IoT Central application typically use X.509 certificates or shared access signatures (SAS) as credentials. The administrator manages the group certificates or keys that the device credentials are derived from.
-To learn more, see [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment), [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment), and [How to roll X.509 device certificates](how-to-roll-x509-certificates.md).
+To learn more, see [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment), [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment), and [How to roll X.509 device certificates](how-to-connect-devices-x509.md).
The administrator can also create and manage the API tokens that a client application uses to authenticate with your IoT Central application. Client applications use the REST API to interact with IoT Central.
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-developer.md
An IoT Edge device connects directly to IoT Central. An IoT Edge device can send
IoT Central only sees the IoT Edge device, not the downstream devices connected to the IoT Edge device.
-To learn more, see [Add an Azure IoT Edge device to your Azure IoT Central application](./tutorial-add-edge-as-leaf-device.md).
+To learn more, see [Add an Azure IoT Edge device to your Azure IoT Central application](/learn/modules/connect-iot-edge-device-to-iot-central/).
### Gateways
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
This article outlines, for IoT Central:
[Quickly deploy a new IoT Central application](quick-deploy-iot-central.md) and then customize it to your specific requirements. Start with a generic _application template_ or with one of the industry-focused application templates: -- [Retail](../retail/overview-iot-central-retail.md)-- [Energy](../energy/overview-iot-central-energy.md)-- [Government](../government/overview-iot-central-government.md)-- [Healthcare](../healthcare/overview-iot-central-healthcare.md).
+- [Retail](concepts-app-templates.md).
+- [Energy](concepts-app-templates.md).
+- [Government](concepts-app-templates.md).
+- [Healthcare](concepts-app-templates.md).
See the [Create a new application](quick-deploy-iot-central.md) quickstart for a walk-through of how to create your first application.
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
In this tutorial, you learned how to:
Next you can learn how to: > [!div class="nextstepaction"]
-> [Add an Azure IoT Edge device to your Azure IoT Central application](tutorial-add-edge-as-leaf-device.md)
+> [Add an Azure IoT Edge device to your Azure IoT Central application](/learn/modules/connect-iot-edge-device-to-iot-central/)
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/about-iot-dps.md
DPS is available in many regions. The updated list of existing and newly announc
> [!NOTE] > DPS is global and not bound to a location. However, you must specify a region in which the metadata associated with your DPS profile will reside.
-## Availability
+## High availability
There is a 99.9% Service Level Agreement for DPS, and you can [read the SLA](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole.
iot-dps How To Revoke Device Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-revoke-device-access-portal.md
Title: Disenroll device from Azure IoT Hub Device Provisioning Service
description: How to disenroll a device to prevent provisioning through Azure IoT Hub Device Provisioning Service (DPS) Previously updated : 04/05/2018 Last updated : 01/24/2022 - # How to disenroll a device from Azure IoT Hub Device Provisioning Service
-Proper management of device credentials is crucial for high-profile systems like IoT solutions. A best practice for such systems is to have a clear plan of how to revoke access for devices when their credentials, whether a shared access signatures (SAS) token or an X.509 certificate, might be compromised.
+Proper management of device credentials is crucial for high-profile systems like IoT solutions. A best practice for such systems is to have a clear plan of how to revoke access for devices when their credentials, whether a shared access signatures (SAS) token or an X.509 certificate, might be compromised.
-Enrollment in the Device Provisioning Service enables a device to be [provisioned](about-iot-dps.md#provisioning-process). A provisioned device is one that has been registered with IoT Hub, allowing it to receive its initial [device twin](~/articles/iot-hub/iot-hub-devguide-device-twins.md) state and begin reporting telemetry data. This article describes how to disenroll a device from your provisioning service instance, preventing it from being provisioned again in the future.
+Enrollment in the Device Provisioning Service enables a device to be [provisioned](about-iot-dps.md#provisioning-process). A provisioned device is one that has been registered with IoT Hub, allowing it to receive its initial [device twin](~/articles/iot-hub/iot-hub-devguide-device-twins.md) state and begin reporting telemetry data. This article describes how to disenroll a device from your provisioning service instance, preventing it from being provisioned again in the future. To learn how to deprovision a device that has already been provisioned to an IoT hub, see [Manage deprovisioning](how-to-unprovision-devices.md).
> [!NOTE] > Be aware of the retry policy of devices that you revoke access for. For example, a device that has an infinite retry policy might continuously try to register with the provisioning service. That situation consumes service resources and possibly affects performance. ## Disallow devices by using an individual enrollment entry
-Individual enrollments apply to a single device and can use X.509 certificates, TPM endorsement keys (in a real or virtual TPM), or SAS tokens as the attestation mechanism. To disallow a device that has an individual enrollment, you can either disable or delete its enrollment entry.
+Individual enrollments apply to a single device and can use X.509 certificates, TPM endorsement keys (in a real or virtual TPM), or SAS tokens as the attestation mechanism. To disallow a device that has an individual enrollment, you can either disable or delete its enrollment entry.
-To temporarily disallow the device by disabling its enrollment entry:
+To temporarily disallow the device by disabling its enrollment entry:
1. Sign in to the Azure portal and select **All resources** from the left menu. 2. In the list of resources, select the provisioning service that you want to disallow your device from. 3. In your provisioning service, select **Manage enrollments**, and then select the **Individual Enrollments** tab.
-4. Select the enrollment entry for the device that you want to disallow.
+4. Select the enrollment entry for the device that you want to disallow.
![Select your individual enrollment](./media/how-to-revoke-device-access-portal/select-individual-enrollment.png)
To permanently disallow the device by deleting its enrollment entry:
1. Sign in to the Azure portal and select **All resources** from the left menu. 2. In the list of resources, select the provisioning service that you want to disallow your device from. 3. In your provisioning service, select **Manage enrollments**, and then select the **Individual Enrollments** tab.
-4. Select the check box next to the enrollment entry for the device that you want to disallow.
-5. Select **Delete** at the top of the window, and then select **Yes** to confirm that you want to remove the enrollment.
+4. Select the check box next to the enrollment entry for the device that you want to disallow.
+5. Select **Delete** at the top of the window, and then select **Yes** to confirm that you want to remove the enrollment.
![Delete individual enrollment entry in the portal](./media/how-to-revoke-device-access-portal/delete-individual-enrollment.png) - After you finish the procedure, you should see your entry removed from the list of individual enrollments. ## Disallow an X.509 intermediate or root CA certificate by using an enrollment group
-X.509 certificates are typically arranged in a certificate chain of trust. If a certificate at any stage in a chain becomes compromised, trust is broken. The certificate must be disallowed to prevent Device Provisioning Service from provisioning devices downstream in any chain that contains that certificate. To learn more about X.509 certificates and how they are used with the provisioning service, see [X.509 certificates](./concepts-x509-attestation.md#x509-certificates).
+X.509 certificates are typically arranged in a certificate chain of trust. If a certificate at any stage in a chain becomes compromised, trust is broken. The certificate must be disallowed to prevent Device Provisioning Service from provisioning devices downstream in any chain that contains that certificate. To learn more about X.509 certificates and how they are used with the provisioning service, see [X.509 certificates](./concepts-x509-attestation.md#x509-certificates).
An enrollment group is an entry for devices that share a common attestation mechanism of X.509 certificates signed by the same intermediate or root CA. The enrollment group entry is configured with the X.509 certificate associated with the intermediate or root CA. The entry is also configured with any configuration values, such as twin state and IoT hub connection, that are shared by devices with that certificate in their certificate chain. To disallow the certificate, you can either disable or delete its enrollment group.
-To temporarily disallow the certificate by disabling its enrollment group:
+To temporarily disallow the certificate by disabling its enrollment group:
1. Sign in to the Azure portal and select **All resources** from the left menu. 2. In the list of resources, select the provisioning service that you want to disallow the signing certificate from.
To temporarily disallow the certificate by disabling its enrollment group:
![Disable enrollment group entry in the portal](./media/how-to-revoke-device-access-portal/disable-enrollment-group.png)
-
To permanently disallow the certificate by deleting its enrollment group: 1. Sign in to the Azure portal and select **All resources** from the left menu. 2. In the list of resources, select the provisioning service that you want to disallow your device from. 3. In your provisioning service, select **Manage enrollments**, and then select the **Enrollment Groups** tab.
-4. Select the check box next to the enrollment group for the certificate that you want to disallow.
-5. Select **Delete** at the top of the window, and then select **Yes** to confirm that you want to remove the enrollment group.
+4. Select the check box next to the enrollment group for the certificate that you want to disallow.
+5. Select **Delete** at the top of the window, and then select **Yes** to confirm that you want to remove the enrollment group.
![Delete enrollment group entry in the portal](./media/how-to-revoke-device-access-portal/delete-enrollment-group.png)
After you finish the procedure, you should see your entry removed from the list
## Disallow specific devices in an enrollment group
-Devices that implement the X.509 attestation mechanism use the device's certificate chain and private key to authenticate. When a device connects and authenticates with Device Provisioning Service, the service first looks for an individual enrollment that matches the device's credentials. The service then searches enrollment groups to determine whether the device can be provisioned. If the service finds a disabled individual enrollment for the device, it prevents the device from connecting. The service prevents the connection even if an enabled enrollment group for an intermediate or root CA in the device's certificate chain exists.
+Devices that implement the X.509 attestation mechanism use the device's certificate chain and private key to authenticate. When a device connects and authenticates with Device Provisioning Service, the service first looks for an individual enrollment with a registration ID that matches the common name (CN) of the device (end-entity) certificate. The service then searches enrollment groups to determine whether the device can be provisioned. If the service finds a disabled individual enrollment for the device, it prevents the device from connecting. The service prevents the connection even if an enabled enrollment group for an intermediate or root CA in the device's certificate chain exists.
To disallow an individual device in an enrollment group, follow these steps: 1. Sign in to the Azure portal and select **All resources** from the left menu. 2. From the list of resources, select the provisioning service that contains the enrollment group for the device that you want to disallow. 3. In your provisioning service, select **Manage enrollments**, and then select the **Individual Enrollments** tab.
-4. Select the **Add individual enrollment** button at the top.
-5. On the **Add Enrollment** page, select **X.509** as the attestation **Mechanism** for the device.
+4. Select the **Add individual enrollment** button at the top.
+5. Follow the appropriate step depending on whether you have the device (end-entity) certificate.
+
+ - If you have the device certificate, on the **Add Enrollment** page select:
+
+ **Mechanism**: X.509
+
+ **Primary .pem or .cer file**: Upload the device certificate. For the certificate, use the signed end-entity certificate installed on the device. The device uses the signed end-entity certificate for authentication.
+
+ **IoT Hub Device ID**: Leave this blank. For devices provisioned through X.509 enrollment groups, the device ID is set by the device certificate CN and is the same as the registration ID.
+
+ :::image type="content" source="./media/how-to-revoke-device-access-portal/add-enrollment-x509.png" alt-text="Screenshot of properties for the disallowed device in an X.509 enrollment entry.":::
+
+ - If you don't have the device certificate, on the **Add Enrollment** page select:
+
+ **Mechanism**: Symmetric Key
+
+ **Auto-generate keys**: Make sure this is selected. The keys don't matter for this scenario.
+
+ **Registration ID**: If the device has already been provisioned, use its IoT Hub device ID. You can find this in the registration records of the enrollment group, or in the IoT hub that the device was provisioned to. If the device has not yet been provisioned, enter the device certificate CN. (In this latter case, you don't need the device certificate, but you will need to know the CN.)
- Upload the device certificate, and enter the device ID of the device to be disallowed. For the certificate, use the signed end-entity certificate installed on the device. The device uses the signed end-entity certificate for authentication.
+ **IoT Hub Device ID**: Leave this blank. For devices provisioned through X.509 enrollment groups, the device ID is set by the device certificate CN and is the same as the registration ID.
- ![Set device properties for the disallowed device](./media/how-to-revoke-device-access-portal/disable-individual-enrollment-in-enrollment-group-1.png)
+ :::image type="content" source="./media/how-to-revoke-device-access-portal/add-enrollment-symmetric-key.png" alt-text="Screenshot of properties for the disallowed device in a symmetric key enrollment entry.":::
-6. Scroll to the bottom of the **Add Enrollment** page and select **Disable** on the **Enable entry** switch, and then select **Save**.
+6. Scroll to the bottom of the **Add Enrollment** page and select **Disable** on the **Enable entry** switch, and then select **Save**.
- [![Use disabled individual enrollment entry to disable device from group enrollment, in the portal](./media/how-to-revoke-device-access-portal/disable-individual-enrollment-in-enrollment-group.png)](./media/how-to-revoke-device-access-portal/disable-individual-enrollment-in-enrollment-group.png#lightbox)
+ :::image type="content" source="./media/how-to-revoke-device-access-portal/select-disable-on-indivdual-entry.png" alt-text="Screenshot of disabled individual enrollment entry to disable device from group enrollment in the portal.":::
-When you successfully create your enrollment, you should see your disabled device enrollment listed on the **Individual Enrollments** tab.
+When you successfully create your enrollment, you should see your disabled device enrollment listed on the **Individual Enrollments** tab.
## Next steps
iot-dps How To Unprovision Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-unprovision-devices.md
Title: Deprovision devices that were provisioned with Azure IoT Hub Device Provi
description: How to deprovision devices that have been provisioned with Azure IoT Hub Device Provisioning Service (DPS) Previously updated : 05/11/2018 Last updated : 01/24/2022
-# How to deprovision devices that were previously auto-provisioned
+# How to deprovision devices that were previously auto-provisioned
-You may find it necessary to deprovision devices that were previously auto-provisioned through the Device Provisioning Service. For example, a device may be sold or moved to a different IoT hub, or it may be lost, stolen, or otherwise compromised.
+You may find it necessary to deprovision devices that were previously auto-provisioned through the Device Provisioning Service. For example, a device may be sold or moved to a different IoT hub, or it may be lost, stolen, or otherwise compromised.
In general, deprovisioning a device involves two steps: 1. Disenroll the device from your provisioning service, to prevent future auto-provisioning. Depending on whether you want to revoke access temporarily or permanently, you may want to either disable or delete an enrollment entry. For devices that use X.509 attestation, you may want to disable/delete an entry in the hierarchy of your existing enrollment groups.
-
+ - To learn how to disenroll a device, see [How to disenroll a device from Azure IoT Hub Device Provisioning Service](how-to-revoke-device-access-portal.md). - To learn how to disenroll a device programmatically using one of the provisioning service SDKs, see [Manage device enrollments with service SDKs](./quick-enroll-device-x509.md).
In general, deprovisioning a device involves two steps:
The exact steps you take to deprovision a device depend on its attestation mechanism and its applicable enrollment entry with your provisioning service. The following sections provide an overview of the process, based on the enrollment and attestation type. ## Individual enrollments
-Devices that use TPM attestation or X.509 attestation with a leaf certificate are provisioned through an individual enrollment entry.
-To deprovision a device that has an individual enrollment:
+Devices that use TPM attestation or X.509 attestation with a leaf certificate are provisioned through an individual enrollment entry.
+
+To deprovision a device that has an individual enrollment:
1. Disenroll the device from your provisioning service:
- - For devices that use TPM attestation, delete the individual enrollment entry to permanently revoke the device's access to the provisioning service, or disable the entry to temporarily revoke its access.
+ - For devices that use TPM attestation, delete the individual enrollment entry to permanently revoke the device's access to the provisioning service, or disable the entry to temporarily revoke its access.
- For devices that use X.509 attestation, you can either delete or disable the entry. Be aware, though, if you delete an individual enrollment for a device that uses X.509 and an enabled enrollment group exists for a signing certificate in that device's certificate chain, the device can re-enroll. For such devices, it may be safer to disable the enrollment entry. Doing so prevents the device from re-enrolling, regardless of whether an enabled enrollment group exists for one of its signing certificates.
-2. Disable or delete the device in the identity registry of the IoT hub that it was provisioned to.
-
+2. Disable or delete the device in the identity registry of the IoT hub that it was provisioned to.
## Enrollment groups
-With X.509 attestation, devices can also be provisioned through an enrollment group. Enrollment groups are configured with a signing certificate, either an intermediate or root CA certificate, and control access to the provisioning service for devices with that certificate in their certificate chain. To learn more about enrollment groups and X.509 certificates with the provisioning service, see [X.509 certificate attestation](concepts-x509-attestation.md).
-To see a list of devices that have been provisioned through an enrollment group, you can view the enrollment group's details. This is an easy way to understand which IoT hub each device has been provisioned to. To view the device list:
+With X.509 attestation, devices can also be provisioned through an enrollment group. Enrollment groups are configured with a signing certificate, either an intermediate or root CA certificate, and control access to the provisioning service for devices with that certificate in their certificate chain. To learn more about enrollment groups and X.509 certificates with the provisioning service, see [X.509 certificate attestation](concepts-x509-attestation.md).
+
+To see a list of devices that have been provisioned through an enrollment group, you can view the enrollment group's details. This is an easy way to understand which IoT hub each device has been provisioned to. To view the device list:
1. Log in to the Azure portal and click **All resources** on the left-hand menu. 2. Click your provisioning service in the list of resources.
To see a list of devices that have been provisioned through an enrollment group,
With enrollment groups, there are two scenarios to consider: - To deprovision all of the devices that have been provisioned through an enrollment group:
- 1. Disable the enrollment group to disallow its signing certificate.
- 2. Use the list of provisioned devices for that enrollment group to disable or delete each device from the identity registry of its respective IoT hub.
- 3. After disabling or deleting all devices from their respective IoT hubs, you can optionally delete the enrollment group. Be aware, though, that, if you delete the enrollment group and there is an enabled enrollment group for a signing certificate higher up in the certificate chain of one or more of the devices, those devices can re-enroll.
+ 1. Disable the enrollment group to disallow its signing certificate.
+ 2. Use the list of provisioned devices for that enrollment group to disable or delete each device from the identity registry of its respective IoT hub.
+ 3. After disabling or deleting all devices from their respective IoT hubs, you can optionally delete the enrollment group. Be aware, though, that, if you delete the enrollment group and there is an enabled enrollment group for a signing certificate higher up in the certificate chain of one or more of the devices, those devices can re-enroll.
- To deprovision a single device from an enrollment group:
- 1. Create a disabled individual enrollment for its leaf (device) certificate. This revokes access to the provisioning service for that device while still permitting access for other devices that have the enrollment group's signing certificate in their chain. Do not delete the disabled individual enrollment for the device. Doing so will allow the device to re-enroll through the enrollment group.
+ 1. Create a disabled individual enrollment for the device.
+
+ - If you have the device (end-entity) certificate, you can create a disabled X.509 individual enrollment.
+ - If you don't have the device certificate, you can create a disabled symmetric key individual enrollment based on the device ID in the registration record for that device.
+
+ To learn more, see [Disallow specific devices in an enrollment group](how-to-revoke-device-access-portal.md#disallow-specific-devices-in-an-enrollment-group).
+
+ The presence of a disabled individual enrollment for a device revokes access to the provisioning service for that device while still permitting access for other devices that have the enrollment group's signing certificate in their chain. Do not delete the disabled individual enrollment for the device. Doing so will allow the device to re-enroll through the enrollment group.
+ 2. Use the list of provisioned devices for that enrollment group to find the IoT hub that the device was provisioned to and disable or delete it from that hub's identity registry.
iot-edge How To Provision Devices At Scale Linux On Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-x509.md
You can use either PowerShell or Windows Admin Center to provision your IoT Edge
For PowerShell, run the following command with the placeholder values updated with your own values: ```powershell
-Provision-EflowVm -provisioningType DpsX509 -scopeId PASTE_YOUR_ID_SCOPE_HERE -registrationId PASTE_YOUR_REGISTRATION_ID_HERE -identityCertPath PASTE_ABSOLUTE_PATH_TO_IDENTITY_CERTIFICATE_HERE -identityPrivateKey PASTE_ABSOLUTE_PATH_TO_IDENTITY_PRIVATE_KEY_HERE
+Provision-EflowVm -provisioningType DpsX509 -scopeId PASTE_YOUR_ID_SCOPE_HERE -registrationId PASTE_YOUR_REGISTRATION_ID_HERE -identityCertPath PASTE_ABSOLUTE_PATH_TO_IDENTITY_CERTIFICATE_HERE -identityPrivKeyPath PASTE_ABSOLUTE_PATH_TO_IDENTITY_PRIVATE_KEY_HERE
``` # [Windows Admin Center](#tab/windowsadmincenter)
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-sdks.md
The Azure IoT service SDKs contain code to facilitate building applications that
| Java | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) | | Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | [Reference](/javascript/api/azure-iothub/) | | Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-hub/samples) | [Reference](/python/api/azure-iot-hub) |
-| Node.js | [npm](https://www.npmjs.com/package/azure-iot-common) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples/javascript) | [Reference](/javascript/api/azure-iothub/) |
-
-Azure IoT Hub service SDK for iOS:
-
-* Install from [CocoaPod](https://cocoapods.org/pods/AzureIoTHubServiceClient)
-* [Samples](https://github.com/Azure-Samples/azure-iot-samples-ios)
## Microsoft Azure Provisioning SDKs
The **Microsoft Azure Provisioning SDKs** enable you to provision devices to you
| Platform | Package | Source code | Reference | | --|--|--|--| | .NET|[Device SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/), [Service SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
-| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) |
+| C|[Device SDK](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) |
| Java|[Maven](https://github.com/Azure/azure-iot-sdk-jav#for-the-service-sdk)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/main/provisioning)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) | | Node.js|[Device SDK](https://badge.fury.io/js/azure-iot-provisioning-device), [Service SDK](https://badge.fury.io/js/azure-iot-provisioning-service) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) | | Python|[Device SDK](https://pypi.org/project/azure-iot-device/), [Service SDK](https://pypi.org/project/azure-iothub-provisioningserviceclient/)|[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Device Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient), [Service Reference](/python/api/azure-mgmt-iothubprovisioningservices) |
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-event-grid.md
Previously updated : 02/20/2019 Last updated : 01/22/2022
For non-telemetry events like DeviceConnected, DeviceDisconnected, DeviceCreated
When you subscribe to telemetry events via Event Grid, IoT Hub creates a default message route to send data source type device messages to Event Grid. For more information about message routing, see [IoT Hub message routing](iot-hub-devguide-messages-d2c.md). This route will be visible in the portal under IoT Hub > Message Routing. Only one route to Event Grid is created regardless of the number of EG subscriptions created for telemetry events. So, if you need several subscriptions with different filters, you can use the OR operator in these queries on the same route. The creation and deletion of the route is controlled through subscription of telemetry events via Event Grid. You cannot create or delete a route to Event Grid using IoT Hub Message Routing.
-To filter messages before telemetry data is sent, you can update your [routing query](iot-hub-devguide-routing-query-syntax.md). Note that routing query can be applied to the message body only if the body is JSON. You must also set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](./iot-hub-devguide-routing-query-syntax.md#system-properties).
+To filter messages before telemetry data are sent, you can update your [routing query](iot-hub-devguide-routing-query-syntax.md). Note that routing query can be applied to the message body only if the body is JSON. You must also set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](./iot-hub-devguide-routing-query-syntax.md#system-properties).
## Limitations for device connected and device disconnected events
-To receive device connection state events, a device must call either the *device-to-cloud send telemetry* or a *cloud-to-device receive message* operation with IoT Hub. However, if a device uses AMQP protocol to connect with IoT Hub, we recommend the device to call *cloud-to-device receive message* operation, otherwise their connection state notifications may be delayed by few minutes. If your device connects with MQTT protocol, IoT Hub keeps the cloud-to-device link open. To open the cloud-to-device link for AMQP, call the [Receive Async API](/rest/api/iothub/device/receivedeviceboundnotification).
+Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
-The device-to-cloud link stays open as long as the device sends telemetry.
-
-If a device connects and disconnects frequently, IoT Hub doesn't send every single connection state, but publishes the current connection state taken at a periodic snapshot of 60sec. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state.
+IoT Hub does not report each individual device connect and disconnect. Instead, it publishes the current connection state taken at a periodic 60-second snapshot. Receiving the same connection state event with a different sequence number, or receiving a different connection state event, means that the device connection state changed during the 60-second window.
## Tips for consuming events
Applications that handle IoT Hub events should follow these suggested practices:
* Don't assume that all events you receive are the types that you expect. Always check the eventType before processing the message.
-* Messages can arrive out of order or after a delay. Use the etag field to understand if your information about objects is up-to-date for device created or device deleted events.
+* Messages can arrive out of order or after a delay. Use the etag field to understand if your information about objects is up to date for device created or device deleted events.
## Next steps
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`+
+## 2022-01-24
+
+### Azure Machine Learning SDK for Python v1.38.0
+ + **azureml-automl-core**
+ + Tabnet Regressor and Tabnet Classifier support in AutoML
+ + Saving the data transformer in parent run outputs so it can be reused to produce the same featurized dataset that was used during the experiment run
+ + Support for getting primary metrics for the Forecasting task in the get_primary_metrics API.
+ + Renamed the second optional parameter in v2 scoring scripts to GlobalParameters
+ + **azureml-automl-dnn-vision**
+ + Added the scoring metrics in the metrics UI
+ + **azureml-automl-runtime**
+ + Bug fix for cases where the algorithm name for NimbusML models could show up as an empty string, either in ML Studio or in the console output.
+ + **azureml-core**
+ + Added parameter blobfuse_enabled in azureml.core.webservice.aks.AksWebservice.deploy_configuration. When this parameter is true, models and scoring files will be downloaded with blobfuse instead of the blob storage API.
+ + **azureml-interpret**
+ + Updated azureml-interpret to interpret-community 0.24.0
+ + In azureml-interpret update scoring explainer to support latest version of lightgbm with sparse TreeExplainer
+ + Update azureml-interpret to interpret-community 0.23.*
+ + **azureml-pipeline-core**
+ + Added a note in PipelineData recommending the use of pipeline output datasets instead.
+ + **azureml-pipeline-steps**
+ + Added `environment_variables` to ParallelRunConfig. Runtime environment variables can be passed through this parameter and are set on the process where the user script runs.
+ + **azureml-train-automl-client**
+ + Tabnet Regressor and Tabnet Classifier support in AutoML
+ + **azureml-train-automl-runtime**
+ + Saving the data transformer in parent run outputs so it can be reused to produce the same featurized dataset that was used during the experiment run
+ + **azureml-train-core**
+ + Enable support for early termination for Bayesian Optimization in Hyperdrive
+ + Bayesian and GridParameterSampling objects can now pass on properties
++ ## 2021-12-13 ### Azure Machine Learning SDK for Python v1.37.0
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-automated-ml.md
You can also [test any existing automated ML model (preview)](how-to-configure-a
Feature engineering is the process of using domain knowledge of the data to create features that help ML algorithms learn better. In Azure Machine Learning, scaling and normalization techniques are applied to facilitate feature engineering. Collectively, these techniques and feature engineering are referred to as featurization.
-For automated machine learning experiments, featurization is applied automatically, but can also be customized based on your data. [Learn more about what featurization is included](how-to-configure-auto-features.md#featurization).
+For automated machine learning experiments, featurization is applied automatically, but can also be customized based on your data. [Learn more about what featurization is included](how-to-configure-auto-features.md#featurization) and how AutoML helps [prevent over-fitting and imbalanced data](concept-manage-ml-pitfalls.md) in your models.
> [!NOTE] > Automated machine learning featurization steps (feature normalization, handling missing data,
For automated machine learning experiments, featurization is applied automatical
> predictions, the same featurization steps applied during training are applied to > your input data automatically.
-### Automatic featurization (standard)
-
-In every automated machine learning experiment, your data is automatically scaled or normalized to help algorithms perform well. During model training, one of the following scaling or normalization techniques will be applied to each model. Learn how AutoML helps [prevent over-fitting and imbalanced data](concept-manage-ml-pitfalls.md) in your models.
-
-|Scaling&nbsp;&&nbsp;processing| Description |
-| - | - |
-| [StandardScaleWrapper](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) | Standardize features by removing the mean and scaling to unit variance |
-| [MinMaxScalar](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) | Transforms features by scaling each feature by that column's minimum and maximum |
-| [MaxAbsScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html#sklearn.preprocessing.MaxAbsScaler) |Scale each feature by its maximum absolute value |
-| [RobustScalar](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html) | Scales features by their quantile range |
-| [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) |Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space |
-| [TruncatedSVDWrapper](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html) |This transformer performs linear dimensionality reduction by means of truncated singular value decomposition (SVD). Contrary to PCA, this estimator does not center the data before computing the singular value decomposition, which means it can work with scipy.sparse matrices efficiently |
-| [SparseNormalizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Normalizer.html) | Each sample (that is, each row of the data matrix) with at least one non-zero component is rescaled independently of other samples so that its norm (l1 or l2) equals one |
- ### Customize featurization Additional feature engineering techniques such as, encoding and transforms are also available.
Deepen your expertise of SDK design patterns and class specifications with the [
> [!Note] > Automated machine learning capabilities are also available in other Microsoft solutions such as, [ML.NET](/dotnet/machine-learning/automl-overview),
-[HDInsight](../hdinsight/spark/apache-spark-run-machine-learning-automl.md), [Power BI](/power-bi/service-machine-learning-automated) and [SQL Server](https://cloudblogs.microsoft.com/sqlserver/2019/01/09/how-to-automate-machine-learning-on-sql-server-2019-big-data-clusters/)
+[HDInsight](../hdinsight/spark/apache-spark-run-machine-learning-automl.md), [Power BI](/power-bi/service-machine-learning-automated) and [SQL Server](https://cloudblogs.microsoft.com/sqlserver/2019/01/09/how-to-automate-machine-learning-on-sql-server-2019-big-data-clusters/)
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-data.md
To ensure you securely connect to your Azure storage service, Azure Machine Lear
### Virtual network
-Azure Machine Learning requires additional configuration steps to communicate with a storage account that is behind a firewall or within a virtual network. If your storage account is behind a firewall, you can [allow list the IP address via the Azure portal](../storage/common/storage-network-security.md#managing-ip-network-rules).
+Azure Machine Learning requires additional configuration steps to communicate with a storage account that is behind a firewall or within a virtual network. If your storage account is behind a firewall, you can [add your client's IP address to an allowlist](../storage/common/storage-network-security.md#managing-ip-network-rules) via the Azure portal.
-Azure Machine Learning can receive requests from clients outside of the virtual network. To ensure that the entity requesting data from the service is safe, [use a private endpoint with your workspace](how-to-configure-private-link.md).
+Azure Machine Learning can receive requests from clients outside of the virtual network. To ensure that the entity requesting data from the service is safe, and to enable data to be displayed in your workspace, [use a private endpoint with your workspace](how-to-configure-private-link.md).
-**For Python SDK users**, to access your data via your training script on a compute target, the compute target needs to be inside the same virtual network and subnet of the storage.
+**For Python SDK users**, to access your data via your training script on a compute target, the compute target needs to be inside the same virtual network and subnet as the storage. You can [use a compute cluster in the same virtual network](how-to-secure-training-vnet.md?tabs=azure-studio%2Cipaddress#compute-cluster) or [use a compute instance in the same virtual network](how-to-secure-training-vnet.md?tabs=azure-studio%2Cipaddress#compute-instance).
-**For Azure Machine Learning studio users**, several features rely on the ability to read data from a dataset; such as dataset previews, profiles and automated machine learning. For these features to work with storage behind virtual networks, use a [workspace managed identity in the studio](how-to-enable-studio-virtual-network.md) to allow Azure Machine Learning to access the storage account from outside the virtual network.
+**For Azure Machine Learning studio users**, several features rely on the ability to read data from a dataset, such as dataset previews, profiles, and automated machine learning. For these features to work with storage behind virtual networks, use a [workspace managed identity in the studio](how-to-enable-studio-virtual-network.md) to allow Azure Machine Learning to access the storage account from outside the virtual network.
> [!NOTE] > If your data storage is an Azure SQL Database behind a virtual network, be sure to set *Deny public access* to **No** via the [Azure portal](https://ms.portal.azure.com/) to allow Azure Machine Learning to access the storage account.
Azure Data Factory provides efficient and resilient data transfer with more than
* [Create an Azure machine learning dataset](how-to-create-register-datasets.md) * [Train a model](how-to-set-up-training-targets.md)
-* [Deploy a model](how-to-deploy-and-where.md)
+* [Deploy a model](how-to-deploy-and-where.md)
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-features.md
Previously updated : 10/21/2021 Last updated : 01/24/2022 # Data featurization in automated machine learning
The following table shows the accepted settings for `featurization` in the [Auto
## Automatic featurization
-The following table summarizes techniques that are automatically applied to your data. These techniques are applied for experiments that are configured by using the SDK or the studio. To disable this behavior, set `"featurization": 'off'` in your `AutoMLConfig` object.
+The following table summarizes techniques that are automatically applied to your data. These techniques are applied for experiments that are configured by using the SDK or the studio UI. To disable this behavior, set `"featurization": 'off'` in your `AutoMLConfig` object.
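As a minimal sketch of that setting (assuming `training_data` is an already-loaded `TabularDataset` and `"target"` is its label column, both placeholders):

```python
from azureml.train.automl import AutoMLConfig

# Hedged sketch: training_data and the label column name are placeholders.
automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="target",
    featurization="off",  # disable automatic featurization for this experiment
)
```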
> [!NOTE] > If you plan to export your AutoML-created models to an [ONNX model](concept-onnx.md), only the featurization options indicated with an asterisk ("*") are supported in the ONNX format. Learn more about [converting models to ONNX](how-to-use-automl-onnx-model-dotnet.md).
The following table summarizes techniques that are automatically applied to your
|**Word embeddings**|A text featurizer converts vectors of text tokens into sentence vectors by using a pre-trained model. Each word's embedding vector in a document is aggregated with the rest to produce a document feature vector.| |**Cluster Distance**|Trains a k-means clustering model on all numeric columns. Produces *k* new features (one new numeric feature per cluster) that contain the distance of each sample to the centroid of each cluster.|
+In every automated machine learning experiment, your data is automatically scaled or normalized to help algorithms perform well. During model training, one of the following scaling or normalization techniques is applied to each model.
+
+|Scaling&nbsp;&&nbsp;processing| Description |
+| - | - |
+| [StandardScaleWrapper](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) | Standardize features by removing the mean and scaling to unit variance |
+| [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) | Transforms features by scaling each feature by that column's minimum and maximum |
+| [MaxAbsScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html#sklearn.preprocessing.MaxAbsScaler) |Scale each feature by its maximum absolute value |
+| [RobustScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html) | Scales features by their quantile range |
+| [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) |Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space |
+| [TruncatedSVDWrapper](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html) |This transformer performs linear dimensionality reduction by means of truncated singular value decomposition (SVD). Contrary to PCA, this estimator does not center the data before computing the singular value decomposition, which means it can work with scipy.sparse matrices efficiently |
+| [SparseNormalizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Normalizer.html) | Each sample (that is, each row of the data matrix) with at least one non-zero component is rescaled independently of other samples so that its norm (l1 or l2) equals one |
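The wrappers in this table correspond to the linked scikit-learn estimators. As a rough, standalone illustration of what the standardization step does to each column (a scikit-learn sketch, not AutoML's internal code):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X_scaled = StandardScaler().fit_transform(X)  # remove the mean, scale to unit variance
print(X_scaled.mean(axis=0))  # approximately [0. 0.]
print(X_scaled.std(axis=0))   # approximately [1. 1.]
```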
+ ## Data guardrails *Data guardrails* help you identify potential issues with your data (for example, missing values or [class imbalance](concept-manage-ml-pitfalls.md#identify-models-with-imbalanced-data)). They also help you take corrective actions for improved results.
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-identity-based-data-access.md
adls2_dstore = Datastore.register_azure_data_lake_gen2(workspace=ws,
filesystem='tabular', account_name='myadls2') ```+ ### Azure SQL database+ For an Azure SQL database, use [register_azure_sql_database()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-sql-database-workspace--datastore-name--server-name--database-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--endpoint-none--overwrite-false--username-none--password-none--subscription-id-none--resource-group-none--grant-workspace-access-false-kwargs-) to register a datastore that connects to an Azure SQL database storage. The following code creates and registers the `credentialless_sqldb` datastore to the `ws` workspace and assigns it to the variable, `sqldb_dstore`. This datastore accesses the database `mydb` in the `myserver` SQL DB server. ```python
-# createn sqldatabase datastore without credentials
+# Create a sqldatabase datastore without credentials
sqldb_dstore = Datastore.register_azure_sql_database(workspace=ws, datastore_name='credentialless_sqldb',
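                                           server_name='myserver',   # hedged completion: the 'myserver' server named in the text above
                                           database_name='mydb')     # hedged completion: the 'mydb' database named in the text above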
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-create-secure-workspace.md
When Azure Container Registry is behind the virtual network, Azure Machine Learn
1. Update the workspace to use the compute cluster to build Docker images. Replace `docs-ml-rg` with your resource group. Replace `docs-ml-ws` with your workspace. Replace `cpu-cluster` with the compute cluster to use: ```azurecli-interactive
- az ml workspace update -g docs-ml-rg -w docs-ml-ws --image-build-compute cpu-cluster
+ az ml workspace update \
+ -g docs-ml-rg \
+ --name docs-ml-ws \
+ --image-build-compute cpu-cluster
``` > [!NOTE]
managed-instance-apache-cassandra Network Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/network-rules.md
Title: Required outbound network rules for Azure Managed Instance for Apache Cassandra description: Learn what are the required outbound network rules and FQDNs for Azure Managed Instance for Apache Cassandra-+ Last updated 11/02/2021-+
marketplace Anomaly Detection Service For Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/anomaly-detection-service-for-metered-billing.md
description: Describes how anomaly detection works, when notifications are sent
Previously updated : 11/22/2021 Last updated : 1/21/2022
If one of the following cases applies, you can adjust the usage amount in Partne
To submit a support ticket related to metered billing anomalies:
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) with your work account.
-1. On the Home page, select the **Help + support** tile.
-
- [ ![Illustrates the Help and Support tile on the Partner Center home page.](../media/workspaces/partner-center-help-support-tile.png) ](../media/workspaces/partner-center-help-support-tile.png#lightbox)
-
-1. Under **My support requests**, select **+ New request**.
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) and select the **Help** icon (?).
1. In the **Problem summary** box, enter **metered billing**.
+1. In the **Workspace list**, select **Marketplace Offers**.
1. In the **Problem type** box, select one of the following:
- - **Commercial Marketplace > Metered Billing > Wrong usage sent for Azure Applications offer**
- - **Commercial Marketplace > Metered Billing > Wrong usage sent for SaaS offer**
-1. Under **Next step**, select **Review solutions**.
-1. Review the recommended documents, if any or select **Provide issue details** to submit a support ticket.
+ - **Metered Billing > Wrong usage sent for Azure Applications offer**
+ - **Metered Billing > Wrong usage sent for SaaS offer**
+1. Select **Review solutions**.
+1. Review the recommended documents, if any, or select **Contact Support** to submit a support ticket.
For more publisher support options, see [Support for the commercial marketplace program in Partner Center](../support.md).
marketplace Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/support.md
Previously updated : 01/12/2022
+reviewer: kimnich
Last updated : 1/24/2022 # Support for the commercial marketplace program in Partner Center
Microsoft provides support for a wide variety of products and services. Finding
## Get help or open a support ticket
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) with your work account. If you have not yet done so, you will need to [create a Partner Center account](create-account.md).
-
-1. On the Home page, select the **Help + support** tile.
-
- [ ![Illustrates the Partner Center Home page with the Help + support tile highlighted.](./media/workspaces/partner-center-help-support-tile.png) ](./media/workspaces/partner-center-help-support-tile.png#lightbox)
-
-1. Under **My support requests**, select **+ New request**.
-
-1. In the **Problem summary** box, enter a brief description of the issue.
-
-1. In the **Problem type** box, do one of the following:
-
- - **Option 1**: Enter keywords such as: Marketplace, Azure app, SaaS offer, account management, lead management, deployment issue, payout, or co-sell offer migration. Then select a problem type from the recommended list that appears.
-
- - **Option 2**: Select **Browse topics** from the **Category** list and then select **Commercial Marketplace**. Then select the appropriate **Topic** and **Subtopic**.
-
-1. After you have found the topic of your choice, select **Review Solutions**.
-
- ![Next step](./media/support/next-step.png)
-
-The following options are shown:
-- To select a different topic, click **Select a different issue**.
-- To help solve the issue, review the recommended steps and documents, if available.
-
- [ ![Illustrates the Recommended solutions page.](./media/support/recommended-solutions.png) ](./media/support/recommended-solutions.png#lightbox)
-
-If you cannot find your answer in the self help, select **Provide issue details**. Complete all required fields to speed up the resolution process, then select **Submit**.
-
->[!Note]
->If you have not signed in to Partner Center, you may be required to sign in before you can create a ticket.
+Any Partner Center user can create a support request. To learn how, see [Get help and contact support](/partner-center/report-problems-with-partner-center).
## Track your existing support requests 1. To review your open and closed tickets, sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) with your work account.
+1. In the top menu, select the **Help** icon (?).
+1. In the side-panel that appears, select **View my support requests**.
-1. On the Home page, select the **Help + support** tile.
-
- [ ![Illustrates the Partner Center Home page with the Help + support tile highlighted.](./media/workspaces/partner-center-help-support-tile.png) ](./media/workspaces/partner-center-help-support-tile.png#lightbox)
-
-## Record issue details with a HAR file
-
-To help support agents troubleshoot your issue, consider attaching an HTTP Archive format (HAR) file to your support ticket. HAR files are logs of network requests in a web browser.
-
-> [!WARNING]
-> HAR files may record sensitive data about your Partner Center account.
-
-### Microsoft Edge and Google Chrome
-
-To generate a HAR file using **Microsoft Edge** or **Google Chrome**:
-
-1. Go to the web page where you're experiencing the issue.
-1. In the top right corner of the window, select the ellipsis icon, then **More tools** > **Developer tools**. You can press F12 as a shortcut.
-1. In the Developer tools pane, select the **Network** tab.
-1. Select **Stop recording network log** and **Clear** to remove existing logs. The record icon will turn grey.
-
- ![How to remove existing logs in Microsoft Edge or Google Chrome](media/support/chromium-stop-clear-session.png)
-
-1. Select **Record network log** to start recording. When you start recording, the record icon will turn red.
-
- ![How to start recording in Microsoft Edge or Google Chrome](media/support/chromium-start-session.png)
-
-1. Reproduce the issue you want to troubleshoot.
-1. After you've reproduced the issue, select **Stop recording network log**.
-1. Select **Export HAR**, marked with a downward-arrow icon, and save the file.
-
- ![How to export a HAR file in Microsoft Edge or Google Chrome](media/support/chromium-network-export-har.png)
-
-### Mozilla Firefox
-
-To generate a HAR file using **Mozilla Firefox**:
-
-1. Go to the web page where you're experiencing the issue.
-1. In the top right corner of the window, select the ellipsis icon, then **Web Developer** > **Toggle Tools**. You can press F12 as a shortcut.
-1. Select the **Network** tab, then select **Clear** to remove existing logs.
-
- ![How to remove existing logs in Mozilla Firefox](media/support/firefox-clear-session.png)
-
-1. Reproduce the issue you want to troubleshoot.
-1. After you've reproduced the issue, select **HAR Export/Import** > **Save All As HAR**.
-
- ![How to export a HAR file in Mozilla Firefox](media/support/firefox-network-export-har.png)
-
-### Apple Safari
-
-To generate a HAR file using **Safari**:
-
-1. Enable the developer tools in Safari: select **Safari** > **Preferences**. Go to the **Advanced** tab, then select **Show Develop menu in menu bar**.
-1. Go to the web page where you're experiencing the issue.
-1. Select **Develop**, then select **Show Web Inspector**.
-1. Select the **Network** tab, then select **Clear Network Items** to remove existing logs.
-
- ![How to remove existing logs in Safari](media/support/safari-clear-session.png)
-
-1. Reproduce the issue you want to troubleshoot.
-1. After youΓÇÖve reproduced the issue, select **Export** and save the file.
-
- ![How to export a HAR file in Safari](media/support/safari-network-export-har.png)
+Your support tickets are shown under **My support requests**.
## Additional resources
postgresql Howto Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/howto-configure-server-parameters-using-portal.md
To step through this how-to guide you need:
:::image type="content" source="./media/howto-configure-server-parameters-in-portal/7-reset-to-default-button.png" alt-text="Reset all to default"::: ## Working with time zone parameters
-If you plan to work with date and time data in PostgreSQL, you'll want to ensure that you've set the correct time zone for your location. All timezone-aware dates and times are stored internally in Postgres in UTC. They are converted to local time in the zone specified by the **TimeZone** server parameter before being displayed to the client. This parameter can be edited on **Server parameters** page as explained above.
+If you plan to work with date and time data in PostgreSQL, you'll want to ensure that you've set the correct time zone for your location. All timezone-aware dates and times are stored internally in PostgreSQL in UTC. They are converted to local time in the zone specified by the **TimeZone** server parameter before being displayed to the client. This parameter can be edited on the **Server parameters** page as explained above.
PostgreSQL allows you to specify time zones in three different forms: 1. A full time zone name, for example America/New_York. The recognized time zone names are listed in the [**pg_timezone_names**](https://www.postgresql.org/docs/9.2/view-pg-timezone-names.html) view. For example, to query this view in psql and get a list of time zone names:
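The psql query itself is truncated in this digest. A hedged equivalent in Python, using the `psycopg2` driver (connection details are placeholders, and `sslmode='require'` is an assumption for an Azure-hosted server):

```python
import psycopg2

# Placeholders throughout; substitute your server, user, and password.
conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="postgres",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)
with conn.cursor() as cur:
    cur.execute("SELECT name, utc_offset FROM pg_timezone_names LIMIT 5;")
    for name, utc_offset in cur.fetchall():
        print(name, utc_offset)
conn.close()
```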
PostgreSQL allows you to specify time zones in three different forms:
Learn about: - [Overview of server parameters in Azure Database for PostgreSQL](concepts-server-parameters.md) - [Configure Azure Database for PostgreSQL - Flexible Server parameters via CLI](howto-configure-server-parameters-using-cli.md)
-
+
private-link Tutorial Private Endpoint Sql Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/tutorial-private-endpoint-sql-portal.md
 Title: 'Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - Portal'
-description: Use this tutorial to learn how to create a Azure SQL server with a private endpoint using the Azure portal.
+description: Use this tutorial to learn how to create an Azure SQL server with a private endpoint using the Azure portal.
# Customer intent: As someone with a basic network background, but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a virtual network and bastion host. > * Create a virtual machine.
-> * Create a Azure SQL server and private endpoint.
+> * Create an Azure SQL server and private endpoint.
> * Test connectivity to the SQL server private endpoint. ## Prerequisites
In this section, you'll create a SQL server in Azure.
14. Select **Create**.
+> [!IMPORTANT]
+> When adding a Private endpoint connection, public routing to your Azure SQL logical server is not blocked by default. The **Deny public network access** setting under the **Firewalls and virtual networks** blade is left unchecked by default. To disable public network access, ensure this setting is checked.
+
+## Disable public access to Azure SQL logical server
+For this scenario, assume you would like to disable all public access to your Azure SQL logical server and only allow connections from your virtual network.
+
+1. Ensure your Private endpoint connection(s) are enabled and configured.
+2. Disable public access:
+ 1. Navigate to the **Firewalls and virtual networks** blade of your Azure SQL logical server.
+ 2. Select the checkbox for **Deny public network access**.
+
+ :::image type="content" source="./media/tutorial-private-endpoint-sql-portal/pec-deny-public-access.png" alt-text="Deny public network access option":::
+
+ 3. Select **Save** to apply the change.
+ ## Test connectivity to private endpoint In this section, you'll use the virtual machine you created in the previous step to connect to the SQL server across the private endpoint.
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/troubleshooting.md
na Previously updated : 01/07/2022 Last updated : 01/21/2022
This article answers some common questions about Azure role-based access control
Azure supports up to **2000** role assignments per subscription. This limit includes role assignments at the subscription, resource group, and resource scopes, but not at the management group scope. If you get the error message "No more role assignments can be created (code: RoleAssignmentLimitExceeded)" when you try to assign a role, try to reduce the number of role assignments in the subscription. > [!NOTE]
-> Starting November 2021, the role assignments limit for a subscription is being increased from **2000** to **4000** over the next several months. Subscriptions that are near the limit will be prioritized first. The limit for the remaining subscriptions will be increased over time.
+> Starting November 2021, the role assignments limit for a subscription is being increased from **2000** to **4000** over the next several months. Subscriptions that are near the limit will be prioritized first. The limit for the remaining subscriptions will be increased over time. Once the limit increase process starts for a subscription, it can still take multiple weeks for the limit to be increased.
If you are getting close to this limit, here are some ways that you can reduce the number of role assignments:
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-file-storage-integration.md
This article supplements [**Create an indexer**](search-howto-create-indexers.md
## Prerequisites
-+ [Azure Files](https://azure.microsoft.com/services/storage/files/), Transaction Optimized tier.
++ [Azure Files](../storage/files/storage-how-to-use-files-portal.md), Transaction Optimized tier. + An [SMB file share](../storage/files/files-smb-protocol.md) providing the source content. [NFS shares](../storage/files/files-nfs-protocol.md#support-for-azure-storage-features) are not supported.
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-overview.md
Previously updated : 07/15/2021 Last updated : 01/24/2022
This article describes the security features in Azure Cognitive Search that protect data and operations.
-## Network traffic patterns
+## Data flow and points of entry
-A search service is hosted on Azure and typically accessed over public network connections. Understanding the service's access patterns can help you design a security strategy that effectively deters unauthorized access to searchable content.
+A search service is hosted on Azure and is typically accessed by client applications using public network connections. Understanding the search service's points of entry and network traffic patterns is useful background for setting up development and production environments.
Cognitive Search has three basic network traffic patterns:
Cognitive Search has three basic network traffic patterns:
+ Outbound requests issued by the search service to other services on Azure and elsewhere + Internal service-to-service requests over the secure Microsoft backbone network
-Inbound requests range from creating objects, loading data, and querying. For inbound access to data and operations, you can implement a progression of security measures, starting with API keys on the request. You can then supplement with either inbound rules in an IP firewall, or create private endpoints that fully shield your service from the public internet.
+Inbound requests include creating objects, loading data, and querying. For inbound access to data and operations, you can implement a progression of security measures, starting with API keys on the request. You can then supplement with inbound rules in an IP firewall, or create private endpoints that fully shield your service from the public internet. You can also use Azure Active Directory and role-based access control for data plane operations (currently in preview).
Outbound requests can include both read and write operations. The primary agent of an outbound call is an indexer and constituent skillsets. For indexers, read operations include [document cracking](search-indexer-overview.md#document-cracking) and data ingestion. An indexer can also write to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions. Finally, a skillset can also include custom skills that run external code, for example in Azure Functions or in a web app.
Inbound security features protect the search service endpoint through increasing
Optionally, you can implement additional layers of control by setting firewall rules that limit access to specific IP addresses. For advanced protection, you can enable Azure Private Link to shield your service endpoint from all internet traffic.
-### Connect over the public internet
+### Inbound connection over the public internet
By default, a search service endpoint is accessed through the public cloud, using key-based authentication for admin or query access to the search service endpoint. Keys are required. Submission of a valid key is considered proof the request originates from a trusted entity. Key-based authentication is covered in the next section. Without API keys, you'll get 401 and 404 responses on the request.
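As a rough illustration of that key-based pattern, the following Python sketch sends a query with the `api-key` request header; the service name, index name, key, and API version are placeholders or assumptions, not values from this article:

```python
import requests

# Placeholders: substitute your search service, index, and query key.
url = ("https://<service-name>.search.windows.net/"
       "indexes/<index-name>/docs/search?api-version=2020-06-30")
headers = {"Content-Type": "application/json", "api-key": "<query-key>"}

response = requests.post(url, headers=headers, json={"search": "hotel"})
print(response.status_code)  # 401 or 404 if the key is missing or invalid
```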
-### Connect through IP firewalls
+### Inbound connection through IP firewalls
To further control access to your search service, you can create inbound firewall rules that allow access to a specific IP address or a range of IP addresses. All client connections must be made through an allowed IP address, or the connection is denied.
You can use the portal to [configure inbound access](service-configure-firewall.
Alternatively, you can use the management REST APIs. Starting with API version 2020-03-13, with the [IpRule](/rest/api/searchmanagement/2020-08-01/services/create-or-update#iprule) parameter, you can restrict access to your service by identifying IP addresses, individually or in a range, that you want to grant access to your search service.
-### Connect to a private endpoint (network isolation, no Internet traffic)
+### Inbound connection to a private endpoint (network isolation, no Internet traffic)
You can establish a [private endpoint](../private-link/private-endpoint-overview.md) for Azure Cognitive Search that allows a client on a [virtual network](../virtual-network/virtual-networks-overview.md) to securely access data in a search index over a [Private Link](../private-link/private-link-overview.md).
The private endpoint uses an IP address from the virtual network address space f
:::image type="content" source="media/search-security-overview/inbound-private-link-azure-cog-search.png" alt-text="sample architecture diagram for private endpoint access":::
-While this solution is the most secure, using additional services is an added cost so be sure you have a clear understanding of the benefits before diving in. or more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/). For more information about how these components work together, watch the video at the top of this article. Coverage of the private endpoint option starts at 5:48 into the video. For instructions on how to set up the endpoint, see [Create a Private Endpoint for Azure Cognitive Search](service-create-private-endpoint.md).
+While this solution is the most secure, using additional services is an added cost, so be sure you have a clear understanding of the benefits before diving in. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/). For more information about how these components work together, [watch this video](#watch-this-video). Coverage of the private endpoint option starts at 5:48 into the video. For instructions on how to set up the endpoint, see [Create a Private Endpoint for Azure Cognitive Search](service-create-private-endpoint.md).
### Outbound connections to external services
Indexers and skillsets are both objects that can make external connections. You'
+ Managed identity in the connection string
- You can set up a managed identity to make search a trusted service when accessing data from Azure Storage, Azure SQL, Cosmos DB, or other Azure data sources. A managed identity is a substitute for credentials or access keys on the connection. For more information about this capability, see [Connect to a data source using a managed identity](search-howto-managed-identities-data-sources.md).
+ You can [set up a managed identity](search-howto-managed-identities-data-sources.md) to make search a trusted service when accessing data from Azure Storage, Azure SQL, Cosmos DB, or other Azure data sources. A managed identity is a substitute for credentials or access keys on the connection.
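As a rough illustration of that substitution, a data source connection can pass a `ResourceId`-style connection string instead of an account key. This sketch uses the `azure-search-documents` Python SDK; the endpoint, key, and resource paths are placeholders, not values from this article:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

client = SearchIndexerClient("https://<service-name>.search.windows.net",
                             AzureKeyCredential("<admin-key>"))

# The ResourceId connection string tells the indexer to authenticate with the
# search service's managed identity rather than an account key.
data_source = SearchIndexerDataSourceConnection(
    name="blob-datasource-msi",
    type="azureblob",
    connection_string=("ResourceId=/subscriptions/<sub-id>/resourceGroups/<rg>/"
                       "providers/Microsoft.Storage/storageAccounts/<account>;"),
    container=SearchIndexerDataContainer(name="<container-name>"),
)
client.create_or_update_data_source_connection(data_source)
```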
## Authentication
-For inbound requests to the search service, authentication is through an [API key](search-security-api-keys.md) (a string composed of randomly generated numbers and letters) that proves the request is from a trustworthy source. Alternatively, there is new support for Azure Active Directory authentication and role-based authorization, [currently in preview](search-security-rbac.md).
+For inbound requests to the search service, authentication is on the request (not the calling app or user) through an [API key](search-security-api-keys.md), where the key is a string composed of randomly generated numbers and letters that proves the request is from a trustworthy source.
-Outbound requests made by an indexer are subject to authentication by the external service. The indexer subservice in Cognitive Search can be made a trusted service on Azure, connecting to other services using a managed identity. For more information, see [Set up an indexer connection to a data source using a managed identity](search-howto-managed-identities-data-sources.md).
+Alternatively, there is new support for Azure Active Directory authentication and role-based authorization, [currently in preview](search-security-rbac.md), that establishes the caller (and not the request) as the authenticated identity.
+
+Outbound requests made by an indexer are subject to the authentication protocols supported by the external service. The indexer subservice in Cognitive Search can be made a trusted service on Azure, connecting to other services using a managed identity. For more information, see [Set up an indexer connection to a data source using a managed identity](search-howto-managed-identities-data-sources.md).
## Authorization
security Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/network-overview.md
Learn more:
## Azure DDoS protection Distributed denial of service (DDoS) attacks are some of the largest availability and security concerns facing customers that are moving their applications to the cloud. A DDoS attack attempts to exhaust an application's resources, making the application unavailable to legitimate users. DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet.
-Microsoft provides DDoS protection known as **Basic** as part of the Azure Platform. This comes at no charge and includes always on monitoring and real-time mitigation of common network level attacks. In addition to the protections included with DDoS protection **Basic** you can enable the **Standard** option. DDoS Protection Standard features include:
+
+DDoS Protection Standard features include:
* **Native platform integration:** Natively integrated into Azure. Includes configuration through the Azure portal. DDoS Protection Standard understands your resources and resource configuration. * **Turn-key protection:** Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Protection Standard is enabled. No intervention or user definition is required. DDoS Protection Standard instantly and automatically mitigates the attack, once it is detected.
Learn more:
Azure Front Door Service enables you to define, manage, and monitor the global routing of your web traffic. It optimizes your traffic's routing for best performance and high availability. Azure Front Door allows you to author custom web application firewall (WAF) rules for access control to protect your HTTP/HTTPS workload from exploitation based on client IP addresses, country code, and http parameters. Additionally, Front Door enables you to create rate limiting rules to battle malicious bot traffic; it includes TLS offloading and per-HTTP/HTTPS request, application-layer processing.
-Front Door platform itself is protected by Azure DDoS Protection Basic. For further protection, Azure DDoS Protection Standard may be enabled at your VNETs and safeguard resources from network layer (TCP/UDP) attacks via auto tuning and mitigation. Front Door is a layer 7 reverse proxy, it only allows web traffic to pass through to back end servers and block other types of traffic by default.
+Front Door platform itself is protected by Azure infrastructure-level DDoS protection. For further protection, Azure DDoS Protection Standard may be enabled at your VNETs and safeguard resources from network layer (TCP/UDP) attacks via auto tuning and mitigation. Front Door is a layer 7 reverse proxy; it only allows web traffic to pass through to back-end servers and blocks other types of traffic by default.
Learn more:
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-syslog.md
Having already set up [data collection from your CEF sources](connect-common-eve
1. You must run the following command on those machines to disable the synchronization of the agent with the Syslog configuration in Microsoft Sentinel. This ensures that the configuration change you made in the previous step does not get overwritten. ```c
- sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable
+ sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'
``` ## Configure your device's logging settings
sentinel Kusto Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/kusto-overview.md
Microsoft Sentinel is built on top of the Azure Monitor service and it uses Azur
- data created by Microsoft Sentinel itself, resulting from the analyses it creates and performs - for example, alerts, incidents, and UEBA-related information. - data uploaded to Microsoft Sentinel to assist with detection and analysis - for example, threat intelligence feeds and watchlists.
-[Kusto Query Language](/data-explorer/kusto/query/) was developed as part of the [Azure Data Explorer](/data-explorer/) service, and it's therefore optimized for searching through big-data stores in a cloud environment. Inspired by famed undersea explorer Jacques Cousteau (and pronounced accordingly "koo-STOH"), it's designed to help you dive deep into your oceans of data and explore their hidden treasures.
+[Kusto Query Language](/azure/data-explorer/kusto/query/) was developed as part of the [Azure Data Explorer](/azure/data-explorer/) service, and it's therefore optimized for searching through big-data stores in a cloud environment. Inspired by famed undersea explorer Jacques Cousteau (and pronounced accordingly "koo-STOH"), it's designed to help you dive deep into your oceans of data and explore their hidden treasures.
Kusto Query Language is also used in Azure Monitor (and therefore in Microsoft Sentinel), including some additional Azure Monitor features, to retrieve, visualize, analyze, and parse data in Log Analytics data stores. In Microsoft Sentinel, you're using tools based on Kusto Query Language whenever you're visualizing and analyzing data and hunting for threats, whether in existing rules and workbooks, or in building your own.
Because Kusto Query Language is a part of nearly everything you do in Microsoft
## What is a query?
-A Kusto Query Language query is a read-only request to process data and return results – it doesn't write any data. Queries operate on data that's organized into a hierarchy of [databases](/data-explorer/kusto/query/schema-entities/databases), [tables](/data-explorer/kusto/query/schema-entities/tables), and [columns](/data-explorer/kusto/query/schema-entities/columns), similar to SQL.
+A Kusto Query Language query is a read-only request to process data and return results – it doesn't write any data. Queries operate on data that's organized into a hierarchy of [databases](/azure/data-explorer/kusto/query/schema-entities/databases), [tables](/azure/data-explorer/kusto/query/schema-entities/tables), and [columns](/azure/data-explorer/kusto/query/schema-entities/columns), similar to SQL.
Requests are stated in plain language and use a data-flow model designed to make the syntax easy to read, write, and automate. We'll see this in detail.
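As a hedged illustration of such a read-only request issued from Python with the `azure-monitor-query` SDK (the workspace ID is a placeholder, and the `SecurityEvent` filter is only an assumed example table):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="SecurityEvent | where EventID == 4625 | take 10",  # read-only: returns rows, writes nothing
    timespan=timedelta(days=1),
)
for table in response.tables:
    print(len(table.rows), "rows returned")
```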
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
> > You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
-## January 2021
+## January 2022
- [SentinelHealth data table (Public preview)](#sentinelhealth-data-table-public-preview) - [More workspaces supported for Multiple Workspace View](#more-workspaces-supported-for-multiple-workspace-view)
service-bus-messaging Advanced Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/advanced-features-overview.md
Title: Azure Service Bus messaging - advanced features description: This article provides a high-level overview of advanced features in Azure Service Bus. Previously updated : 06/11/2021 Last updated : 01/24/2022 # Azure Service Bus - advanced features
You can submit messages to a queue or a topic for delayed processing, setting a
## Message deferral A queue or subscription client can defer retrieval of a received message until a later time. The message may have been posted out of an expected order and the client wants to wait until it receives another message. Deferred messages remain in the queue or subscription and must be reactivated explicitly using their service-assigned sequence number. For more information, see [Message deferral](message-deferral.md).
-## Batching
-Client-side batching enables a queue or topic client to accumulate a set of messages and transfer them together. It's often done to either save bandwidth or to increase throughput. For more information, see [Client-side batching](service-bus-performance-improvements.md#client-side-batching).
- ## Transactions A transaction groups two or more operations together into an execution scope. Service Bus allows you to group operations against multiple messaging entities within the scope of a single transaction. A message entity can be a queue, topic, or subscription. For more information, see [Overview of Service Bus transaction processing](service-bus-transactions.md).
service-bus-messaging Service Bus Messaging Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-messaging-overview.md
Title: Azure Service Bus messaging overview | Microsoft Docs description: This article provides a high-level overview of Azure Service Bus, a fully managed enterprise integration message broker. It also explains concepts such as namespaces, queues, and topics in Service Bus. Previously updated : 11/11/2021 Last updated : 01/24/2022
You can submit messages to a queue or topic [for delayed processing](message-seq
When a queue or subscription client receives a message that it's willing to process, but for which processing isn't currently possible because of special circumstances within the application, the entity can [defer retrieval of the message](message-deferral.md) to a later point. The message remains in the queue or subscription, but it's set aside.
-### Batching
-
-[Client-side batching](service-bus-performance-improvements.md#client-side-batching) enables a queue or topic client to delay sending a message for a certain period of time. If the client sends more messages during this time period, it transmits the messages in a single batch.
- ### Transactions A [transaction](service-bus-transactions.md) groups two or more operations together into an execution scope. Service Bus supports grouping operations against a single messaging entity (queue, topic, subscription) within the scope of a transaction.
To get started using Service Bus messaging, see the following articles:
- [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md) - Quickstarts: [.NET](service-bus-dotnet-get-started-with-queues.md), [Java](service-bus-java-how-to-use-queues.md), or [JMS](service-bus-java-how-to-use-jms-api-amqp.md). - [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/). -- [Premium Messaging](service-bus-premium-messaging.md).
+- [Premium Messaging](service-bus-premium-messaging.md).
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-performance-improvements.md
Title: Best practices for improving performance using Azure Service Bus description: Describes how to use Service Bus to optimize performance when exchanging brokered messages. Previously updated : 08/30/2021 Last updated : 01/24/2022 ms.devlang: csharp
When setting the receive mode to `ReceiveAndDelete`, both steps are combined in
Service Bus doesn't support transactions for receive-and-delete operations. Also, peek-lock semantics are required for any scenarios in which the client wants to defer or [dead-letter](service-bus-dead-letter-queues.md) a message.
-## Client-side batching
-
-Client-side batching enables a queue or topic client to delay the sending of a message for a certain period of time. If the client sends additional messages during this time period, it transmits the messages in a single batch. Client-side batching also causes a queue or subscription client to batch multiple **Complete** requests into a single request. Batching is only available for asynchronous **Send** and **Complete** operations. Synchronous operations are immediately sent to the Service Bus service. Batching doesn't occur for peek or receive operations, nor does batching occur across clients.
-
-# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
-Batching functionality for the .NET Standard SDK doesn't yet expose a property to manipulate.
-
-# [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk)
-
-Batching functionality for the .NET Standard SDK doesn't yet expose a property to manipulate.
-
-# [WindowsAzure.ServiceBus SDK](#tab/net-framework-sdk)
-
-By default, a client uses a batch interval of 20 ms. You can change the batch interval by setting the [BatchFlushInterval][BatchFlushInterval] property before creating the messaging factory. This setting affects all clients that are created by this factory.
-
-To disable batching, set the [BatchFlushInterval][BatchFlushInterval] property to **TimeSpan.Zero**. For example:
-
-```csharp
-var settings = new MessagingFactorySettings
-{
- NetMessagingTransportSettings =
- {
- BatchFlushInterval = TimeSpan.Zero
- }
-};
-var factory = MessagingFactory.Create(namespaceUri, settings);
-```
-
-Batching doesn't affect the number of billable messaging operations, and is available only for the Service Bus client protocol using the [Microsoft.ServiceBus.Messaging](https://www.nuget.org/packages/WindowsAzure.ServiceBus/) library. The HTTP protocol doesn't support batching.
-
-> [!NOTE]
-> Setting `BatchFlushInterval` ensures that the batching is implicit from the application's perspective. i.e.; the application makes `SendAsync` and `CompleteAsync` calls and doesn't make specific Batch calls.
->
-> Explicit client side batching can be implemented by utilizing the below method call:
-> ```csharp
-> Task SendBatchAsync(IEnumerable<BrokeredMessage> messages);
-> ```
-> Here the combined size of the messages must be less than the maximum size supported by the pricing tier.
--- ## Batching store access To increase the throughput of a queue, topic, or subscription, Service Bus batches multiple messages when it writes to its internal store.
site-recovery Azure To Azure Enable Global Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-enable-global-disaster-recovery.md
Title: Enable disaster recovery across Azure regions across the globe description: This article describes the global disaster recovery feature in Azure Site Recovery.- Last updated 08/09/2021-
spring-cloud How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-config-server.md
There are some restrictions when you use Config Server with a Git back end. Some
```yaml eureka.client.service-url.defaultZone eureka.client.tls.keystore
+eureka.instance.preferIpAddress
+eureka.instance.instance-id
server.port spring.cloud.config.tls.keystore spring.application.name
static-web-apps Enterprise Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/enterprise-edge.md
A manual setup gives you full control over the CDN configuration including the c
* [Custom domain](./custom-domain.md) configured for your static web app with a time to live (TTL) set to less than 48 hrs. * An application deployed with [Azure Static Web Apps](./get-started-portal.md) that uses the Standard hosting plan.
-* The subscription has been re-registered for Microsoft.CDN Resource Provider.
# [Azure portal](#tab/azure-portal)
-1. Navigate to your subscription in the Azure portal.
-
-1. Select **Resource providers** in the left menu.
-
-1. Click on **Microsoft.CDN** out of the list of resource providers.
-
-1. Click **Register** or **Reregister**.
- 1. Navigate to your static web app in the Azure portal. 1. Select **Enterprise-grade edge** in the left menu.
A manual setup gives you full control over the CDN configuration including the c
# [Azure CLI](#tab/azure-cli) ```azurecli
-az provider register --namespace 'Microsoft.CDN' --wait
az extension add -n enterprise-edge
az staticwebapp enterprise-edge enable -n my-static-webapp -g my-resource-group
## Next steps > [!div class="nextstepaction"]
-> [Application configuration](configuration.md)
+> [Application configuration](configuration.md)
storage Storage Blobs Static Site Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blobs-static-site-github-actions.md
Previously updated : 11/19/2021 Last updated : 01/24/2022
Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a
An Azure subscription and GitHub account. - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A GitHub repository with your static website code. If you don't have a GitHub account, [sign up for free](https://github.com/join).
+- A GitHub repository with your static website code. If you do not have a GitHub account, [sign up for free](https://github.com/join).
- A working static website hosted in Azure Storage. Learn how to [host a static website in Azure Storage](storage-blob-static-website-how-to.md). To follow this example, you should also deploy [Azure CDN](static-website-content-delivery-network.md). > [!NOTE]
An Azure subscription and GitHub account.
## Generate deployment credentials +
+# [Service principal](#tab/userlevel)
+ You can create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az_ad_sp_create_for_rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button. Replace the placeholder `myStaticSite` with the name of your site hosted in Azure Storage.
Replace the placeholder `myStaticSite` with the name of your site hosted in Azur
az ad sp create-for-rbac --name {myStaticSite} --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} --sdk-auth ```
-In the example above, replace the placeholders with your subscription ID and resource group name. The output is a JSON object with the role assignment credentials that provide access to your storage account similar to below. Copy this JSON object for later.
+In the example, replace the placeholders with your subscription ID and resource group name. The output is a JSON object with the role assignment credentials that provide access to your storage account. Copy this JSON object for later.
```output {
In the example above, replace the placeholders with your subscription ID and res
> [!IMPORTANT] > It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific resource group and not the entire subscription.
+# [OpenID Connect](#tab/openid)
+
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
+
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
+
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
+
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+
+ You will use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
+
+1. Create a service principal. Replace `$appId` with the `appId` value from your JSON output.
+
+    This command generates JSON output with a different `objectId`, which is used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
+
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+
+ ```azurecli-interactive
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+ ```
+
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
+
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+    * For jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
+To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
+++ ## Configure the GitHub secret
+# [Service principal](#tab/userlevel)
+ 1. In [GitHub](https://github.com/), browse your repository. 1. Select **Settings > Secrets > New secret**.
In the example above, replace the placeholders with your subscription ID and res
creds: ${{ secrets.AZURE_CREDENTIALS }} ```
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
++++ ## Add your workflow
+# [Service principal](#tab/userlevel)
++ 1. Go to **Actions** for your GitHub repository. :::image type="content" source="media/storage-blob-static-website/storage-blob-github-actions-header.png" alt-text="GitHub actions menu item":::
In the example above, replace the placeholders with your subscription ID and res
branches: [ main ] ```
-1. Rename your workflow `Blob storage website CI` and add the checkout and login actions. These actions will checkout your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
+1. Rename your workflow `Blob storage website CI` and add the checkout and login actions. These actions will check out your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
```yaml name: Blob storage website CI
In the example above, replace the placeholders with your subscription ID and res
if: always() ```
+# [OpenID Connect](#tab/openid)
+
+1. Go to **Actions** for your GitHub repository.
+
+ :::image type="content" source="media/storage-blob-static-website/storage-blob-github-actions-header.png" alt-text="GitHub actions menu item":::
+
+1. Select **Set up your workflow yourself**.
+
+1. Delete everything after the `on:` section of your workflow file. For example, your remaining workflow may look like this.
+
+ ```yaml
+ name: CI with OpenID Connect
+
+ on:
+ push:
+ branches: [ main ]
+ ```
+
+1. Add a permissions section.
++
+ ```yaml
+ name: CI with OpenID Connect
+
+ on:
+ push:
+ branches: [ main ]
+
+ permissions:
+ id-token: write
+ contents: read
+ ```
+
+1. Add checkout and login actions. These actions will check out your site code and authenticate with Azure using the GitHub secrets you created earlier.
+
+ ```yaml
+ name: CI with OpenID Connect
+
+ on:
+ push:
+ branches: [ main ]
+
+ permissions:
+ id-token: write
+ contents: read
+
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v2
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ ```
+
+1. Use the Azure CLI action to upload your code to blob storage and to purge your CDN endpoint. For `az storage blob upload-batch`, replace the placeholder with your storage account name. The script will upload to the `$web` container. For `az cdn endpoint purge`, replace the placeholders with your CDN profile name, CDN endpoint name, and resource group. To speed up your CDN purge, you can add the `--no-wait` option to `az cdn endpoint purge`. To enhance security, you can also add the `--account-key` option with your [storage account key](../common/storage-account-keys-manage.md).
+
+ ```yaml
+ - name: Upload to blob storage
+ uses: azure/CLI@v1
+ with:
+ inlineScript: |
+ az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> --auth-mode key -d '$web' -s .
+ - name: Purge CDN endpoint
+ uses: azure/CLI@v1
+ with:
+ inlineScript: |
+ az cdn endpoint purge --content-paths "/*" --profile-name "CDN_PROFILE_NAME" --name "CDN_ENDPOINT" --resource-group "RESOURCE_GROUP"
+ ```
+
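+   If you use the optional flags mentioned above, the two steps might look like the following sketch; the storage account key placeholder is an assumption (storing the key as a GitHub secret is preferable to hard-coding it in the workflow):
+
+   ```yaml
+   - name: Upload to blob storage
+     uses: azure/CLI@v1
+     with:
+       inlineScript: |
+         az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> --account-key <STORAGE_ACCOUNT_KEY> -d '$web' -s .
+   - name: Purge CDN endpoint
+     uses: azure/CLI@v1
+     with:
+       inlineScript: |
+         az cdn endpoint purge --no-wait --content-paths "/*" --profile-name "CDN_PROFILE_NAME" --name "CDN_ENDPOINT" --resource-group "RESOURCE_GROUP"
+   ```
+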
+1. Complete your workflow by adding an action to log out of Azure. Here is the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
+
+ ```yaml
+ name: CI with OpenID Connect
+
+ on:
+ push:
+ branches: [ main ]
+
+ permissions:
+ id-token: write
+ contents: read
+
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v2
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ - name: Upload to blob storage
+ uses: azure/CLI@v1
+ with:
+ inlineScript: |
+ az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> --auth-mode key -d '$web' -s .
+ - name: Purge CDN endpoint
+ uses: azure/CLI@v1
+ with:
+ inlineScript: |
+ az cdn endpoint purge --content-paths "/*" --profile-name "CDN_PROFILE_NAME" --name "CDN_ENDPOINT" --resource-group "RESOURCE_GROUP"
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+ if: always()
+ ```
++ ## Review your deployment 1. Go to **Actions** for your GitHub repository.
synapse-analytics Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/metadata/database.md
Title: Shared database
-description: Azure Synapse Analytics provides a shared metadata model where creating a database in serverless Apache Spark pool will make it accessible from its serverless SQL pool and SQL pool engines.
+description: Azure Synapse Analytics provides a shared metadata model where creating a Lake database in an Apache Spark pool will make it accessible from its serverless SQL pool engine.
-# Azure Synapse Analytics shared database
+# Azure Synapse Analytics shared Lake database
-Azure Synapse Analytics allows the different computational workspace engines to share databases and tables. Currently, the databases and the tables (Parquet or CSV backed) that are created on the Apache Spark pools are automatically shared with the serverless SQL pool engine.
+Azure Synapse Analytics allows the different computational workspace engines to share [Lake databases](../database-designer/concepts-lake-database.md) and tables. Currently, the Lake databases and the tables (Parquet or CSV backed) that are created from the Apache Spark pools, [Database templates](../database-designer/concepts-database-templates.md), or Dataverse are automatically shared with the serverless SQL pool engine.
-A database created with a Spark job will become visible with that same name to all current and future Spark pools in the workspace, including the serverless SQL pool engine. You cannot add custom objects (external tables, views, procedures) directly in this synchronized database using the serverless SQL pool.
+A Lake database will become visible with that same name to all current and future Spark pools in the workspace, including the serverless SQL pool engine. You cannot add custom SQL objects (external tables, views, procedures, functions, schemas, users) directly in a Lake database using the serverless SQL pool.
-The Spark default database, called `default`, will also be visible in the serverless SQL pool context as a database called `default`.
-You can't create a database in Spark and then create another database with the same name in serverless SQL pool.
+The Spark default database, called `default`, will also be visible in the serverless SQL pool context as a Lake database called `default`.
+You can't create a Lake database and then create another database with the same name in the serverless SQL pool.
-Since the databases are synchronized to serverless SQL pool asynchronously, there will be a delay until they appear.
+The Lake databases are created in the serverless SQL pool asynchronously. There will be a delay until they appear.
-## Manage a Spark created database
+## Manage Lake database
-To manage Spark created databases, you need to use Apache Spark pools. For example, create or delete it through a Spark pool job.
+To manage Spark-created Lake databases, you can use Apache Spark pools or the [Database designer](../database-designer/create-empty-lake-database.md). For example, create or delete a Lake database through a Spark pool job.
-Objects in synchronized databases cannot be modified from serverless SQL pool.
+Objects in the Lake databases cannot be modified from a serverless SQL pool. Use [Database designer](../database-designer/modify-lake-database.md) or Apache Spark pools to modify the Lake databases.
>[!NOTE]
->You cannot create multiple databases with the same name from different pools. If a serverless SQL pool database is created, you won't be able to create a Spark database with the same name. Respectively, if database is created in Spark, you won't be able to create a serverless SQL pool database with the same name.
+>You cannot create multiple databases with the same name from different pools. If a SQL database in the serverless SQL pool is created, you won't be able to create a Lake database with the same name. Likewise, if you create a Lake database, you won't be able to create a serverless SQL pool database with the same name.
## Security model
-The Spark databases and tables, along with their synchronized representations in the SQL engine will be secured at the underlying storage level.
+The Lake databases and tables will be secured at the underlying storage level.
-The security principal who creates a database is considered the owner of that database, and has all the rights to the database and its objects. `Synapse Administrator` and `Synapse SQL Administrator` will also have all the permissions on synchronized objects in serverless SQL pool by default. Creating custom objects (including users) in synchronized SQL databases is not allowed.
+The security principal who creates a database is considered the owner of that database, and has all the rights to the database and its objects. `Synapse Administrator` and `Synapse SQL Administrator` will also have all the permissions on synchronized objects in a serverless SQL pool by default. Creating custom objects (including users) in synchronized SQL databases is not allowed.
-To give a security principal, such as a user, Azure AD app or a security group, access to the underlying data used for external tables, you need to give them `read (R)` permissions on files (such as the table's underlying data files) and `execute (X)` on folder where the files are stored + on every parent folder up to the root. You can read more about these permissions on [Access control lists(ACLs)](../../storage/blobs/data-lake-storage-access-control.md) page.
+To give a security principal, such as a user, Azure AD app, or a security group, access to the underlying data used for external tables, you need to give them `read (R)` permissions on files (such as the table's underlying data files) and `execute (X)` permissions on the folder where the files are stored, and on every parent folder up to the root. You can read more about these permissions on the [Access control lists (ACLs)](../../storage/blobs/data-lake-storage-access-control.md) page.
For example, in `https://<storage-name>.dfs.core.windows.net/<fs>/synapse/workspaces/<synapse_ws>/warehouse/mytestdb.db/myparquettable/`, security principals need `X` permissions on all the folders from `<fs>` down to `myparquettable`, and `R` permissions on `myparquettable` and the files inside that folder, to be able to read a table in a database (synchronized or original).
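
The required ACLs can be granted with the Azure CLI. The following is a minimal sketch, assuming hypothetical placeholder values and a caller with permission to change ACLs; note that `az storage fs access set` replaces the ACL entries on the target path, so merge with any existing entries as needed:

```azurecli
# A sketch (placeholder values): grant execute (X) on a parent folder;
# repeat for each parent folder up to the container root.
az storage fs access set --acl "user:<principal-object-id>:--x" \
    -p "synapse" -f "<fs>" --account-name "<storage-name>"
# ...and read+execute (r-x) recursively on the table folder and its files.
az storage fs access update-recursive --acl "user:<principal-object-id>:r-x" \
    -p "synapse/workspaces/<synapse_ws>/warehouse/mytestdb.db/myparquettable" \
    -f "<fs>" --account-name "<storage-name>"
```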
-If a security principal requires the ability to create objects or drop objects in a database, additional `W` permissions are required on the folders and files in the `warehouse` folder. Modifying objects in a database is not possible from serverless SQL pool, only from Spark.
+If a security principal requires the ability to create objects or drop objects in a database, additional `W` permissions are required on the folders and files in the `warehouse` folder. Modifying objects in a database is not possible from serverless SQL pool, only from Spark pools and [database designer](../database-designer/modify-lake-database.md).
### SQL security model
-Synapse workspace provides T-SQL endpoint that enables you to query the shared database using the serverless SQL pool. As a prerequisite, you need to enable a user to access shared databases in serverless SQL pool. There are two ways to allow a user to access the shared databases:
-- You can assign a `Synapse SQL Administrator` workspace role or `sysadmin` server-level role in the serverless SQL pool. This role has a full control on all databases (note that the shared databases are still read-only even for the administrator role).
+Synapse workspace provides a T-SQL endpoint that enables you to query the Lake database using the serverless SQL pool. As a prerequisite, you need to enable a user to access the shared Lake databases using the serverless SQL pool. There are two ways to allow a user to access the Lake databases:
+- You can assign a `Synapse SQL Administrator` workspace role or `sysadmin` server-level role in the serverless SQL pool. This role has full control over all databases (note that the Lake databases are still read-only even for the administrator role).
- You can grant `GRANT CONNECT ANY DATABASE` and `GRANT SELECT ALL USER SECURABLES` server-level permissions on serverless SQL pool to a login that will enable the login to access and read any database. This might be a good choice for assigning reader/non-admin access to a user.
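
For the second option, the grants might look like the following sketch, run in the `master` database of the serverless SQL pool; the Azure AD login name is hypothetical:

```sql
-- A sketch (hypothetical login name): create a login for an Azure AD user
-- and grant it read access to all databases on the serverless SQL pool.
CREATE LOGIN [reader@contoso.com] FROM EXTERNAL PROVIDER;
GRANT CONNECT ANY DATABASE TO [reader@contoso.com];
GRANT SELECT ALL USER SECURABLES TO [reader@contoso.com];
```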
-Learn more about setting [access control on shared databases](../sql/shared-databases-access-control.md).
+Learn more about [setting access control on shared databases](../sql/shared-databases-access-control.md).
+
+## Custom SQL metadata objects
+
+Lake databases do not allow the creation of custom T-SQL objects, such as schemas, users, procedures, views, and external tables created on custom locations. If you need to create additional T-SQL objects that reference the shared tables in the Lake database, you have two options (a sketch of the first option follows this list):
+- Create a custom SQL database (serverless) that will contain the custom schemas, views, and functions that will reference Lake database external tables using the 3-part names.
+- Instead of a Lake database, use a SQL database (serverless) that references data in the lake. A SQL database (serverless) enables you to create external tables that can reference data in the lake the same way as a Lake database, but it allows the creation of additional SQL objects. A drawback is that these objects are not automatically available in Spark.
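+
+The first option might look like the following sketch, run on the serverless SQL pool; the database, table, and view names are hypothetical:
+
+```sql
+-- A sketch (hypothetical names): a custom serverless SQL database holding
+-- views that reference shared Lake database tables via 3-part names.
+CREATE DATABASE CustomSqlObjects;
+GO
+USE CustomSqlObjects;
+GO
+CREATE VIEW dbo.myTableView AS
+SELECT * FROM mytestlakedb.dbo.myparquettable;
+GO
+```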
## Examples
Learn more about setting [access control on shared databases](../sql/shared-data
First create a new Lake database named `mytestlakedb` using a Spark cluster you have already created in your workspace. You can achieve that, for example, using a Spark C# Notebook with the following .NET for Spark statement: ```csharp
-spark.Sql("CREATE DATABASE mytestdb")
+spark.Sql("CREATE DATABASE mytestlakedb")
```
-After a short delay, you can see the database from serverless SQL pool. For example, run the following statement from serverless SQL pool.
+After a short delay, you can see the Lake database from serverless SQL pool. For example, run the following statement from serverless SQL pool.
```sql SELECT * FROM sys.databases; ```
-Verify that `mytestdb` is included in the results.
+Verify that `mytestlakedb` is included in the results.
## Next steps
synapse-analytics Sql Data Warehouse How To Troubleshoot Missed Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-troubleshoot-missed-classification.md
Previously updated : 10/01/2021 Last updated : 01/24/2022
Azure Synapse Analytics provides workload management capabilities like [classify
However, in some scenarios, a combination of these capabilities can lead to workload classification that doesn't reflect user intent. This article lists such common scenarios and how to troubleshoot them. First, you should query basic information for troubleshooting misclassified workload scenarios. > [!NOTE]
-> This article does not apply to serverless SQL pools in Azure Synapse Analytics.
+> The behavior when classifying managed identities (MI) differs between the dedicated SQL pool in Azure Synapse workspaces and the standalone dedicated SQL pool (formerly SQL DW). While the standalone dedicated SQL pool maintains the MI's assigned identity, Azure Synapse workspaces add the MI to the **dbo** role. This cannot be changed. The dbo role, by default, is classified to smallrc. Creating a classifier for the dbo role allows for assigning requests to a workload group other than smallrc. If dbo alone is too generic for classification and has broader impacts, consider using label-based, session-based, or time-based classification in conjunction with the dbo role classification.
## Basic troubleshooting information
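
A common first step is to check how recent requests were actually classified. The following is a minimal sketch using the `sys.dm_pdw_exec_requests` DMV, run against the dedicated SQL pool:

```sql
-- A sketch: list the classifier, workload group, and importance
-- assigned to the most recent requests.
SELECT TOP 20 session_id, request_id, classifier_name,
       group_name, importance, [status], command
FROM sys.dm_pdw_exec_requests
ORDER BY submit_time DESC;
```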
synapse-analytics Sql Data Warehouse Workload Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-classification.md
Previously updated : 02/04/2020 Last updated : 01/24/2022
Not all statements are classified as they do not require resources or need impor
Classification for dedicated SQL pool is achieved today by assigning users to a role that has a corresponding resource class assigned to it using [sp_addrolemember](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). The ability to characterize requests beyond a login to a resource class is limited with this capability. A richer method for classification is now available with the [CREATE WORKLOAD CLASSIFIER](/sql/t-sql/statements/create-workload-classifier-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) syntax. With this syntax, dedicated SQL pool users can assign importance and how much system resources are assigned to a request via the `workload_group` parameter.
-> [!NOTE]
-> Classification is evaluated on a per request basis. Multiple requests in a single session can be classified differently.
- ## Classification weighting As part of the classification process, weighting is in place to determine which workload group is assigned. The weighting goes as follows:
The `membername` parameter is mandatory. However, if the membername specified i
If a user is a member of multiple roles with different resource classes assigned or matched in multiple classifiers, the user is given the highest resource class assignment. This behavior is consistent with existing resource class assignment behavior.
+> [!NOTE]
+> The behavior when classifying managed identities (MI) differs between the dedicated SQL pool in Azure Synapse workspaces and the standalone dedicated SQL pool (formerly SQL DW). While the standalone dedicated SQL pool maintains the MI's assigned identity, Azure Synapse workspaces add the MI to the **dbo** role. This cannot be changed. The dbo role, by default, is classified to smallrc. Creating a classifier for the dbo role allows for assigning requests to a workload group other than smallrc. If dbo alone is too generic for classification and has broader impacts, consider using label-based, session-based, or time-based classification in conjunction with the dbo role classification.
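+
+For example, a classifier that routes labeled dbo requests to a non-default workload group might look like the following sketch; the workload group and label names are hypothetical:
+
+```sql
+-- A sketch (hypothetical workload group and label names).
+CREATE WORKLOAD CLASSIFIER wcDboDataLoads WITH
+( WORKLOAD_GROUP = 'wgDataLoads'
+, MEMBERNAME     = 'dbo'
+, WLM_LABEL      = 'fact_loads' );
+
+-- Requests then opt in to the classifier via the label.
+SELECT COUNT(*) FROM sys.objects OPTION (LABEL = 'fact_loads');
+```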
++ ## System classifiers Workload classification has system workload classifiers. The system classifiers map existing resource class role memberships to resource class resource allocations with normal importance. System classifiers can't be dropped. To view system classifiers, you can run the below query:
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/autoscale-scaling-plan.md
The autoscale feature (preview) lets you scale your Azure Virtual Desktop deploy
>[!NOTE] > - Azure Virtual Desktop (classic) doesn't support the autoscale feature. > - Autoscale doesn't support Azure Virtual Desktop for Azure Stack HCI
-> - Autsoscale doesn't support scaling of ephemeral disks.
+> - Autoscale doesn't support scaling of ephemeral disks.
+> - Autoscale doesn't support scaling of generalized VMs.
For best results, we recommend using autoscale with VMs you deployed with Azure Virtual Desktop Azure Resource Manager templates or first-party tools from Microsoft.
To assign the custom role to grant access:
2. Select the role you just created and continue to the next screen.
-3. Select **+Select members**. In the search bar, enter and select **Windows Virtual Desktop**, as shown in the following screenshot. When you have a Azure Virtual Desktop (classic) deployment and an Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects, you will see two apps with the same name. Select them both.
+3. Select **+Select members**. In the search bar, enter and select **Windows Virtual Desktop**, as shown in the following screenshot. When you have an Azure Virtual Desktop (classic) deployment and an Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects, you will see two apps with the same name. Select them both.
> [!div class="mx-imgBorder"] > ![A screenshot of the add role assignment menu. The Select field is highlighted in red, with the user entering "Windows Virtual Desktop" into the search field.](media/search-for-role.png)
To create a scaling plan:
8. For **Time zone**, select the time zone you'll use with your plan.
-9. In **Exclusion tags**, enter tags for VMs you don't want to include in scaling operations. For example, you might want to tag VMs that are set to drain mode so that autoscale doesn't override drain mode during maintenance.
+9. In **Exclusion tags**, enter a tag name for VMs you don't want to include in scaling operations. For example, you might tag VMs that are set to drain mode with the exclusion tag "excludeFromScaling" so that autoscale doesn't override drain mode during maintenance. If you've set "excludeFromScaling" as the tag name on any of the VMs in the host pool, the autoscale feature won't start, stop, or change the drain mode of those particular VMs.
>[!NOTE] >- Though an exclusion tag will exclude the tagged VM from power management scaling operations, tagged VMs will still be considered as part of the calculation of the minimum percentage of hosts.
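
Applying the exclusion tag can also be scripted. The following is a minimal sketch with the Azure CLI, assuming placeholder resource names; the scaling plan evaluates only the tag name, not its value:

```azurecli
# A sketch (placeholder names): add the exclusion tag to a session host VM
# without removing its existing tags.
az vm update --resource-group <resource-group> --name <vm-name> \
    --set tags.excludeFromScaling=true
```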
virtual-desktop Install Office On Wvd Master Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/install-office-on-wvd-master-image.md
Here's how to install OneDrive in per-machine mode:
1. First, create a location to stage the OneDrive installer. A local disk folder or [\\\\unc](file://unc) location is fine.
-2. Download OneDriveSetup.exe to your staged location with this link: <https://aka.ms/OneDriveWVD-Installer>
+2. Download OneDriveSetup.exe to your staged location with this link: <https://go.microsoft.com/fwlink/?linkid=844652>
3. If you installed office with OneDrive by omitting **\<ExcludeApp ID="OneDrive" /\>**, uninstall any existing OneDrive per-user installations from an elevated command prompt by running the following command:
For help with installing Microsoft Teams, see [Use Microsoft Teams on Azure Virt
## Next steps
-Now that you've added Office to the image, you can continue to customize your master VHD image. See [Prepare and customize a master VHD image](set-up-customize-master-image.md).
+Now that you've added Office to the image, you can continue to customize your master VHD image. See [Prepare and customize a master VHD image](set-up-customize-master-image.md).
virtual-machines Tutorial Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/tutorial-virtual-network.md
New-AzVM `
## Secure network traffic
-A network security group (NSG) contains a list of security rules that allow or deny network traffic to resources connected to Azure Virtual Networks (VNet). NSGs can be associated to subnets or individual network interfaces. An NSG is associated with a network interface only applies to the associated VM. When an NSG is associated to a subnet, the rules apply to all resources connected to the subnet.
+A network security group (NSG) contains a list of security rules that allow or deny network traffic to resources connected to Azure Virtual Networks (VNet). NSGs can be associated to subnets or individual network interfaces. An NSG that is associated with a network interface only applies to the associated VM. When an NSG is associated to a subnet, the rules apply to all resources connected to the subnet.
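+
+For example, an NSG with a single inbound rule can be created with Azure PowerShell and then associated with a subnet or network interface; the following is a sketch with placeholder names:
+
+```azurepowershell
+# A sketch (placeholder names): an NSG with one inbound rule allowing RDP.
+$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-RDP" -Protocol Tcp `
+    -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * `
+    -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow
+$nsg = New-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" `
+    -Location "EastUS" -Name "myNsg" -SecurityRules $rule
+```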
### Network security group rules
virtual-machines Oracle Weblogic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-weblogic.md
This page describes the solutions for running Oracle WebLogic Server (WLS) on Az
You can also run WLS on the Azure Kubernetes Service. The solutions to do so are described in [this Microsoft article](./weblogic-aks.md).
-WLS is a leading Java application server running some of the most mission critical enterprise Java applications across the globe. WLS forms the middleware foundation for the Oracle software suite. Oracle and Microsoft are committed to empowering WLS customers with choice and flexibility to run workloads on Azure as a leading cloud platform.
+WLS is a leading Java application server running some of the most mission-critical enterprise Java applications across the globe. WLS forms the middleware foundation for the Oracle software suite. Oracle and Microsoft are committed to empowering WLS customers with choice and flexibility to run workloads on Azure as a leading cloud platform.
-The Azure WLS solutions are aimed at making it as easy as possible to migrate your Java applications to Azure virtual machines. The solutions do so by generating deployed resources for most common cloud provisioning scenarios. The solutions automatically provision virtual network, storage, Java, WLS, and Linux resources. With minimal effort, WebLogic Server is installed. The solutions can set up security with a network security group, load balancing with Azure App Gateway or Oracle HTTP Server, authentication with Azure Active Directory, centralized logging using ELK and distributed caching with Oracle Coherence. You can also automatically connect to your existing database including Azure PostgreSQL, Azure SQL, and Oracle DB on the Oracle Cloud or Azure.
+The Azure WLS solutions are aimed at making it as easy as possible to migrate your Java applications to Azure virtual machines. The solutions do so by generating deployed resources for most common cloud provisioning scenarios. The solutions automatically provision virtual network, storage, Java, WLS, and Linux resources. With minimal effort, WebLogic Server is installed. The solutions can set up security with a network security group, load balancing with Azure App Gateway or Oracle HTTP Server, authentication with Azure Active Directory, centralized logging using ELK and distributed caching with Oracle Coherence. You can also automatically connect to your existing database including Azure PostgreSQL, Azure SQL, and the Oracle Database on the Oracle Cloud or Azure.
:::image type="content" source="media/oracle-weblogic/wls-on-azure.gif" alt-text="You can use the Azure portal to deploy WebLogic Server on Azure":::
-There are four offers available to meet different scenarios: [single node without an admin server](https://portal.azure.com/#create/oracle.20191001-arm-oraclelinux-wls20191001-arm-oraclelinux-wls), [single node with an admin server](https://portal.azure.com/#create/oracle.20191009-arm-oraclelinux-wls-admin20191009-arm-oraclelinux-wls-admin), [cluster](https://portal.azure.com/#create/oracle.20191007-arm-oraclelinux-wls-cluster20191007-arm-oraclelinux-wls-cluster), and [dynamic cluster](https://portal.azure.com/#create/oracle.20191021-arm-oraclelinux-wls-dynamic-cluster20191021-arm-oraclelinux-wls-dynamic-cluster). The offers are available free of charge. These offers are described and linked below.
+There are four offers available to meet different scenarios: [single node without an admin server](https://portal.azure.com/#create/oracle.20191001-arm-oraclelinux-wls20191001-arm-oraclelinux-wls), [single node with an admin server](https://portal.azure.com/#create/oracle.20191009-arm-oraclelinux-wls-admin20191009-arm-oraclelinux-wls-admin), [cluster](https://portal.azure.com/#create/oracle.20191007-arm-oraclelinux-wls-cluster20191007-arm-oraclelinux-wls-cluster), and [dynamic cluster](https://portal.azure.com/#create/oracle.20191021-arm-oraclelinux-wls-dynamic-cluster20191021-arm-oraclelinux-wls-dynamic-cluster). The offers are available free of charge. These offers are described and linked below. You can find detailed documentation on the offers [here](https://wls-eng.github.io/arm-oraclelinux-wls/).
-_These offers are Bring-Your-Own-License_. They assume you've already got the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
+_These offers are Bring-Your-Own-License_. They assume you already have the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
-The offers support a range of operating system, Java, and WLS versions through base images (such as WebLogic Server 14 and JDK 11 on Oracle Linux 7.6). These base images are also available on Azure on their own. The base images are suitable for customers that require complex, customized Azure deployments. The current set of base images is available in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=WebLogic%20Server%20Base%20Image&page=1).
+The offers support a range of operating system, Java, and WLS versions through base images (such as WebLogic Server 14 and Java 11 on Oracle Linux 7.6). These base images are also available on Azure on their own. The base images are suitable for customers that require complex, customized Azure deployments. The current set of base images is available in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=WebLogic%20Server%20Base%20Image&page=1).
-_If you're interested in working closely on your migration scenarios with the engineering team developing these offers, select the [CONTACT ME](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) button_ on the [marketplace offer overview page](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview). Program managers, architects, and engineers will reach back out to you shortly and start close collaboration!
+_If you are interested in working closely on your migration scenarios with the engineering team developing these offers, select the [CONTACT ME](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) button_ on the [marketplace offer overview page](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview). Program managers, architects, and engineers will reach back out to you shortly and start close collaboration.
## Oracle WebLogic Server Single Node
-[This offer](https://portal.azure.com/#create/oracle.20191001-arm-oraclelinux-wls20191001-arm-oraclelinux-wls) provisions a single virtual machine and installs WLS on it. It doesn't create a domain or start the administration server. The single node offer is useful for scenarios with highly customized domain configuration.
+[This offer](https://portal.azure.com/#create/oracle.20191001-arm-oraclelinux-wls20191001-arm-oraclelinux-wls) provisions a single virtual machine and installs WLS on it. It does not create a domain or start the administration server. The single node offer is useful for scenarios with highly customized domain configuration.
## Oracle WebLogic Server with Admin Server
The solutions will enable a wide range of production-ready deployment architectu
:::image type="content" source="media/oracle-weblogic/weblogic-architecture-vms.png" alt-text="Complex WebLogic Server deployments are enabled on Azure":::
-Beyond what is automatically provisioned by the solutions, customers have complete flexibility to customize their deployments further. It's likely on top of deploying applications customers will integrate further Azure resources with their deployments. Customers are encouraged to [connect with the development team](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) and provide feedback on further improving the solutions.
+Beyond what is automatically provisioned by the solutions, customers have complete flexibility to customize their deployments further. It is likely that, on top of deploying applications, customers will integrate further Azure resources with their deployments. Customers are encouraged to [connect with the development team](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) and provide feedback on further improving the solutions.
## Next steps
virtual-machines Weblogic Aks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/weblogic-aks.md
This page describes the solutions for running Oracle WebLogic Server (WLS) on the Azure Kubernetes Service (AKS). These solutions are jointly developed and supported by Oracle and Microsoft.
-It's also possible to run WebLogic Server on Azure Virtual Machines. The solutions to do so are described in [this Microsoft article](./oracle-weblogic.md).
+It is also possible to run WebLogic Server on Azure Virtual Machines. The solutions to do so are described in [this Microsoft article](./oracle-weblogic.md).
-WebLogic Server is a leading Java application server running some of the most mission critical enterprise Java applications across the globe. WebLogic Server forms the middleware foundation for the Oracle software suite. Oracle and Microsoft are committed to empowering WebLogic Server customers with choice and flexibility to run workloads on Azure as a leading cloud platform.
+WebLogic Server is a leading Java application server running some of the most mission-critical enterprise Java applications across the globe. WebLogic Server forms the middleware foundation for the Oracle software suite. Oracle and Microsoft are committed to empowering WebLogic Server customers with choice and flexibility to run workloads on Azure as a leading cloud platform.
## WLS on AKS certified and supported WebLogic Server is certified by Oracle and Microsoft to run well on AKS. The WLS on AKS solutions are aimed at making it as easy as possible to run your containerized and orchestrated Java applications on Docker and Kubernetes infrastructure. The solutions are focused on reliability, scalability, manageability, and enterprise support.
-WLS clusters are fully enabled to run on Kubernetes via the WebLogic Kubernetes Operator (referred to simply as the 'Operator' from here onward). The Operator follows the standard Kubernetes Operator pattern. It simplifies the management and operation of WebLogic domains and deployments on Kubernetes by automating otherwise manual tasks and adding extra operational reliability features. The Operator supports Oracle WebLogic Server 12c, Oracle Fusion Middleware Infrastructure 12c and beyond. We've tested the official Docker images for WebLogic Server 12.2.1.3 and 12.2.1.4 with the Operator. For details on the Operator, refer to the [official documentation from Oracle](https://oracle.github.io/weblogic-kubernetes-operator/).
+WLS clusters are fully enabled to run on Kubernetes via the WebLogic Kubernetes Operator (referred to simply as the 'Operator' from here onward). The Operator follows the standard Kubernetes Operator pattern. It simplifies the management and operation of WebLogic domains and deployments on Kubernetes by automating otherwise manual tasks and adding extra operational reliability features. The Operator supports Oracle WebLogic Server 12c, Oracle Fusion Middleware Infrastructure 12c and beyond. We have tested the official Docker images for WebLogic Server 12.2.1.3 and 12.2.1.4 with the Operator. For details on the Operator, refer to the [official documentation from Oracle](https://oracle.github.io/weblogic-kubernetes-operator/).
## WLS on AKS marketplace solution template
-Beyond certifying WLS on AKS, Oracle and Microsoft jointly provide a [marketplace solution template](https://portal.azure.com/#create/oracle.20210620-wls-on-aks20210620-wls-on-aks) with the goal of making it as quick and easy as possible to migrate WLS workloads to AKS. The offer does so by automating the provisioning of a number of Java and Azure resources. The automatically provisioned resources include an AKS cluster, the WebLogic Kubernetes Operator, WLS Docker images and the Azure Container Registry (ACR). It's possible to use an existing AKS cluster or ACR instance with the offer if desired. The offer also supports configuring load balancing with Azure App Gateway or the Azure Load Balancer, DNS configuration, SSL/TLS configuration, easing database connectivity, publishing metrics to Azure Monitor as well as mounting Azure Files as Kubernetes Persistent Volumes. The currently supported database integrations include Azure PostgreSQL, Azure SQL, and Oracle DB on the Oracle Cloud or Azure.
+Beyond certifying WLS on AKS, Oracle and Microsoft jointly provide a [marketplace solution template](https://portal.azure.com/#create/oracle.20210620-wls-on-aks20210620-wls-on-aks) with the goal of making it as quick and easy as possible to migrate WLS workloads to AKS. The offer does so by automating the provisioning of a number of Java and Azure resources. The automatically provisioned resources include an AKS cluster, the WebLogic Kubernetes Operator, WLS Docker images, and the Azure Container Registry (ACR). It is possible to use an existing AKS cluster or ACR instance with the offer if desired. The offer also supports configuring load balancing with Azure App Gateway or the Azure Load Balancer, easing database connectivity, publishing metrics to Azure Monitor as well as mounting Azure Files as Kubernetes Persistent Volumes. The currently supported database integrations include Azure PostgreSQL, Azure SQL, and the Oracle Database on the Oracle Cloud or Azure.
:::image type="content" source="media/oracle-weblogic/wls-aks-demo.gif" alt-text="You can use the marketplace solution to deploy WebLogic Server on AKS":::
-After the offer performs most boilerplate resource provisioning and configuration, you can focus on deploying your WLS application to AKS, typically through a DevOps tool such as GitHub Actions and tools from WebLogic Kubernetes tooling such as the WebLogic Image Tool and WebLogic Deploy Tooling. You're completely free to customize the deployment further.
+After the offer performs most boilerplate resource provisioning and configuration, you can focus on deploying your WLS application to AKS, typically through a DevOps tool such as GitHub Actions and tools from WebLogic Kubernetes tooling such as the WebLogic Image Tool and WebLogic Deploy Tooling. You are completely free to customize the deployment further.
+
+You can find detailed documentation on the solution template [here](https://oracle.github.io/weblogic-kubernetes-operator/userguide/aks/).
## Guidance, scripts, and samples for WLS on AKS
Oracle and Microsoft also provide basic step-by-step guidance, scripts, and samp
The guidance supports two ways of deploying WLS domains to AKS. Domains can be deployed directly to Kubernetes Persistent Volumes. This deployment option is good if you want to migrate to AKS but still want to administer WLS using the Admin Console or the WebLogic Scripting Tool (WLST). The option also allows you to move to AKS without adopting Docker development. The more Kubernetes native way of deploying WLS domains to AKS is to build custom Docker images based on official WLS images from the Oracle Container Registry, publish the custom images to ACR and deploy the domain to AKS using the Operator. This option in the solution also allows you to update the domain through Kubernetes ConfigMaps after the deployment is done.
-_These solutions are all Bring-Your-Own-License_. They assume you've already got the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
+_These solutions are all Bring-Your-Own-License_. They assume you already have the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
-_If you're interested in working closely on your migration scenarios with the engineering team developing these solutions, fill out [this short survey](https://aka.ms/wls-on-azure-survey) and include your contact information_. Program managers, architects, and engineers will reach back out to you shortly and start close collaboration.
+_If you are interested in working closely on your migration scenarios with the engineering team developing these solutions, fill out [this short survey](https://aka.ms/wls-on-azure-survey) and include your contact information_. Program managers, architects, and engineers will reach back out to you shortly and start close collaboration.
## Deployment architectures
The solutions for running Oracle WebLogic Server on the Azure Kubernetes Service
:::image type="content" source="media/oracle-weblogic/wls-aks-architecture.jpg" alt-text="Complex WebLogic Server deployments are enabled on AKS":::
-Beyond what the solutions provide customers have complete flexibility to customize their deployments further. It's likely on top of deploying applications customers will integrate further Azure resources with their deployments or tune the deployments to their specific applications. Customers are encouraged to provide feedback in the [survey](https://aka.ms/wls-on-azure-survey) on further improving the solutions.
+Beyond what the solutions provide, you have complete flexibility to customize your deployments further. It is likely that, on top of deploying applications, you will integrate further Azure resources with your deployments or tune the deployments to your specific applications. You are encouraged to provide feedback in the [survey](https://aka.ms/wls-on-azure-survey) on further improving the solutions.
## Next steps
Explore running Oracle WebLogic Server on the Azure Kubernetes Service.
> [!div class="nextstepaction"] > [WLS on AKS marketplace solution](https://portal.azure.com/#create/oracle.20210620-wls-on-aks20210620-wls-on-aks)
+> [!div class="nextstepaction"]
+> [WLS on AKS marketplace solution documentation](https://oracle.github.io/weblogic-kubernetes-operator/userguide/aks/)
+ > [!div class="nextstepaction"] > [Guidance, scripts and samples for running WLS on AKS](https://oracle.github.io/weblogic-kubernetes-operator/samples/azure-kubernetes-service/)
virtual-machines High Availability Guide Suse Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid.md
vm-windows Previously updated : 10/16/2020 Last updated : 01/24/2022
This documentation assumes that:
op monitor interval=20s timeout=40s sudo crm configure primitive vip_NW2_ASCS IPaddr2 \
- params ip=10.3.1.16 cidr_netmask=24 \
+ params ip=10.3.1.16 \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_NW2_ASCS azure-lb port=62010
This documentation assumes that:
op monitor interval=20s timeout=40s sudo crm configure primitive vip_NW3_ASCS IPaddr2 \
- params ip=10.3.1.13 cidr_netmask=24 \
+ params ip=10.3.1.13 \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_NW3_ASCS azure-lb port=62020
This documentation assumes that:
op monitor interval=20s timeout=40s sudo crm configure primitive vip_NW2_ERS IPaddr2 \
- params ip=10.3.1.17 cidr_netmask=24 \
+ params ip=10.3.1.17 \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_NW2_ERS azure-lb port=62112
This documentation assumes that:
op monitor interval=20s timeout=40s sudo crm configure primitive vip_NW3_ERS IPaddr2 \
- params ip=10.3.1.19 cidr_netmask=24 \
+ params ip=10.3.1.19 \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_NW3_ERS azure-lb port=62122
virtual-machines High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md
vm-windows Previously updated : 12/07/2021 Last updated : 01/24/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20s timeout=40s sudo crm configure primitive vip_<b>QAS</b>_ASCS IPaddr2 \
- params ip=<b>10.1.1.20</b> cidr_netmask=<b>24</b> \
+ params ip=<b>10.1.1.20</b> \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_<b>QAS</b>_ASCS azure-lb port=620<b>00</b>
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20s timeout=40s sudo crm configure primitive vip_<b>QAS</b>_ERS IPaddr2 \
- params ip=<b>10.1.1.21</b> cidr_netmask=<b>24</b> \
+ params ip=<b>10.1.1.21</b> \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_<b>QAS</b>_ERS azure-lb port=621<b>01</b>
virtual-machines High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-azure-files.md
vm-windows Previously updated : 12/07/2021 Last updated : 01/24/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20s timeout=40s sudo crm configure primitive vip_NW1_ASCS IPaddr2 \
- params ip=10.90.90.10 cidr_netmask=24 \
+ params ip=10.90.90.10 \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20s timeout=40s sudo crm configure primitive vip_NW1_ERS IPaddr2 \
- params ip=10.90.90.9 cidr_netmask=24 \
+ params ip=10.90.90.9 \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_NW1_ERS azure-lb port=62101
virtual-machines High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs.md
vm-windows Previously updated : 04/12/2021 Last updated : 01/24/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
params directory="/srv/nfs/<b>NW1</b>" \ options="rw,no_root_squash,crossmnt" clientspec="*" fsid=1 wait_for_leasetime_on_stop=true op monitor interval="30s"
- sudo crm configure primitive vip_<b>NW1</b>_nfs \
- IPaddr2 \
- params ip=<b>10.0.0.4</b> cidr_netmask=<b>24</b> op monitor interval=10 timeout=20
+ sudo crm configure primitive vip_<b>NW1</b>_nfs IPaddr2 \
+ params ip=<b>10.0.0.4</b> op monitor interval=10 timeout=20
sudo crm configure primitive nc_<b>NW1</b>_nfs azure-lb port=<b>61000</b>
The following items are prefixed with either **[A]** - applicable to all nodes,
params directory="/srv/nfs/<b>NW2</b>" \ options="rw,no_root_squash,crossmnt" clientspec="*" fsid=2 wait_for_leasetime_on_stop=true op monitor interval="30s"
- sudo crm configure primitive vip_<b>NW2</b>_nfs \
- IPaddr2 \
- params ip=<b>10.0.0.5</b> cidr_netmask=<b>24</b> op monitor interval=10 timeout=20
+ sudo crm configure primitive vip_<b>NW2</b>_nfs IPaddr2 \
+ params ip=<b>10.0.0.5</b> op monitor interval=10 timeout=20
sudo crm configure primitive nc_<b>NW2</b>_nfs azure-lb port=<b>61001</b>
virtual-machines High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
vm-windows Previously updated : 12/07/2021 Last updated : 01/24/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20s timeout=40s sudo crm configure primitive vip_<b>NW1</b>_ASCS IPaddr2 \
- params ip=<b>10.0.0.7</b> cidr_netmask=<b>24</b> \
+ params ip=<b>10.0.0.7</b> \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_<b>NW1</b>_ASCS azure-lb port=620<b>00</b>
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20s timeout=40s sudo crm configure primitive vip_<b>NW1</b>_ERS IPaddr2 \
- params ip=<b>10.0.0.8</b> cidr_netmask=<b>24</b> \
+ params ip=<b>10.0.0.8</b> \
op monitor interval=10 timeout=20 sudo crm configure primitive nc_<b>NW1</b>_ERS azure-lb port=621<b>02</b>