Updates from: 05/31/2022 01:06:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Our ISV partner network extends our solution capabilities to help you build seam
To be considered for inclusion in this sample documentation, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps/SitePages/Default.aspx). For any additional questions, send an email to [SaaSApplicationIntegrations@service.microsoft.com](mailto:SaaSApplicationIntegrations@service.microsoft.com).

>[!NOTE]
->The [Azure Active Directory B2C community site on GitHub](https://azure-ad-b2c.github.io/azureadb2ccommunity.io/) also provides sample custom policies from the community.
+>The [Azure Active Directory B2C community site on GitHub](https://github.com/azure-ad-b2c/partner-integrations) also provides sample custom policies from the community.
## Identity verification and proofing
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md
The following XML snippet is an example of a RESTful technical profile configure
</ClaimsProvider>
```
+Add a validation technical profile reference to the sign-up technical profile, which calls `REST-AcquireAccessToken`. This behavior means that Azure AD B2C moves on to create the account in the directory only after successful validation.
+
+For example:
+ ```XML
+ <ValidationTechnicalProfiles>
+ ....
+ <ValidationTechnicalProfile ReferenceId="REST-AcquireAccessToken" />
+ ....
+ </ValidationTechnicalProfiles>
+ ```
+
+
+## API key authentication
+
+::: zone pivot="b2c-user-flow"
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
+
+ Title: Kubernetes Event-driven Autoscaling (KEDA) (Preview)
+description: Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on.
+Last updated: 05/24/2022
+# Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on (Preview)
+
+Kubernetes Event-driven Autoscaling (KEDA) is a single-purpose, lightweight component that strives to make application autoscaling simple. It is a CNCF Incubation project.
+
+It applies event-driven autoscaling to scale your application to meet demand in a sustainable and cost-efficient manner with scale-to-zero.
+
+The KEDA add-on makes it even easier by deploying a managed KEDA installation, providing you with [a rich catalog of 50+ KEDA scalers][keda-scalers] that you can use to scale your applications on your Azure Kubernetes Service (AKS) cluster.
+
+## Architecture
+
+[KEDA][keda] provides two main components:
+
+- **KEDA operator** allows end-users to scale workloads in/out from 0 to N instances with support for Kubernetes Deployments, Jobs, StatefulSets, or any custom resource that defines the `/scale` subresource.
+- **Metrics server** exposes external metrics to Horizontal Pod Autoscaler (HPA) in Kubernetes for autoscaling purposes such as messages in a Kafka topic, or number of events in an Azure event hub. Due to upstream limitations, KEDA must be the only installed metric adapter.
+
+![Diagram that shows the architecture of K E D A and how it extends Kubernetes instead of re-inventing the wheel.](./media/keda/architecture.png)
+
+Learn more about how KEDA works in the [official KEDA documentation][keda-architecture].
+
+## Installation and version
+
+KEDA can be added to your Azure Kubernetes Service (AKS) cluster by enabling the KEDA add-on using an [ARM template][keda-arm].
+
+The KEDA add-on provides a fully supported installation of KEDA that is integrated with AKS.
+
+## Capabilities and features
+
+KEDA provides the following capabilities and features:
+
+- Build sustainable and cost-efficient applications with scale-to-zero
+- Scale application workloads to meet demand using [a rich catalog of 50+ KEDA scalers][keda-scalers]
+- Autoscale applications with `ScaledObjects`, such as Deployments, StatefulSets, or any custom resource that defines the `/scale` subresource
+- Autoscale job-like workloads with `ScaledJobs`
+- Use production-grade security by decoupling autoscaling authentication from workloads
+- Bring-your-own external scaler to use tailor-made autoscaling decisions
+
+## Add-on limitations
+
+The KEDA AKS add-on has the following limitations:
+
+* KEDA's [HTTP add-on (preview)][keda-http-add-on] to scale HTTP workloads isn't installed with the extension, but can be deployed separately.
+* KEDA's [external scaler for Azure Cosmos DB][keda-cosmos-db-scaler] to scale based on Azure Cosmos DB change feed isn't installed with the extension, but can be deployed separately.
+* Only one metrics server is allowed in the Kubernetes cluster. Because of that, the KEDA add-on should be the only metrics server inside the cluster.
+ * Multiple KEDA installations aren't supported.
+* Managed identity isn't supported.
+
+For general KEDA questions, we recommend [visiting the FAQ overview][keda-faq].
+
+## Next steps
+
+* [Enable the KEDA add-on with an ARM template][keda-arm]
+* [Autoscale a .NET Core worker processing Azure Service Bus Queue messages][keda-sample]
+
+<!-- LINKS - internal -->
+[keda-azure-cli]: keda-deploy-addon-az-cli.md
+[keda-arm]: keda-deploy-add-on-arm.md
+
+<!-- LINKS - external -->
+[keda]: https://keda.sh/
+[keda-architecture]: https://keda.sh/docs/latest/concepts/
+[keda-faq]: https://keda.sh/docs/latest/faq/
+[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
+[keda-scalers]: https://keda.sh/docs/scalers/
+[keda-http-add-on]: https://github.com/kedacore/http-add-on
+[keda-cosmos-db-scaler]: https://github.com/kedacore/external-scaler-azure-cosmos-db
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
+
+ Title: Deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on by using an ARM template
+description: Use an ARM template to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS).
+Last updated: 05/24/2022
+# Deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on by using an ARM template
+
+This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) by using an [ARM](../azure-resource-manager/templates/index.yml) template.
+
+## Prerequisites
+
+> [!NOTE]
+> KEDA is currently only available in the `westcentralus` region.
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Register the `AKS-KedaPreview` feature flag
+
+To use KEDA, you must enable the `AKS-KedaPreview` feature flag on your subscription.
+
+```azurecli
+az feature register --name AKS-KedaPreview --namespace Microsoft.ContainerService
+```
+
+You can check on the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-KedaPreview')].{Name:name,State:properties.state}"
+```
+
+When the status shows *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Deploy the KEDA add-on with Azure Resource Manager (ARM) templates
+
+The KEDA add-on can be enabled by deploying an AKS cluster with an Azure Resource Manager template and specifying the `workloadAutoScalerProfile` field:
+
+```json
+ "workloadAutoScalerProfile": {
+ "keda": {
+ "enabled": true
+ }
+ }
+```
+
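+The add-on can also be enabled on an existing cluster with the Azure CLI. The following is a minimal sketch that assumes the `aks-preview` extension's `--enable-keda` flag; treat the flag name as an assumption and confirm it with `az aks update --help`:
+
+```azurecli
+# Install or update the aks-preview extension, which carries preview flags.
+az extension add --upgrade --name aks-preview
+
+# Enable the KEDA workload autoscaler profile on an existing AKS cluster (flag name assumed).
+az aks update --resource-group MyResourceGroup --name MyAKSCluster --enable-keda
+```
+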
+## Connect to your AKS cluster
+
+To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client.
+
+If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [az aks install-cli][az aks install-cli] command:
+
+```azurecli
+az aks install-cli
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az aks get-credentials] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup* resource group:
+
+```azurecli
+az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
+```
+
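+To verify that the add-on is installed, you can check for the custom resource definitions that an upstream KEDA installation registers. The CRD names come from the KEDA project; their presence in the managed add-on is an assumption:
+
+```bash
+# The presence of these CRDs indicates the KEDA operator is installed.
+kubectl get crd scaledobjects.keda.sh scaledjobs.keda.sh triggerauthentications.keda.sh
+```
+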
+## Example deployment
+
+The following snippet is a sample deployment that creates a cluster with KEDA enabled, with a single node pool consisting of three `Standard_D2S_v5` nodes.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "apiVersion": "2022-05-02-preview",
+ "dependsOn": [],
+ "type": "Microsoft.ContainerService/managedClusters",
+ "location": "westcentralus",
+ "name": "myAKSCluster",
+ "properties": {
+ "kubernetesVersion": "1.23.5",
+ "enableRBAC": true,
+ "dnsPrefix": "myAKSCluster",
+ "agentPoolProfiles": [
+ {
+ "name": "agentpool",
+ "osDiskSizeGB": 200,
+ "count": 3,
+ "enableAutoScaling": false,
+ "vmSize": "Standard_D2S_v5",
+ "osType": "Linux",
+ "storageProfile": "ManagedDisks",
+ "type": "VirtualMachineScaleSets",
+ "mode": "System",
+ "maxPods": 110,
+ "availabilityZones": [],
+ "nodeTaints": [],
+ "enableNodePublicIP": false
+ }
+ ],
+ "networkProfile": {
+ "loadBalancerSku": "standard",
+ "networkPlugin": "kubenet"
+ },
+ "workloadAutoScalerProfile": {
+ "keda": {
+ "enabled": true
+ }
+ }
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ }
+ }
+ ]
+}
+```
+
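+If you save the template above to a file, you can deploy it with the Azure CLI. This is a minimal sketch, assuming the file is named `keda-sample.json` and the resource group already exists:
+
+```azurecli
+# Deploy the sample ARM template into an existing resource group.
+az deployment group create \
+  --resource-group MyResourceGroup \
+  --template-file keda-sample.json
+```
+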
+## Start scaling apps with KEDA
+
+Now that KEDA is installed, you can start autoscaling your apps by defining KEDA custom resource definitions (CRDs), such as `ScaledObject` and `ScaledJob`.
+
+To learn more about KEDA CRDs, follow the official [KEDA documentation][keda-scalers] to define your scaler.
+
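+As an illustration, a `ScaledObject` pairs a workload with a scaler trigger. The following is a minimal sketch, assuming a hypothetical Deployment named `order-processor` and a hypothetical `TriggerAuthentication` named `order-processor-auth` that holds the Azure Service Bus connection:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: order-processor-scaler
+spec:
+  scaleTargetRef:
+    name: order-processor        # hypothetical Deployment to scale
+  minReplicaCount: 0             # scale to zero when the queue is idle
+  maxReplicaCount: 10
+  triggers:
+    - type: azure-servicebus
+      metadata:
+        queueName: orders        # hypothetical queue name
+        messageCount: "5"        # target messages per replica
+      authenticationRef:
+        name: order-processor-auth
+EOF
+```
+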
+## Clean up
+
+To remove the resource group and all related resources, use the [az group delete][az-group-delete] command:
+
+```azurecli
+az group delete --name MyResourceGroup
+```
+
+## Next steps
+
+This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
+
+<!-- LINKS - internal -->
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az aks install-cli]: /cli/azure/aks#az-aks-install-cli
+[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[az aks update]: /cli/azure/aks#az-aks-update
+[az-group-delete]: /cli/azure/group#az-group-delete
+
+<!-- LINKS - external -->
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl
+[keda]: https://keda.sh/
+[keda-scalers]: https://keda.sh/docs/scalers/
+[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
+
+ Title: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview)
+description: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview).
+Last updated: 05/24/2022
+# Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview)
+
+The Kubernetes Event-driven Autoscaling (KEDA) add-on integrates with features provided by Azure and open source projects.
+
+> [!IMPORTANT]
+> Integrations with open source projects are not covered by the [AKS support policy][aks-support-policy].
+
+## Observe your autoscaling with Kubernetes events
+
+KEDA automatically emits Kubernetes events, allowing customers to monitor the autoscaling of their applications.
+
+To learn about the available events, we recommend reading the [KEDA documentation][keda-event-docs].
+
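+Because these are standard Kubernetes events, you can inspect them with plain kubectl. A quick example (the namespace placeholder is illustrative):
+
+```bash
+# List recent events in the workload's namespace, newest last;
+# KEDA-emitted events appear alongside other cluster events.
+kubectl get events --namespace <workload-namespace> --sort-by=.lastTimestamp
+```
+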
+## Scalers for Azure services
+
+KEDA can integrate with various tools and services through [a rich catalog of 50+ KEDA scalers][keda-scalers]. It supports leading cloud platforms (such as Azure) and open-source technologies such as Redis and Kafka.
+
+KEDA provides the following built-in scalers for Azure services:
+
+- [Azure Application Insights](https://keda.sh/docs/latest/scalers/azure-app-insights/)
+- [Azure Blob Storage](https://keda.sh/docs/latest/scalers/azure-storage-blob/)
+- [Azure Data Explorer](https://keda.sh/docs/latest/scalers/azure-data-explorer/)
+- [Azure Event Hubs](https://keda.sh/docs/latest/scalers/azure-event-hub/)
+- [Azure Log Analytics](https://keda.sh/docs/latest/scalers/azure-log-analytics/)
+- [Azure Monitor](https://keda.sh/docs/latest/scalers/azure-monitor/)
+- [Azure Pipelines](https://keda.sh/docs/latest/scalers/azure-pipelines/)
+- [Azure Service Bus](https://keda.sh/docs/latest/scalers/azure-service-bus/)
+- [Azure Storage Queue](https://keda.sh/docs/latest/scalers/azure-storage-queue/)
+
+In addition to the built-in scalers, you can install external scalers yourself to autoscale on other Azure services:
+
+- [Azure Cosmos DB (Change feed)](https://github.com/kedacore/external-scaler-azure-cosmos-db)
+
+However, these external scalers aren't supported as part of the add-on and rely on community support.
+
+## Next steps
+
+* [Enable the KEDA add-on with an ARM template][keda-arm]
+* [Autoscale a .NET Core worker processing Azure Service Bus Queue messages][keda-sample]
+
+<!-- LINKS - internal -->
+[aks-support-policy]: support-policies.md
+[azure-monitor]: ../azure-monitor/overview.md
+[azure-monitor-container-insights]: ../azure-monitor/containers/container-insights-onboard.md
+[keda-arm]: keda-deploy-add-on-arm.md
+
+<!-- LINKS - external -->
+[keda-scalers]: https://keda.sh/docs/scalers/
+[keda-metrics]: https://keda.sh/docs/latest/operate/prometheus/
+[keda-event-docs]: https://keda.sh/docs/latest/operate/kubernetes-events/
+[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
There's no cost to migrate your App Service Environment. You'll stop being charg
- **What happens if migration fails or there is an unexpected issue during the migration?** If there's an unexpected issue, support teams will be on hand. It's recommended to migrate dev environments before touching any production environments.
- **What happens to my old App Service Environment?**
- If you decide to migrate an App Service Environment, the old environment gets shut down and deleted and all of your apps are migrated to a new environment. Your old environment will no longer be accessible.
+ If you decide to migrate an App Service Environment, the old environment gets shut down and deleted and all of your apps are migrated to a new environment. Your old environment will no longer be accessible. A rollback to the old environment will not be possible.
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
There's no cost to migrate your App Service Environment. You'll stop being charg
> [App Service Environment v3 Networking](networking.md) > [!div class="nextstepaction"]
-> [Using an App Service Environment v3](using.md)
+> [Using an App Service Environment v3](using.md)
applied-ai-services Compose Custom Models Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-preview.md
If you want to use manually labeled data, you'll also have to upload the *.label
When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) API to learn the expected sizes and positions of printed and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
+Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
### [Form Recognizer Studio](#tab/studio)
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models.md
You [train your model](./quickstarts/try-sdk-rest-api.md#train-a-custom-model)
When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected sizes and positions of printed and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model and add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
+Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model and add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
[Get started with Train with labels](label-tool.md)
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
You'll need a form document. You can use our [sample form document](https://raw.
## Data extraction
-The layout model extracts table structures, selection marks, printed and handwritten text, and bounding box coordinates from your documents.
+The layout model extracts table structures, selection marks, typeface and handwritten text, and bounding box coordinates from your documents.
### Tables and table headers
Layout API also extracts selection marks from documents. Extracted selection mar
### Text lines and words
-The layout model extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Printed and handwritten text is extracted from lines and words. The service then returns bounding box coordinates, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
+The layout model extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Typeface and handwritten text is extracted from lines and words. The service then returns bounding box coordinates, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
:::image type="content" source="./media/layout-text-extraction.png" alt-text="Layout text extraction output":::
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Azure Form Recognizer prebuilt models enable you to add intelligent document pro
| **Model** | **Description** |
| --- | --- |
|**Document analysis**||
-| 🆕[Read (preview)](#read-preview) | Extract printed and handwritten text lines, words, locations, and detected languages.|
+| 🆕[Read (preview)](#read-preview) | Extract typeface and handwritten text lines, words, locations, and detected languages.|
| 🆕[General document (preview)](#general-document-preview) | Extract text, tables, structure, key-value pairs, and named entities.|
| [Layout](#layout) | Extract text and layout information from documents.|
|**Prebuilt**||
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
# Form Recognizer read model
-The Form Recognizer v3.0 preview includes the new Read OCR model. Form Recognizer Read builds on the success of Computer Vision Read and optimizes even more for analyzing documents, including new document formats in the future. It extracts printed and handwritten text from documents and images and can handle mixed languages in the documents and text line. The read model can detect lines, words, locations, and additionally detect languages. It is the foundational technology powering the text extraction in Form Recognizer Layout, prebuilt, general document, and custom models.
+Form Recognizer v3.0 preview includes the new Read API model. The read model extracts typeface and handwritten text including mixed languages in documents. The read model can detect lines, words, locations, and languages and is the core of all the other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the read model as a foundation for extracting texts from documents.
## Development options
applied-ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-sas-tokens.md
+
+ Title: Create SAS tokens for containers and blobs with the Azure portal
+description: Learn how to create shared access signature (SAS) tokens for containers using the Azure portal or Azure Storage Explorer
+Last updated: 05/27/2022
+recommendations: false
+
+# Create SAS tokens for storage containers
+
+ In this article, you'll learn how to create user delegation shared access signature (SAS) tokens using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
+
+At a high level, here's how SAS tokens work:
+
+* Your application submits the SAS token to Azure Storage as part of a REST API request.
+
+* If the storage service verifies that the SAS is valid, the request is authorized.
+
+* If the SAS token is deemed invalid, the request is declined and the error code 403 (Forbidden) is returned.
+
+Azure Blob Storage offers three resource types:
+
+* **Storage** accounts provide a unique namespace in Azure for your data.
+* **Data storage containers** are located in storage accounts and organize sets of blobs.
+* **Blobs** are located in containers and store text and binary data such as files and images.
+
+## When to use a SAS token
+
+* **Training custom models**. Your assembled set of training documents *must* be uploaded to an Azure Blob Storage container. You can opt to use a SAS token to grant access to your training documents.
+
+* **Using storage containers with public access**. You can opt to use a SAS token to grant limited access to your storage resources that have public read access.
+
+ > [!IMPORTANT]
+ >
+ > * If your Azure storage account is protected by a virtual network or firewall, you can't grant access with a SAS token. You'll have to use a [managed identity](managed-identities.md) to grant access to your storage resource.
+ >
+ > * [Managed identity](managed-identities-secured-access.md) supports both privately and publicly accessible Azure Blob Storage accounts.
+ >
+ > * SAS tokens grant permissions to storage resources, and should be protected in the same manner as an account key.
+ >
+ > * Operations that use SAS tokens should be performed only over an HTTPS connection, and SAS URIs should only be distributed on a secure connection such as HTTPS.
+
+## Prerequisites
+
+To get started, you'll need:
+
+* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+
+* A [Form Recognizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
+
+* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your blob data within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+
+ * [Create a storage account](../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
+ * [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and blobs) in the **New Container** window.
+
+## Upload your documents
+
+1. Go to the [Azure portal](https://portal.azure.com/#home).
+ * Select **Your storage account** → **Data storage** → **Containers**.
+
+ :::image type="content" source="media/sas-tokens/data-storage-menu.png" alt-text="Screenshot that shows the Data storage menu in the Azure portal.":::
+
+1. Select a container from the list.
+
+1. Select **Upload** from the menu at the top of the page.
+
+ :::image type="content" source="media/sas-tokens/container-upload-button.png" alt-text="Screenshot that shows the container Upload button in the Azure portal.":::
+
+1. The **Upload blob** window will appear. Select your files to upload.
+
+ :::image type="content" source="media/sas-tokens/upload-blob-window.png" alt-text="Screenshot that shows the Upload blob window in the Azure portal.":::
+
+ > [!NOTE]
+ > By default, the REST API uses form documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](./build-training-data-set.md#organize-your-data-in-subfolders-optional).
+
+## Use the Azure portal
+
+The Azure portal is a web-based console that enables you to manage your Azure subscription and resources using a graphical user interface (GUI).
+
+1. Go to the [Azure portal](https://portal.azure.com/#home) and navigate as follows:
+
+ * **Your storage account** → **containers** → **your container**.
+
+1. Select **Generate SAS** from the menu near the top of the page.
+
+1. Select **Signing method** → **User delegation key**.
+
+1. Define **Permissions** by selecting or clearing the appropriate checkbox.
+
+ * Make sure the **Read**, **Write**, **Delete**, and **List** permissions are selected.
+
+ :::image type="content" source="media/sas-tokens/sas-permissions.png" alt-text="Screenshot that shows the SAS permission fields in the Azure portal.":::
+
+ >[!IMPORTANT]
+ >
+ > * If you receive a message similar to the following one, you'll also need to assign access to the blob data in your storage account:
+ >
+ > :::image type="content" source="media/sas-tokens/need-permissions.png" alt-text="Screenshot that shows the lack of permissions warning.":::
+ >
+ > * [Azure role-based access control](../../role-based-access-control/overview.md) (Azure RBAC) is the authorization system used to manage access to Azure resources. Azure RBAC helps you manage access and permissions for your Azure resources.
+ > * [Assign an Azure role for access to blob data](../../role-based-access-control/role-assignments-portal.md?tabs=current) to assign a role that allows for read, write, and delete permissions for your Azure storage container. *See* [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor).
+
+1. Specify the signed key **Start** and **Expiry** times.
+
+ * When you create a SAS token, the default duration is 48 hours. After 48 hours, you'll need to create a new token.
+ * Consider setting a longer duration for the period you'll be using your storage account for Form Recognizer service operations.
+ * The value for the expiry time is a maximum of seven days from the creation of the SAS token.
+
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized.
+
+1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS token. The default value is HTTPS.
+
+1. Select **Generate SAS token and URL**.
+
+1. The **Blob SAS token** query string and **Blob SAS URL** appear in the lower area of the window. To use the Blob SAS token, append it to a storage service URI.
+
+1. Copy and paste the **Blob SAS token** and **Blob SAS URL** values in a secure location. They're displayed only once and can't be retrieved after the window is closed.
+
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
+
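+The resulting SAS URL is the storage resource URL followed by the SAS token query string, for example: `https://<storage-account>.blob.core.windows.net/<container>?<SAS-token>` (the placeholder names are illustrative).
+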
+## Use Azure Storage Explorer
+
+Azure Storage Explorer is a free standalone app that enables you to easily manage your Azure cloud storage resources from your desktop.
+
+### Get started
+
+* You'll need the [**Azure Storage Explorer**](../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
+
+* After the Azure Storage Explorer app is installed, [connect it to the storage account](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Form Recognizer.
+
+### Create your SAS tokens
+
+1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
+1. Expand the Storage Accounts node and select **Blob Containers**.
+1. Expand the Blob Containers node and right-click a storage **container** node to display the options menu.
+1. Select **Get Shared Access Signature** from the options menu.
+1. In the **Shared Access Signature** window, make the following selections:
+ * Select your **Access policy** (the default is none).
+ * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
+ * Select the **Time zone** for the Start and Expiry date and time (default is Local).
+ * Define your container **Permissions** by selecting the **Read**, **Write**, **List**, and **Delete** checkboxes.
+ * Select **key1** or **key2**.
+ * Review and select **Create**.
+
+1. A new window will appear with the **Container** name, **SAS URL**, and **Query string** for your container.
+
+1. **Copy and paste the SAS URL and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
+
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
+
+## Use your SAS URL to grant access
+
+The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the resources may be accessed by the client.
+
+### REST API
+
+To use your SAS URL with the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync), add the SAS URL to the request body:
+
+ ```json
+ {
+ "source":"<BLOB SAS URL>"
+ }
+ ```
+
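+For example, here's a minimal curl sketch of the v2.1 train request. The endpoint and key placeholders are illustrative; the SAS URL comes from the steps above:
+
+```bash
+# Kick off custom model training; the body points at the blob container SAS URL.
+curl -i -X POST "https://<your-form-recognizer-endpoint>/formrecognizer/v2.1/custom/models" \
+  -H "Content-Type: application/json" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  --data '{"source": "<BLOB SAS URL>"}'
+```
+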
+### Sample Labeling Tool
+
+To use your SAS URL with the [Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/connections/create), add the SAS URL to the **Connection Settings** → **Azure blob container** → **SAS URI** field:
+
+ :::image type="content" source="media/sas-tokens/fott-add-sas-uri.png" alt-text="Screenshot that shows the SAS URI field.":::
+
+That's it! You've learned how to create SAS tokens to authorize how clients access your data.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Build a training data set](build-training-data-set.md)
applied-ai-services Generate Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/generate-sas-tokens.md
- Title: Generate SAS tokens for containers and blobs with the Azure portal
-description: Learn how to generate shared access signature (SAS) tokens for containers and blobs in the Azure portal.
-Previously updated: 09/23/2021
-recommendations: false
-
-# Generate SAS tokens for storage containers
-
-In this article, you'll learn how to generate user delegation shared access signature (SAS) tokens for Azure Blob Storage containers. A user delegation SAS token is signed with Azure Active Directory (Azure AD) credentials instead of Azure Storage keys. It provides superior secure and delegated access to resources in your Azure storage account.
-
-At a high level, here's how it works: your application provides the SAS token to Azure Storage as part of a request. If the storage service verifies that the shared access signature is valid, the request is authorized. If the shared access signature is considered invalid, the request is declined with error code 403 (Forbidden).
-
-Azure Blob Storage offers three types of resources:
-
-* **Storage** accounts provide a unique namespace in Azure for your data.
-* **Containers** are located in storage accounts and organize sets of blobs.
-* **Blobs** are located in containers and store text and binary data.
-
-> [!NOTE]
->
-> * If your Azure storage account is protected by a virtual network or firewall, you can't grant access by using a SAS token. You'll have to use a [managed identity](managed-identity-byos.md) to grant access to your storage resource.
-> * [Managed identity](managed-identity-byos.md) supports both privately and publicly accessible Azure Blob Storage accounts.
->
-
-## When to use a shared access signature
-
-* If you're using storage containers with public access, you can opt to use a SAS token to grant limited access to your storage resources.
-* When you're training a custom model, your assembled set of training documents *must* be uploaded to an Azure Blob Storage container. You can grant permission to your training resources with a user delegation SAS token.
-
-## Prerequisites
-
-To get started, you'll need:
-
-* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
-* A [Form Recognizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
-* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your blob data within your storage account. If you don't know how to create an Azure storage account with a container, follow these quickstarts:
-
- * [Create a storage account](../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
- * [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and blobs) in the **New Container** window.
-
-## Upload your documents
-
-1. Go to the [Azure portal](https://portal.azure.com/#home). Select **Your storage account** > **Data storage** > **Containers**.
-
- :::image type="content" source="media/sas-tokens/data-storage-menu.png" alt-text="Screenshot that shows the Data storage menu in the Azure portal.":::
-
-1. Select a container from the list.
-1. Select **Upload** from the menu at the top of the page.
-
- :::image type="content" source="media/sas-tokens/container-upload-button.png" alt-text="Screenshot that shows the container Upload button in the Azure portal.":::
-
- The **Upload blob** window appears.
-1. Select your files to upload.
-
- :::image type="content" source="media/sas-tokens/upload-blob-window.png" alt-text="Screenshot that shows the Upload blob window in the Azure portal.":::
-
-> [!NOTE]
-> By default, the REST API uses form documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](./build-training-data-set.md#organize-your-data-in-subfolders-optional).
-
-## Create a shared access signature with the Azure portal
-
-> [!IMPORTANT]
->
-> Generate and retrieve the shared access signature for your container, not for the storage account itself.
-
-1. In the [Azure portal](https://portal.azure.com/#home), select **Your storage account** > **Containers**.
-1. Select a container from the list.
-1. Go to the right of the main window, and select the three ellipses associated with your chosen container.
-1. Select **Generate SAS** from the dropdown menu to open the **Generate SAS** window.
-
- :::image type="content" source="media/sas-tokens/generate-sas.png" alt-text="Screenshot that shows the Generate SAS token dropdown menu in the Azure portal.":::
-
-1. Select **Signing method** > **User delegation key**.
-
-1. Define **Permissions** by selecting or clearing the appropriate checkbox. Make sure the **Read**, **Write**, **Delete**, and **List** permissions are selected.
-
- :::image type="content" source="media/sas-tokens/sas-permissions.png" alt-text="Screenshot that shows the SAS permission fields in the Azure portal.":::
-
- >[!IMPORTANT]
- >
- > * If you receive a message similar to the following one, you'll need to assign access to the blob data in your storage account:
- >
- > :::image type="content" source="media/sas-tokens/need-permissions.png" alt-text="Screenshot that shows the lack of permissions warning.":::
- >
- > * [Azure role-based access control](../../role-based-access-control/overview.md) (Azure RBAC) is the authorization system used to manage access to Azure resources. Azure RBAC helps you manage access and permissions for your Azure resources.
- > * [Assign an Azure role for access to blob data](../../role-based-access-control/role-assignments-portal.md?tabs=current) shows you how to assign a role that allows for read, write, and delete permissions for your Azure storage container. For example, see [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor).
-
-1. Specify the signed key **Start** and **Expiry** times. The value for the expiry time is a maximum of seven days from the start of the shared access signature.
-
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized.
-
-1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the shared access signature. The default value is HTTPS.
-
-1. Select **Generate SAS token and URL**.
-
-1. The **Blob SAS token** query string and **Blob SAS URL** appear in the lower area of the window. To use the Blob SAS token, append it to a storage service URI.
-
-1. Copy and paste the **Blob SAS token** and **Blob SAS URL** values in a secure location. They're displayed only once and can't be retrieved after the window is closed.
-
-## Create a shared access signature with the Azure CLI
-
-1. To create a user delegation SAS for a container by using the Azure CLI, make sure that you've installed version 2.0.78 or later. To check your installed version, use the `az --version` command.
-
-1. Call the [az storage container generate-sas](/cli/azure/storage/container#az-storage-container-generate-sas) command.
-
-1. The following parameters are required:
-
- * `auth-mode login`. This parameter ensures that requests made to Azure Storage are authorized with your Azure AD credentials.
- * `as-user`. This parameter indicates that the generated SAS is a user delegation SAS.
-
-1. Supported permissions for a user delegation SAS on a container include Add (a), Create (c), Delete (d), List (l), Read (r), and Write (w). Make sure **r**, **w**, **d**, and **l** are included as part of the permissions parameters.
-
-1. When you create a user delegation SAS with the Azure CLI, the maximum interval during which the user delegation key is valid is seven days from the start date. Specify an expiry time for the shared access signature that's within seven days of the start time. For more information, see [Create a user delegation SAS for a container or blob with the Azure CLI](../../storage/blobs/storage-blob-user-delegation-sas-create-cli.md#use-azure-ad-credentials-to-secure-a-sas).
-
-### Example
-
-Generate a user delegation SAS. Replace the placeholder values in the brackets with your own values:
-
-```azurecli-interactive
-az storage container generate-sas \
- --account-name <storage-account> \
- --name <container> \
- --permissions rwdl \
- --expiry <date-time> \
- --auth-mode login \
- --as-user
-```
-
-## Use your Blob SAS URL
-
-Two options are available:
-
-* To use your Blob SAS URL with the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync), add the SAS URL to the request body:
-
- ```json
- {
- "source":"<BLOB SAS URL>"
- }
- ```
-
-* To use your Blob SAS URL with the [Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/connections/create), add the SAS URL to the **Connection Settings** > **Azure blob container** > **SAS URI** field:
-
- :::image type="content" source="media/sas-tokens/fott-add-sas-uri.png" alt-text="Screenshot that shows the SAS URI field.":::
-
-That's it. You've learned how to generate SAS tokens to authorize how clients access your data.
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Build a training data set](build-training-data-set.md)
applied-ai-services Use Prebuilt Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-prebuilt-read.md
recommendations: false
# Use the Read Model
- In this how-to guide, you'll learn to use Azure Form Recognizer's [read model](../concept-read.md) to extract printed and handwritten text from documents. The read model can detect lines, words, locations, and languages. You can use a programming language of your choice or the REST API. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+ In this how-to guide, you'll learn to use Azure Form Recognizer's [read model](../concept-read.md) to extract typeface and handwritten text from documents. The read model can detect lines, words, locations, and languages. You can use a programming language of your choice or the REST API. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
The read model is the core of all the other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the read model as a foundation for extracting texts from documents.
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Form Recognizer uses the following models to easily identify, extract, and analy
**Document analysis models**
-* [**Read model**](concept-read.md) | Extract printed and handwritten text lines, words, locations, and detected languages from documents and images.
+* [**Read model**](concept-read.md) | Extract typeface and handwritten text lines, words, locations, and detected languages from documents and images.
* [**Layout model**](concept-layout.md) | Extract text, tables, selection marks, and structure information from documents (PDF and TIFF) and images (JPG, PNG, and BMP).
* [**General document model**](concept-general-document.md) | Extract key-value pairs, selection marks, and entities from documents.
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
To learn more about Form Recognizer features and development options, visit our
**Document Analysis**
-* 🆕 Read—Analyze and extract printed and handwritten text lines, words, locations, and detected languages.
+* 🆕 Read—Analyze and extract printed (typeface) and handwritten text lines, words, locations, and detected languages.
* 🆕General document—Analyze and extract text, tables, structure, key-value pairs, and named entities.
* Layout—Analyze and extract tables, lines, words, and selection marks from documents, without the need to train a model.
azure-vmware Concepts Design Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-design-public-internet-access.md
+
+ Title: Concept - Internet connectivity design considerations (Preview)
+description: Options for Azure VMware Solution Internet Connectivity.
+Last updated: 5/12/2022
+# Internet connectivity design considerations (Preview)
+
+There are three primary patterns for creating outbound access to the Internet from Azure VMware Solution and to enable inbound Internet access to resources on your Azure VMware Solution private cloud.
+
+- [Internet Service hosted in Azure](#internet-service-hosted-in-azure)
+- [Azure VMware Solution Managed SNAT](#azure-vmware-solution-managed-snat)
+- [Public IP to NSX edge](#public-ip-to-nsx-edge)
+
+Your requirements for security controls, visibility, capacity, and operations drive the selection of the appropriate method for delivery of Internet access to the Azure VMware Solution private cloud.
+
+## Internet Service hosted in Azure
+
+There are multiple ways to generate a default route in Azure and send it towards your Azure VMware Solution private cloud or on-premises environment. The options are as follows:
+
+- An Azure firewall in a Virtual WAN Hub.
+- A third-party Network Virtual Appliance in a Virtual WAN Hub Spoke Virtual Network.
+- A third-party Network Virtual Appliance in a Native Azure Virtual Network using Azure Route Server.
+- A default route from on-premises transferred to Azure VMware Solution over Global Reach.
+
+Use any of these patterns to provide an outbound SNAT service with the ability to control what sources are allowed out, to view the connection logs, and for some services, do further traffic inspection.
+
+The same service can also consume an Azure Public IP and create an inbound DNAT from the Internet towards targets in Azure VMware Solution.
+
+You can also build an environment that uses multiple paths for Internet traffic: one for outbound SNAT (for example, a third-party security NVA), and another for inbound DNAT (like a third-party load balancer NVA using SNAT pools for return traffic).
+
+## Azure VMware Solution Managed SNAT
+
+A Managed SNAT service provides a simple method for outbound internet access from an Azure VMware Solution private cloud. Features of this service include the following.
+
+- Easily enabled: select the radio button on the Internet connectivity tab, and all workload networks will have immediate outbound access to the Internet through a SNAT gateway.
+- No control over SNAT rules; all sources that reach the SNAT service are allowed.
+- No visibility into connection logs.
+- Two Public IPs are used and rotated to support up to 128,000 simultaneous outbound connections.
+- No inbound DNAT capability is available with the Azure VMware Solution Managed SNAT.
+
+## Public IP to NSX edge
+
+This option brings an allocated Azure Public IP directly to the NSX Edge for consumption. It allows the Azure VMware Solution private cloud to directly consume and apply public network addresses in NSX as required. These addresses are used for the following types of connections:
+- Outbound SNAT
+- Inbound DNAT
+- Load balancing using VMware AVI third-party Network Virtual Appliances
+- Applications directly connected to a workload VM interface.
+
+This option also lets you configure the public address on a third-party Network Virtual Appliance to create a DMZ within the Azure VMware Solution private cloud.
+
+Features include:
+
+ - Scale: the soft limit of 64 Public IPs can be increased by request to thousands of Public IPs allocated if required by an application.
+ - Flexibility: a Public IP can be applied anywhere in the NSX ecosystem. It can be used to provide SNAT or DNAT, on load balancers like VMware's AVI, or on third-party Network Virtual Appliances. It can also be used on third-party Network Virtual Security Appliances on VMware segments or directly on VMs.
+ - Regionality: the Public IP to the NSX Edge is unique to the local SDDC. For "multi private cloud in distributed regions" scenarios with local exit to Internet intentions, it's much easier to direct traffic locally versus trying to control default route propagation for a security or SNAT service hosted in Azure. If you have two or more Azure VMware Solution private clouds connected with a Public IP configured, they can both have a local exit.
+
+## Considerations for selecting an option
+
+The option that you select depends on the following factors:
+
+- To add an Azure VMware Solution private cloud to a security inspection point provisioned in Azure native that inspects all Internet traffic from Azure native endpoints, use an Azure native construct and leak a default route from Azure to your Azure VMware Solution private cloud.
+- If you need to run a third-party Network Virtual Appliance to conform to existing standards for security inspection or streamlined operations, you have two options. You can run it in native Azure with the default route method, or run it in Azure VMware Solution using Public IP to the NSX Edge.
+- There are scale limits on how many Public IPs can be allocated to a Network Virtual Appliance running in native Azure or provisioned on Azure Firewall. The Public IP to NSX Edge option allows for much higher allocations (thousands versus hundreds).
+- Use a Public IP to the NSX Edge for a localized exit to the Internet from each private cloud in its local region. When you use multiple Azure VMware Solution private clouds in several Azure regions that need to communicate with each other and the Internet, it can be challenging to match an Azure VMware Solution private cloud with a security service in Azure. The difficulty is due to the way a default route from Azure works.
+
+## Next steps
+
+[Enable Managed SNAT for Azure VMware Solution Workloads](enable-managed-snat-for-workloads.md)
+
+[Enable Public IP to the NSX Edge for Azure VMware Solution (Preview)](enable-public-ip-nsx-edge.md)
+
+[Disable Internet access or enable a default route](disable-internet-access.md)
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
Title: Configure external identity source for vCenter Server
description: Learn how to configure Active Directory over LDAP or LDAPS for vCenter Server as an external identity source.
Last updated: 04/22/2022

# Configure external identity source for vCenter Server
azure-vmware Disable Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disable-internet-access.md
+
+ Title: Disable internet access or enable a default route
+description: This article explains how to disable internet access for Azure VMware Solution and enable default route for Azure VMware Solution.
+Last updated: 05/12/2022
+# Disable internet access or enable a default route
+
+In this article, you'll learn how to disable Internet access or enable a default route for your Azure VMware Solution private cloud. There are multiple ways to set up a default route: you can use a Virtual WAN hub, a Network Virtual Appliance in a Virtual Network, or a default route from on-premises. If you don't set up a default route, there will be no Internet access to your Azure VMware Solution private cloud.
+
+With a default route setup, you can achieve the following tasks:
+- Disable Internet access to your Azure VMware Solution private cloud.
+
+ > [!Note]
+ > Ensure that a default route is not advertised from on-premises or Azure as that will override this setup.
+
+- Enable Internet access by generating a default route from Azure Firewall or a third-party Network Virtual Appliance.
+
+## Prerequisites
+
+- If Internet access is required, a default route must be advertised from an Azure Firewall, Network Virtual Appliance, or Virtual WAN hub.
+- Azure VMware Solution private cloud.
+
+## Disable Internet access or enable a default route in the Azure portal
+1. Log in to the Azure portal.
+1. Search for **Azure VMware Solution** and select it.
+1. Locate and select your Azure VMware Solution private cloud.
+1. On the left navigation, under **Workload networking**, select **Internet connectivity**.
+1. Select the **Don't connect or connect using default route from Azure** button and select **Save**.
+
+If you don't have a default route from on-premises or from Azure, you have successfully disabled Internet connectivity to your Azure VMware Solution private cloud.
+
+## Next steps
+
+[Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md)
+
+[Enable Managed SNAT for Azure VMware Solution Workloads](enable-managed-snat-for-workloads.md)
+
+[Enable Public IP to the NSX Edge for Azure VMware Solution](enable-public-ip-nsx-edge.md)
azure-vmware Enable Managed Snat For Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-managed-snat-for-workloads.md
+
+ Title: Enable Managed SNAT for Azure VMware Solution Workloads
+description: This article explains how to enable Managed SNAT for Azure VMware Solution Workloads.
+ Last updated : 05/12/2022+
+# Enable Managed SNAT for Azure VMware Solution workloads
+
+In this article, you'll learn how to enable Azure VMware Solution's Managed Source NAT (SNAT) service for outbound Internet connectivity. A SNAT service translates from RFC1918 space to the public Internet for simple outbound Internet access. The SNAT service won't work when you have a default route from Azure.
+
+With this capability, you:
+
+- Have a basic SNAT service with outbound Internet connectivity from your Azure VMware Solution private cloud.
+- Have no control of outbound SNAT rules.
+- Are unable to view connection logs.
+- Have a limit of 128,000 concurrent connections.
+
+## Prerequisites
+- Azure VMware Solution private cloud
+- A DNS server configured on the NSX-T Data Center
+
+## Reference architecture
+The architecture shows Internet access to and from your Azure VMware Solution private cloud using the Managed SNAT service.
+
+## Configure Outbound Internet access using Managed SNAT in the Azure portal
+
+1. Sign in to the Azure portal, and then search for and select **Azure VMware Solution**.
+1. Select the Azure VMware Solution private cloud.
+1. In the left navigation, under **Workload Networking**, select **Internet Connectivity**.
+1. Select the **Connect using SNAT** button and select **Save**.
+ You have successfully enabled outbound Internet access for your Azure VMware Solution private cloud using the Managed SNAT service.
+
+## Next steps
+[Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md)
+
+[Enable Public IP to the NSX Edge for Azure VMware Solution (Preview)](enable-public-ip-nsx-edge.md)
+
+[Disable Internet access or enable a default route](disable-internet-access.md)
azure-vmware Enable Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-internet-access.md
In this scenario, you'll publish the IIS webserver to the internet. Use the publ
1. Select the Azure VMware Solution private cloud.
- :::image type="content" source="media/public-ip-usage/avs-private-cloud-resource.png" alt-text="Screenshot of the Azure VMware Solution private cloud." lightbox="media/public-ip-usage/avs-private-cloud-resource.png":::
- 1. Under **Manage**, select **Connectivity**. :::image type="content" source="media/public-ip-usage/avs-private-cloud-manage-menu.png" alt-text="Screenshot of the Connectivity section." lightbox="media/public-ip-usage/avs-private-cloud-manage-menu.png":::
Once all components are deployed, you can see them in the added Resource group.
1. Select a hub from the list and select **Add**.
- :::image type="content" source="media/public-ip-usage/secure-hubs-with-azure-firewall-polcy.png" alt-text="Screenshot that shows the selected hubs that will be converted to Secured Virtual Hubs." lightbox="media/public-ip-usage/secure-hubs-with-azure-firewall-polcy.png":::
+ :::image type="content" source="media/public-ip-usage/secure-hubs-with-azure-firewall-policy.png" alt-text="Screenshot that shows the selected hubs that will be converted to Secured Virtual Hubs." lightbox="media/public-ip-usage/secure-hubs-with-azure-firewall-policy.png":::
1. Select **Next: Tags**.
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
+
+ Title: Enable Public IP to the NSX Edge for Azure VMware Solution (Preview)
+description: This article explains how to enable internet access for your Azure VMware Solution.
+ Last updated : 05/12/2022+
+# Enable Public IP to the NSX Edge for Azure VMware Solution (Preview)
+
+In this article, you'll learn how to enable Public IP to the NSX Edge for your Azure VMware Solution.
+
+>[!TIP]
+>Before you enable Internet access to your Azure VMware Solution, review the [Internet connectivity design considerations](concepts-design-public-internet-access.md).
+
+Public IP to the NSX Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment. The Public IP is configured in Azure VMware Solution through the Azure portal and the NSX-T Data Center interface within your Azure VMware Solution private cloud.
+With this capability, you have the following features:
+- A cohesive and simplified experience for reserving and using a Public IP down to the NSX Edge.
+- The ability to obtain 1,000 or more Public IPs, enabling Internet access at scale.
+- Inbound and outbound internet access for your workload VMs.
+- DDoS protection for network traffic to and from the Internet.
+- HCX Migration support over the Public Internet.
+
+## Reference architecture
+The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge.
+
+## Configure a Public IP in the Azure portal
+1. Sign in to the Azure portal.
+1. Search for and select **Azure VMware Solution**.
+1. Select the Azure VMware Solution private cloud.
+1. In the left navigation, under **Workload Networking**, select **Internet connectivity**.
+1. Select the **Connect using Public IP down to the NSX-T Edge** button.
+
+>[!TIP]
+>Before selecting a Public IP, ensure you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md).
+
+5. Select **Public IP**.
+ :::image type="content" source="media/public-ip-nsx-edge/public-ip-internet-connectivity.png" alt-text="Diagram that shows how to select public IP to the NSX Edge":::
+6. Enter the **Public IP name** and select a subnet size from the **Address space** dropdown and select **Configure**.
+7. The Public IP should be configured within 20 minutes and will display the subnet.
+ :::image type="content" source="media/public-ip-nsx-edge/public-ip-subnet-internet-connectivity.png" alt-text="Diagram that shows Internet connectivity in Azure VMware Solution.":::
+8. If you don't see the subnet, refresh the list. If the refresh fails, try the configuration again.
+
+9. After configuring the Public IP, select the **Connect using the Public IP down to the NSX-T Edge** checkbox to disable all other Internet options.
+10. Select **Save**.
+
+You have successfully enabled Internet connectivity for your Azure VMware Solution private cloud and reserved a Microsoft-allocated Public IP. You can now configure this Public IP down to the NSX Edge for your workloads. NSX-T Data Center is used for all VM communication.
+
+There are three options for configuring your reserved Public IP down to the NSX Edge: Outbound Internet access for VMs, Inbound Internet access for VMs, and a Gateway Firewall used to filter traffic to VMs at T1 Gateways.
+
+### Outbound Internet access for VMs
+
+A Source NAT (SNAT) service with Port Address Translation (PAT) is used to allow many VMs to share one SNAT service. This means you can provide outbound Internet connectivity for many VMs.
+
+**Add rule**
+1. From your Azure VMware Solution private cloud, select **vCenter Credentials**.
+1. Locate your NSX-T URL and credentials.
+1. Log in to **VMware NSX-T**.
+1. Navigate to **NAT Rules**.
+1. Select the T1 Router.
+1. Select **ADD NAT RULE**.
+
+**Configure rule**
+
+1. Enter a name.
+1. Select **SNAT**.
+1. Optionally enter a source such as a subnet to SNAT or destination.
+1. Enter the translated IP. This IP is from the range of Public IPs you reserved from the Azure VMware Solution Portal.
+1. Optionally give the rule a higher priority number. This prioritization will move the rule further down the rule list to ensure more specific rules are matched first.
+1. Select **SAVE**.
+
+Logging can be enabled by using the logging slider. For more information on NSX-T NAT configuration and options, see the
+[NSX-T NAT Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-7AD2C384-4303-4D6C-A44A-DEF45AA18A92.html).
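+
+If you prefer to automate this configuration, the same SNAT rule can be created through the NSX-T Policy REST API. The following is a minimal sketch, not the documented procedure: the manager address, credentials, Tier-1 gateway name, and IP values are all placeholders, and the endpoint and field names should be verified against your NSX-T version.
+
+```bash
+# Sketch: create the SNAT rule above on a Tier-1 gateway via the NSX-T Policy API.
+# <nsx-manager>, <password>, "T1", and the addresses are placeholders.
+curl -k -u 'admin:<password>' \
+  -X PATCH "https://<nsx-manager>/policy/api/v1/infra/tier-1s/T1/nat/USER/nat-rules/outbound-snat" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "action": "SNAT",
+        "source_network": "192.168.1.0/24",
+        "translated_network": "20.0.0.1",
+        "enabled": true,
+        "logging": false
+      }'
+```
+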
+### Inbound Internet access for VMs
+A Destination NAT (DNAT) service is used to expose a VM on a specific Public IP address and/or a specific port. This service provides inbound internet access to your workload VMs.
+
+**Log in to VMware NSX-T**
+1. From your Azure VMware Solution private cloud, select **VMware credentials**.
+2. Locate your NSX-T URL and credentials.
+3. Log in to **VMware NSX-T**.
+
+**Configure the DNAT rule**
+ 1. Name the rule.
+ 1. Select **DNAT** as the action.
+ 1. Enter the reserved Public IP in the destination match. This IP is from the range of Public IPs reserved from the Azure VMware Solution Portal.
+ 1. Enter the VM private IP in the translated IP.
+ 1. Optionally, configure the Translated Port or source IP for more specific matches.
+ 1. Select **SAVE**.
+
+The VM is now exposed to the internet on the specific Public IP and/or specific ports.
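+
+As with SNAT, here's a hedged sketch of the equivalent DNAT rule through the NSX-T Policy REST API; all names and addresses are placeholders, and the payload should be verified against your NSX-T version.
+
+```bash
+# Sketch: expose VM 192.168.1.10 on the reserved Public IP over port 22 (DNAT).
+# <nsx-manager>, <password>, "T1", and the addresses are placeholders.
+curl -k -u 'admin:<password>' \
+  -X PATCH "https://<nsx-manager>/policy/api/v1/infra/tier-1s/T1/nat/USER/nat-rules/inbound-dnat" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "action": "DNAT",
+        "destination_network": "20.0.0.1",
+        "translated_network": "192.168.1.10",
+        "translated_ports": "22",
+        "enabled": true
+      }'
+```
+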
+
+### Gateway Firewall used to filter traffic to VMs at T1 Gateways
+
+You can provide security protection for your network traffic in and out of the public Internet through your Gateway Firewall.
+1. From your Azure VMware Solution private cloud, select **VMware credentials**.
+2. Locate your NSX-T URL and credentials.
+3. Log in to **VMware NSX-T**.
+4. From the NSX-T home screen, select **Gateway Policies**.
+5. Select **Gateway Specific Rules**, choose the T1 Gateway and select **ADD POLICY**.
+6. Select **New Policy** and enter a policy name.
+7. Select the Policy and select **ADD RULE**.
+8. Configure the rule.
+
+ 1. Select **New Rule**.
+ 1. Enter a descriptive name.
+ 1. Configure the source, destination, services, and action.
+
+9. Select **Match External Address** to apply firewall rules to the external address of a NAT rule.
+
+For example, the following rule is set to Match External Address, and this setting will allow SSH traffic inbound to the Public IP.
+ :::image type="content" source="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity.png" alt-text="Screenshot Internet connectivity inbound Public IP." lightbox="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity-expanded.png":::
+
+If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM.
+For more information on the NSX-T Gateway Firewall, see the [NSX-T Gateway Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html).
+The Distributed Firewall may also be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see the [NSX-T Distributed Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).
++
+## Next steps
+[Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md)
+
+[Enable Managed SNAT for Azure VMware Solution Workloads (Preview)](enable-managed-snat-for-workloads.md)
+
+[Disable Internet access or enable a default route](disable-internet-access.md)
+
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/create-sas-tokens.md
Title: Create shared access signature (SAS) tokens for containers and blobs with Microsoft Storage Explorer
+ Title: Create shared access signature (SAS) tokens for containers and blobs with Microsoft Storage Explorer
description: How to create Shared Access Signature tokens (SAS) for containers and blobs with Microsoft Storage Explorer and the Azure portal. Previously updated : 04/26/2022 Last updated : 05/27/2022 # Create SAS tokens for your storage containers
-In this article, you'll learn how to create shared access signature (SAS) tokens using the Azure Storage Explorer or the Azure portal. A SAS token provides secure, delegated access to resources in your Azure storage account.
+In this article, you'll learn how to create user delegation shared access signature (SAS) tokens using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
-## Create your SAS tokens with Azure Storage Explorer
+At a high level, here's how SAS tokens work:
-### Prerequisites
+* Your application submits the SAS token to Azure Storage as part of a REST API request.
-* You'll need a [**Azure Storage Explorer**](../../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment. Azure Storage Explorer is a free tool that enables you to easily manage your Azure cloud storage resources.
-* After the Azure Storage Explorer app is installed, [connect it the storage account](../../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Document Translation.
+* If the storage service verifies that the SAS is valid, the request is authorized.
-### Create your tokens
+* If the SAS token is deemed invalid, the request is declined and the error code 403 (Forbidden) is returned.
-### [SAS tokens for containers](#tab/Containers)
+Azure Blob Storage offers three resource types:
+
+* **Storage** accounts provide a unique namespace in Azure for your data.
+* **Data storage containers** are located in storage accounts and organize sets of blobs (files, text, or images).
+* **Blobs** are located in containers and store text and binary data such as files, text, and images.
+
+> [!IMPORTANT]
+>
+> * SAS tokens are used to grant permissions to storage resources, and should be protected in the same manner as an account key.
+>
+> * Operations that use SAS tokens should be performed only over an HTTPS connection, and SAS URIs should only be distributed on a secure connection such as HTTPS.
+
+## Prerequisites
+
+To get started, you'll need the following resources:
+
+* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+
+* A [Translator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource.
+
+* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+
+ * [Create a storage account](../../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
+ * [Create a container](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
+
+## Create SAS tokens in the Azure portal
+
+<!-- markdownlint-disable MD024 -->
+
+Go to the [Azure portal](https://portal.azure.com/#home), navigate to your container or a specific file as follows, and then continue with the steps below:
+
+| Create SAS token for a container| Create SAS token for a specific file|
+|:--:|:--:|
+|**Your storage account** → **containers** → **your container** |**Your storage account** → **containers** → **your container** → **your file** |
+
+1. Right-click the container or file and select **Generate SAS** from the drop-down menu.
+
+1. Select **Signing method** → **User delegation key**.
+
+1. Define **Permissions** by checking and/or clearing the appropriate check box:
+
+ * Your **source** container or file must have designated **read** and **list** access.
+
+ * Your **target** container or file must have designated **write** and **list** access.
+
+1. Specify the signed key **Start** and **Expiry** times.
+
+ * When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.
+ * Consider setting a longer duration period for the time you'll be using your storage account for Translator Service operations.
+ * The value for the expiry time is a maximum of seven days from the creation of the SAS token.
+
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized.
+
+1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS.
+
+1. Review then select **Generate SAS token and URL**.
+
+1. The **Blob SAS token** query string and **Blob SAS URL** will be displayed in the lower area of the window.
+
+1. **Copy and paste the Blob SAS token and URL values in a secure location. They'll only be displayed once and cannot be retrieved once the window is closed.**
+
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
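+
+As an illustrative sketch (the account, container, and token values below are placeholders), constructing and using the SAS URL is plain string concatenation:
+
+```bash
+# Sketch: append the SAS token (query string) to the storage resource URL.
+# The account, container, and token values are placeholders.
+STORAGE_URL="https://<storage-account>.blob.core.windows.net/<container>"
+SAS_TOKEN="sv=2020-08-04&ss=b&srt=co&sp=rwl&se=2022-06-30T02:00:00Z&sig=<signature>"
+SAS_URL="${STORAGE_URL}?${SAS_TOKEN}"
+
+# The SAS URL can then be used directly, for example to list the container:
+curl "${SAS_URL}&restype=container&comp=list"
+```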
+
+## Create SAS tokens with Azure Storage Explorer
+
+Azure Storage Explorer is a free standalone app that enables you to easily manage your Azure cloud storage resources from your desktop.
+
+* You'll need the [**Azure Storage Explorer**](../../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
+
+* After the Azure Storage Explorer app is installed, [connect it to the storage account](../../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Document Translation. Follow the steps below to create tokens for a storage container or specific blob file:
+
+### [SAS tokens for storage containers](#tab/Containers)
1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**. 1. Expand the Storage Accounts node and select **Blob Containers**.
In this article, you'll learn how to create shared access signature (SAS) tokens
* Define your container **Permissions** by checking and/or clearing the appropriate check box. * Review and select **Create**.
-1. A new window will appear with the **Container** name, **URI**, and **Query string** for your container.
+1. A new window will appear with the **Container** name, **URI**, and **Query string** for your container.
1. **Copy and paste the container, URI, and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
-1. To construct a SAS URL, append the SAS token (URI) to the URL for a storage service.
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
-### [SAS tokens for blobs](#tab/blobs)
+### [SAS tokens for specific blob file](#tab/blobs)
1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**. 1. Expand your storage node and select **Blob Containers**. 1. Expand the Blob Containers node and select a **container** node to display the contents in the main window.
-1. Select the blob where you wish to delegate SAS access and right-click to display the options menu.
+1. Select the file where you wish to delegate SAS access and right-click to display the options menu.
1. Select **Get Shared Access Signature...** from options menu. 1. In the **Shared Access Signature** window, make the following selections: * Select your **Access policy** (the default is none). * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked. * Select the **Time zone** for the Start and Expiry date and time (default is Local). * Define your container **Permissions** by checking and/or clearing the appropriate check box.
+ * Your **source** container or file must have designated **read** and **list** access.
+ * Your **target** container or file must have designated **write** and **list** access.
+ * Select **key1** or **key2**.
* Review and select **Create**.
-1. A new window will appear with the **Blob** name, **URI**, and **Query string** for your blob.
+
+1. A new window will appear with the **Blob** name, **URI**, and **Query string** for your blob.
1. **Copy and paste the blob, URI, and query string values in a secure location. They will only be displayed once and cannot be retrieved once the window is closed.**
-1. To construct a SAS URL, append the SAS token (URI) to the URL for a storage service.
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
-## Create SAS tokens for blobs in the Azure portal
+### Use your SAS URL to grant access
-<!-- markdownlint-disable MD024 -->
-### Prerequisites
+The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the resources may be accessed by the client.
-To get started, you'll need:
+You can include your SAS URL with REST API requests in two ways:
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-* A [**Translator**](https://portal.azure.com/#create/Microsoft) service resource (**not** a Cognitive Services multi-service resource. *See* [Create a new Azure resource](../../cognitive-services-apis-create-account.md#create-a-new-azure-cognitive-services-resource).
-* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You will create containers to store and organize your blob data within your storage account.
+* Use the **SAS URL** as your sourceURL and targetURL values.
-### Create your tokens
+* Append the **SAS query string** to your existing sourceURL and targetURL values.
-Go to the [Azure portal](https://portal.azure.com/#home) and navigate as follows:
+Here is a sample REST API request:
- **Your storage account** → **containers** → **your container** → **your blob**
+```json
+{
+ "inputs": [
+ {
+ "storageType": "File",
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "es"
+ },
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
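+
+To show where this payload fits, here's a hedged sketch of submitting it to the Document Translation batch endpoint; the resource name and key are placeholders, and the route should be confirmed against the current Document Translation API version:
+
+```bash
+# Sketch: submit the batch request above (saved as request.json).
+# <your-resource-name> and <your-key> are placeholders.
+curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d @request.json
+```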
-1. Select **Generate SAS** from the menu near the top of the page.
-
-1. Select **Signing method** → **User delegation key**.
-
-1. Define **Permissions** by checking and/or clearing the appropriate check box.
-
-1. Specify the signed key **Start** and **Expiry** times.
-
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized.
-
-1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS.
-
-1. Review then select **Generate SAS token and URL**.
-
-1. The **Blob SAS token** query string and **Blob SAS URL** will be displayed in the lower area of window.
-
-1. **Copy and paste the Blob SAS token and URL values in a secure location. They'll only be displayed once and cannot be retrieved once the window is closed.**
-
-1. To construct a SAS URL, append the SAS token (URI) to the URL for a storage service.
-
-## Learn more
-
-* [Create SAS tokens for blobs or containers programmatically](../../../storage/blobs/sas-service-create.md)
-* [Permissions for a directory, container, or blob](/rest/api/storageservices/create-service-sas#permissions-for-a-directory-container-or-blob)
+That's it! You've learned how to create SAS tokens to authorize how clients access your data.
## Next steps > [!div class="nextstepaction"] > [Get Started with Document Translation](get-started-with-document-translation.md) >
->
cognitive-services How To Create Translator Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/how-to-create-translator-resource.md
In this article, you'll learn how to create a Translator resource in the Azure p
To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free 12-month subscription**](https://azure.microsoft.com/free/).
-## Translator resource types
+## Create your resource
The Translator service can be accessed through two different resource types:
-* **Single-service** resource types enable access to a single service API key and endpoint.
-
-* **Multi-service** resource types enable access to multiple Cognitive Services using a single API key and endpoint. The Cognitive Services resource is currently available for the following
- * Language ([Translator](../translator/translator-overview.md), [Language Understanding (LUIS)](../luis/what-is-luis.md), [Language service](../text-analytics/overview.md))
- * Vision ([Computer Vision](../computer-vision/overview.md)), ([Face](../face/overview.md))
- * Decision ([Content Moderator](../content-moderator/overview.md))
-
-## Create your resource
-
-* Navigate directly to the [**Create Translator**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) page in the Azure portal to complete your project details.
+* [**Single-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource types enable access to a single service API key and endpoint.
-* Navigate directly to the [**Create Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) page in the Azure portal to complete your project details.
+* [**Multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource types enable access to multiple Cognitive Services using a single API key and endpoint. The Cognitive Services resource is currently available for the following services:
->[!TIP]
->If you prefer, you can start on the Azure Portal home page to begin the **Create** process as follows:
->
-> 1. Navigate to the [**Azure Portal**](https://portal.azure.com/#home) home page.
-> 1. Select Γ₧ò**Create a resource** from the Azure services menu.
->1. In the **Search the Marketplace** search box, enter and select **Translator** (single-service resource) or **Cognitive Services** (multi-service resource). *See* [Choose your resource type](#create-your-resource), above.
-> 1. Select **Create** and you will be taken to the project details page.
-><br/><br/>
+> [!TIP]
+> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Translator Service access only, create a Translator single-service resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../active-directory/authentication/overview-authentication.md).
## Complete your project and instance details
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
You will need a certificate in PEM format. You can create more than one certific
## OpenSSL
-We recommending using OpenSSL to generate certificates. If you have git installed, you can run OpenSSL in the git shell. Otherwise, you can install OpenSSL for your OS.
+We recommend using OpenSSL to generate certificates. If you have git installed, you can run OpenSSL in the git shell. Otherwise, you can install OpenSSL for your OS.
- **Windows**: Install [Chocolatey for Windows](https://chocolatey.org/install), open a PowerShell terminal window in admin mode, and run `choco install openssl`. Alternatively, you can install OpenSSL for Windows directly from [here](http://gnuwin32.sourceforge.net/packages/openssl.htm). - **Linux**: Run `sudo apt-get install openssl`
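If you don't already have a private key, you can generate one first. The snippet below is a sketch that assumes an EC key in PEM format is acceptable for your ledger; check the service requirements for the expected key type and curve.

```bash
# Sketch: generate an EC private key (PEM) for the certificate command that follows.
openssl ecparam -name secp384r1 -genkey -noout -out "privkey_name.pem"
```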
openssl req -new -key "privkey_name.pem" -x509 -nodes -days 365 -out "cert.pem"
## Next steps -- [Overview of Microsoft Azure confidential ledger](overview.md)
+- [Overview of Microsoft Azure confidential ledger](overview.md)
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 04/18/2022 Last updated : 05/28/2022 tags: connectors
From your workflow in Azure Logic Apps, you can access and manage files stored a
You can connect to Blob Storage from both **Logic App (Consumption)** and **Logic App (Standard)** resource types. You can use the connector with logic app workflows in multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE). With **Logic App (Standard)**, you can use either the *built-in* **Azure Blob** operations or the **Azure Blob Storage** managed connector operations.
-> [!IMPORTANT]
-> A logic app workflow can't directly access a storage account behind a firewall if they're both in the same region.
-> As a workaround, your logic app and storage account can be in different regions. For more information about enabling
-> access from Azure Logic Apps to storage accounts behind firewalls, review the [Access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls) section later in this topic.
- ## Prerequisites - An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
You can connect to Blob Storage from both **Logic App (Consumption)** and **Logi
## Connector reference
-For more technical details about this connector, such as triggers, actions, and limits, review the [connector's reference page](/connectors/azureblobconnector/). If you don't want to use the Blob operations, you can use the [use HTTP trigger or action along with a a managed identity for blob operations instead](#access-blob-storage-with-managed-identities).
+For more technical details about this connector, such as triggers, actions, and limits, review the [connector's reference page](/connectors/azureblobconnector/).
<a name="add-trigger"></a>
To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps,
| Task | Path syntax | ||-|
- | Check the root folder for a newly added blob. | **<*container-name*>** |
+ | Check the root folder and its nested subfolders for a newly added blob. | **<*container-name*>** |
| Check the root folder for changes to a specific blob. | **<*container-name*>/<*blob-name*>.<*blob-extension*>** | | Check the root folder for changes to any blobs with the same extension, for example, **.txt**. | **<*container-name*>/{name}.txt** <br><br>**Important**: Make sure that you use **{name}** as a literal. | | Check the root folder for changes to any blobs with names starting with a specific string, for example, **Sample-**. | **<*container-name*>/Sample-{name}** <br><br>**Important**: Make sure that you use **{name}** as a literal. |
To add a Blob action to a logic app workflow in multi-tenant Azure Logic Apps, f
This example starts with the [**Recurrence** trigger](connectors-native-recurrence.md).
-1. Under the trigger or action where you want to add the Blob action, select **New step** or **Add an action**, if between steps.
+1. Under the trigger or action where you want to add the Blob action, select **New step** or **Add an action**, if between steps. This example uses the built-in Azure Blob action.
1. Under the designer search box, make sure that **All** is selected. In the search box, enter **Azure blob**. Select the Blob action that you want to use.
You can add network security to an Azure storage account by [restricting access
- To access storage accounts behind firewalls using the Azure Blob Storage managed connector in Consumption, Standard, and ISE-based logic apps, review the following documentation:
- - [Access storage accounts with managed identities](#access-blob-storage-with-managed-identities)
+ - [Access storage accounts in same region with managed identities](#access-blob-storage-in-same-region-with-managed-identities)
- [Access storage accounts in other regions](#access-storage-accounts-in-other-regions)
To add your outbound IP addresses to the storage account firewall, follow these
You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
-### Access Blob Storage with managed identities
+### Access Blob Storage in same region with managed identities
To connect to Azure Blob Storage in any region, you can use [managed identities for authentication](../active-directory/managed-identities-azure-resources/overview.md). You can create an exception that gives Microsoft trusted services, such as a managed identity, access to your storage account through a firewall.
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 04/12/2022 Last updated : 04/24/2022 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
If all of your source records map to the same target entity and your source data
## Mapping data flow properties
-When transforming data in mapping data flow, you can read and write to tables from Dynamics. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. You can choose to use a Dynamics dataset or an [inline dataset](data-flow-source.md#inline-datasets) as source and sink type.
+When transforming data in mapping data flow, you can read from and write to tables in Dynamics. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. You can choose to use a Dynamics dataset or an [inline dataset](data-flow-source.md#inline-datasets) as source and sink type.
### Source transformation
The below table lists the properties supported by Dynamics. You can edit these p
| Name | Description | Required | Allowed values | Data flow script property | | - | -- | -- | -- | - |
-| Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - | tableName |
+| Entity name| The logical name of the entity to retrieve. | Yes when using an inline dataset | - | *(for inline dataset only)*<br>entity |
| Query |FetchXML is a proprietary query language that is used in Dynamics online and on-premises. See the following example. To learn more, see [Build queries with FetchXML](/previous-versions/dynamicscrm-2016/developers-guide/gg328332(v=crm.8)). | No | String | query |
-| Entity | The logical name of the entity to retrieve. | Yes when use inline mode | - | entity|
> [!Note] > If you select **Query** as input type, the column type from tables cannot be retrieved. It will be treated as string by default. #### Dynamics source script example
-When you use Dynamics as source type, the associated data flow script is:
+When you use Dynamics dataset as source type, the associated data flow script is:
```
-source(
- output(
- new_name as string,
- new_dataflowtestid as string
- ),
- store: 'dynamics',
- format: 'dynamicsformat',
- baseUrl: $baseUrl,
- cloudType:'AzurePublic',
- servicePrincipalId:$servicePrincipalId,
- servicePrincipalCredential:$servicePrincipalCredential,
- entity:'new_datalowtest'
-query:' <fetch mapping='logical' count='3 paging-cookie=''><entity name='new_dataflow_crud_test'><attribute name='new_name'/><attribute name='new_releasedate'/></entity></fetch> '
- ) ~> movies
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ query: '<fetch mapping='logical' count='3 paging-cookie=''><entity name='new_dataflow_crud_test'><attribute name='new_name'/><attribute name='new_releasedate'/></entity></fetch>') ~> DynamicsSource
+```
+
+If you use an inline dataset, the associated data flow script is:
```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'dynamics',
+ format: 'dynamicsformat',
+ entity: 'Entity1',
+ query: '<fetch mapping='logical' count='3 paging-cookie=''><entity name='new_dataflow_crud_test'><attribute name='new_name'/><attribute name='new_releasedate'/></entity></fetch>') ~> DynamicsSource
+```
### Sink transformation
The below table lists the properties supported by Dynamics sink. You can edit th
| Name | Description | Required | Allowed values | Data flow script property | | - | -- | -- | -- | - |
-| Entity | The logical name of the entity to retrieve. | Yes when use inline mode | - | entity|
-| Request interval | The interval time between API requests in millisecond. | No | - | requestInterval|
-| Update method | Specify what operations are allowed on your database destination. The default is to only allow inserts.<br>To update, upsert, or delete rows, an [Alter row transformation](data-flow-alter-row.md) is required to tag rows for those actions. | Yes | `true` or `false` | insertable <br/>updateable<br/>upsertable<br>deletable|
| Alternate key name | The alternate key name defined on your entity to do an update, upsert or delete. | No | - | alternateKeyName |
+| Update method | Specify what operations are allowed on your database destination. The default is to only allow inserts.<br>To update, upsert, or delete rows, an [Alter row transformation](data-flow-alter-row.md) is required to tag rows for those actions. | Yes | `true` or `false` | insertable <br/>updateable<br/>upsertable<br>deletable|
+| Entity name| The logical name of the entity to write. | Yes when using an inline dataset | - | *(for inline dataset only)*<br>entity|
+ #### Dynamics sink script example
-When you use Dynamics as sink type, the associated data flow script is:
+When you use Dynamics dataset as sink type, the associated data flow script is:
```
-moviesAltered sink(
- input(new_name as string,
- new_id as string,
- new_releasedate as string
- ),
- store: 'dynamics',
- format: 'dynamicsformat',
- baseUrl: $baseUrl,
-
- cloudType:'AzurePublic',
- servicePrincipalId:$servicePrincipalId,
- servicePrincipalCredential:$servicePrincipalCredential,
- updateable: true,
- upsertable: true,
- insertable: true,
- deletable:true,
- alternateKey:'new_testalternatekey',
- entity:'new_dataflow_crud_test',
-
-requestInterval:1000
- ) ~> movieDB
+IncomingStream sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:true,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> DynamicsSink
```
+If you use an inline dataset, the associated data flow script is:
+
+```
+IncomingStream sink(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'dynamics',
+ format: 'dynamicsformat',
+ entity: 'Entity1',
+ deletable: true,
+ insertable: true,
+ updateable: true,
+ upsertable: true,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> DynamicsSink
+```
## Lookup activity properties To learn details about the properties, see [Lookup activity](control-flow-lookup-activity.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in May include:
- [Multicloud settings of Servers plan are now available in connector level](#multicloud-settings-of-servers-plan-are-now-available-in-connector-level) - [JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)](#jit-just-in-time-access-for-vms-is-now-available-for-aws-ec2-instances-preview)-- [Add and remove the Defender profile for AKS clusters from the CLI](#add-and-remove-the-defender-profile-for-aks-clusters-from-the-cli)
+- [Add and remove the Defender profile for AKS clusters using the CLI](#add-and-remove-the-defender-profile-for-aks-clusters-using-the-cli)
### Multicloud settings of Servers plan are now available in connector level
When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatical
Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws)
-### Add and remove the Defender profile for AKS clusters from the CLI
+### Add and remove the Defender profile for AKS clusters using the CLI
The Defender profile (preview) is required for Defender for Containers to provide runtime protections and collect signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](defender-for-containers-enable.md?tabs=k8s-deploy-cli%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Ck8s-remove-cli&pivots=defender-for-container-aks#use-azure-cli-to-deploy-the-defender-extension) for an AKS cluster, as sketched below.
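For reference, here's a hedged sketch of the CLI calls; the resource group and cluster names are placeholders, and the flags should be confirmed against your Azure CLI version.

```azurecli
# Sketch: add the Defender profile to an existing AKS cluster.
az aks update --enable-defender --resource-group <my-resource-group> --name <my-cluster>

# Sketch: remove the Defender profile.
az aks update --disable-defender --resource-group <my-resource-group> --name <my-cluster>
```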
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
This article describes the **HPE ProLiant DL360** appliance for OT sensors.
| Appliance characteristic |Details | ||| |**Hardware profile** | Corporate |
-|**Performance** | Max bandwidth: 3Gbp/s <br> Max devices: 12,000 |
+|**Performance** | Max bandwidth: 3 Gbps <br> Max devices: 12,000 |
|**Physical specifications** | Mounting: 1U<br>Ports: 15x RJ45 or 8x SFP (OPT)| |**Status** | Supported, Available preconfigured|
defender-for-iot Virtual Management Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-vmware.md
This article describes an on-premises management console deployment on a virtual
| Appliance characteristic |Details | ||| |**Hardware profile** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
-|**Performance** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Performance** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
|**Physical specifications** | Virtual Machine | |**Status** | Supported |
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
You are able to attach a SPAN Virtual Interface to the Virtual Switch through Wi
1. Select **OK**.
-These commands set the name of the newly added adapter hardware to be `Monitor`. If you are using Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`.
+These commands set the name of the newly added adapter hardware to be `Monitor`. If you're using Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`.
**To attach a SPAN Virtual Interface to the virtual switch with Hyper-V Manager**:
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
The alert group will appear in supported partner solutions with the following pr
- **alert_group** for Syslog objects
-These fields should be configured in the partner solution to display the alert group name. If there is no alert associated with an alert group, the field in the partner solution will display **NA**.
+These fields should be configured in the partner solution to display the alert group name. If there's no alert associated with an alert group, the field in the partner solution will display **NA**.
### Default alert groups
Add custom alert rule to pinpoint specific activity as needed for your organizat
For example, you might want to define an alert for an environment running MODBUS to detect any write commands to a memory register, on a specific IP address and ethernet destination. Another example would be an alert for any access to a specific IP address.
-Use custom alert rule actions to for IT to take specific action when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
+Use custom alert rule actions to instruct Defender for IoT to take specific action when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
**To create a custom alert rule**:
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
System messages provide general information about your sensor that may require y
For more information, see: -- [Threat intelligence research and packages ](how-to-work-with-threat-intelligence-packages.md)
+- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
- [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor)
defender-for-iot How To Analyze Programming Details Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-analyze-programming-details-changes.md
You may need to review programming activity:
- After a planned update to controllers
- - When a process or machine is not working correctly (to see who carried out the last update and when)
+ - When a process or machine isn't working correctly (to see who carried out the last update and when)
:::image type="content" source="media/how-to-work-with-maps/differences.png" alt-text="Screenshot of a Programming Change Log":::
Other options let you:
## About authorized versus unauthorized programming events
-Unauthorized programming events are carried out by devices that have not been learned or manually defined as programming devices. Authorized programming events are carried out by devices that were resolved or manually defined as programming devices.
+Unauthorized programming events are carried out by devices that haven't been learned or manually defined as programming devices. Authorized programming events are carried out by devices that were resolved or manually defined as programming devices.
The Programming Analysis window displays both authorized and unauthorized programming events.
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
If you're working with dynamic networks, you handle IP address changes that occu
Changes might happen, for example, when a DHCP server assigns IP addresses.
-Defining dynamic IP addresses on each sensor enables comprehensive, transparent support in instances of IP address changes. This ensures comprehensive reporting for each unique device.
+Defining dynamic IP addresses on each sensor enables comprehensive, transparent support in instances of IP address changes. This activity ensures comprehensive reporting for each unique device.
The sensor console presents the most current IP address associated with the device and indicates which devices are dynamic. For example:
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
When a primary and secondary on-premises management console is paired:
- The primary on-premises management console data is automatically backed up to the secondary on-premises management console every 10 minutes. The on-premises management console configurations and device data are backed up. PCAP files and logs are not included in the backup. You can back up and restore of PCAPs and logs manually. -- The primary setup at the management console is duplicated on the secondary; for example, system settings. If these settings are updated on the primary, they are also updated on the secondary.
+- The primary setup at the management console is duplicated on the secondary; for example, system settings. If these settings are updated on the primary, they're also updated on the secondary.
- Before the license of the secondary expires, you should define it as the primary in order to update the license. ## About failover and failback
-If a sensor cannot connect to the primary on-premises management console, it automatically connects to the secondary. Your system will be supported by both the primary and secondary simultaneously, if less than half of the sensors are communicating with the secondary. The secondary takes over when more than half of the sensors are communicating with it. Fail over from the primary to the secondary takes approximately three minutes. When the failover occurs, the primary on-premises management console freezes. When this happens, you can sign in to the secondary using the same sign-in credentials.
+If a sensor can't connect to the primary on-premises management console, it automatically connects to the secondary. Your system will be supported by both the primary and secondary simultaneously if less than half of the sensors are communicating with the secondary. The secondary takes over when more than half of the sensors are communicating with it; for example, with 10 managed sensors, the secondary takes over once six or more of them communicate with it. Failover from the primary to the secondary takes approximately three minutes. When the failover occurs, the primary on-premises management console freezes. When this happens, you can sign in to the secondary using the same sign-in credentials.
During failover, sensors continue attempting to communicate with the primary appliance. When more than half the managed sensors succeed to communicate with the primary, the primary is restored. The following message appears at the secondary console when the primary is restored.
The installation and configuration procedures are performed in four main stages:
## High availability requirements
-Verify that you have met the following high availability requirements:
+Verify that you've met the following high availability requirements:
- Certificate requirements
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
Message string with the operation status details:
- **Failure – error**: User authentication failure -- **Failure – error**: User does not exist
+- **Failure – error**: User doesn't exist
- **Failure – error**: Password doesn't match security policy -- **Failure – error**: User does not have the permissions to change password
+- **Failure – error**: User doesn't have the permissions to change password
#### Response example
The below API's can be used with the ServiceNow integration via the ServiceNow's
- Type: JSON - Structure:
- “**u_id**” - the internal id of the device.
+ “**u_id**” - the internal ID of the device.
- “**u_vendor**” - the name of the vendor. - “**u_mac_address_objects**” - array of - “**u_mac_address**” - mac address of the device.
The below API's can be used with the ServiceNow integration via the ServiceNow's
- “**u_protocol**” - protocol the device uses. - “**u_purdue_layer**” - the purdue layer that was manually set by the user. - “**u_sensor_ids**” - array of
- “**u_sensor_id**” - the id of the sensor that saw the device.
+ “**u_sensor_id**” - the ID of the sensor that saw the device.
- “**u_device_urls**” - array of - “**u_device_url**” - the URL to view the device in the sensor. - “**u_firmwares**” - array of
The below API's can be used with the ServiceNow integration via the ServiceNow's
- Type: JSON - Structure: - Array of
- “**u_id**” - the id of the deleted device.
+ “**u_id**” - the ID of the deleted device.
### Sensors
The below API's can be used with the ServiceNow integration via the ServiceNow's
- Type: JSON - Structure: - Array of
- “**u_id**” - internal sensor id, to be used in the devices API.
+ “**u_id**” - internal sensor ID, to be used in the devices API.
- “**u_name**” - the name of the appliance. - “**u_connection_state**” - connectivity with the CM state. One of the following: - “**SYNCED**” - Connection is successful.
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
The following table describes the commands available to configure your network o
## Network capture filter configuration
-The `network capture-filter` command allows administrators to eliminate network traffic that doesn't need to be analyzed. You can filter traffic by using an include list, or an exclude list. This command does not support the malware detection engine.
+The `network capture-filter` command allows administrators to eliminate network traffic that doesn't need to be analyzed. You can filter traffic by using an include list, or an exclude list. This command doesn't support the malware detection engine.
```azurecli-interactive network capture-filter
You're asked the following question:
Your options are:ΓÇ»`all`, `dissector`, `collector`, `statistics-collector`, `rpc-parser`, or `smb-parser`.
-In most common use cases, we recommend that you select `all`. Selecting `all` does not include the malware detection engine, which is not supported by this command.
+In most common use cases, we recommend that you select `all`. Selecting `all` doesn't include the malware detection engine, which isn't supported by this command.
-### Custom base capture filter
+### Custom base capture filter
The base capture filter is the baseline for the components. For example, the filter determines which ports are available to the component.
defender-for-iot Resources Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-frequently-asked-questions.md
Microsoft Defender for IoT provides comprehensive protocol support. In addition
- Secure proprietary information by developing on-site as an external plugin. - Localize text for alerts, events, and protocol parameters.
-This unique solution for developing protocols as plugins, does not require dedicated developer teams or version releases in order to support a new protocol. Developers, partners, and customers can securely develop protocols and share insights and knowledge using Horizon.
+This unique solution for developing protocols as plugins doesn't require dedicated developer teams or version releases to support a new protocol. Developers, partners, and customers can securely develop protocols and share insights and knowledge using Horizon.
## Do I have to purchase hardware appliances from Microsoft partners? Microsoft Defender for IoT sensor runs on specific hardware specs as described in the [Hardware Specifications Guide](./how-to-identify-required-appliances.md), customers can purchase certified hardware from Microsoft partners or use the supplied bill of materials (BOM) and purchase it on their own.
Microsoft Defender for IoT sensor runs on specific hardware specs as described i
Certified hardware has been tested in our labs for driver stability, packet drops and network sizing.
-## Regulation does not allow us to connect our system to the Internet. Can we still utilize Defender for IoT?
+## Regulation doesn't allow us to connect our system to the Internet. Can we still utilize Defender for IoT?
Yes, you can! The Microsoft Defender for IoT platform on-premises solution is deployed as a physical or virtual sensor appliance that passively ingests network traffic (via SPAN, RSPAN, or TAP) to analyze, discover, and continuously monitor IT, OT, and IoT networks. For larger enterprises, multiple sensors can aggregate their data to an on-premises management console.
hdinsight Cluster Management Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/cluster-management-best-practices.md
description: Learn best practices for managing HDInsight clusters.
Previously updated : 04/11/2020 Last updated : 05/30/2022 # HDInsight cluster management best practices
hdinsight Apache Hadoop Connect Excel Power Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-excel-power-query.md
description: Learn how to take advantage of business intelligence components and
Previously updated : 12/17/2019 Last updated : 05/30/2022 # Connect Excel to Apache Hadoop by using Power Query
hdinsight Apache Hadoop Hive Pig Udf Dotnet Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-hive-pig-udf-dotnet-csharp.md
description: Learn how to use C# user-defined functions (UDF) with Apache Hive a
Previously updated : 12/06/2019 Last updated : 05/30/2022 # Use C# user-defined functions with Apache Hive and Apache Pig on Apache Hadoop in HDInsight
hdinsight Connect Install Beeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/connect-install-beeline.md
description: Learn how to connect to the Apache Beeline client to run Hive queri
Previously updated : 04/07/2021 Last updated : 05/30/2022 # Connect to HiveServer2 using Beeline or install Beeline locally to connect from your local
hdinsight Hdinsight Troubleshoot Converting Service Principal Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-converting-service-principal-certificate.md
Title: Converting certificate contents to base-64 - Azure HDInsight
description: Converting service principal certificate contents to base-64 encoded string format in Azure HDInsight Previously updated : 07/31/2019 Last updated : 05/30/2022
namespace ConsoleApplication
## Next steps
hdinsight Hdinsight Troubleshoot Out Disk Space https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-out-disk-space.md
Title: Cluster node runs out of disk space in Azure HDInsight
description: Troubleshooting Apache Hadoop cluster node disk space issues in Azure HDInsight. Previously updated : 04/30/2020 Last updated : 05/30/2022 # Scenario: Cluster node runs out of disk space in Azure HDInsight
Apache Yarn application cache may have consumed all available disk space. Your S
1. Use Ambari UI to determine which node is running out of disk space.
-1. Determine which folder in the troubling node contributes to most of the disk space. SSH to the node first, then run `df` to list disk usage for all mounts. Usually it is `/mnt` which is a temp disk used by OSS. You can enter into a folder, then type `sudo du -hs` to show summarized file sizes under a folder. If you see a folder similar to `/mnt/resource/hadoop/yarn/local/usercache/livy/appcache/application_1537280705629_0007`, this means the application is still running. This could be due to RDD persistence or intermediate shuffle files.
+1. Determine which folder on the problem node consumes most of the disk space. SSH to the node, then run `df` to list disk usage for all mounts. Usually it's `/mnt`, a temp disk used by OSS. You can enter a folder, then type `sudo du -hs` to show summarized file sizes under it. If you see a folder similar to `/mnt/resource/hadoop/yarn/local/usercache/livy/appcache/application_1537280705629_0007`, the application is still running, which could be due to RDD persistence or intermediate shuffle files.
1. To mitigate the issue, kill the application, which will release disk space used by that application.
Apache Yarn application cache may have consumed all available disk space. Your S
Open the Ambari UI and navigate to YARN --> Configs --> Advanced.
- Add the following 2 properties to the custom yarn-site.xml section and save:
+ Add the following two properties to the custom yarn-site.xml section and save:
```
yarn.nodemanager.localizer.cache.target-size-mb=2048
yarn.nodemanager.localizer.cache.cleanup.interval-ms=300000
```
-1. If the above does not permanently fix the issue, optimize your application.
+1. If the above doesn't permanently fix the issue, optimize your application.
## Next steps
hdinsight Apache Hbase Backup Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-backup-replication.md
description: Set up Backup and replication for Apache HBase and Apache Phoenix i
Previously updated : 12/19/2019 Last updated : 05/30/2022 # Set up backup and replication for Apache HBase and Apache Phoenix on HDInsight
hdinsight Apache Hbase Phoenix Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-phoenix-psql.md
description: Use the psql tool to load bulk load data into Apache Phoenix tables
Previously updated : 12/17/2019 Last updated : 05/30/2022 # Bulk load data into Apache Phoenix using psql
hdinsight Hbase Troubleshoot Bindexception Address Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-bindexception-address-use.md
Title: BindException - Address already in use in Azure HDInsight
description: BindException - Address already in use in Azure HDInsight Previously updated : 08/16/2019 Last updated : 05/30/2022 # Scenario: BindException - Address already in use in Azure HDInsight
hdinsight Hdinsight Hadoop Compare Storage Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-compare-storage-options.md
description: Provides an overview of storage types and how they work with Azure
Previously updated : 04/21/2020 Last updated : 05/30/2022 # Compare storage options for use with Azure HDInsight clusters
hdinsight Hdinsight Sdk Java Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-java-samples.md
description: Find Java examples on GitHub for common tasks using the HDInsight S
Previously updated : 11/29/2019 Last updated : 05/30/2022 # Azure HDInsight: Java samples
You can get these samples for Java by cloning the [hdinsight-java-sdk-samples](h
[!INCLUDE [hdinsight-sdk-additional-functionality](includes/hdinsight-sdk-additional-functionality.md)]
-Code snippets for this additional SDK functionality can be found in the [HDInsight SDK for Java reference documentation](/java/api/overview/azure/hdinsight).
+Code snippets for this additional SDK functionality can be found in the [HDInsight SDK for Java reference documentation](/java/api/overview/azure/hdinsight).
hdinsight Hdinsight Sdk Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-python-samples.md
Title: 'Azure HDInsight: Python samples'
description: Find Python examples on GitHub for common tasks using the HDInsight SDK for Python. Previously updated : 11/08/2019 Last updated : 05/30/2022
You can get these samples for Python by cloning the [hdinsight-python-sdk-sample
[!INCLUDE [hdinsight-sdk-additional-functionality](includes/hdinsight-sdk-additional-functionality.md)]
-Code snippets for this additional SDK functionality can be found in the [HDInsight SDK for Python reference documentation](/python/api/overview/azure/hdinsight).
+Code snippets for this additional SDK functionality can be found in the [HDInsight SDK for Python reference documentation](/python/api/overview/azure/hdinsight).
hdinsight Hive Migration Across Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-migration-across-storage-accounts.md
Previously updated : 12/11/2020 Last updated : 05/26/2022 # Hive workload migration to new account in Azure Storage
hdinsight Apache Kafka Connect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connect-vpn-gateway.md
description: Learn how to directly connect to Kafka on HDInsight through an Azur
Previously updated : 03/04/2020 Last updated : 05/30/2022 # Connect to Apache Kafka on HDInsight through an Azure Virtual Network
hdinsight Overview Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/overview-azure-storage.md
description: Overview of Azure Storage in HDInsight.
Previously updated : 04/21/2020 Last updated : 05/30/2022 # Azure Storage overview in HDInsight
Certain MapReduce jobs and packages might create intermediate results that you w
- [Introduction to Azure Storage](../storage/common/storage-introduction.md) - [Azure Data Lake Storage Gen1 overview](./overview-data-lake-storage-gen1.md) - [Use Azure storage with Azure HDInsight clusters](hdinsight-hadoop-use-blob-storage.md)-- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](hdinsight-hadoop-use-data-lake-storage-gen2.md)
+- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](hdinsight-hadoop-use-data-lake-storage-gen2.md)
hdinsight Apache Spark Eclipse Tool Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-eclipse-tool-plugin.md
description: Use HDInsight Tools in Azure Toolkit for Eclipse to develop Spark a
Previously updated : 12/13/2019 Last updated : 05/30/2022 # Use Azure Toolkit for Eclipse to create Apache Spark applications for an HDInsight cluster
There are two modes to submit the jobs. If storage credential is provided, batch
### Managing resources * [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
-* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
+* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
hdinsight Apache Spark Intellij Tool Debug Remotely Through Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh.md
description: Step-by-step guidance on how to use HDInsight Tools in Azure Toolki
Previously updated : 12/23/2019 Last updated : 05/30/2022 # Debug Apache Spark applications on an HDInsight cluster with Azure Toolkit for IntelliJ through SSH
This article provides step-by-step guidance on how to use HDInsight Tools in [Az
### Manage resources * [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
-* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
+* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
hdinsight Apache Spark Intellij Tool Plugin Debug Jobs Remotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-plugin-debug-jobs-remotely.md
description: Learn how to use HDInsight Tools in Azure Toolkit for IntelliJ to r
Previously updated : 11/28/2017 Last updated : 05/30/2022 # Use Azure Toolkit for IntelliJ to debug Apache Spark applications remotely in HDInsight through VPN
We recommend that you also create an Apache Spark cluster in Azure HDInsight tha
### Manage resources * [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
-* [Track and debug jobs that run on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
+* [Track and debug jobs that run on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
hdinsight Apache Spark Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-settings.md
description: How to view and configure Apache Spark settings for an Azure HDInsi
Previously updated : 04/24/2020 Last updated : 05/30/2022 # Configure Apache Spark settings
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
If you want to remove all of the Azure resources you used for this tutorial, del
1. Review all the resources that are in the resource group to determine which ones you want to clean up.
- * If you want to delete all the resources, use the [az group delete](/cli/azure/groupt#az-group-delete) command.
+ * If you want to delete all the resources, use the [az group delete](/cli/azure/group#az-group-delete) command.
```azurecli-interactive
az group delete --name $resourceGroup
```
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
As a specific example, suppose that you want to run the [Snapshot Blob operation
> [!IMPORTANT]
> To access Azure storage accounts behind firewalls by using HTTP requests and managed identities,
-> make sure that you also set up your storage account with the [exception that allows access by trusted Microsoft services](../connectors/connectors-create-api-azureblobstorage.md#access-blob-storage-with-managed-identities).
+> make sure that you also set up your storage account with the [exception that allows access by trusted Microsoft services](../connectors/connectors-create-api-azureblobstorage.md#access-blob-storage-in-same-region-with-managed-identities).
To run the [Snapshot Blob operation](/rest/api/storageservices/snapshot-blob), the HTTP action specifies these properties:
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
Modifying the parameter `replicate_wild_ignore_table` used to create replication
- The source server version must be at least MySQL version 5.7.
- Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7 or both must be MySQL version 8.0.
-- Our recommendation is to have a primary key in each table. If we have table without primary key, you might face slowness in replication. To create primary keys for tables you can use [invisible column](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html) if your MySQL version is greater than 8.0.23.
+- Our recommendation is to have a primary key in each table. If you have a table without a primary key, you might face slowness in replication. To create primary keys for tables, you can use an [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is greater than 8.0.23 (*alter table <table name> add column <column name> bigint auto_increment INVISIBLE PRIMARY KEY;*).
- The source server should use the MySQL InnoDB engine.
- User must have permissions to configure binary logging and create new users on the source server.
- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL, refer to how to configure binlog_expire_logs_seconds for [Flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds).
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Automatic backups, both snapshots and log backups, are performed on locally redu
>[!Note]
>For both zone-redundant and same-zone HA:
->* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.To create primary keys for tables you can use [invisible column](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html) if your MySQL version is greater than 8.0.23.
+>* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds. To create primary keys for tables, you can use an [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is greater than 8.0.23 (*alter table <table name> add column <column name> bigint auto_increment INVISIBLE PRIMARY KEY;*).
>* The standby server isn't available for read or write operations. It's a passive standby to enable fast failover.
>* Always use a fully qualified domain name (FQDN) to connect to your primary server. Avoid using an IP address to connect. If there's a failover, after the primary and standby server roles are switched, a DNS A record might change. That change would prevent the application from connecting to the new primary server if an IP address is used in the connection string.
mysql How To Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md
If you are using an older version of the mysqlnd_azure extension (version 1.0.0-
|`on` or `1`|- If the connection does not use SSL on the driver side, no connection will be made. The following error will be returned: *"mysqlnd_azure.enableRedirect is on, but SSL option is not set in connection string. Redirection is only possible with SSL."*<br>- If SSL is used on the driver side, but redirection is not supported on the server, the first connection is aborted and the following error is returned: *"Connection aborted because redirection is not enabled on the MySQL server or the network package doesn't meet redirection protocol."*<br>- If the MySQL server supports redirection, but the redirected connection failed for any reason, also abort the first proxy connection. Return the error of the redirected connection.|
|`preferred` or `2`<br> (default value)|- mysqlnd_azure will use redirection if possible.<br>- If the connection does not use SSL on the driver side, the server does not support redirection, or the redirected connection fails to connect for any non-fatal reason while the proxy connection is still a valid one, it will fall back to the first proxy connection.|
-For successful connection to Azure database for MySQL Single server using `mysqlnd_azure.enableRedirect` you need to follow mandatory steps of combining your root certificate as per the compliance requirements. For more details on please visit [link](./concepts-certificate-rotation.md#do-i-need-to-make-any-changes-on-my-client-to-maintain-connectivity).
+For a successful connection to Azure Database for MySQL Single Server using `mysqlnd_azure.enableRedirect`, you need to follow the mandatory steps of combining your root certificate as per the compliance requirements. For more details, see [this link](./concepts-certificate-rotation.md#do-i-need-to-make-any-changes-on-my-client-to-maintain-connectivity).
The subsequent sections of the document will outline how to install the `mysqlnd_azure` extension using PECL and set the value of this parameter.
network-watcher Enable Network Watcher Flow Log Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/enable-network-watcher-flow-log-settings.md
+
+ Title: Enable Azure Network Watcher | Microsoft Docs
+description: Learn how to enable Network Watcher.
+
+documentationcenter: na
++++
+ na
+ Last updated : 05/11/2022+++
+# Enable Azure Network Watcher
+
+To analyze traffic, you need an existing network watcher, or you need to [enable a network watcher](network-watcher-create.md) in each region that has NSGs whose traffic you want to analyze. Traffic analytics can be enabled for NSGs hosted in any of the [supported regions](supported-region-traffic-analytics.md).
+
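+If Network Watcher isn't enabled in a region yet, you can enable it with Azure PowerShell. The following is a minimal sketch; the watcher name and the resource group `NetworkWatcherRG` are illustrative assumptions:
+
+```azurepowershell-interactive
+# Enable Network Watcher in the West Central US region.
+# NetworkWatcherRG is an assumed, pre-existing resource group.
+New-AzNetworkWatcher -Name NetworkWatcher_westcentralus `
+    -ResourceGroupName NetworkWatcherRG `
+    -Location westcentralus
+```
+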
+## Select a network security group
+
+Before enabling NSG flow logging, you must have a network security group to log flows for. If you don't have a network security group, see [Create a network security group](../virtual-network/manage-network-security-group.md#create-a-network-security-group) to create one.
+
+In the Azure portal, go to **Network Watcher**, and then select **NSG flow logs**. Select the network security group that you want to enable an NSG flow log for, as shown in the following picture:
+
+![Screenshot of portal to select N S G that require enablement of NSG flow log.](./media/traffic-analytics/selection-of-nsgs-that-require-enablement-of-nsg-flow-logging.png)
+
+If you try to enable traffic analytics for an NSG that is hosted in any region other than the [supported regions](supported-region-traffic-analytics.md), you receive a "Not found" error.
+
+## Enable flow log settings
+
+Before enabling flow log settings, you must complete the following tasks:
+
+Register the Azure Insights provider, if it's not already registered for your subscription:
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.Insights
+```
+
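+You can check the provider's registration state with the existing `Get-AzResourceProvider` cmdlet; each resource type should eventually report `Registered`:
+
+```azurepowershell-interactive
+# List the registration state of the Microsoft.Insights resource provider.
+Get-AzResourceProvider -ProviderNamespace Microsoft.Insights |
+    Select-Object ProviderNamespace, RegistrationState
+```
+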
+If you don't already have an Azure Storage account to store NSG flow logs in, you must create a storage account. You can create a storage account with the command that follows. Before running the command, replace `<replace-with-your-unique-storage-account-name>` with a name that is unique across all Azure locations, between 3-24 characters in length, using only numbers and lower-case letters. You can also change the resource group name, if necessary.
+
+```azurepowershell-interactive
+New-AzStorageAccount `
+ -Location westcentralus `
+ -Name <replace-with-your-unique-storage-account-name> `
+ -ResourceGroupName myResourceGroup `
+ -SkuName Standard_LRS `
+ -Kind StorageV2
+```
+
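+Because storage account names must be globally unique, you may first want to check whether a name is available. A small sketch, using the existing `Get-AzStorageAccountNameAvailability` cmdlet (the name shown is a placeholder):
+
+```azurepowershell-interactive
+# NameAvailable is True when the storage account name can be used.
+Get-AzStorageAccountNameAvailability -Name mynsgflowlogs123
+```
+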
+Select the following options, as shown in the picture:
+
+1. Select *On* for **Status**.
+2. Select *Version 2* for **Flow Logs version**. Version 2 contains flow-session statistics (bytes and packets).
+3. Select an existing storage account to store the flow logs in. Ensure that your storage account doesn't have "Data Lake Storage Gen2 Hierarchical Namespace Enabled" set to true.
+4. Set **Retention** to the number of days you want to store data for. If you want to store the data forever, set the value to *0*. You incur Azure Storage fees for the storage account.
+5. Select *On* for **Traffic Analytics Status**.
+6. Select a processing interval. Based on your choice, flow logs are collected from the storage account and processed by Traffic Analytics. You can choose a processing interval of every 1 hour or every 10 minutes.
+7. Select an existing Log Analytics (OMS) Workspace, or select **Create New Workspace** to create a new one. A Log Analytics workspace is used by Traffic Analytics to store the aggregated and indexed data that is then used to generate the analytics. If you select an existing workspace, it must exist in one of the [supported regions](supported-region-traffic-analytics.md) and have been upgraded to the new query language. If you do not wish to upgrade an existing workspace, or do not have a workspace in a supported region, create a new one. For more information about query languages, see [Azure Log Analytics upgrade to new log search](../azure-monitor/logs/log-query-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+
+ > [!NOTE]
+ > The log analytics workspace hosting the traffic analytics solution and the NSGs do not have to be in the same region. For example, you may have traffic analytics in a workspace in the West Europe region, while you may have NSGs in East US and West US. Multiple NSGs can be configured in the same workspace.
+
+8. Select **Save**.
+
+ ![Screenshot showing selection of storage account, Log Analytics workspace, and Traffic Analytics enablement.](./media/traffic-analytics/ta-customprocessinginterval.png)
+
+Repeat the previous steps for any other NSGs for which you want to enable traffic analytics. Data from flow logs is sent to the workspace, so ensure that the local laws and regulations in your country/region permit data storage in the region where the workspace exists. If you have set different processing intervals for different NSGs, data is collected at different intervals. For example, you can choose a processing interval of 10 minutes for critical VNETs and 1 hour for noncritical VNETs.
+
+You can also configure traffic analytics using the [Set-AzNetworkWatcherConfigFlowLog](/powershell/module/az.network/set-aznetworkwatcherconfigflowlog) PowerShell cmdlet in Azure PowerShell. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+
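+For example, the following sketch enables a version 2 flow log on an NSG. All resource names are placeholders, and the traffic analytics-specific parameters are omitted; check the cmdlet reference for the exact parameter names supported by your installed Az.Network version:
+
+```azurepowershell-interactive
+# Placeholder resource names; substitute your own.
+$nw  = Get-AzNetworkWatcher -ResourceGroupName NetworkWatcherRG -Name NetworkWatcher_westcentralus
+$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName myResourceGroup -Name myNsg
+$sa  = Get-AzStorageAccount -ResourceGroupName myResourceGroup -Name mystorageaccount
+
+# Enable a version 2 NSG flow log that writes to the storage account.
+Set-AzNetworkWatcherConfigFlowLog -NetworkWatcher $nw `
+    -TargetResourceId $nsg.Id `
+    -StorageAccountId $sa.Id `
+    -EnableFlowLog $true `
+    -FormatType Json `
+    -FormatVersion 2
+```
+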
+## View traffic analytics
+
+To view Traffic Analytics, search for **Network Watcher** in the portal search bar. Once inside Network Watcher, to explore traffic analytics and its capabilities, select **Traffic Analytics** from the left menu.
+
+![Screenshot that displays how to access the Traffic Analytics dashboard.](./media/traffic-analytics/accessing-the-traffic-analytics-dashboard.png)
+
+The dashboard may take up to 30 minutes to appear the first time because Traffic Analytics must first aggregate enough data to derive meaningful insights before it can generate any reports.
network-watcher Network Watcher Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitoring-overview.md
# What is Azure Network Watcher?
-Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc. Note: It is not intended for and will not work for PaaS monitoring or Web analytics.
+Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.
+> [!Note]
+> It is not intended for and will not work for PaaS monitoring or Web analytics.
+
+For information about analyzing traffic from a network security group, see [Network Security Group](network-watcher-nsg-flow-logging-overview.md) and [Traffic Analytics](traffic-analytics.md).
## Monitoring
There are [limits](../azure-resource-manager/management/azure-subscription-servi
The information is helpful when planning future resource deployments.
-## Logs
+## Network Monitoring Logs
### Analyze traffic to or from a network security group
network-watcher Supported Region Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/supported-region-traffic-analytics.md
+
+ Title: Azure Traffic Analytics supported regions | Microsoft Docs
+description: This article provides the list of Traffic Analytics supported regions.
+
+documentationcenter: na
++++
+ na
+ Last updated : 05/11/2022+
+ms.custom: references_regions
++
+# Supported regions: NSG
+
+This article lists the regions supported by Traffic Analytics, for both NSGs and Log Analytics workspaces.
+
+You can use traffic analytics for NSGs in any of the following supported regions:
+ :::column span="":::
+ Australia Central
+ Australia East
+ Australia Southeast
+ Brazil South
+ Brazil Southeast
+ Canada Central
+ Canada East
+ Central India
+ Central US
+ China East 2
+ China North
+ China North 2
+ :::column-end:::
+ :::column span="":::
+ East Asia
+ East US
+ East US 2
+ East US 2 EUAP
+ France Central
+ Germany West Central
+ Japan East
+ Japan West
+ Korea Central
+ Korea South
+ North Central US
+ North Europe
+ :::column-end:::
+ :::column span="":::
+ Norway East
+ South Africa North
+ South Central US
+ South India
+ Southeast Asia
+ Switzerland North
+ Switzerland West
+ UAE Central
+ UAE North
+ UK South
+ UK West
+ USGov Arizona
+ :::column-end:::
+ :::column span="":::
+ USGov Texas
+ USGov Virginia
+ USNat East
+ USNat West
+ USSec East
+ USSec West
+ West Central US
+ West Europe
+ West US
+ West US 2
+ West US 3
+ :::column-end:::
+
+## Supported regions: Log Analytics Workspaces
+
+The Log Analytics workspace must exist in the following regions:
+ :::column span="":::
+ Australia Central
+ Australia East
+ Australia Southeast
+ Brazil South
+ Brazil Southeast
+ Canada East
+ Canada Central
+ Central India
+ Central US
+ China East 2
+ China North
+ China North 2
+ :::column-end:::
+ :::column span="":::
+ East Asia
+ East US
+ East US 2
+ East US 2 EUAP
+ France Central
+ Germany West Central
+ Japan East
+ Japan West
+ Korea Central
+ Korea South
+ North Central US
+ North Europe
+ :::column-end:::
+ :::column span="":::
+ Norway East
+ South Africa North
+ South Central US
+ South India
+ Southeast Asia
+ Switzerland North
+ Switzerland West
+ UAE Central
+ UAE North
+ UK South
+ UK West
+ USGov Arizona
+ :::column-end:::
+ :::column span="":::
+ USGov Texas
+ USGov Virginia
+ USNat East
+ USNat West
+ USSec East
+ USSec West
+ West Central US
+ West Europe
+ West US
+ West US 2
+ West US 3
+ :::column-end:::
+
+> [!NOTE]
+> If NSGs are supported in a region, but the Log Analytics workspace in that region isn't supported for traffic analytics per the preceding lists, you can use a Log Analytics workspace in any other supported region as a workaround.
+
+## Next steps
+
+- Learn how to [enable flow log settings](enable-network-watcher-flow-log-settings.md).
+- Learn the ways to [use traffic analytics](usage-scenarios-traffic-analytics.md).
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
Traffic analytics examines the raw NSG flow logs and captures reduced logs by ag
![Data flow for NSG flow logs processing](./media/traffic-analytics/data-flow-for-nsg-flow-log-processing.png)
-## Supported regions: NSG
-
-You can use traffic analytics for NSGs in any of the following supported regions:
- :::column span="":::
- Australia Central
- Australia East
- Australia Southeast
- Brazil South
- Brazil Southeast
- Canada Central
- Canada East
- Central India
- Central US
- China East 2
- China North
- China North 2
- :::column-end:::
- :::column span="":::
- East Asia
- East US
- East US 2
- East US 2 EUAP
- France Central
- Germany West Central
- Japan East
- Japan West
- Korea Central
- Korea South
- North Central US
- North Europe
- :::column-end:::
- :::column span="":::
- Norway East
- South Africa North
- South Central US
- South India
- Southeast Asia
- Switzerland North
- Switzerland West
- UAE Central
- UAE North
- UK South
- UK West
- USGov Arizona
- :::column-end:::
- :::column span="":::
- USGov Texas
- USGov Virginia
- USNat East
- USNat West
- USSec East
- USSec West
- West Central US
- West Europe
- West US
- West US 2
- West US 3
- :::column-end:::
-
-## Supported regions: Log Analytics Workspaces
-
-The Log Analytics workspace must exist in the following regions:
- :::column span="":::
- Australia Central
- Australia East
- Australia Southeast
- Brazil South
- Brazil Southeast
- Canada East
- Canada Central
- Central India
- Central US
- China East 2
- China North
- China North 2
- :::column-end:::
- :::column span="":::
- East Asia
- East US
- East US 2
- East US 2 EUAP
- France Central
- Germany West Central
- Japan East
- Japan West
- Korea Central
- Korea South
- North Central US
- North Europe
- :::column-end:::
- :::column span="":::
- Norway East
- South Africa North
- South Central US
- South India
- Southeast Asia
- Switzerland North
- Switzerland West
- UAE Central
- UAE North
- UK South
- UK West
- USGov Arizona
- :::column-end:::
- :::column span="":::
- USGov Texas
- USGov Virginia
- USNat East
- USNat West
- USSec East
- USSec West
- West Central US
- West Europe
- West US
- West US 2
- :::column-end:::
-
-> [!NOTE]
-> If NSGs support a region but the log analytics workspace does not support that region for traffic analytics as per above lists, then you can use log analytics workspace of any other supported region as a workaround.
- ## Prerequisites ### User access requirements
If your account is not assigned to one of the built-in roles, it must be assigne
For information on how to check user access permissions, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
-### Enable Network Watcher
-
-To analyze traffic, you need to have an existing network watcher, or [enable a network watcher](network-watcher-create.md) in each region that you have NSGs that you want to analyze traffic for. Traffic analytics can be enabled for NSGs hosted in any of the [supported regions](#supported-regions-nsg).
-
-### Select a network security group
-
-Before enabling NSG flow logging, you must have a network security group to log flows for. If you don't have a network security group, see [Create a network security group](../virtual-network/manage-network-security-group.md#create-a-network-security-group) to create one.
-
-In Azure portal, go to **Network watcher**, and then select **NSG flow logs**. Select the network security group that you want to enable an NSG flow log for, as shown in the following picture:
-
-![Selection of NSGs that require enablement of NSG flow log](./media/traffic-analytics/selection-of-nsgs-that-require-enablement-of-nsg-flow-logging.png)
-
-If you try to enable traffic analytics for an NSG that is hosted in any region other than the [supported regions](#supported-regions-nsg), you receive a "Not found" error.
-
-## Enable flow log settings
-
-Before enabling flow log settings, you must complete the following tasks:
-
-Register the Azure Insights provider, if it's not already registered for your subscription:
-
-```azurepowershell-interactive
-Register-AzResourceProvider -ProviderNamespace Microsoft.Insights
-```
-
-If you don't already have an Azure Storage account to store NSG flow logs in, you must create a storage account. You can create a storage account with the command that follows. Before running the command, replace `<replace-with-your-unique-storage-account-name>` with a name that is unique across all Azure locations, between 3-24 characters in length, using only numbers and lower-case letters. You can also change the resource group name, if necessary.
-
-```azurepowershell-interactive
-New-AzStorageAccount `
- -Location westcentralus `
- -Name <replace-with-your-unique-storage-account-name> `
- -ResourceGroupName myResourceGroup `
- -SkuName Standard_LRS `
- -Kind StorageV2
-```
-
-Select the following options, as shown in the picture:
-
-1. Select *On* for **Status**
-2. Select *Version 2* for **Flow Logs version**. Version 2 contains flow-session statistics (Bytes and Packets)
-3. Select an existing storage account to store the flow logs in. Ensure that your storage does not have "Data Lake Storage Gen2 Hierarchical Namespace Enabled" set to true.
-4. Set **Retention** to the number of days you want to store data for. If you want to store the data forever, set the value to *0*. You incur Azure Storage fees for the storage account.
-5. Select *On* for **Traffic Analytics Status**.
-6. Select processing interval. Based on your choice, flow logs will be collected from storage account and processed by Traffic Analytics. You can choose processing interval of every 1 hour or every 10 mins.
-7. Select an existing Log Analytics (OMS) Workspace, or select **Create New Workspace** to create a new one. A Log Analytics workspace is used by Traffic Analytics to store the aggregated and indexed data that is then used to generate the analytics. If you select an existing workspace, it must exist in one of the [supported regions](#supported-regions-log-analytics-workspaces) and have been upgraded to the new query language. If you do not wish to upgrade an existing workspace, or do not have a workspace in a supported region, create a new one. For more information about query languages, see [Azure Log Analytics upgrade to new log search](../azure-monitor/logs/log-query-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
-
-> [!NOTE]
->The log analytics workspace hosting the traffic analytics solution and the NSGs do not have to be in the same region. For example, you may have traffic analytics in a workspace in the West Europe region, while you may have NSGs in East US and West US. Multiple NSGs can be configured in the same workspace.
-
-8. Select **Save**.
-
- ![Selection of storage account, Log Analytics workspace, and Traffic Analytics enablement](./media/traffic-analytics/ta-customprocessinginterval.png)
-
-Repeat the previous steps for any other NSGs for which you wish to enable traffic analytics for. Data from flow logs is sent to the workspace, so ensure that the local laws and regulations in your country/region permit data storage in the region where the workspace exists. If you have set different processing intervals for different NSGs, data will be collected at different intervals. For example: You can choose to enable processing interval of 10 mins for critical VNETs and 1 hour for noncritical VNETs.
-
-You can also configure traffic analytics using the [Set-AzNetworkWatcherConfigFlowLog](/powershell/module/az.network/set-aznetworkwatcherconfigflowlog) PowerShell cmdlet in Azure PowerShell. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
-
-## View traffic analytics
-
-To view Traffic Analytics, search for **Network Watcher** in the portal search bar. Once inside Network Watcher, to explore traffic analytics and its capabilities, select **Traffic Analytics** from the left menu.
-
-![Accessing the Traffic Analytics dashboard](./media/traffic-analytics/accessing-the-traffic-analytics-dashboard.png)
-
-The dashboard may take up to 30 minutes to appear the first time because Traffic Analytics must first aggregate enough data for it to derive meaningful insights, before it can generate any reports.
-
-## Usage scenarios
-
-Some of the insights you might want to gain after Traffic Analytics is fully configured, are as follows:
-
-### Find traffic hotspots
-
-**Look for**
--- Which hosts, subnets, virtual networks and virtual machine scale set are sending or receiving the most traffic, traversing maximum malicious traffic and blocking significant flows?
- - Check comparative chart for hosts, subnet, virtual network and virtual machine scale set. Understanding which hosts, subnets, virtual networks and virtual machine scale set are sending or receiving the most traffic can help you identify the hosts that are processing the most traffic, and whether the traffic distribution is done properly.
- - You can evaluate if the volume of traffic is appropriate for a host. Is the volume of traffic normal behavior, or does it merit further investigation?
-- How much inbound/outbound traffic is there?
- - Is the host expected to receive more inbound traffic than outbound, or vice-versa?
-- Statistics of blocked traffic.
- - Why is a host blocking a significant volume of benign traffic? This behavior requires further investigation and probably optimization of configuration
-- Statistics of malicious allowed/blocked traffic
- - Why is a host receiving malicious traffic and why are flows from malicious sources allowed? This behavior requires further investigation and probably optimization of configuration.
-
- Select **See all**, under **IP**, as shown in the following picture:
-
- ![Dashboard showcasing host with most traffic details](media/traffic-analytics/dashboard-showcasing-host-with-most-traffic-details.png)
-
- The following picture shows time trending for the top five talking hosts and the flow-related details (allowed - inbound/outbound and denied - inbound/outbound flows) for a host:
-
- Select **See more**, under **Details of top 5 talking IPs**, as shown in the following picture to get insights about all the hosts:
-
- ![Top five most-talking host trend](media/traffic-analytics/top-five-most-talking-host-trend.png)
-
-
-**Look for**
--- Which are the most conversing host pairs?
- - Expected behavior like front-end or back-end communication or irregular behavior, like back-end internet traffic.
-- Statistics of allowed/blocked traffic
- - Why a host is allowing or blocking significant traffic volume
-- Most frequently used application protocol among most conversing host pairs:
- - Are these applications allowed on this network?
- - Are the applications configured properly? Are they using the appropriate protocol for communication? Select **See all** under **Frequent conversation**, as show in the following picture:
-
- ![Dashboard showcasing most frequent conversation](./media/traffic-analytics/dashboard-showcasing-most-frequent-conversation.png)
--- The following picture shows time trending for the top five conversations and the flow-related details such as allowed and denied inbound and outbound flows for a conversation pair:-
- ![Top five chatty conversation details and trend](./media/traffic-analytics/top-five-chatty-conversation-details-and-trend.png)
-
-**Look for**
--- Which application protocol is most used in your environment, and which conversing host pairs are using the application protocol the most?
- - Are these applications allowed on this network?
- - Are the applications configured properly? Are they using the appropriate protocol for communication? Expected behavior is common ports such as 80 and 443. For standard communication, if any unusual ports are displayed, they might require a configuration change. Select **See all** under **Application port**, in the following picture:
-
- ![Dashboard showcasing top application protocols](./media/traffic-analytics/dashboard-showcasing-top-application-protocols.png)
--- The following pictures show time trending for the top five L7 protocols and the flow-related details (for example, allowed and denied flows) for an L7 protocol:-
- ![Top five layer 7 protocols details and trend](./media/traffic-analytics/top-five-layer-seven-protocols-details-and-trend.png)
-
- ![Flow details for application protocol in log search](./media/traffic-analytics/flow-details-for-application-protocol-in-log-search.png)
-
-**Look for**
--- Capacity utilization trends of a VPN gateway in your environment.
- - Each VPN SKU allows a certain amount of bandwidth. Are the VPN gateways underutilized?
- - Are your gateways reaching capacity? Should you upgrade to the next higher SKU?
-- Which are the most conversing hosts, via which VPN gateway, over which port?
- - Is this pattern normal? Select **See all** under **VPN gateway**, as shown in the following picture:
-
- ![Dashboard showcasing top active VPN connections](./media/traffic-analytics/dashboard-showcasing-top-active-vpn-connections.png)
--- The following picture shows time trending for capacity utilization of an Azure VPN Gateway and the flow-related details (such as allowed flows and ports):-
- ![VPN gateway utilization trend and flow details](./media/traffic-analytics/vpn-gateway-utilization-trend-and-flow-details.png)
-
-### Visualize traffic distribution by geography
-
-**Look for**
--- Traffic distribution per data center such as top sources of traffic to a datacenter, top rogue networks conversing with the data center, and top conversing application protocols.
- - If you observe more load on a data center, you can plan for efficient traffic distribution.
- - If rogue networks are conversing in the data center, then correct NSG rules to block them.
-
- Select **View map** under **Your environment**, as shown in the following picture:
-
- ![Dashboard showcasing traffic distribution](./media/traffic-analytics/dashboard-showcasing-traffic-distribution.png)
--- The geo-map shows the top ribbon for selection of parameters such as data centers (Deployed/No-deployment/Active/Inactive/Traffic Analytics Enabled/Traffic Analytics Not Enabled) and countries/regions contributing Benign/Malicious traffic to the active deployment:-
- ![Geo map view showcasing active deployment](./media/traffic-analytics/geo-map-view-showcasing-active-deployment.png)
--- The geo-map shows the traffic distribution to a data center from countries/regions and continents communicating to it in blue (Benign traffic) and red (malicious traffic) colored lines:-
- ![Geo map view showcasing traffic distribution to countries/regions and continents](./media/traffic-analytics/geo-map-view-showcasing-traffic-distribution-to-countries-and-continents.png)
-
- ![Flow details for traffic distribution in log search](./media/traffic-analytics/flow-details-for-traffic-distribution-in-log-search.png)
-
-- The **More Insight** blade of a Azure region also shows the total traffic remaining inside that region (i.e. source and destination in same region). It further gives insights about traffic exchanged between availability zones of a datacenter -
- ![Inter Zone and Intra region traffic](./media/traffic-analytics/inter-zone-and-intra-region-traffic.png)
-
-### Visualize traffic distribution by virtual networks
-
-**Look for**
--- Traffic distribution per virtual network, topology, top sources of traffic to the virtual network, top rogue networks conversing to the virtual network, and top conversing application protocols.
- - Knowing which virtual network is conversing to which virtual network. If the conversation is not expected, it can be corrected.
- - If rogue networks are conversing with a virtual network, you can correct NSG rules to block the rogue networks.
-
- Select **View VNets** under **Your environment**, as shown in the following picture:
-
- ![Dashboard showcasing virtual network distribution](./media/traffic-analytics/dashboard-showcasing-virtual-network-distribution.png)
--- The Virtual Network Topology shows the top ribbon for selection of parameters like a virtual network's (Inter virtual network Connections/Active/Inactive), External Connections, Active Flows, and Malicious flows of the virtual network.-- You can filter the Virtual Network Topology based on subscriptions, workspaces, resource groups and time interval. Additional filters that help you understand the flow are:
- Flow Type (InterVNet, IntraVNET, and so on), Flow Direction (Inbound, Outbound), Flow Status (Allowed, Blocked), VNETs (Targeted and Connected), Connection Type (Peering or Gateway - P2S and S2S), and NSG. Use these filters to focus on VNets that you want to examine in detail.
-- You can zoom-in and zoom-out while viewing Virtual Network Topology using mouse scroll wheel. Left-click and moving the mouse lets you drag the topology in desired direction. You can also use keyboard shortcuts to achieve these actions: A (to drag left), D (to drag right), W (to drag up), S (to drag down), + (to zoom in), - (to zoom out), R (to zoom reset).-- The Virtual Network Topology shows the traffic distribution to a virtual network with regard to flows (Allowed/Blocked/Inbound/Outbound/Benign/Malicious), application protocol, and network security groups, for example:-
- ![Virtual network topology showcasing traffic distribution and flow details](./media/traffic-analytics/virtual-network-topology-showcasing-traffic-distribution-and-flow-details.png)
-
- ![Virtual network topology showcasing top level and more filters](./media/traffic-analytics/virtual-network-filters.png)
-
- ![Flow details for virtual network traffic distribution in log search](./media/traffic-analytics/flow-details-for-virtual-network-traffic-distribution-in-log-search.png)
-
-**Look for**
--- Traffic distribution per subnet, topology, top sources of traffic to the subnet, top rogue networks conversing to the subnet, and top conversing application protocols.
- - Knowing which subnet is conversing to which subnet. If you see unexpected conversations, you can correct your configuration.
- - If rogue networks are conversing with a subnet, you are able to correct it by configuring NSG rules to block the rogue networks.
-- The Subnets Topology shows the top ribbon for selection of parameters such as Active/Inactive subnet, External Connections, Active Flows, and Malicious flows of the subnet.-- You can zoom-in and zoom-out while viewing Virtual Network Topology using mouse scroll wheel. Left-click and moving the mouse lets you drag the topology in desired direction. You can also use keyboard shortcuts to achieve these actions: A (to drag left), D (to drag right), W (to drag up), S (to drag down), + (to zoom in), - (to zoom out), R (to zoom reset).-- The Subnet Topology shows the traffic distribution to a virtual network with regard to flows (Allowed/Blocked/Inbound/Outbound/Benign/Malicious), application protocol, and NSGs, for example:-
- ![Subnet topology showcasing traffic distribution a virtual network subnet with regards to flows](./media/traffic-analytics/subnet-topology-showcasing-traffic-distribution-to-a-virtual-subnet-with-regards-to-flows.png)
-
-**Look for**
-
-Traffic distribution per Application gateway & Load Balancer, topology, top sources of traffic, top rogue networks conversing to the Application gateway & Load Balancer, and top conversing application protocols.
-
-
- ![Screenshot shows a subnet topology with traffic distribution to an application gateway subnet with regard to flows.](./media/traffic-analytics/subnet-topology-showcasing-traffic-distribution-to-a-application-gateway-subnet-with-regards-to-flows.png)
-
-### View ports and virtual machines receiving traffic from the internet
-
-**Look for**
--- Which open ports are conversing over the internet?
- - If unexpected ports are found open, you can correct your configuration:
-
- ![Dashboard showcasing ports receiving and sending traffic to the internet](./media/traffic-analytics/dashboard-showcasing-ports-receiving-and-sending-traffic-to-the-internet.png)
-
- ![Details of Azure destination ports and hosts](./media/traffic-analytics/details-of-azure-destination-ports-and-hosts.png)
-
-**Look for**
-
-Do you have malicious traffic in your environment? Where is it originating from? Where is it destined to?
-
-![Malicious traffic flows detail in log search](./media/traffic-analytics/malicious-traffic-flows-detail-in-log-search.png)
-
-### View information about public IPs interacting with your deployment
-
-**Look for**
--- Which public IPs are conversing with my network? What is the WHOIS data and geographic location of all public IPs?-- Which malicious IPs are sending traffic to my deployments? What is the threat type and threat description for malicious IPs?
- - The Public IP Information section, gives a summary of all types of public IPs present in your network traffic.
- Select the public IP type of interest to view details. This [schema document](./traffic-analytics-schema.md#public-ip-details-schema) defines the data fields presented.
-
- :::image type="content" source="./media/traffic-analytics/public-ip-information.png" alt-text="Public IP information" lightbox="./media/traffic-analytics/public-ip-information.png":::
-
- - On the traffic analytics dashboard, click on any IP to view its information
-
- :::image type="content" source="./media/traffic-analytics/external-public-ip-details.png" alt-text="external IP information in tool tip" lightbox="./media/traffic-analytics/external-public-ip-details.png":::
-
- :::image type="content" source="./media/traffic-analytics/malicious-ip-details.png" alt-text="malicious IP information in tool tip" lightbox="./media/traffic-analytics/malicious-ip-details.png":::
-
-### Visualize the trends in NSG/NSG rules hits
-
-**Look for**
--- Which NSG/NSG rules have the most hits in comparative chart with flows distribution?-- What are the top source and destination conversation pairs per NSG/NSG rules?-
- ![Dashboard showcasing NSG hits statistics](./media/traffic-analytics/dashboard-showcasing-nsg-hits-statistics.png)
--- The following pictures show time trending for hits of NSG rules and source-destination flow details for a network security group:-
- - Quickly detect which NSGs and NSG rules are traversing malicious flows and which are the top malicious IP addresses accessing your cloud environment
- - Identify which NSG/NSG rules are allowing/blocking significant network traffic
- - Select top filters for granular inspection of an NSG or NSG rules
-
- ![Showcasing time trending for NSG rule hits and top NSG rules](./media/traffic-analytics/showcasing-time-trending-for-nsg-rule-hits-and-top-nsg-rules.png)
-
- ![Top NSG rules statistics details in log search](./media/traffic-analytics/top-nsg-rules-statistics-details-in-log-search.png)
- ## Frequently asked questions
-To get answers to frequently asked questions, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
+For frequently asked questions about Traffic Analytics, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
## Next steps
network-watcher Usage Scenarios Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/usage-scenarios-traffic-analytics.md
+
+ Title: Usage scenarios of Azure Traffic Analytics | Microsoft Docs
+description: This article describes the usage scenarios of Traffic Analytics.
+
+documentationcenter: na
++++
+ na
+ Last updated : 05/11/2022+++
+# Usage scenarios
+
+After Traffic Analytics is fully configured, some of the insights you might want to gain are as follows:
+
+## Find traffic hotspots
+
+**Look for**
+
+- Which hosts, subnets, virtual networks, and virtual machine scale sets are sending or receiving the most traffic, traversing the most malicious traffic, and blocking significant flows?
+ - Check the comparative chart for hosts, subnets, virtual networks, and virtual machine scale sets. Understanding which hosts, subnets, virtual networks, and virtual machine scale sets are sending or receiving the most traffic can help you identify the hosts that are processing the most traffic, and whether the traffic distribution is done properly.
+ - You can evaluate if the volume of traffic is appropriate for a host. Is the volume of traffic normal behavior, or does it merit further investigation?
+- How much inbound/outbound traffic is there?
+ - Is the host expected to receive more inbound traffic than outbound, or vice-versa?
+- Statistics of blocked traffic.
+ - Why is a host blocking a significant volume of benign traffic? This behavior requires further investigation and probably optimization of configuration.
+- Statistics of malicious allowed/blocked traffic
+ - Why is a host receiving malicious traffic and why are flows from malicious sources allowed? This behavior requires further investigation and probably optimization of configuration.
+
+ Select **See all** under **IP** as shown in the following image:
+
+ ![Screenshot of dashboard showcasing host with most traffic details.](media/traffic-analytics/dashboard-showcasing-host-with-most-traffic-details.png)
+
+ The following image shows time trending for the top five talking hosts and the flow-related details (allowed - inbound/outbound and denied - inbound/outbound flows) for a host:
+
+ Select **See more** under **Details of top 5 talking IPs** as shown in the following image to get insights about all the hosts:
+
+ ![Screenshot of top five most-talking host trends.](media/traffic-analytics/top-five-most-talking-host-trend.png)
+
+
+**Look for**
+
+- Which are the most conversing host pairs?
+ - Expected behavior like front-end or back-end communication or irregular behavior, like back-end internet traffic.
+- Statistics of allowed/blocked traffic
+ - Why a host is allowing or blocking significant traffic volume
+- Most frequently used application protocol among most conversing host pairs:
+ - Are these applications allowed on this network?
+ - Are the applications configured properly? Are they using the appropriate protocol for communication? Select **See all** under **Frequent conversation**, as shown in the following image:
+
+ ![Screenshot of dashboard showcasing most frequent conversations.](./media/traffic-analytics/dashboard-showcasing-most-frequent-conversation.png)
+
+- The following image shows time trending for the top five conversations and the flow-related details such as allowed and denied inbound and outbound flows for a conversation pair:
+
+ ![Screenshot of top five chatty conversation details and trends.](./media/traffic-analytics/top-five-chatty-conversation-details-and-trend.png)
+
+**Look for**
+
+- Which application protocol is most used in your environment, and which conversing host pairs are using the application protocol the most?
+ - Are these applications allowed on this network?
+ - Are the applications configured properly? Are they using the appropriate protocol for communication? Expected behavior is common ports such as 80 and 443. For standard communication, if any unusual ports are displayed, they might require a configuration change. Select **See all** under **Application port**, as shown in the following image:
+
+ ![Screenshot of dashboard showcasing top application protocols.](./media/traffic-analytics/dashboard-showcasing-top-application-protocols.png)
+
+- The following images show time trending for the top five L7 protocols and the flow-related details (for example, allowed and denied flows) for an L7 protocol:
+
+ ![Screenshot of top five layer 7 protocols details and trends.](./media/traffic-analytics/top-five-layer-seven-protocols-details-and-trend.png)
+
+ ![Screenshot of the flow details for application protocol in log search.](./media/traffic-analytics/flow-details-for-application-protocol-in-log-search.png)
+
+**Look for**
+
+- Capacity utilization trends of a VPN gateway in your environment.
+ - Each VPN SKU allows a certain amount of bandwidth. Are the VPN gateways underutilized?
+ - Are your gateways reaching capacity? Should you upgrade to the next higher SKU?
+- Which are the most conversing hosts, via which VPN gateway, over which port?
+ - Is this pattern normal? Select **See all** under **VPN gateway**, as shown in the following image:
+
+ ![Screenshot of dashboard showcasing top active V P N connections.](./media/traffic-analytics/dashboard-showcasing-top-active-vpn-connections.png)
+
+- The following image shows time trending for capacity utilization of an Azure VPN Gateway and the flow-related details (such as allowed flows and ports):
+
+ ![Screenshot of V P N gateway utilization trend and flow details.](./media/traffic-analytics/vpn-gateway-utilization-trend-and-flow-details.png)
+
+## Visualize traffic distribution by geography
+
+**Look for**
+
+- Traffic distribution per data center, such as top sources of traffic to a data center, top rogue networks conversing with the data center, and top conversing application protocols.
+ - If you observe more load on a data center, you can plan for efficient traffic distribution.
+ - If rogue networks are conversing with the data center, correct NSG rules to block them.
+
+ Select **View map** under **Your environment**, as shown in the following image:
+
+ ![Screenshot of dashboard showcasing traffic distribution.](./media/traffic-analytics/dashboard-showcasing-traffic-distribution.png)
+
+- The geo-map shows the top ribbon for selection of parameters such as data centers (Deployed/No-deployment/Active/Inactive/Traffic Analytics Enabled/Traffic Analytics Not Enabled) and countries/regions contributing Benign/Malicious traffic to the active deployment:
+
+ ![Screenshot of geo map view showcasing active deployment.](./media/traffic-analytics/geo-map-view-showcasing-active-deployment.png)
+
+- The geo-map shows the traffic distribution to a data center from countries/regions and continents communicating with it, drawn as blue lines (benign traffic) and red lines (malicious traffic):
+
+ ![Screenshot of geo map view showcasing traffic distribution to countries/regions and continents.](./media/traffic-analytics/geo-map-view-showcasing-traffic-distribution-to-countries-and-continents.png)
+
+ ![Screenshot of flow details for traffic distribution in log search.](./media/traffic-analytics/flow-details-for-traffic-distribution-in-log-search.png)
+
+- The **More Insight** blade of an Azure region also shows the total traffic remaining inside that region (that is, source and destination in the same region). It also gives insights about traffic exchanged between availability zones of a data center.
+
+ ![Screenshot of Inter Zone and Intra region traffic.](./media/traffic-analytics/inter-zone-and-intra-region-traffic.png)
+
+## Visualize traffic distribution by virtual networks
+
+**Look for**
+
+- Traffic distribution per virtual network, topology, top sources of traffic to the virtual network, top rogue networks conversing to the virtual network, and top conversing application protocols.
+ - Knowing which virtual network is conversing with which virtual network. If the conversation isn't expected, it can be corrected.
+ - If rogue networks are conversing with a virtual network, you can correct NSG rules to block the rogue networks.
+
+ Select **View VNets** under **Your environment** as shown in the following image:
+
+ ![Screenshot of dashboard showcasing virtual network distribution.](./media/traffic-analytics/dashboard-showcasing-virtual-network-distribution.png)
+
+- The Virtual Network Topology shows a top ribbon for selecting parameters such as a virtual network's connections (inter-virtual network/Active/Inactive), External Connections, Active Flows, and Malicious flows of the virtual network.
+- You can filter the Virtual Network Topology based on subscriptions, workspaces, resource groups, and time interval. Extra filters that help you understand the flow are:
+ Flow Type (InterVNet, IntraVNet, and so on), Flow Direction (Inbound, Outbound), Flow Status (Allowed, Blocked), VNets (Targeted and Connected), Connection Type (Peering or Gateway - P2S and S2S), and NSG. Use these filters to focus on VNets that you want to examine in detail.
+- You can zoom in and out of the Virtual Network Topology by using the mouse scroll wheel. Holding the left mouse button and moving the mouse lets you drag the topology in the desired direction. You can also use keyboard shortcuts: A (drag left), D (drag right), W (drag up), S (drag down), + (zoom in), - (zoom out), R (reset zoom).
+- The Virtual Network Topology shows the traffic distribution to a virtual network with regard to flows (Allowed/Blocked/Inbound/Outbound/Benign/Malicious), application protocol, and network security groups, for example:
+
+ ![Screenshot of virtual network topology showcasing traffic distribution and flow details.](./media/traffic-analytics/virtual-network-topology-showcasing-traffic-distribution-and-flow-details.png)
+
+ ![Screenshot of virtual network topology showcasing top level and more filters.](./media/traffic-analytics/virtual-network-filters.png)
+
+ ![Screenshot of flow details for virtual network traffic distribution in log search.](./media/traffic-analytics/flow-details-for-virtual-network-traffic-distribution-in-log-search.png)
+
+**Look for**
+
+- Traffic distribution per subnet, topology, top sources of traffic to the subnet, top rogue networks conversing to the subnet, and top conversing application protocols.
+ - Knowing which subnet is conversing with which subnet. If you see unexpected conversations, you can correct your configuration.
+ - If rogue networks are conversing with a subnet, you can correct it by configuring NSG rules to block the rogue networks.
+- The Subnets Topology shows the top ribbon for selection of parameters such as Active/Inactive subnet, External Connections, Active Flows, and Malicious flows of the subnet.
+- You can zoom in and out of the Subnets Topology by using the mouse scroll wheel. Holding the left mouse button and moving the mouse lets you drag the topology in the desired direction. You can also use keyboard shortcuts: A (drag left), D (drag right), W (drag up), S (drag down), + (zoom in), - (zoom out), R (reset zoom).
+- The Subnets Topology shows the traffic distribution to a subnet with regard to flows (Allowed/Blocked/Inbound/Outbound/Benign/Malicious), application protocol, and NSGs, for example:
+
+ ![Screenshot of subnet topology showcasing traffic distribution to a virtual network subnet with regard to flows.](./media/traffic-analytics/subnet-topology-showcasing-traffic-distribution-to-a-virtual-subnet-with-regards-to-flows.png)
+
+**Look for**
+
+Traffic distribution per Application Gateway and Load Balancer, topology, top sources of traffic, top rogue networks conversing with the Application Gateway and Load Balancer, and top conversing application protocols.
+
+ - Knowing which subnet is conversing with which Application Gateway or Load Balancer. If you observe unexpected conversations, you can correct your configuration.
+ - If rogue networks are conversing with an Application Gateway or Load Balancer, you can correct it by configuring NSG rules to block the rogue networks.
+
+ ![Screenshot shows a subnet topology with traffic distribution to an application gateway subnet regarding flows.](./media/traffic-analytics/subnet-topology-showcasing-traffic-distribution-to-a-application-gateway-subnet-with-regards-to-flows.png)
+
+## View ports and virtual machines receiving traffic from the internet
+
+**Look for**
+
+- Which open ports are conversing over the internet?
+ - If unexpected ports are found open, you can correct your configuration:
+
+ ![Screenshot of dashboard showcasing ports receiving and sending traffic to the internet.](./media/traffic-analytics/dashboard-showcasing-ports-receiving-and-sending-traffic-to-the-internet.png)
+
+ ![Screenshot of Azure destination ports and hosts details.](./media/traffic-analytics/details-of-azure-destination-ports-and-hosts.png)
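The port details shown above can be approximated in log search too. A short variant, reusing the `client` and `timedelta` import from the earlier sketch; the `FlowType_s`, `FlowDirection_s`, and `DestPort_d` names are again assumptions from the Traffic Analytics schema:

```python
# Inbound flows from the public internet, grouped by destination port and
# L7 protocol, to spot unexpectedly open ports.
OPEN_PORTS = """
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowDirection_s == "I"
| where FlowType_s in ("ExternalPublic", "AzurePublic")
| summarize Flows = count() by DestPort_d, L7Protocol_s
| order by Flows desc
"""
response = client.query_workspace(
    "<log-analytics-workspace-id>", OPEN_PORTS, timespan=timedelta(days=1)
)
```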
+
+**Look for**
+
+Do you have malicious traffic in your environment? Where is it originating from? Where is it destined to?
+
+![Screenshot of malicious traffic flows detail in log search.](./media/traffic-analytics/malicious-traffic-flows-detail-in-log-search.png)
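To answer these questions outside the portal, a similar hedged query narrows the view to flows that Traffic Analytics flagged as malicious, assuming the documented `MaliciousFlow` value of `FlowType_s` and reusing the earlier `client`:

```python
# Malicious flows by source and destination: where malicious traffic
# originates and which hosts it targets.
MALICIOUS_FLOWS = """
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowType_s == "MaliciousFlow"
| summarize Flows = count() by SrcIP_s, DestIP_s
| order by Flows desc
"""
response = client.query_workspace(
    "<log-analytics-workspace-id>", MALICIOUS_FLOWS, timespan=timedelta(days=1)
)
```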
+
+## View information about public IPs interacting with your deployment
+
+**Look for**
+
+- Which public IPs are conversing with my network? What is the WHOIS data and geographic location of all public IPs?
+- Which malicious IPs are sending traffic to my deployments? What is the threat type and threat description for malicious IPs?
+ - The Public IP Information section gives a summary of all types of public IPs present in your network traffic.
+ Select the public IP type of interest to view details. This [schema document](./traffic-analytics-schema.md#public-ip-details-schema) defines the data fields presented.
+
+ :::image type="content" source="./media/traffic-analytics/public-ip-information.png" alt-text="Screenshot that displays the public I P information." lightbox="./media/traffic-analytics/public-ip-information.png":::
+
+ - On the Traffic Analytics dashboard, select any IP to view its information:
+
+ :::image type="content" source="./media/traffic-analytics/external-public-ip-details.png" alt-text="Screenshot that displays the external I P information in tool tip." lightbox="./media/traffic-analytics/external-public-ip-details.png":::
+
+ :::image type="content" source="./media/traffic-analytics/malicious-ip-details.png" alt-text="Screenshot that displays the malicious I P information in tool tip." lightbox="./media/traffic-analytics/malicious-ip-details.png":::
+
+## Visualize the trends in NSG/NSG rule hits
+
+**Look for**
+
+- Which NSGs/NSG rules have the most hits in a comparative chart with flow distribution?
+- What are the top source and destination conversation pairs per NSG/NSG rule?
+
+ ![Screenshot of dashboard showcasing N S G hits statistics.](./media/traffic-analytics/dashboard-showcasing-nsg-hits-statistics.png)
+
+- The following images show time trending for hits of NSG rules and source-destination flow details for a network security group:
+
+ - Quickly detect which NSGs and NSG rules are traversing malicious flows, and which are the top malicious IP addresses accessing your cloud environment.
+ - Identify which NSGs/NSG rules are allowing or blocking significant network traffic.
+ - Select top filters for granular inspection of an NSG or NSG rules.
+
+ ![Screenshot showcasing time trending for N S G rule hits and top N S G rules.](./media/traffic-analytics/showcasing-time-trending-for-nsg-rule-hits-and-top-nsg-rules.png)
+
+ ![Screenshot of top N S G rules statistics details in log search.](./media/traffic-analytics/top-nsg-rules-statistics-details-in-log-search.png)
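One more variant of the same pattern can rank NSGs and rules by hit count. Treat the `NSGList_s` and `NSGRules_s` field names as assumptions to verify against the schema document before relying on the results:

```python
# Rank NSGs and their rules by the number of matched flows.
NSG_HITS = """
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and isnotempty(NSGList_s)
| summarize Hits = count() by NSGList_s, NSGRules_s
| top 10 by Hits desc
"""
response = client.query_workspace(
    "<log-analytics-workspace-id>", NSG_HITS, timespan=timedelta(days=1)
)
```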
+
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
||||
| Azure Automation / (Microsoft.Automation/automationAccounts) / Webhook, DSCAndHybridWorker | privatelink.azure-automation.net | azure-automation.net |
| Azure SQL Database (Microsoft.Sql/servers) / sqlServer | privatelink.database.windows.net | database.windows.net |
-| **Azure SQL Managed Instance** (Microsoft.Sql/managedInstances) | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net |
+| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net |
| Azure Synapse Analytics (Microsoft.Synapse/workspaces) / Sql | privatelink.sql.azuresynapse.net | sql.azuresynapse.net |
| Azure Synapse Analytics (Microsoft.Synapse/workspaces) / SqlOnDemand | privatelink.sql.azuresynapse.net | sqlondemand.azuresynapse.net |
| Azure Synapse Analytics (Microsoft.Synapse/workspaces) / Dev | privatelink.dev.azuresynapse.net | dev.azuresynapse.net |
For Azure services, use the recommended zone names as described in the following
| Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com<br />privatelink.guestconfiguration.azure.com | his.arc.azure.com<br />guestconfiguration.azure.com |
| Azure Media Services (Microsoft.Media) / keydelivery, liveevent, streamingendpoint | privatelink.media.azure.net | media.azure.net |
| Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.net | {region}.kusto.windows.net |
-| Azure Static Web Apps (Microsoft.Web/Staticsites) / staticSites | privatelink.1.azurestaticapps.net | 1.azurestaticapps.net |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
+### Government
+
+| Private link resource type / Subresource |Private DNS zone name | Public DNS zone forwarders |
+||||
+| Azure Automation / (Microsoft.Automation/automationAccounts) / Webhook, DSCAndHybridWorker | privatelink.azure-automation.us | azure-automation.us |
+| Azure SQL Database (Microsoft.Sql/servers) / sqlServer | privatelink.database.usgovcloudapi.net | database.usgovcloudapi.net |
+| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | privatelink.{dnsPrefix}.database.usgovcloudapi.net | {instanceName}.{dnsPrefix}.database.usgovcloudapi.net |
+| Storage account (Microsoft.Storage/storageAccounts) / Blob (blob, blob_secondary) | privatelink.blob.core.usgovcloudapi.net | blob.core.usgovcloudapi.net |
+| Storage account (Microsoft.Storage/storageAccounts) / Table (table, table_secondary) | privatelink.table.core.usgovcloudapi.net | table.core.usgovcloudapi.net |
+| Storage account (Microsoft.Storage/storageAccounts) / Queue (queue, queue_secondary) | privatelink.queue.core.usgovcloudapi.net | queue.core.usgovcloudapi.net |
+| Storage account (Microsoft.Storage/storageAccounts) / File (file, file_secondary) | privatelink.file.core.usgovcloudapi.net | file.core.usgovcloudapi.net |
+| Storage account (Microsoft.Storage/storageAccounts) / Web (web, web_secondary) | privatelink.web.core.usgovcloudapi.net | web.core.usgovcloudapi.net |
+| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Sql | privatelink.documents.azure.us | documents.azure.us |
+| Azure Batch (Microsoft.Batch/batchAccounts) / batchAccount | privatelink.{region}.batch.usgovcloudapi.net | {region}.batch.usgovcloudapi.net |
+| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) / postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
+| Azure Database for MySQL (Microsoft.DBforMySQL/servers) / mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net|
+| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) / mariadbServer | privatelink.mariadb.database.usgovcloudapi.net| mariadb.database.usgovcloudapi.net |
+| Azure Key Vault (Microsoft.KeyVault/vaults) / vault | privatelink.vaultcore.usgovcloudapi.net | vault.usgovcloudapi.net <br> vaultcore.usgovcloudapi.net |
+| Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.us | search.windows.us |
+| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStores | privatelink.azconfig.azure.us | azconfig.azure.us |
+| Azure Backup (Microsoft.RecoveryServices/vaults) / AzureBackup | privatelink.{region}.backup.windowsazure.us | {region}.backup.windowsazure.us |
+| Azure Site Recovery (Microsoft.RecoveryServices/vaults) / AzureSiteRecovery | privatelink.siterecovery.windowsazure.us | {region}.hypervrecoverymanager.windowsazure.us |
+| Azure Event Hubs (Microsoft.EventHub/namespaces) / namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net|
+| Azure Service Bus (Microsoft.ServiceBus/namespaces) / namespace | privatelink.servicebus.usgovcloudapi.net| servicebus.usgovcloudapi.net |
+| Azure IoT Hub (Microsoft.Devices/IotHubs) / iotHub | privatelink.azure-devices.us<br/>privatelink.servicebus.windows.us<sup>1</sup> | azure-devices.us<br/>servicebus.usgovcloudapi.net |
+| Azure Relay (Microsoft.Relay/namespaces) / namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net |
+| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.us | azurewebsites.us |
+| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.adx.monitor.azure.us <br/> privatelink.oms.opinsights.azure.us <br/> privatelink.ods.opinsights.azure.us <br/> privatelink.agentsvc.azure-automation.us <br/> privatelink.blob.core.usgovcloudapi.net | adx.monitor.azure.us <br/> oms.opinsights.azure.us<br/> ods.opinsights.azure.us<br/> agentsvc.azure-automation.us <br/> blob.core.usgovcloudapi.net |
+| Cognitive Services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.us | cognitiveservices.azure.us |
+| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net |
+| Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.us | azurehdinsight.us |
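Whichever cloud you deploy in, the zone from these tables has to exist and be linked to the virtual network that resolves the private endpoint. A minimal sketch with the `azure-mgmt-privatedns` SDK, using hypothetical resource names and the commercial-cloud SQL zone as the example:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

dns = PrivateDnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Private DNS zones are global resources; the zone name comes from the tables above.
dns.private_zones.begin_create_or_update(
    "my-resource-group",                  # hypothetical resource group
    "privatelink.database.windows.net",
    {"location": "global"},
).result()

# Link the zone to the virtual network that must resolve the private endpoint.
dns.virtual_network_links.begin_create_or_update(
    "my-resource-group",
    "privatelink.database.windows.net",
    "my-vnet-link",                       # hypothetical link name
    {
        "location": "global",
        "virtual_network": {"id": "<vnet-resource-id>"},
        "registration_enabled": False,
    },
).result()
```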
+ ### China
| Private link resource type / Subresource |Private DNS zone name | Public DNS zone forwarders |
route-server Vmware Solution Default Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/vmware-solution-default-route.md
Title: 'Injecting default route to Azure VMware Solution'
-description: Learn about how to advertise a default route to Azure VMware Solution with Azure Route Server.
+ Title: 'Injecting routes to Azure VMware Solution'
+description: Learn about how to advertise routes to Azure VMware Solution with Azure Route Server.
Last updated 02/03/2022
-# Injecting a default route to Azure VMware Solution
+# Injecting routes to Azure VMware Solution with Azure Route Server
-[Azure VMware Solution](../azure-vmware/introduction.md) is an Azure service where native VMware vSphere workloads run and communicate with other Azure services. This communication happens over ExpressRoute, and Azure Route Server can be used to modify the default behavior of Azure VMware Solution networking. For example, a default route can be injected from a Network Virtual Appliance (NVA) in Azure to attract traffic from AVS and inspect it before sending it out to the public Internet, or to analyze traffic between AVS and the on-premises network.
+[Azure VMware Solution](../azure-vmware/introduction.md) is an Azure service where native VMware vSphere workloads run and communicate with other Azure services. This communication happens over ExpressRoute, and Azure Route Server can be used to modify the default behavior of Azure VMware Solution networking. The most frequent patterns for injecting routing information into Azure VMware Solution are advertising a default route to attract internet traffic to Azure, or advertising routes to enable communication with on-premises networks when Global Reach isn't available.
-Additionally, similar designs can be used to interconnect AVS and on-premises networks sending traffic through an NVA, either because traffic inspection isn't required or because ExpressRoute Global Reach isn't available in the relevant regions.
+## Injecting a default route to Azure VMware Solution
-## Topology
+Certain deployments require inspecting all egress traffic from AVS towards the internet. While it's possible to create Network Virtual Appliances (NVAs) in AVS, sometimes those appliances already exist in Azure, and they can also be used to inspect internet traffic from AVS. In this case, a default route can be injected from the NVA in Azure to attract traffic from AVS and inspect it before sending it out to the public internet.
The following diagram describes a basic hub and spoke topology connected to an AVS cloud and to an on-premises network through ExpressRoute. The diagram shows how the default route (`0.0.0.0/0`) is originated by the NVA in Azure, and propagated by Azure Route Server to Azure VMware Solution through ExpressRoute.
The following diagram describes a basic hub and spoke topology connected to an A
Communication between Azure VMware Solution and the on-premises network will typically happen over ExpressRoute Global Reach, as described in [Peer on-premises environments to Azure VMware Solution](../azure-vmware/tutorial-expressroute-global-reach-private-cloud.md).
-## Communication between Azure VMware Solution and the on-premises network via NVA
+## Communication between Azure VMware Solution and the on-premises network via an NVA
-There are two main scenarios for this pattern:
+Similar designs can be used to interconnect AVS and on-premises networks by sending traffic through an NVA in Azure. There are two main scenarios for this pattern:
-- ExpressRoute Global Reach might not be available on a particular region to interconnect the ExpressRoute circuits of AVS and the on-premises network. - Some organizations might have the requirement to send traffic between AVS and the on-premises network through an NVA (typically a firewall).
+- ExpressRoute Global Reach might not be available in a particular region to interconnect the ExpressRoute circuits of AVS and the on-premises network.
> [!IMPORTANT]
> Global Reach is still the preferred option to connect AVS and on-premises environments; the patterns described in this document add considerable complexity to the environment.
sentinel Watchlists Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-create.md
For more information about shared access signatures, see [Azure Storage shared a
To upload a large watchlist file to your Azure Storage account, use AzCopy or the Azure portal.
-1. If you donΓÇÖt already have an Azure Storage account, [create a storage account](../storage/common/storage-account-create.md). The storage account can be in a different resource group or region from your workspace in Microsoft Sentinel.
+1. If you don't already have an Azure Storage account, [create a storage account](../storage/common/storage-account-create.md). The storage account can be in a different resource group or region from your workspace in Microsoft Sentinel.
1. Use either AzCopy or the Azure portal to upload your csv file with your watchlist data into the storage account.

#### Upload your file with AzCopy

Upload files and directories to Blob storage by using the AzCopy v10 command-line utility. To learn more, see [Upload files to Azure Blob storage by using AzCopy](../storage/common/storage-use-azcopy-blobs-upload.md).
-1. If you donΓÇÖt already have a storage container, create one by running the following command.
+1. If you don't already have a storage container, create one by running the following command.
```azcopy
azcopy make "https://<storage-account-name>.blob.core.windows.net/<container-name>"
```
Upload files and directories to Blob storage by using the AzCopy v10 command-lin
If you don't use AzCopy, upload your file by using the Azure portal. Go to your storage account in Azure portal to upload the csv file with your watchlist data.
-1. If you donΓÇÖt already have an existing storage container, [create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). For the level of public access to the container, we recommend the default which is that the level is set to Private (no anonymous access).
+1. If you don't already have an existing storage container, [create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). For the level of public access to the container, we recommend the default, which is Private (no anonymous access).
1. Upload your csv file to the storage account by [uploading a block blob](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob).

### Step 2: Create shared access signature URL

Create a shared access signature URL for Microsoft Sentinel to retrieve the watchlist data.
-1. Follow the steps in [Create SAS tokens for blobs in the Azure portal](../cognitive-services/translator/document-translation/create-sas-tokens.md?tabs=blobs#create-sas-tokens-for-blobs-in-the-azure-portal).
+1. Follow the steps in [Create SAS tokens for blobs in the Azure portal](../cognitive-services/translator/document-translation/create-sas-tokens.md?tabs=blobs#create-sas-tokens-in-the-azure-portal).
1. Set the shared access signature token expiry time to be at minimum 6 hours.
1. Copy the value for **Blob SAS URL**.
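The upload and the shared access signature can also be produced together with the Azure Storage SDK. A sketch under the assumption of hypothetical account, container, and file names; the expiry honors the six-hour minimum called out above:

```python
from datetime import datetime, timedelta

from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

ACCOUNT, KEY = "mystorageacct", "<account-key>"  # hypothetical
service = BlobServiceClient(
    account_url=f"https://{ACCOUNT}.blob.core.windows.net", credential=KEY
)

# Upload the watchlist CSV as a block blob.
blob = service.get_blob_client("watchlists", "watchlist.csv")
with open("watchlist.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)

# Read-only SAS valid for six hours, the documented minimum expiry.
sas = generate_blob_sas(
    account_name=ACCOUNT,
    container_name="watchlists",
    blob_name="watchlist.csv",
    account_key=KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=6),
)
print(f"{blob.url}?{sas}")  # the Blob SAS URL to give Microsoft Sentinel
```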
storsimple Storsimple 8000 Choose Storage Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-choose-storage-solution.md
Title: Options for data transfer to Azure using an appliance | Microsoft Docs
-description: Learn how to choose the right appliance for on-premises data transfer to Azure between Data Box Edge, Azure File Sync, and StorSimple 8000 series.
+description: Learn how to choose the right appliance for on-premises data transfer to Azure between Azure Stack Edge, Azure File Sync, and StorSimple 8000 series.
Last updated 04/01/2019
-# Compare StorSimple with Azure File Sync and Data Box Edge data transfer options
+# Compare StorSimple with Azure File Sync and Azure Stack Edge data transfer options
[!INCLUDE [storsimple-8000-eol-banner](../../includes/storsimple-8000-eol-banner.md)]
-This document provides an overview of options for on-premises data transfer to Azure, comparing: Data Box Edge vs. Azure File Sync vs. StorSimple 8000 series.
+This document provides an overview of options for on-premises data transfer to Azure, comparing: Azure Stack Edge vs. Azure File Sync vs. StorSimple 8000 series.
-- **[Data Box Edge](../databox-online/azure-stack-edge-overview.md)** – Data Box Edge is an on-premises network device that moves data into and out of Azure and has AI-enabled Edge compute to pre-process data during upload. Data Box Gateway is a virtual version of the device with the same data transfer capabilities.
+- **[Azure Stack Edge](../databox-online/azure-stack-edge-overview.md)** – Azure Stack Edge is an on-premises network device that moves data into and out of Azure and has AI-enabled Edge compute to pre-process data during upload. Data Box Gateway is a virtual version of the device with the same data transfer capabilities.
- **[Azure File Sync](../storage/file-sync/file-sync-deployment-guide.md)** – Azure File Sync can be used to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share. General availability of Azure File Sync was announced earlier in 2018.
- **[StorSimple](./storsimple-overview.md)** – StorSimple is a hybrid device that helps enterprises consolidate their storage infrastructure for primary storage, data protection, archiving, and disaster recovery on a single solution by tightly integrating with Azure storage. The product lifecycle for StorSimple can be found [here](https://support.microsoft.com/lifecycle/search?alpha=Azure%20StorSimple%208000%20Series).

## Comparison summary
-| |StorSimple 8000 |Azure File Sync |Data Box Edge |
+| |StorSimple 8000 |Azure File Sync |Azure Stack Edge |
||-|-|--|
|**Overview** |Tiered hybrid storage and archival|General file server storage with cloud tiering and multi-site sync. |Storage solution to pre-process data and send it over network to Azure. |
|**Scenarios** |File server, archival, backup target |File server, archival (multi-site) |Data transfer, data pre-processing including ML inferencing, IoT, archival |
This document provides an overview of options for on-premises data transfer to A
## Next steps

- Learn about [Azure Data Box Edge](../databox-online/azure-stack-edge-overview.md) and [Azure Data Box Gateway](../databox-gateway/data-box-gateway-overview.md)
-- Learn about [Azure File Sync](../storage/file-sync/file-sync-deployment-guide.md)
+- Learn about [Azure File Sync](../storage/file-sync/file-sync-deployment-guide.md)
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
This section will show you how to install the Teams desktop app on your Windows
### Prepare your image for Teams
-To enable media optimization for Teams, set the following registry key on the host:
+To enable media optimization for Teams, set the following registry key on the host VM:
1. From the start menu, run **RegEdit** as an administrator. Navigate to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams**. Create the Teams key if it doesn't already exist.
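The RegEdit step can be scripted on the session host as well. This is a sketch only: the excerpt above covers just the key creation, and the `IsWVDEnvironment` DWORD written below is an assumption carried over from the published media-optimization guidance; run it from an elevated prompt.

```python
import winreg

# Create (or open) HKLM\SOFTWARE\Microsoft\Teams; requires administrator rights.
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Teams") as key:
    # Assumed flag from the full article: mark the VM as an AVD environment.
    winreg.SetValueEx(key, "IsWVDEnvironment", 0, winreg.REG_DWORD, 1)
```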
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
>This article is scoped to Azure Hybrid Benefit for BYOS VMs (AHB BYOS), which covers conversion of custom image VMs and RHEL or SLES BYOS VMs. For conversion of RHEL PAYG or SLES PAYG VMs, refer to [Azure Hybrid Benefit for PAYG VMs here](./azure-hybrid-benefit-linux.md).
>[!NOTE]
->Azure Hybrid Benefit for BYOS VMs is in Public Preview now. You can start using the capability on Azure by following steps provided in the [section below](#get-started).
+>Azure Hybrid Benefit for BYOS VMs is now in preview. [Fill out the form here](https://aka.ms/ahb-linux-form) and wait for an email from the AHB team to get started. You can then start using the capability on Azure by following the steps provided in the [section below](#get-started).
Azure Hybrid Benefit for BYOS VMs is a licensing benefit that helps you get software updates and integrated support for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) directly from Azure infrastructure. This benefit is available to RHEL and SLES custom image VMs (VMs generated from on-premises images), and to RHEL and SLES Marketplace bring-your-own-subscription (BYOS) VMs.