Updates from: 11/06/2023 02:10:52
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
You can modify the following additional settings in the **Data parameters** sect
|Parameter name | Description |
|---|---|
-|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 3. |
+|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 3. This is the `topNDocuments` parameter in the API. |
| **Strictness** | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. The default value is 3. |

## Virtual network support & private endpoint support
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/exten
| `stream` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` |
| `stop` | string or array | Optional | null | Up to 2 sequences where the API will stop generating further tokens. |
| `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. |
-| `retrieved_documents` | number | Optional | 3 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. |
-| `strictness` | number | Optional | 3 | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. |
- The following parameters can be used inside of the `parameters` field inside of `dataSources`.
The following parameters can be used inside of the `parameters` field inside of
| `indexName` | string | Required | null | The search index to be used. |
| `fieldsMapping` | dictionary | Optional | null | Index data column mapping. |
| `inScope` | boolean | Optional | true | If set, this value will limit responses specific to the grounding data content. |
-| `topNDocuments` | number | Optional | 5 | Number of documents that need to be fetched for document augmentation. |
+| `topNDocuments` | number | Optional | 3 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI Studio. |
| `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. |
| `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. |
| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100 token limit, which counts towards the overall token limit.|
The following parameters can be used inside of the `parameters` field inside of
| `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Use with the `embeddingKey` parameter for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. |
| `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Use with `embeddingEndpoint` for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. |
| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data will use an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.|
+| `strictness` | number | Optional | 3 | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. |
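
For orientation, here's a minimal sketch (not from the reference itself) of where these `dataSources` parameters sit in a request body. The endpoint path and `api-version` are assumptions based on the curl example above, and every value in capitals is a placeholder you must replace.

```azurepowershell
# Sketch: call the "on your data" extensions endpoint with Azure Cognitive Search.
$body = @{
    messages    = @(@{ role = "user"; content = "What are my available options?" })
    max_tokens  = 1000
    dataSources = @(
        @{
            type       = "AzureCognitiveSearch"
            parameters = @{
                endpoint      = "https://YOUR_SEARCH_SERVICE.search.windows.net"
                key           = "YOUR_SEARCH_ADMIN_KEY"
                indexName     = "YOUR_INDEX_NAME"
                topNDocuments = 3      # "Retrieved documents" in Azure OpenAI Studio
                strictness    = 3
                queryType     = "simple"
                inScope       = $true
            }
        }
    )
} | ConvertTo-Json -Depth 6

Invoke-RestMethod -Method Post `
    -Uri "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview" `
    -Headers @{ "api-key" = "YOUR_AZURE_OPENAI_KEY" } `
    -ContentType "application/json" `
    -Body $body
```
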
### Start an ingestion job
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
To maintain node performance and functionality, AKS reserves resources on each n
Two types of resources are reserved:
-- **CPU**
- Reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features.
-
- | CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32|64|
- |||||||||
- |Kube-reserved (millicores)|60|100|140|180|260|420|740|
--- **Memory**
- Memory utilized by AKS includes the sum of two values.
-
- 1. **`kubelet` daemon**
- The `kubelet` daemon is installed on all Kubernetes agent nodes to manage container creation and termination.
-
- By default on AKS, `kubelet` daemon has the *memory.available<750Mi* eviction rule, ensuring a node must always have at least 750Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` will trigger to terminate one of the running pods and free up memory on the host machine.
-
- 2. **A regressive rate of memory reservations** for the kubelet daemon to properly function (*kube-reserved*).
- - 25% of the first 4 GB of memory
- - 20% of the next 4 GB of memory (up to 8 GB)
- - 10% of the next 8 GB of memory (up to 16 GB)
- - 6% of the next 112 GB of memory (up to 128 GB)
- - 2% of any memory above 128 GB
+#### CPU
+
+Reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features.
+
+| CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32|64|
+|---|---|---|---|---|---|---|---|
+|Kube-reserved (millicores)|60|100|140|180|260|420|740|
+
+#### Memory
+
+Memory utilized by AKS includes the sum of two values.
+
+> [!IMPORTANT]
+> AKS 1.28 includes certain changes to memory reservations. These changes are detailed in the following section.
+
+**AKS 1.28 and later**
+
+1. **`kubelet` daemon** has the *memory.available<100Mi* eviction rule by default. This ensures that a node always has at least 100Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
+2. **A rate of memory reservations** set according to the lesser value of: *20MB * Max Pods supported on the Node + 50MB* or *25% of the total system memory resources*.
+
+ **Examples**:
+ * If the VM provides 8GB of memory and the node supports up to 30 pods, AKS reserves *20MB * 30 Max Pods + 50MB = 650MB* for kube-reserved. `Allocatable space = 8GB - 0.65GB (kube-reserved) - 0.1GB (eviction threshold) = 7.25GB or 90.625% allocatable.`
+ * If the VM provides 4GB of memory and the node supports up to 70 pods, AKS reserves *25% * 4GB = 1000MB* for kube-reserved, as this is less than *20MB * 70 Max Pods + 50MB = 1450MB*.
+
+ For more information, see [Configure maximum pods per node in an AKS cluster](./azure-cni-overview.md#maximum-pods-per-node).
+
+**AKS versions prior to 1.28**
+
+1. **`kubelet` daemon** is installed on all Kubernetes agent nodes to manage container creation and termination. By default on AKS, `kubelet` daemon has the *memory.available<750Mi* eviction rule, ensuring a node must always have at least 750Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` will trigger to terminate one of the running pods and free up memory on the host machine.
+
+2. **A regressive rate of memory reservations** for the kubelet daemon to properly function (*kube-reserved*).
+ * 25% of the first 4GB of memory
+ * 20% of the next 4GB of memory (up to 8GB)
+ * 10% of the next 8GB of memory (up to 16GB)
+ * 6% of the next 112GB of memory (up to 128GB)
+ * 2% of any memory above 128GB
> [!NOTE]
> AKS reserves an additional 2GB for system processes in Windows nodes that are not part of the calculated memory.
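
Not part of the article, but the formulas above are easy to sanity-check with a few lines of PowerShell. The helper below is a sketch that follows the examples' convention of treating 1 GB as 1000 MB.

```azurepowershell
# Sketch only: reproduces the kube-reserved memory formulas described above.

function Get-KubeReservedMb128 {
    # AKS 1.28 and later: the lesser of (20 MB * max pods + 50 MB) and 25% of memory.
    param([double]$NodeMemoryGB, [int]$MaxPods)
    [Math]::Min(20 * $MaxPods + 50, 0.25 * $NodeMemoryGB * 1000)
}

function Get-KubeReservedMbLegacy {
    # Versions prior to 1.28: regressive ladder (25% / 20% / 10% / 6% / 2%).
    param([double]$NodeMemoryGB)
    $gb = $NodeMemoryGB
    $reservedGb  = 0.25 * [Math]::Min($gb, 4)
    $reservedGb += 0.20 * [Math]::Max([Math]::Min($gb, 8) - 4, 0)
    $reservedGb += 0.10 * [Math]::Max([Math]::Min($gb, 16) - 8, 0)
    $reservedGb += 0.06 * [Math]::Max([Math]::Min($gb, 128) - 16, 0)
    $reservedGb += 0.02 * [Math]::Max($gb - 128, 0)
    $reservedGb * 1000
}

# 8 GB node with 30 max pods: 650 MB kube-reserved, plus the 100 Mi eviction
# threshold, leaving roughly 7.25 GB allocatable (the first example above).
Get-KubeReservedMb128 -NodeMemoryGB 8 -MaxPods 30

# The same 8 GB node on a cluster older than 1.28 reserves 1800 MB
# (25% of the first 4 GB + 20% of the next 4 GB).
Get-KubeReservedMbLegacy -NodeMemoryGB 8
```
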
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
Previously updated : 02/26/2023 Last updated : 09/14/2023
A public IP address isn't required for an internal endpoint that's not exposed t
Only one public IP address and one private IP address are supported. You choose the frontend IP when you create the application gateway.
+ > [!NOTE]
+ > Application Gateway frontend supports dual-stack IP addresses (Public Preview). You can create up to four frontend IPs: Two IPv4 addresses (public and private) and two IPv6 addresses (public and private).
- For a public IP address, you can create a new public IP address or use an existing public IP in the same location as the application gateway. For more information, see [static vs. dynamic public IP address](./application-gateway-components.md#static-versus-dynamic-public-ip-address).
- For a private IP address, you can specify a private IP address from the subnet where the application gateway is created. For Application Gateway v2 SKU deployments, a static IP address must be defined when adding a private IP address to the gateway. For Application Gateway v1 SKU deployments, if you don't specify an IP address, an available IP address is automatically selected from the subnet. The IP address type that you select (static or dynamic) can't be changed later. For more information, see [Create an application gateway with an internal load balancer](./application-gateway-ilb-arm.md).
A frontend IP address is associated to a *listener*, which checks for incoming r
> [!IMPORTANT]
> **The default domain name behavior for V1 SKU**:
-> - Deployments before 1st May 2023: These deployments will continue to have the default domain names like <label>.cloudapp.net mapped to the application gateway's Public IP address.
+> - Deployments before 1st May 2023: These deployments will continue to have the default domain names like \<label>.cloudapp.net mapped to the application gateway's Public IP address.
> - Deployments after 1st May 2023: For deployments after this date, there will NOT be any default domain name mapped to the gateway's Public IP address. You must manually configure your domain name by mapping its DNS record to the gateway's IP address.

## Next steps
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
For the v2 SKU, multi-site listeners are processed before basic listeners, unles
Choose the frontend IP address that you plan to associate with this listener. The listener will listen to incoming requests on this IP.
+ > [!NOTE]
+ > Application Gateway frontend supports dual-stack IP addresses (Public Preview). You can create up to four frontend IP addresses: Two IPv4 addresses (public and private) and two IPv6 addresses (public and private).
## Frontend port

Associate a frontend port. You can select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. The same port can be used for public and private listeners (Preview feature).
application-gateway Ipv6 Application Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ipv6-application-gateway-portal.md
+
+ Title: Configure Application Gateway with a frontend public IPv6 address using the Azure portal (Preview)
+
+description: Learn how to configure Application Gateway with a frontend public IPv6 address.
+ Last updated : 11/06/2023
+# Configure Application Gateway with a frontend public IPv6 address using the Azure portal (Preview)
+
+[Azure Application Gateway](overview.md) supports dual stack (IPv4 and IPv6) frontend connections from clients. To use IPv6 frontend connectivity, you need to create a new Application Gateway. Currently you can't upgrade existing IPv4-only Application Gateways to dual stack (IPv4 and IPv6) Application Gateways. Also, currently backend IPv6 addresses aren't supported.
+
+To support IPv6 connectivity, you must create a dual stack VNet. This dual stack VNet has subnets for both IPv4 and IPv6. Azure VNets already [provide dual-stack capability](../virtual-network/ip-services/ipv6-overview.md).
+
+For more information about the components of an application gateway, see [Application gateway components](application-gateway-components.md).
+
+> [!IMPORTANT]
+> Application Gateway IPv6 frontend is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Overview
+
+The Azure portal is used to create an IPv6 Azure Application Gateway. Testing is performed to verify it works correctly.
+
+You learn how to:
+* [Register](#register-to-the-preview) and [unregister](#unregister-from-the-preview) from the preview
+* Set up a [dual-stack network](#dual-stack)
+* Create an application gateway with [IPv6 frontend](#frontends-tab)
+* Create a virtual machine and install IIS for [testing](#test-the-application-gateway)
+
+You can also complete this quickstart using [Azure PowerShell](ipv6-application-gateway-powershell.md).
+
+## Regions and availability
+
+The IPv6 Application Gateway preview is available to all public cloud regions where Application Gateway v2 SKU is supported.
+
+## Limitations
+
+* Only v2 SKU supports a frontend with both IPv4 and IPv6 addresses
+* IPv6 backends are currently not supported
+* IPv6 Private Link is currently not supported
+* IPv6-only Application Gateway is currently not supported. Application Gateway must be dual stack (IPv6 and IPv4)
+* Deletion of frontend IP addresses isn't supported
+* Existing IPv4 Application Gateways cannot be upgraded to dual stack Application Gateways
+
+> [!NOTE]
+> If you use WAF v2 SKU for a frontend with both IPv4 and IPv6 addresses, WAF rules only apply to IPv4 traffic. IPv6 traffic bypasses WAF and may get blocked by some custom rule.
+
+## Prerequisites
+
+An Azure account with an active subscription is required. If you don't already have an account, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+## Register to the preview
+
+> [!NOTE]
+> When you join the preview, all new Application Gateways provision with the ability to define a dual stack frontend connection. If you wish to opt out from the new functionality and return to the current generally available functionality of Application Gateway, you can [unregister from the preview](#unregister-from-the-preview).
+
+For more information about preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md)
+
+In this article, you use the Azure portal to create an IPv6 Azure Application Gateway and test it to ensure it works correctly. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, the setup used here has two public frontend IP addresses (IPv4 and IPv6), a basic listener to host a single site on the application gateway, a basic request routing rule, and a single virtual machine (VM) in the backend pool.
+
+> [!IMPORTANT]
+> If you're registered for [Private Application Gateway](application-gateway-private-deployment.md) preview, you must delete any Private Application Gateways that are provisioned before you can register to the IPv6 Application Gateway preview. Also [unregister](application-gateway-private-deployment.md#unregister-from-the-preview) from the **EnableApplicationGatewayNetworkIsolation** preview feature.
+
+Use the following steps to enroll into the public preview for IPv6 Application Gateway using the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. In the search box, enter _subscriptions_ and select **Subscriptions**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="A screenshot of Azure portal search.":::
+
+3. Select the link for your subscription's name.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="A screenshot of selecting the Azure subscription.":::
+
+4. From the left menu, under **Settings** select **Preview features**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="A screenshot of the Azure preview features menu.":::
+
+5. You see a list of available preview features and your current registration status.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="A screenshot of the Azure portal list of preview features.":::
+
+6. From **Preview features**, type **AllowApplicationGatewayIPv6** into the filter box, select the feature, and then select **Register**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="A screenshot of the Azure portal filter preview features.":::
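
If you prefer scripting this step, the same registration can be done with Azure PowerShell, using the cmdlets shown in the PowerShell version of this article:

```azurepowershell
# Register the preview feature and check its state.
Register-AzProviderFeature -FeatureName "AllowApplicationGatewayIPv6" -ProviderNamespace "Microsoft.Network"
Get-AzProviderFeature -FeatureName "AllowApplicationGatewayIPv6" -ProviderNamespace "Microsoft.Network"
```
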
+
+## Create an application gateway
+
+Create the application gateway using the tabs on the **Create application gateway** page.
+
+1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
+2. Under **Categories**, select **Networking** and then select **Application Gateway** in the **Popular Azure services** list.
+
+### Basics tab
+
+1. On the **Basics** tab, enter the following values for the application gateway settings:
+
+    - **Subscription**: Select your subscription. For example, **mysubscription**.
+    - **Resource group**: Select a resource group. If one doesn't exist, select **Create new** to create it. For example, **myResourceGroupAG**.
+    - **Application gateway name**: Enter a name for the application gateway. For example, **myAppGateway**.
+ - **IP address type**: Select **Dual stack (IPv4 & IPv6)**.
+
+ ![A screenshot of create new application gateway: Basics.](./media/ipv6-application-gateway-portal/ipv6-app-gateway.png)
+
+2. **Configure virtual network**: For Azure to communicate between the resources that you create, a dual stack virtual network is needed. You can either create a new dual stack virtual network or choose an existing dual stack network. In this example, you create a new dual stack virtual network at the same time that you create the application gateway.
+
+    Application Gateway instances are created in separate subnets. One dual-stack subnet and one IPv4-only subnet are created in this example: the dual-stack subnet (IPv4 and IPv6) is assigned to the application gateway, and the IPv4-only subnet is for the backend servers.
+
+ > [!NOTE]
+ > [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are currently not supported in an Application Gateway subnet.
+ <a name="dual-stack"></a>
+ Under **Configure virtual network**, create a new virtual network by selecting **Create new**. In the **Create virtual network** pane, enter the following values to create the virtual network and two subnets:
+
+ - **Name**: Enter a name for the virtual network. For example, **myVNet**.
+ - **Subnet name** (Application Gateway subnet): The **Subnets** grid shows a subnet named **default**. Change the name of this subnet to **myAGSubnet**.
+ - **Address range** - The default IPv4 address ranges for the VNet and the subnet are 10.0.0.0/16 and 10.0.0.0/24, respectively. The default IPv6 address ranges for the VNet and the subnet are ace:cab:deca::/48 and ace:cab:deca::/64, respectively. If you see different default values, you might have an existing subnet that overlaps with these ranges.
+
+ ![A screenshot of create new application gateway: virtual network.](./media/ipv6-application-gateway-portal/ipv6-create-vnet-subnet.png)
+
+ > [!NOTE]
+ > The application gateway subnet can contain only application gateways. No other resources are allowed.
+
+ Select **OK** to close the **Create virtual network** window and save the new virtual network and subnet settings.
+
+3. Select **Next: Frontends**.
+
+### Frontends tab
+
+1. On the **Frontends** tab, verify **Frontend IP address type** is set to **Public**.
+
+ > [!IMPORTANT]
+ > For the Application Gateway v2 SKU, there must be a **Public** frontend IP configuration. A private IPv6 frontend IP configuration (Only ILB mode) is currently not supported for the IPv6 Application Gateway preview.
+
+2. Select **Add new** for the **Public IP address**, enter a name for the public IP address, and select **OK**. For example, **myAGPublicIPAddress**.
+
+ ![A screenshot of create new application gateway: frontends.](./media/ipv6-application-gateway-portal/ipv6-frontends.png)
+
+ > [!NOTE]
+ > IPv6 Application Gateway (preview) supports up to 4 frontend IP addresses: two IPv4 addresses (Public and Private) and two IPv6 addresses (Public and Private)
++
+3. Select **Next: Backends**.
+
+### Backends tab
+
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, Virtual Machine Scale Sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+
+1. On the **Backends** tab, select **Add a backend pool**.
+
+2. In the **Add a backend pool** pane, enter the following values to create an empty backend pool:
+
+ - **Name**: Enter a name for the backend pool. For example, **myBackendPool**.
+ - **Add backend pool without targets**: Select **Yes** to create a backend pool with no targets. Backend targets are added after creating the application gateway.
+
+3. Select **Add** to save the backend pool configuration and return to the **Backends** tab.
+
+ ![A screenshot of create new application gateway: backends.](./media/ipv6-application-gateway-portal/ipv6-backend.png)
+
+4. On the **Backends** tab, select **Next: Configuration**.
+
+### Configuration tab
+
+On the **Configuration** tab, the frontend and backend pool are connected with a routing rule.
+
+1. Under **Routing rules**, select **Add a routing rule**.
+
+2. In the **Add a routing rule** pane, enter the following values:
+
+ - **Rule name**: Enter a name for the rule. For example, **myRoutingRule**.
+ - **Priority**: Enter a value between 1 and 20000, where 1 represents highest priority and 20000 represents lowest. For example, enter a priority of **100**.
+
+3. A routing rule requires a listener. On the **Listener** tab, enter the following values:
+
+ - **Listener name**: Enter a name for the listener. For example, **myListener**.
+ - **Frontend IP**: Select **Public IPv6**.
+
+ Accept the default values for the other settings on the **Listener** tab and then select the **Backend targets** tab.
+
+ ![A screenshot of create new application gateway: listener.](./media/ipv6-application-gateway-portal/ipv6-listener.png)
+
+4. On the **Backend targets** tab, select your backend pool for the **Backend target**. For example, **myBackendPool**.
+
+5. For the **Backend setting**, select **Add new**. The Backend setting determines the behavior of the routing rule. In the **Add Backend setting** pane, enter a Backend settings name. For example, **myBackendSetting**.
+
+6. Accept the default values for other settings and then select **Add**.
+
+ ![A screenshot of create new application gateway: backend setting.](./media/ipv6-application-gateway-portal/ipv6-backend-setting.png)
+
+7. In the **Add a routing rule** pane, select **Add** to save the routing rule and return to the **Configuration** tab.
+
+ ![A screenshot of create new application gateway: routing rule.](./media/ipv6-application-gateway-portal/ipv6-routing-rule.png)
+
+8. Select **Next: Tags**, select **Next: Review + create**, and then select **Create**. Deployment of the application gateway takes a few minutes.
+
+## Assign a DNS name to the frontend IPv6 address
+
+A DNS name makes testing easier for the IPv6 application gateway. You can assign a public DNS name using your own domain and registrar, or you can create a name in azure.com. To assign a name in azure.com:
+
+1. From the Azure portal Home page, search for **Public IP addresses**.
+2. Select **MyAGPublicIPv6Address**.
+3. Under **Settings**, select **Configuration**.
+4. Under **DNS name label (optional)**, enter a name. For example, **myipv6appgw**.
+5. Select **Save**.
+6. Copy the FQDN to a text editor for access later. In the following example, the FQDN is **myipv6appgw.westcentralus.cloudapp.azure.com**.
+
+ ![A screenshot of assigning a DNS name.](./media/ipv6-application-gateway-portal/assign-dns.png)
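
If you'd rather script this step, here's a sketch using the same cmdlets as the PowerShell version of this article; adjust the public IP name and resource group to match your deployment.

```azurepowershell
# Assign a DNS label to the gateway's public IPv6 address.
$publicIp = Get-AzPublicIpAddress -Name MyAGPublicIPv6Address -ResourceGroupName myResourceGroupAG
$publicIp.DnsSettings = @{ "DomainNameLabel" = "myipv6appgw" }
Set-AzPublicIpAddress -PublicIpAddress $publicIp
```
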
+
+## Add a backend subnet
+
+A backend IPv4 subnet is required for the backend targets. The backend subnet is IPv4-only.
+
+1. On the portal Home page, search for Virtual Networks and select the **myVNet** virtual network.
+2. Next to **Address space**, select **10.0.0.0/16**.
+3. Under **Settings**, select **Subnets**.
+4. Select **+ Subnet** to add a new subnet.
+5. Under **Name**, enter **myBackendSubnet**.
+6. The default address space is **10.0.1.0/24**. Select **Save** to accept this and all other default settings.
+
+ ![Create backend subnet](./media/ipv6-application-gateway-portal/backend-subnet.png)
+
+## Add backend targets
+
+Next, a backend target is added to test the application gateway:
+
+1. One [VM is created](#create-a-virtual-machine) (**myVM**) and used as a backend target. You can also use existing virtual machines if they're available.
+2. [IIS is installed](#install-iis-for-testing) on the virtual machine to verify that the application gateway was created successfully.
+3. The backend server (VM) is [added to the backend pool](#add-backend-servers-to-backend-pool).
+
+> [!NOTE]
+> Only one virtual machine is deployed here as a backend target because we're only testing connectivity. You can add multiple virtual machines if you also wish to test load balancing.
+
+### Create a virtual machine
+
+Application Gateway can route traffic to any type of virtual machine used in the backend pool. A Windows Server 2019 Datacenter virtual machine is used in this example.
+
+1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
+2. Select **Windows Server 2019 Datacenter** in the **Popular** list. The **Create a virtual machine** page appears.
+3. Enter the following values on the **Basics** tab:
+ - **Resource group**: Select **myResourceGroupAG**.
+ - **Virtual machine name**: Enter **myVM**.
+ - **Region**: Select the same region where you created the application gateway.
+ - **Username**: Enter a name for the administrator user name.
+ - **Password**: Enter a password.
+ - **Public inbound ports**: **None**.
+4. Accept the other defaults and then select **Next: Disks**.
+5. Accept the **Disks** tab defaults and then select **Next: Networking**.
+6. Next to **Virtual network**, verify that **myVNet** is selected.
+7. Next to **Subnet**, verify that **myBackendSubnet** is selected.
+8. Next to **Public IP**, select **None**.
+9. Select **Next: Management**, **Next: Monitoring**, and then next to **Boot diagnostics** select **Disable**.
+10. Select **Review + create**.
+11. On the **Review + create** tab, review the settings, correct any validation errors, and then select **Create**.
+12. Wait for the virtual machine creation to complete before continuing.
+
+### Install IIS for testing
+
+In this example, you install IIS on the virtual machine to verify Azure created the application gateway successfully.
+
+1. Open Azure PowerShell.
+
+ Select **Cloud Shell** from the top navigation bar of the Azure portal and then select **PowerShell** from the drop-down list.
+
+2. Run the following command to install IIS on the virtual machine. Change the *Location* parameter if necessary:
+
+ ```azurepowershell
+ Set-AzVMExtension `
+ -ResourceGroupName myResourceGroupAG `
+ -ExtensionName IIS `
+ -VMName myVM `
+ -Publisher Microsoft.Compute `
+ -ExtensionType CustomScriptExtension `
+ -TypeHandlerVersion 1.4 `
+ -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
+ -Location WestCentralUS
+ ```
+
+ See the following example:
+
+ ![A screenshot of installing a custom extension.](./media/ipv6-application-gateway-portal/install-extension.png)
+
+### Add backend servers to backend pool
+
+1. On the Azure portal menu, select **Application gateways** or search for and select **Application gateways**. Then select **myAppGateway**.
+2. Under **Settings**, select **Backend pools** and then select **myBackendPool**.
+3. Under **Backend targets**, for **Target type**, select **Virtual machine** from the drop-down list.
+4. Under **Target**, select the **myVM** network interface from the drop-down list.
+
+ ![Add a backend server](./media/ipv6-application-gateway-portal/ipv6-backend-pool.png)
+
+6. Select **Save**.
+7. Wait for the deployment to complete before proceeding to the next step. Deployment takes a few minutes.
+
+## Test the application gateway
+
+IIS isn't required to create the application gateway. It's installed here to verify that you're able to successfully connect to the IPv6 interface of the application gateway.
+
+Previously, we assigned the DNS name **myipv6appgw.westcentralus.cloudapp.azure.com** to the public IPv6 address of the application gateway. To test this connection:
+
+1. Paste the DNS name into the address bar of your browser to connect to it.
+2. Check the response. A valid response of **myVM** verifies that the application gateway was successfully created and can successfully connect with the backend.
+
+ ![Test the IPv6 connection](./media/ipv6-application-gateway-portal/ipv6-test-connection.png)
+
+ > [!IMPORTANT]
+ > If the connection to the DNS name or IPv6 address fails, it might be because you can't browse IPv6 addresses from your device. To check if this is your problem, also test the IPv4 address of the application gateway. If the IPv4 address connects successfully, then it's likely you don't have a public IPv6 address assigned to your device. If this is the case, you can try testing the connection with a [dual-stack VM](../virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md).
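
As an alternative to the browser test, you can issue the same request from PowerShell; substitute the DNS name you assigned earlier.

```azurepowershell
# Expect "myVM" in the response body if the gateway and backend are healthy.
(Invoke-WebRequest -Uri "http://myipv6appgw.westcentralus.cloudapp.azure.com").Content
```
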
+
+## Clean up resources
+
+When you no longer need the resources that you created with the application gateway, delete the resource group. When you delete the resource group, you also remove the application gateway and all the related resources.
+
+To delete the resource group:
+
+1. On the Azure portal menu, select **Resource groups** or search for and select **Resource groups**.
+2. On the **Resource groups** page, search for **myResourceGroupAG** in the list, then select it.
+3. On the **Resource group page**, select **Delete resource group**.
+4. Enter **myResourceGroupAG** under **TYPE THE RESOURCE GROUP NAME** and then select **Delete**
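
If you prefer to script the cleanup, the same result comes from a single cmdlet, which deletes the resource group and everything in it:

```azurepowershell
# Remove the resource group, the application gateway, and all related resources.
Remove-AzResourceGroup -Name myResourceGroupAG
```
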
+
+## Unregister from the preview
+
+To opt out of the IPv6 Application Gateway public preview via the Azure portal, use the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. In the search box, enter _subscriptions_ and select **Subscriptions**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="A screenshot of Azure portal search.":::
+
+3. Select the link for your subscription's name.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="A screenshot of selecting the Azure subscription.":::
+
+4. From the left menu, under **Settings** select **Preview features**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="A screenshot of the Azure preview features menu.":::
+
+5. A list of available preview features with your current registration status is displayed.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="A screenshot of the Azure portal list of preview features.":::
+
+6. From **Preview features** type **AllowApplicationGatewayIPv6** into the filter box, select the feature, and select **Unregister**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="A screenshot of Azure portal filter preview features.":::
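
The same unregistration can be scripted with Azure PowerShell, as in the PowerShell version of this article:

```azurepowershell
# Opt out of the IPv6 Application Gateway preview for the subscription.
Unregister-AzProviderFeature -FeatureName "AllowApplicationGatewayIPv6" -ProviderNamespace "Microsoft.Network"
```
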
+
+## Next steps
+
+- [What is Azure Application Gateway v2?](overview-v2.md)
application-gateway Ipv6 Application Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ipv6-application-gateway-powershell.md
+
+ Title: Configure Application Gateway with a frontend public IPv6 address using Azure PowerShell (Preview)
+description: Learn how to configure Application Gateway with a frontend public IPv6 address using Azure PowerShell.
+ Last updated : 08/17/2023
+# Configure Application Gateway with a frontend public IPv6 address using Azure PowerShell (Preview)
+
+[Azure Application Gateway](overview.md) supports dual stack (IPv4 and IPv6) frontend connections from clients. To use IPv6 frontend connectivity, you need to create a new Application Gateway. Currently you can't upgrade existing IPv4-only Application Gateways to dual stack (IPv4 and IPv6) Application Gateways. Also, currently backend IPv6 addresses aren't supported.
+
+To support an IPv6 frontend, you must create a dual stack VNet. This dual stack VNet has subnets for both IPv4 and IPv6. Azure VNets already [provide dual-stack capability](../virtual-network/ip-services/ipv6-overview.md).
+
+> [!IMPORTANT]
+> Application Gateway IPv6 frontend is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Overview
+
+Azure PowerShell is used to create an IPv6 Azure Application Gateway. Testing is performed to verify it works correctly.
+
+You learn how to:
+* [Register](#register-to-the-preview) and [unregister](#unregister-from-the-preview) from the preview
+* Set up the [dual-stack network](#configure-a-dual-stack-subnet-and-backend-subnet)
+* Create an application gateway with [IPv6 frontend](#create-application-gateway-frontend-public-ip-addresses)
+* Create backend virtual machines and the default [backend pool](#create-the-backend-pool-and-settings)
+
+Azure PowerShell is used to create an IPv6 Azure Application Gateway and perform testing to ensure it works correctly. Application Gateway can manage and secure web traffic to servers that you maintain. Two backend virtual machines host the web traffic and are added to the default backend pool of the application gateway. For more information about the components of an application gateway, see [Application gateway components](application-gateway-components.md).
+
+You can also complete this quickstart using the [Azure portal](ipv6-application-gateway-portal.md)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+
+## Regions and availability
+
+The IPv6 Application Gateway preview is available to all public cloud regions where Application Gateway v2 SKU is supported.
+
+## Limitations
+
+* Only v2 SKU supports a frontend with both IPv4 and IPv6 addresses
+* IPv6 backends are currently not supported
+* IPv6 Private Link is currently not supported
+* IPv6-only Application Gateway is currently not supported. Application Gateway must be dual stack (IPv6 and IPv4)
+* Deletion of frontend IP addresses isn't supported
+* Existing IPv4 Application Gateways cannot be upgraded to dual stack Application Gateways
+
+> [!NOTE]
+> If you use WAF v2 SKU for a frontend with both IPv4 and IPv6 addresses, WAF rules only apply to IPv4 traffic. IPv6 traffic bypasses WAF and may get blocked by some custom rule.
++
+## Register to the preview
+
+> [!NOTE]
+> When you join the preview, all new Application Gateways provision with the ability to define a dual stack frontend connection. If you wish to opt out from the new functionality and return to the current generally available functionality of Application Gateway, you can [unregister from the preview](#unregister-from-the-preview).
+
+For more information about preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md)
+
+Use the following commands to enroll into the public preview for IPv6 Application Gateway:
+
+```azurepowershell
+Register-AzProviderFeature -FeatureName "AllowApplicationGatewayIPv6" -ProviderNamespace "Microsoft.Network"
+```
+
+To view registration status of the feature, use the Get-AzProviderFeature cmdlet.
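
For example (the feature and provider names match the registration command above):

```azurepowershell
Get-AzProviderFeature -FeatureName "AllowApplicationGatewayIPv6" -ProviderNamespace "Microsoft.Network"
```
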
+```Output
+FeatureName                 ProviderName      RegistrationState
+-----------                 ------------      -----------------
+AllowApplicationGatewayIPv6 Microsoft.Network Registered
+```
+
+## Create a resource group
+
+A resource group is a logical container into which Azure resources are deployed and managed. Create an Azure resource group using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup).
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myResourceGroupAG -Location eastus
+```
+
+## Configure a dual-stack subnet and backend subnet
+
+Configure the subnets named *myBackendSubnet* and *myAGSubnet* using [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig).
++
+```azurepowershell-interactive
+$AppGwSubnetPrefix = @("10.0.0.0/24", "ace:cab:deca::/64")
+$appgwSubnet = New-AzVirtualNetworkSubnetConfig `
+-Name myAGSubnet -AddressPrefix $AppGwSubnetPrefix
+$backendSubnet = New-AzVirtualNetworkSubnetConfig `
+-Name myBackendSubnet -AddressPrefix 10.0.1.0/24
+```
++
+## Create a dual stack virtual network
+
+```azurepowershell-interactive
+$VnetPrefix = @("10.0.0.0/16", "ace:cab:deca::/48")
+$vnet = New-AzVirtualNetwork `
+-Name myVNet `
+-ResourceGroupName myResourceGroupAG `
+-Location eastus `
+-AddressPrefix $VnetPrefix `
+-Subnet @($appgwSubnet, $backendSubnet)
+```
+
+## Create Application Gateway Frontend public IP addresses
+
+```azurepowershell-interactive
+$pipv4 = New-AzPublicIpAddress `
+-Name myAGPublicIPAddress4 `
+-ResourceGroupName myResourceGroupAG `
+-Location eastus `
+-Sku 'Standard' `
+-AllocationMethod 'Static' `
+-IpAddressVersion 'IPv4' `
+-Force
+
+$pipv6 = New-AzPublicIpAddress `
+-Name myAGPublicIPAddress6 `
+-ResourceGroupName myResourceGroupAG `
+-Location eastus `
+-Sku 'Standard' `
+-AllocationMethod 'Static' `
+-IpAddressVersion 'IPv6' `
+-Force
+```
+
+### Create the IP configurations and ports
+
+Associate *myAGSubnet* that you previously created to the application gateway using [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration). Assign *myAGPublicIPAddress4* and *myAGPublicIPAddress6* to the application gateway using [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig).
+
+```azurepowershell-interactive
+$vnet = Get-AzVirtualNetwork `
+-ResourceGroupName myResourceGroupAG `
+-Name myVNet
+$subnet = Get-AzVirtualNetworkSubnetConfig `
+-VirtualNetwork $vnet `
+-Name myAGSubnet
+$gipconfig = New-AzApplicationGatewayIPConfiguration `
+-Name myAGIPConfig `
+-Subnet $subnet
+$fipconfigv4 = New-AzApplicationGatewayFrontendIPConfig `
+-Name myAGFrontendIPv4Config `
+-PublicIPAddress $pipv4
+$fipconfigv6 = New-AzApplicationGatewayFrontendIPConfig `
+-Name myAGFrontendIPv6Config `
+-PublicIPAddress $pipv6
+$frontendport = New-AzApplicationGatewayFrontendPort `
+-Name myAGFrontendPort `
+-Port 80
+```
+
+### Create the backend pool and settings
+
+Create the backend pool named *myAGBackendPool* for the application gateway using [New-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/new-azapplicationgatewaybackendaddresspool). Configure the settings for the backend address pool using [New-AzApplicationGatewayBackendHttpSetting](/powershell/module/az.network/new-azapplicationgatewaybackendhttpsetting).
+
+```azurepowershell-interactive
+$backendPool = New-AzApplicationGatewayBackendAddressPool `
+-Name myAGBackendPool
+$poolSettings = New-AzApplicationGatewayBackendHttpSetting `
+-Name myPoolSettings `
+-Port 80 `
+-Protocol Http `
+-CookieBasedAffinity Enabled `
+-RequestTimeout 30
+```
+
+### Create the default listener and rule
+
+A listener is required to enable the application gateway to route traffic appropriately to the backend pool. In this example, you create basic listeners for the IPv4 and IPv6 frontends that listen for traffic at the root URL.
+
+Create the listeners *myAGListenerv4* and *myAGListenerv6* using [New-AzApplicationGatewayHttpListener](/powershell/module/az.network/new-azapplicationgatewayhttplistener) with the frontend configurations and frontend port that you previously created. A rule is required for each listener to know which backend pool to use for incoming traffic. Create the basic rules *ruleIPv4* and *ruleIPv6* using [New-AzApplicationGatewayRequestRoutingRule](/powershell/module/az.network/new-azapplicationgatewayrequestroutingrule).
+
+```azurepowershell-interactive
+$listenerv4 = New-AzApplicationGatewayHttpListener `
+-Name myAGListenerv4 `
+-Protocol Http `
+-FrontendIPConfiguration $fipconfigv4 `
+-FrontendPort $frontendport
+$listenerv6 = New-AzApplicationGatewayHttpListener `
+-Name myAGListenerv6 `
+-Protocol Http `
+-FrontendIPConfiguration $fipconfigv6 `
+-FrontendPort $frontendport
+$frontendRulev4 = New-AzApplicationGatewayRequestRoutingRule `
+-Name ruleIPv4 `
+-RuleType Basic `
+-Priority 10 `
+-HttpListener $listenerv4 `
+-BackendAddressPool $backendPool `
+-BackendHttpSettings $poolSettings
+$frontendRulev6 = New-AzApplicationGatewayRequestRoutingRule `
+-Name ruleIPv6 `
+-RuleType Basic `
+-Priority 1 `
+-HttpListener $listenerv6 `
+-BackendAddressPool $backendPool `
+-BackendHttpSettings $poolsettings
+```
+
+### Create the application gateway
+
+Now that you have created the necessary supporting resources, you can specify parameters for the application gateway using [New-AzApplicationGatewaySku](/powershell/module/az.network/new-azapplicationgatewaysku). The new application gateway is created using [New-AzApplicationGateway](/powershell/module/az.network/new-azapplicationgateway). Creating the application gateway takes a few minutes.
+
+```azurepowershell-interactive
+$sku = New-AzApplicationGatewaySku `
+ -Name Standard_v2 `
+ -Tier Standard_v2 `
+ -Capacity 2
+New-AzApplicationGateway `
+-Name myipv6AppGW `
+-ResourceGroupName myResourceGroupAG `
+-Location eastus `
+-BackendAddressPools $backendPool `
+-BackendHttpSettingsCollection $poolsettings `
+-FrontendIpConfigurations @($fipconfigv4, $fipconfigv6) `
+-GatewayIpConfigurations $gipconfig `
+-FrontendPorts $frontendport `
+-HttpListeners @($listenerv4, $listenerv6) `
+-RequestRoutingRules @($frontendRulev4, $frontendRulev6) `
+-Sku $sku `
+-Force
+```
+
+## Backend servers
+
+Now that you have created the application gateway, you can create the backend virtual machines to host websites. A backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service.
+++
+## Create two virtual machines
+
+In this example, you create two virtual machines to use as backend servers for the application gateway. IIS is installed on the virtual machines to verify that Azure successfully created the application gateway. The virtual machines are assigned to the backend pool when you create their network interfaces.
+
+To create the virtual machines, we get the recently created Application Gateway backend pool configuration with *Get-AzApplicationGatewayBackendAddressPool*. This information is used to:
+* Create a network interface with *New-AzNetworkInterface*.
+* Create a virtual machine configuration with *New-AzVMConfig*.
+* Create the virtual machines with *New-AzVM*.
+
+> [!NOTE]
+> When you run the following code sample to create virtual machines, Azure prompts you for credentials. Enter your username and password. Creation of the VMs takes a few minutes.
+
+```azurepowershell-interactive
+$appgw = Get-AzApplicationGateway -ResourceGroupName myResourceGroupAG -Name myipv6AppGW
+$backendPool = Get-AzApplicationGatewayBackendAddressPool -Name myAGBackendPool -ApplicationGateway $appgw
+$vnet = Get-AzVirtualNetwork -ResourceGroupName myResourceGroupAG -Name myVNet
+$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name myBackendSubnet
+$cred = Get-Credential
+for ($i=1; $i -le 2; $i++)
+{
+ $nic = New-AzNetworkInterface `
+ -Name myNic$i `
+ -ResourceGroupName myResourceGroupAG `
+ -Location EastUS `
+ -Subnet $subnet `
+ -ApplicationGatewayBackendAddressPool $backendpool
+ $vm = New-AzVMConfig `
+ -VMName myVM$i `
+ -VMSize Standard_DS2_v2
+ Set-AzVMOperatingSystem `
+ -VM $vm `
+ -Windows `
+ -ComputerName myVM$i `
+ -Credential $cred
+ Set-AzVMSourceImage `
+ -VM $vm `
+ -PublisherName MicrosoftWindowsServer `
+ -Offer WindowsServer `
+ -Skus 2016-Datacenter `
+ -Version latest
+ Add-AzVMNetworkInterface `
+ -VM $vm `
+ -Id $nic.Id
+ Set-AzVMBootDiagnostic `
+ -VM $vm `
+ -Disable
+ New-AzVM -ResourceGroupName myResourceGroupAG -Location EastUS -VM $vm
+ Set-AzVMExtension `
+ -ResourceGroupName myResourceGroupAG `
+ -ExtensionName IIS `
+ -VMName myVM$i `
+ -Publisher Microsoft.Compute `
+ -ExtensionType CustomScriptExtension `
+ -TypeHandlerVersion 1.4 `
+ -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
+ -Location EastUS
+}
+```
+## Find the public IP address of Application Gateway
+
+```azurepowershell-interactive
+Get-AzPublicIPAddress -ResourceGroupName myResourceGroupAG -Name myAGPublicIPAddress6
+```
+
+## Assign a DNS name to the frontend IPv6 address
+
+A DNS name makes testing easier for the IPv6 application gateway. You can assign a public DNS name using your own domain and registrar or you can create a name in azure.com.
+
+Use the following commands to assign a name in azure.com. The name is set to the label you specify + the region + cloudapp.azure.com. In this example, the AAAA record **myipv6appgw** is created in the namespace **eastus.cloudapp.azure.com**:
+
+```azurepowershell-interactive
+$publicIp = Get-AzPublicIpAddress -Name myAGPublicIPAddress6 -ResourceGroupName myResourceGroupAG
+$publicIp.DnsSettings = @{"DomainNameLabel" = "myipv6appgw"}
+Set-AzPublicIpAddress -PublicIpAddress $publicIp
+```
+
+## Test the application gateway
+
+Previously, we assigned the DNS name **myipv6appgw.eastus.cloudapp.azure.com** to the public IPv6 address of the application gateway. To test this connection:
+
+1. Using the Invoke-WebRequest cmdlet, issue a request to the IPv6 frontend.
+2. Check the response. A valid response of **myVM1** or **myVM2** verifies that the application gateway was successfully created and can successfully connect with the backend. If you issue the command several times, the gateway load balances and responds to subsequent requests from a different backend server.
+
+```PowerShell
+PS C:\> (Invoke-WebRequest -Uri myipv6appgw.eastus.cloudapp.azure.com).Content
+myVM2
+```
+> [!IMPORTANT]
+> If the connection to the DNS name or IPv6 address fails, it might be because you can't browse IPv6 addresses from your device. To check if this is your problem, also test the IPv4 address of the application gateway. If the IPv4 address connects successfully, then it's likely you don't have a public IPv6 address assigned to your device. If this is the case, you can try testing the connection with a [dual-stack VM](../virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md).
+
+## Clean up resources
+
+When no longer needed, remove the resource group, application gateway, and all related resources using [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup).
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myResourceGroupAG
+```
+
+## Unregister from the preview
+
+Use the following commands to opt out of the public preview for IPv6 Application Gateway:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName "AllowApplicationGatewayIPv6" -ProviderNamespace "Microsoft.Network"
+```
+
+To view registration status of the feature, use the Get-AzProviderFeature cmdlet.
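
For example:

```azurepowershell
Get-AzProviderFeature -FeatureName "AllowApplicationGatewayIPv6" -ProviderNamespace "Microsoft.Network"
```
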
+```Output
+FeatureName                 ProviderName      RegistrationState
+-----------                 ------------      -----------------
+AllowApplicationGatewayIPv6 Microsoft.Network Unregistered
+```
+
+## Next steps
+
+- [What is Azure Application Gateway v2?](overview-v2.md)
+- [Troubleshoot Bad Gateway](application-gateway-troubleshooting-502.md)
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
Previously updated : 06/07/2023 Last updated : 11/06/2023
az group create --name myResourceGroupAG --location eastus
For Azure to communicate between the resources that you create, it needs a virtual network. The application gateway subnet can contain only application gateways. No other resources are allowed. You can either create a new subnet for Application Gateway or use an existing one. In this example, you create two subnets: one for the application gateway, and another for the backend servers. You can configure the Frontend IP of the Application Gateway to be Public or Private as per your use case. In this example, you'll choose a Public Frontend IP address.
+ > [!NOTE]
+ > Application Gateway frontend now supports dual-stack IP addresses (Public Preview). You can now create up to four frontend IP addresses: Two IPv4 addresses (public and private) and two IPv6 addresses (public and private).
To create the virtual network and subnet, use `az network vnet create`. Run `az network public-ip create` to create the public IP address.

```azurecli-interactive
az network public-ip create \
## Create the backend servers
-A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install NGINX on the virtual machines to test the application gateway.
+A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multitenant backends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install NGINX on the virtual machines to test the application gateway.
#### Create two virtual machines
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
description: In this quickstart, you learn how to use the Azure portal to create
Previously updated : 06/08/2023 Last updated : 11/06/2023
You'll create the application gateway using the tabs on the **Create application
![Create new application gateway: frontends](./media/application-gateway-create-gateway-portal/application-gateway-create-frontends.png)
+ > [!NOTE]
+ > Application Gateway frontend now supports dual-stack IP addresses (Public Preview). You can now create up to four frontend IP addresses: Two IPv4 addresses (public and private) and two IPv6 addresses (public and private).
3. Select **Next: Backends**.

### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, Virtual Machine Scale Sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, Virtual Machine Scale Sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multitenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
On the **Configuration** tab, you'll connect the frontend and backend pool you c
### Review + create tab
-Review the settings on the **Review + create** tab, and then select **Create** to create the virtual network, the public IP address, and the application gateway. It may take several minutes for Azure to create the application gateway. Wait until the deployment finishes successfully before moving on to the next section.
+Review the settings on the **Review + create** tab, and then select **Create** to create the virtual network, the public IP address, and the application gateway. It can take several minutes for Azure to create the application gateway. Wait until the deployment finishes successfully before moving on to the next section.
## Add backend targets
application-gateway Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-powershell.md
description: In this quickstart, you learn how to use Azure PowerShell to create
Previously updated : 07/21/2022 Last updated : 11/06/2022
$frontendport = New-AzApplicationGatewayFrontendPort `
-Name myFrontendPort ` -Port 80 ```
+ > [!NOTE]
+ > Application Gateway frontend now supports dual-stack IP addresses (Public Preview). You can now create up to four frontend IP addresses: Two IPv4 addresses (public and private) and two IPv6 addresses (public and private).
### Create the backend pool
New-AzApplicationGateway `
### Backend servers
-Now that you have created the Application Gateway, create the backend virtual machines which will host the websites. A backend can be composed of NICs, virtual machine scale sets, public IP address, internal IP address, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service.
+Now that you have created the Application Gateway, create the backend virtual machines which will host the websites. A backend can be composed of NICs, virtual machine scale sets, public IP address, internal IP address, fully qualified domain names (FQDN), and multitenant backends like Azure App Service.
In this example, you create two virtual machines to use as backend servers for the application gateway. You also install IIS on the virtual machines to verify that Azure successfully created the application gateway.
application-gateway Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
For the sake of simplicity, this template creates a simple setup with a public frontend IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+ > [!NOTE]
+ > Application Gateway frontend now supports dual-stack IP addresses (Public Preview). You can now create up to four frontend IP addresses: Two IPv4 addresses (public and private) and two IPv6 addresses (public and private).
+ The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-qs/) :::code language="json" source="~/quickstart-templates/demos/ag-docs-qs/azuredeploy.json":::
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md
These next sections outline all the different tools for reading and writing spat
The `atlas.io.read` function is the main function used to read common spatial data formats such as KML, GPX, GeoRSS, GeoJSON, and CSV files with spatial data. This function can also read compressed versions of these formats, as a zip file or a KMZ file. The KMZ file format is a compressed version of KML that can also include assets such as images. Alternatively, the read function can take in a URL that points to a file in any of these formats. URLs should be hosted on a CORS enabled endpoint, or a proxy service should be provided in the read options. The proxy service is used to load resources on domains that aren't CORS enabled. The read function returns a promise to add the image icons to the map, and processes data asynchronously to minimize impact to the UI thread.
-When reading a compressed file, either as a zip or a KMZ, it's unzipped and scanned for the first valid file. For example, doc.kml, or a file with other valid extension, such as: .kml, .xml, geojson, .json, .csv, .tsv, or .txt. Then, images referenced in KML and GeoRSS files are preloaded to ensure they're accessible. Inaccessible image data may load an alternative fallback image or removed from the styles. Images extracted from KMZ files are converted to data URIs.
+When reading a compressed file, either as a zip or a KMZ, once unzipped it's scanned for the first valid file, such as doc.kml or a file with another valid extension: .kml, .xml, .geojson, .json, .csv, .tsv, or .txt. Then, images referenced in KML and GeoRSS files are preloaded to ensure they're accessible. Inaccessible image data can load an alternative fallback image or be removed from the styles. Images extracted from KMZ files are converted to data URIs.
-The result from the read function is a `SpatialDataSet` object. This object extends the GeoJSON FeatureCollection class. It can easily be passed into a `DataSource` as-is to render its features on a map. The `SpatialDataSet` not only contains feature information, but it may also include KML ground overlays, processing metrics, and other details as outlined in the following table.
+The result from the read function is a `SpatialDataSet` object. This object extends the GeoJSON FeatureCollection class. It can easily be passed into a `DataSource` as-is to render its features on a map. The `SpatialDataSet` not only contains feature information, but it can also include KML ground overlays, processing metrics, and other details as outlined in the following table.
| Property name | Type | Description | |||-|
The result from the read function is a `SpatialDataSet` object. This object exte
The [Load spatial data] sample shows how to read a spatial data set, and renders it on the map using the `SimpleDataLayer` class. The code uses a GPX file pointed to by a URL. For the source code of this sample, see [Load spatial data source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/yLNXrZx/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
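To make the sample's flow concrete, here's a minimal sketch of the read-and-render pattern, assuming the Spatial IO module is loaded with the Web SDK and that the GPX URL (a placeholder below) is hosted on a CORS-enabled endpoint:

```javascript
//Wait until the map resources are ready before loading data.
map.events.add('ready', function () {
    //Create a data source and add it to the map.
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);

    //Render whatever features are read using the SimpleDataLayer class.
    map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

    //Read the spatial data file. The URL is a placeholder.
    atlas.io.read('https://example.com/data/route.gpx').then(function (r) {
        if (r) {
            //A SpatialDataSet extends FeatureCollection, so it can be added as-is.
            datasource.add(r);

            //Bring the loaded data into view when bounds are available.
            if (r.bbox) {
                map.setCamera({ bounds: r.bbox, padding: 50 });
            }
        }
    });
});
```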
The next code demo shows how to read and load KML, or KMZ, to the map. KML can c
The [Load KML onto map] sample shows how to load KML or KMZ files onto the map. For the source code of this sample, see [Load KML onto map source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/XWbgwxX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] >
-You may optionally provide a proxy service for accessing cross domain assets that may not have CORS enabled. The read function tries to access files on another domain using CORS first. After the first time it fails to access any resource on another domain using CORS it only requests more files if a proxy service has been provided. The read function appends the file URL to the end of the proxy URL provided. This snippet of code shows how to pass a proxy service into the read function:
+You can optionally provide a proxy service for accessing cross domain assets that don't have CORS enabled. The read function tries to access files on another domain using CORS first. After it first fails to access a resource on another domain using CORS, it requests more files only if a proxy service has been provided. The read function appends the file URL to the end of the proxy URL provided. This snippet of code shows how to pass a proxy service into the read function:
```javascript //Read a file from a URL or pass in a raw data as a string.
function InitMap()
</script> ``` <!-- > [!VIDEO //codepen.io/azuremaps/embed/ExjXBEb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true]
There are two main write functions in the spatial IO module. The `atlas.io.write
The [Spatial data write options] sample is a tool that demonstrates most of the write options that can be used with the `atlas.io.write` function. For the source code of this sample, see [Spatial data write options source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/YzXxXPG/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true]
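As a rough sketch of the corresponding write path, assuming a populated `datasource` and treating the `format` option name as one of the documented write options rather than a confirmed signature:

```javascript
//Export the features currently in a data source as a KML string.
//The 'format' value is an assumption based on the documented write options.
atlas.io.write(datasource.toJson(), {
    format: 'KML'
}).then(function (kmlString) {
    //Use the exported string, for example to offer it as a downloadable file.
    console.log(kmlString);
});
```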
The [Spatial data write options] sample is a tool that demonstrates most the wri
The [Drag and drop spatial files onto map] sample allows you to drag and drop one or more KML, KMZ, GeoRSS, GPX, GML, GeoJSON or CSV files onto the map. For the source code of this sample, see [Drag and drop spatial files onto map source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/zYGdGoO/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true] >
-You may optionally provide a proxy service for accessing cross domain assets that may not have CORS enabled. This snippet of code shows you could incorporate a proxy service:
+You can optionally provide a proxy service for accessing cross domain assets that don't have CORS enabled. This snippet of code shows how you could incorporate a proxy service:
```javascript atlas.io.read(data, {
Well-known text can be read using the `atlas.io.ogc.WKT.read` function, and writ
The [Read Well Known Text] sample shows how to read the well-known text string `POINT(-122.34009 47.60995)` and render it on the map using a bubble layer. For the source code of this sample, see [Read Well Known Text source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/XWbabLd/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true]
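A minimal sketch of that pattern, assuming a `datasource` with a bubble layer is already set up on the map:

```javascript
//Parse a well-known text string into GeoJSON data the map can render.
var wkt = 'POINT(-122.34009 47.60995)';
var parsed = atlas.io.ogc.WKT.read(wkt);

//Add the parsed data to the data source that the bubble layer is attached to.
datasource.add(parsed);

//Going the other way, serialize GeoJSON data back to well-known text.
var wktString = atlas.io.ogc.WKT.write(parsed);
console.log(wktString);
```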
The [Read Well Known Text] sample shows how to read the well-known text string `
The [Read and write Well Known Text] sample demonstrates how to read and write Well Known Text (WKT) strings as GeoJSON. For the source code of this sample, see [Read and write Well Known Text source code]. <!-- > [!VIDEO //codepen.io/azuremaps/embed/JjdyYav/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true]
The [Read and write Well Known Text] sample demonstrates how to read and write W
## Read and write GML
-GML is a spatial XML file specification that's often used as an extension to other XML specifications. GeoJSON data can be written as XML with GML tags using the `atlas.io.core.GmlWriter.write` function. The XML that contains GML can be read using the `atlas.io.core.GmlReader.read` function. The read function has two options:
+GML is a spatial XML file specification often used as an extension to other XML specifications. GeoJSON data can be written as XML with GML tags using the `atlas.io.core.GmlWriter.write` function. The XML that contains GML can be read using the `atlas.io.core.GmlReader.read` function. The read function has two options:
- The `isAxisOrderLonLat` option - The axis order of coordinates "latitude, longitude" or "longitude, latitude" can vary between data sets, and it isn't always well defined. By default the GML reader reads the coordinate data as "latitude, longitude", but setting this option to `true` reads it as "longitude, latitude". - The `propertyTypes` option - This option is a key value lookup table where the key is the name of a property in the data set. The value is the object type to cast the value to when parsing. The supported type values are: `string`, `number`, `boolean`, and `date`. If a property isn't in the lookup table or the type isn't defined, the property is parsed as a string.
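Here's a hedged sketch of round-tripping GML with these functions. The way the two read options are passed below (positional arguments) is an assumption, so verify it against the `GmlReader.read` reference before relying on it:

```javascript
//Write a GeoJSON feature out as XML with GML tags.
var gmlXml = atlas.io.core.GmlWriter.write(feature);

//Read the GML back. The second argument is assumed to correspond to isAxisOrderLonLat,
//and the third to the propertyTypes lookup table described above.
var result = atlas.io.core.GmlReader.read(gmlXml, true, {
    population: 'number',
    isCapital: 'boolean'
});
```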
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
The following steps show you how to create and display the Map control in a web
4. Save your changes to the file and open the HTML page in a browser. The map shown is the most basic map that you can make by calling `atlas.Map` using your Azure Maps account subscription key.
- :::image type="content" source="./media/tutorial-route-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling `atlas.Map` using your Azure Maps account key.":::
+ :::image type="content" source="./media/tutorial-route-location/basic-map.png" lightbox="./media/tutorial-route-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling `atlas.Map` using your Azure Maps account key.":::
## Define route display rendering
In this tutorial, the route is rendered using a line layer. The start and end po
* This code implements the Map control's `ready` event handler. The rest of the code in this tutorial is placed inside the `ready` event handler. * In the map control's `ready` event handler, a data source is created to store the route from start to end point.
- * To define how the route line is rendered, a line layer is created and attached to the data source. To ensure that the route line doesn't cover up the road labels, we've passed a second parameter with the value of `'labels'`.
+ * To define how the route line is rendered, a line layer is created and attached to the data source. To ensure that the route line doesn't cover up the road labels, pass a second parameter with the value of `'labels'`.
Next, a symbol layer is created and attached to the data source. This layer specifies how the start and end points are rendered. Expressions have been added to retrieve the icon image and text label information from properties on each point object. To learn more about expressions, see [Data-driven style expressions].
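For reference, here's a condensed sketch of the layer setup described above; the `icon` and `title` property names are illustrative:

```javascript
map.events.add('ready', function () {
    //Create a data source to hold the route line and the start and end points.
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);

    //Render the route line beneath the road labels by passing 'labels' as the second parameter.
    map.layers.add(new atlas.layer.LineLayer(datasource, null, {
        strokeColor: '#2272B9',
        strokeWidth: 5
    }), 'labels');

    //Render the start and end points, pulling the icon and label from each point's properties.
    map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
        iconOptions: {
            image: ['get', 'icon'],
            allowOverlap: true
        },
        textOptions: {
            textField: ['get', 'title'],
            offset: [0, 1.2]
        },
        filter: ['any', ['==', ['geometry-type'], 'Point'], ['==', ['geometry-type'], 'MultiPoint']]
    }));
});
```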
In this tutorial, the route is rendered using a line layer. The start and end po
3. Save **MapRoute.html** and refresh your browser. The map is now centered over Seattle. The blue teardrop pin marks the start point. The blue round pin marks the end point.
- :::image type="content" source="./media/tutorial-route-location/map-pins.png" alt-text="A screenshot showing a map with a route containing a blue teardrop pin marking the start point at Microsoft in Redmond Washington and a blue round pin marking the end point at a gas station in Seattle.":::
+ :::image type="content" source="./media/tutorial-route-location/map-pins.png" lightbox="./media/tutorial-route-location/map-pins.png" alt-text="A screenshot showing a map with a route containing a blue teardrop pin marking the start point at Microsoft in Redmond Washington and a blue round pin marking the end point at a gas station in Seattle.":::
<a id="getroute"></a>
This section shows you how to use the Azure Maps Route Directions API to get rou
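As a rough sketch of one way to call the Route Directions REST API directly with the Fetch API; `subscriptionKey`, `startPoint`, and `endPoint` are placeholders, and the response handling assumes the documented routes/legs/points shape:

```javascript
//Build the query as "startLat,startLon:endLat,endLon" from GeoJSON [longitude, latitude] positions.
var query = startPoint[1] + ',' + startPoint[0] + ':' + endPoint[1] + ',' + endPoint[0];
var url = 'https://atlas.microsoft.com/route/directions/json?api-version=1.0'
    + '&query=' + query
    + '&subscription-key=' + subscriptionKey;

fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (directions) {
        //Flatten the points of the first route into a LineString and add it to the data source.
        var coordinates = [];
        directions.routes[0].legs.forEach(function (leg) {
            leg.points.forEach(function (point) {
                coordinates.push([point.longitude, point.latitude]);
            });
        });
        datasource.add(new atlas.data.LineString(coordinates));
    });
```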
3. Save the **MapRoute.html** file and refresh your web browser. The map should now display the route from the start to end points.
- :::image type="content" source="./media/tutorial-route-location/map-route.png" alt-text="A screenshot showing a map that demonstrates the Azure Map control and Route service.":::
+ :::image type="content" source="./media/tutorial-route-location/map-route.png" lightbox="./media/tutorial-route-location/map-route.png" alt-text="A screenshot showing a map that demonstrates the Azure Map control and Route service.":::
* For the completed code used in this tutorial, see the [route tutorial] on GitHub. * To view this sample live, see [Route to a destination] on the **Azure Maps Code Samples** site.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Azure Monitor Agent is generally available for data collection. Most services th
The following features and services now have an Azure Monitor Agent version (some are still in Public Preview). This means you can already choose to use Azure Monitor Agent to collect data when you enable the feature or service.
-| Service or feature | Migration recommendation | Other extensions installed | More information |
+| Service or feature | Migration recommendation | Current state | More information |
| : | : | : | : |
-| [VM insights](../vm/vminsights-overview.md) | Generally Available | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
-| [Container insights](../containers/container-insights-overview.md) | Public preview | Containerized Azure Monitor agent | [Enable Container Insights](../containers/container-insights-onboard.md) |
-| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Moving to an agentless solution | | Many features are available now; all will be available by April 2024|
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [GA](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [GA](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | See [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. |
-| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Moving to an agentless solution | | Available November 2023 |
-| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | New service called Connection Monitor: Public preview with Azure Monitor Agent | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
-| Azure Stack HCI Insights | Private preview | | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
-| Azure Virtual Desktop (AVD) Insights | Generally Available | | |
+| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally available | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
+| [Container insights](../containers/container-insights-overview.md) | Migrate to Azure Monitor Agent | **Linux**: Generally available<br>**Windows**: Public preview | [Enable Container Insights](../containers/container-insights-onboard.md) |
+| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public preview | See [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). |
+| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Migrate to Azure Monitor Agent | Generally available | [Migration guidance from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally available | [Monitor network connectivity using Azure Monitor agent with connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
+| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) |
+| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally available | [Use Azure Virtual Desktop Insights to monitor your deployment](../../virtual-desktop/insights.md#session-host-data-settings) |
> [!NOTE] > Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will typically be available within a month *after* the features/services become generally available. When you migrate the following services, which currently use the Log Analytics agent, to their respective replacements (v2), you no longer need either of the monitoring agents:
-| Service | Migration recommendation | Other extensions installed | More information |
+| Service | Migration recommendation | Current state | More information |
| : | : | : | : |
-| [Update Management](../../automation/update-management/overview.md) | Update Manager - Public preview (no dependency on Log Analytics agents or Azure Monitor Agent) | None | [Update Manager (Public preview with Azure Monitor Agent) documentation](../../update-center/index.yml) |
-| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension - Generally available (no dependency on Log Analytics agents or Azure Monitor Agent) | None | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
+| [Microsoft Defender for Cloud, Servers, SQL, and Endpoint](../../security-center/security-center-introduction.md) | Migrate to Microsoft Defender for Cloud (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](../../defender-for-cloud/upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation)|
+| [Update Management](../../automation/update-management/overview.md) | Migrate to Azure Update Manager (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Update Manager documentation](../../update-manager/update-manager-faq.md#la-agent-also-known-as-mma-is-retiring-and-will-be-replaced-with-ama-is-it-necessary-to-move-to-update-manager-or-can-i-continue-to-use-automation-update-management-with-ama) |
+| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension (no dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
## Frequently asked questions
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. -- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text or JSON file.
+- A virtual machine, Virtual Machine Scale Set, on-premises Arc-enabled server, or Azure Monitor Agent on an on-premises Windows client that writes logs to a text or JSON file.
Text and JSON file requirements and best practices: - Do store files on the local drive of the machine on which Azure Monitor Agent is running and in the directory that is being monitored.
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
||| |Operator| The query results are transformed into a number. In this field, select the operator to use to compare the number against the threshold.| |Threshold value| A number value for the threshold. |
- |Frequency of evaluation|How often the query is run. Can be set from a minute to a day.|
+ |Frequency of evaluation|How often the query is run. Can be set anywhere from one minute to one day (24 hours).|
> [!NOTE]
- > One-minute alert rule frequency is supported only for queries that can pass an internal optimization manipulation. When you will write the query you will contain the following error message: “Couldn’t optimize the query because …”.
- > The following are the main reasons why a query will not be supported for one-minute frequency:
- > * The query contains the search, “union *” or “take” (limit)
- > * The query contains the ingestion_time() function
- > * The query uses the adx pattern
- > * The query calls a function that calls other tables
+ > There are some limitations to using a <a name="frequency">one-minute</a> alert rule frequency. When you set the alert rule frequency to one minute, an internal manipulation is performed to optimize the query. This manipulation can cause the query to fail if it contains unsupported operations. The following are the most common reasons a query isn't supported:
+
+ > * The query contains the **search**, **union** or **take** (limit) operations
+ > * The query contains the **ingestion_time()** function
+ > * The query uses the **adx** pattern
+ > * The query calls a function that calls other tables
1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. Your application business policy determines this setting.
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
Use this method if your project type isn't supported by the Application Insights
1. Select one of the following packages: - **ILogger**: [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
-[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.Extensions.Logging.ApplicationInsights.svg" alt-text="NuGet iLogger banner":::
- **NLog**: [Microsoft.ApplicationInsights.NLogTarget](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
-[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.NLogTarget.svg" alt-text="NuGet NLog banner":::
- **log4net**: [Microsoft.ApplicationInsights.Log4NetAppender](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
-[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.Log4NetAppender.svg" alt-text="NuGet Log4Net banner":::
- **System.Diagnostics**: [Microsoft.ApplicationInsights.TraceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)
-[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.TraceListener.svg" alt-text="NuGet System.Diagnostics banner":::
- [Microsoft.ApplicationInsights.DiagnosticSourceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/)
-[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.DiagnosticSourceListener.svg" alt-text="NuGet Diagnostic Source Listener banner":::
- [Microsoft.ApplicationInsights.EtwCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EtwCollector/)
-[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.EtwCollector.svg" alt-text="NuGet Etw Collector banner":::
- [Microsoft.ApplicationInsights.EventSourceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/)
-[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.EventSourceListener.svg" alt-text="NuGet Event Source Listener banner":::
The NuGet package installs the necessary assemblies and modifies web.config or app.config if that's applicable.
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
The structure of a Log Analytics workspace is described in [Log Analytics worksp
> [!NOTE] > The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), first go to your Log Analytics workspace. During the preview, selecting **Logs** in the Application Insights pane gives you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
-[:::image type="content" source="../logs/media/data-platform-logs/logs-structure-ai.png" lightbox="../logs/media/data-platform-logs/logs-structure-ai.png" alt-text="Diagram that shows the Azure Monitor Logs structure for Application Insights.":::
### Table structure
azure-monitor Java Jmx Metrics Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-jmx-metrics-configuration.md
Log file output looks similar to these examples. In some cases, it can be extens
> :::image type="content" source="media/java-ipa/jmx/available-mbeans.png" lightbox="media/java-ipa/jmx/available-mbeans.png" alt-text="Screenshot of available JMX metrics in the log file.":::
+You can also use a [command line tool](https://github.com/microsoft/ApplicationInsights-Java/wiki/Troubleshoot-JMX-metrics) to check the available JMX metrics.
## Configuration example
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
To receive, store, and explore your monitoring data, include the SDK in your code. Then set up a corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis and exploration.
-The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the client library also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
+The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the client library also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis.
+
+All events related to an incoming HTTP request are correlated for faster troubleshooting.
You can use the TelemetryClient API to manually instrument and monitor more aspects of your app and system. We describe the TelemetryClient API in more detail later in this article.
process.env.APPLICATIONINSIGHTS_LOGDIR = "C:\\applicationinsights\\logs";
[!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)]
+For more information, see [Troubleshoot Application Insights monitoring of Node.js apps and services](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-app-insights-nodejs).
+ ## Next steps * [Monitor your telemetry in the portal](./overview-dashboard.md)
azure-monitor Resource Group Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/resource-group-insights.md
Modern applications are often complex and highly distributed with many discrete
1. Select **Resource groups** from the left-side navigation bar. 2. Pick one of your resource groups that you want to explore. (If you have a large number of resource groups filtering by subscription can sometimes be helpful.) 3. To access insights for a resource group, click **Insights** in the left-side menu of any resource group.-
-![Screenshot of resource group insights overview page](./media/resource-group-insights/0001-overview.png)
+<!-- convertborder later -->
## Resources with active alerts and health issues The overview page shows how many alerts have been fired and are still active, along with the current Azure Resource Health of each resource. Together, this information can help you quickly spot any resources that are experiencing issues. Alerts help you detect issues in your code and how you've configured your infrastructure. Azure Resource Health surfaces issue with the Azure platform itself, that aren't specific to your individual applications.-
-![Screenshot of Azure Resource Health pane](./media/resource-group-insights/0002-overview.png)
+<!-- convertborder later -->
### Azure Resource Health To display Azure Resource Health, check the **Show Azure Resource Health** box above the table. This column is hidden by default to help the page load quickly.-
-![Screenshot with resource health graph added](./media/resource-group-insights/0003-overview.png)
+<!-- convertborder later -->
By default, the resources are grouped by app layer and resource type. **App layer** is a simple categorization of resource types, that only exists within the context of the resource group insights overview page. There are resource types related to application code, compute infrastructure, networking, storage + databases. Management tools get their own app layers, and every other resource is categorized as belonging to the **Other** app layer. This grouping can help you see at-a-glance what subsystems of your application are healthy and unhealthy.
Most resource types will open a gallery of Azure Monitor Workbook templates. Eac
To test out the Failures tab select **Failures** under **Investigate** in the left-hand menu. The left-side menu bar changes after your selection is made, offering you new options.-
-![Screenshot of Failure overview pane](./media/resource-group-insights/00004-failures.png)
+<!-- convertborder later -->
When App Service is chosen, you are presented with a gallery of Azure Monitor Workbook templates.-
-![Screenshot of application workbook gallery](./media/resource-group-insights/0005-failure-insights-workbook.png)
+<!-- convertborder later -->
Choosing the template for Failure Insights will open the workbook.-
-![Screenshot of failure report](./media/resource-group-insights/0006-failure-visual.png)
+<!-- convertborder later -->
You can select any of the rows. The selection is then displayed in a graphical details view.-
-![Screenshot of failure details](./media/resource-group-insights/0007-failure-details.png)
+<!-- convertborder later -->
Workbooks abstract away the difficult work of creating custom reports and visualizations into an easily consumable format. While some users may only want to adjust the prebuilt parameters, workbooks are completely customizable. To get a sense of how this workbook functions internally, select **Edit** in the top bar.-
-![Screenshot of additional edit option](./media/resource-group-insights/0008-failure-edit.png)
+<!-- convertborder later -->
A number of **Edit** boxes appear near the various elements of the workbook. Select the **Edit** box below the table of operations.-
-![Screenshot of edit boxes](./media/resource-group-insights/0009-failure-edit-graph.png)
+<!-- convertborder later -->
This reveals the underlying log query that is driving the table visualization.-
- ![Screenshot of log query window](./media/resource-group-insights/0010-failure-edit-query.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/resource-group-insights/0010-failure-edit-query.png" lightbox="./media/resource-group-insights/0010-failure-edit-query.png" alt-text="Screenshot of log query window." border="false":::
You can modify the query directly. Or you can use it as a reference and borrow from it when designing your own custom parameterized workbook. ### Investigate performance Performance offers its own gallery of workbooks. For App Service the prebuilt Application Performance workbook offers the following view:-
- ![Screenshot of performance view](./media/resource-group-insights/0011-performance.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/resource-group-insights/0011-performance.png" lightbox="./media/resource-group-insights/0011-performance.png" alt-text="Screenshot of performance view." border="false":::
In this case, if you select edit you will see that this set of visualizations is powered by Azure Monitor Metrics.-
- ![Screenshot of performance view with Azure Metrics](./media/resource-group-insights/0012-performance-metrics.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/resource-group-insights/0012-performance-metrics.png" lightbox="./media/resource-group-insights/0012-performance-metrics.png" alt-text="Screenshot of performance view with Azure Metrics." border="false":::
## Troubleshooting
azure-monitor Scom Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/scom-assessment.md
# Optimize your environment with the System Center Operations Manager Health Check (Preview) solution
-![System Center Operations Manager Health Check symbol](./media/scom-assessment/scom-assessment-symbol.png)
You can use the System Center Operations Manager Health Check solution to assess the risk and health of your System Center Operations Manager management group on a regular interval. This article helps you install, configure, and use the solution so that you can take corrective actions for potential problems.
The recommendations made are based on the knowledge and experience gained by Mic
You can choose focus areas that are most important to your organization and track your progress toward running a risk free and healthy environment. After you've added the solution and an assessment is performed, summary information for focus areas is shown on the **System Center Operations Manager Health Check** dashboard for your infrastructure. The following sections describe how to use the information on the **System Center Operations Manager Health Check** dashboard, where you can view and then take recommended actions for your Operations Manager environment.-
-![System Center Operations Manager solution tile](./media/scom-assessment/log-analytics-scom-healthcheck-tile.png)
-
-![System Center Operations Manager Health Check dashboard](./media/scom-assessment/log-analytics-scom-healthcheck-dashboard-01.png)
+<!-- convertborder later -->
+<!-- convertborder later -->
## Installing and configuring the solution
Use the following information to install and configure the solution.
- Before you can use the Health Check solution in Log Analytics, you must have the solution installed. Install the solution from [Azure marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.SCOMAssessmentOMS?tab=Overview). - After adding the solution to the workspace, the **System Center Operations Manager Health Check** tile on the dashboard displays an additional configuration required message. Click on the tile and follow the configuration steps mentioned in the page-
- ![System Center Operations Manager dashboard tile](./media/scom-assessment/scom-configrequired-tile.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/scom-assessment/scom-configrequired-tile.png" lightbox="./media/scom-assessment/scom-configrequired-tile.png" alt-text="System Center Operations Manager dashboard tile." border="false":::
> [!NOTE] > Configuration of System Center Operations Manager can be done using a script by following the steps mentioned in the configuration page of the solution in Log Analytics.
By default, the Microsoft System Center Operations Manager Run Health Check Rule
2. In the search results, select the one that includes the text *Type: Management Server*. 3. Right-click the rule and then click **Overrides** > **For a specific object of class: Management Server**. 4. In the available management servers list, select the management server where the rule should run. This should be the same management server you configured earlier to associate the Run As account with.
-5. Ensure that you change override value to **True** for the **Enabled** parameter value.<br><br> ![override parameter](./media/scom-assessment/rule.png)
+5. Ensure that you change override value to **True** for the **Enabled** parameter value.
+ <!-- convertborder later -->
+ :::image type="content" source="./media/scom-assessment/rule.png" lightbox="./media/scom-assessment/rule.png" alt-text="override parameter" border="false":::
While still in this window, configure the run frequency using the next procedure.
The assessment is configured to run every 10,080 minutes (or seven days) by defa
1. In the **Authoring** workspace of the Operations Manager console, search for the rule *Microsoft System Center Operations Manager Run Health Check Rule* in the **Rules** section. 2. In the search results, select the one that includes the text *Type: Management Server*. 3. Right-click the rule and then click **Override the Rule** > **For all objects of class: Management Server**.
-4. Change the **Interval** parameter value to your desired interval value. In the example below, the value is set to 1440 minutes (one day).<br><br> ![interval parameter](./media/scom-assessment/interval.png)<br>
+4. Change the **Interval** parameter value to your desired interval value. In the example below, the value is set to 1440 minutes (one day).
+ <!-- convertborder later -->
+ :::image type="content" source="./media/scom-assessment/interval.png" lightbox="./media/scom-assessment/interval.png" alt-text="interval parameter" border="false":::
- If the value is set to less than 1440 minutes, then the rule runs on a one day interval. In this example, the rule ignores the interval value and runs at a frequency of one day.
+ If the value is set to less than 1440 minutes, then the rule runs on a one day interval. In this example, the rule ignores the interval value and runs at a frequency of one day.
## Understanding how recommendations are prioritized
View the summarized compliance assessments for your infrastructure and then dril
3. In the Log Analytics subscriptions pane, select a workspace and then click the **Workspace summary (deprecated)** menu item. 4. On the **Overview** page, click the **System Center Operations Manager Health Check** tile. 5. On the **System Center Operations Manager Health Check** page, review the summary information in one of the focus area sections and then click one to view recommendations for that focus area.
-6. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.<br><br> ![focus area](./media/scom-assessment/log-analytics-scom-healthcheck-dashboard-02.png)<br>
+6. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.
+ <!-- convertborder later -->
+ :::image type="content" source="./media/scom-assessment/log-analytics-scom-healthcheck-dashboard-02.png" lightbox="./media/scom-assessment/log-analytics-scom-healthcheck-dashboard-02.png" alt-text="focus area" border="false":::<br>
7. You can take corrective actions suggested in **Suggested Actions**. When the item has been addressed, later assessments will record that recommended actions were taken and your compliance score will increase. Corrected items appear as **Passed Objects**. ## Ignore recommendations
If you have recommendations that you want to ignore, you can create a text file
> > `SCOMAssessmentRecommendationRecommendation | where RecommendationResult == "Failed" | sort by Computer asc | project Computer, RecommendationId, Recommendation`
- Here's a screenshot showing the Log Search query:<br><br> ![log search](./media/scom-assessment/scom-log-search.png)<br>
+ Here's a screenshot showing the Log Search query:
+ <!-- convertborder later -->
+ :::image type="content" source="./media/scom-assessment/scom-log-search.png" lightbox="./media/scom-assessment/scom-log-search.png" alt-text="log search" border="false":::<br>
3. Choose recommendations that you want to ignore. You'll use the values for RecommendationId in the next procedure.
azure-monitor Troubleshoot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/troubleshoot-workbooks.md
The number of selected resources has a limit of 200, regardless of the number of
## Why don't I see all my subscriptions in the portal The portal will show data only for selected subscriptions on portal launch. To change what subscriptions are selected, go to the top right and select the notebook with a filter icon. This option will show the **Directory + subscriptions** tab.-
-![Screenshot of the section to select the directory + subscription.](./media/storage-insights-overview/fqa3.png)
+<!-- convertborder later -->
## What is time range
The default time granularity is set to automatic, it currently can't be changed
## How do I change the timespan/time range of the workbook step on my dashboard By default, the timespan/time range on your dashboard tile is set to 24 hours. To change this, select the ellipses in the top right, select **Customize tile data**, check the "override the dashboard time settings at the title level" box, and then pick a timespan using the dropdown menu. -
-![Screenshot showing the ellipses and the Customize this data section in the right corner of the tile.](./media/storage-insights-overview/fqa-data-settings.png)
-
-![Screenshot of the Configure tile settings, with the timespan dropdown to change the timespan/time range.](./media/storage-insights-overview/fqa-timespan.png)
+<!-- convertborder later -->
+<!-- convertborder later -->
## How do I change the title of the workbook or a workbook step I pinned to a dashboard The title of the workbook or workbook step that is pinned to a dashboard retains the same name it had in the workbook. To change the title, you must save your own copy of the workbook. Then you'll be able to name the workbook before you press save.-
-![Screenshot showing the save icon at the top of the workbook to save a copy of the workbook and to change the name.](./media/storage-insights-overview/fqa-change-workbook-name.png)
+<!-- convertborder later -->
To change the name of a step in your saved workbook, select edit under the step and then select the gear at the bottom of settings.-
-![Screenshot of the edit icon at the bottom of a workbook.](./media/storage-insights-overview/fqa-edit.png)
-![Screenshot of the settings icon at the bottom of a workbook.](./media/storage-insights-overview/fqa-change-name.png)
+<!-- convertborder later -->
+<!-- convertborder later -->
## Next steps
azure-monitor Vminsights Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-change-analysis.md
To onboard change analysis in VM insights, you must register the *Microsoft.Chan
## View change analysis Change analysis is available from the **Performance** or **Map** tab of VM insights by selecting the **Change** option.-
-[![Screenshot that shows investigating changes.](media/vminsights-change-analysis/investigate-changes-screenshot.png)](media/vminsights-change-analysis/investigate-changes-screenshot-zoom.png#lightbox)
+<!-- convertborder later -->
Select **Investigate Changes** to open the Application Change Analysis page filtered for the VM. Review the listed changes to see if there are any that could have caused the issue. If you're unsure about a particular change, look at the **Changed by** column to identify the person who made the change.-
-[![Screenshot that shows the Change details screen.](media/vminsights-change-analysis/change-details-screenshot.png)](media/vminsights-change-analysis/change-details-screenshot.png#lightbox)
+<!-- convertborder later -->
## Next steps - Learn more about [Application Change Analysis](../app/change-analysis.md).
azure-monitor Vminsights Configure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-configure-workspace.md
After the workspace is configured, you can use any of the available options to i
>The information described in this section also applies to the [Service Map solution](service-map.md). To access Log Analytics workspaces in the Azure portal, use the **Log Analytics workspaces** menu.-
-[![Screenshot that shows a Log Analytics workspace.](media/vminsights-configure-workspace/log-analytics-workspaces.png)](media/vminsights-configure-workspace/log-analytics-workspaces.png#lightbox)
+<!-- convertborder later -->
You can create a new Log Analytics workspace by using any of the following methods:
Before a Log Analytics workspace can be used with VM insights, it must have the
There are three options for configuring an existing workspace by using the Azure portal: - To configure a single workspace, on the **Azure Monitor** menu, select **Virtual Machines**. Select **Other onboarding options** and then select **Configure a workspace**. Select a subscription and a workspace and then select **Configure**.-
- [![Screenshot that shows configuring a workspace.](../vm/media/vminsights-enable-policy/configure-workspace.png)](../vm/media/vminsights-enable-policy/configure-workspace.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="../vm/media/vminsights-enable-policy/configure-workspace.png" lightbox="../vm/media/vminsights-enable-policy/configure-workspace.png" alt-text="Screenshot that shows configuring a workspace." border="false":::
- To configure multiple workspaces, on the **Monitor** menu, select **Virtual Machines**. Then select the **Workspace configuration** tab. Set the filter values to display a list of existing workspaces. Select the checkbox next to each workspace to enable it and then select **Configure selected**.-
- [![Screenshot that shows workspace configuration.](../vm/media/vminsights-enable-policy/workspace-configuration.png)](../vm/media/vminsights-enable-policy/workspace-configuration.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="../vm/media/vminsights-enable-policy/workspace-configuration.png" lightbox="../vm/media/vminsights-enable-policy/workspace-configuration.png" alt-text="Screenshot that shows workspace configuration." border="false":::
- When you enable VM insights on a single virtual machine or virtual machine scale set by using the Azure portal, you can select an existing workspace or create a new one. The VMInsights solution is installed in this workspace if it isn't already. You can then use this workspace for other agents.-
- [![Screenshot that shows enabling a single VM in the portal.](../vm/media/vminsights-enable-portal/enable-vminsights-vm-portal.png)](../vm/media/vminsights-enable-portal/enable-vminsights-vm-portal.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="../vm/media/vminsights-enable-portal/enable-vminsights-vm-portal.png" lightbox="../vm/media/vminsights-enable-portal/enable-vminsights-vm-portal.png" alt-text="Screenshot that shows enabling a single VM in the portal." border="false":::
### Resource Manager template The Azure Resource Manager templates for VM insights are provided in an archive file (.zip) that you can [download from our GitHub repo](https://aka.ms/VmInsightsARMTemplates). A template called **ConfigureWorkspace** configures a Log Analytics workspace for VM insights. You deploy this template by using any of the standard methods, including the following sample PowerShell and CLI commands.
azure-monitor Vminsights Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-policy.md
To assign a VM insights policy initiative to a subscription or management group
The **Assign initiative** screen appears.
- [![Screenshot that shows Assign initiative.](media/vminsights-enable-policy/assign-initiative.png)](media/vminsights-enable-policy/assign-initiative.png#lightbox)
+ :::image type="content" source="media/vminsights-enable-policy/assign-initiative.png" lightbox="media/vminsights-enable-policy/assign-initiative.png" alt-text="Screenshot that shows Assign initiative.":::
1. Configure the initiative assignment:
To assign a VM insights policy initiative to a subscription or management group
If you're assigning a legacy initiative, the workspace must have the *VMInsights* solution installed, as described in [Configure Log Analytics workspace for VM insights](vminsights-configure-workspace.md).
- [![Screenshot that shows a workspace.](media/vminsights-enable-policy/assignment-workspace.png)](media/vminsights-enable-policy/assignment-workspace.png#lightbox)
+ :::image type="content" source="media/vminsights-enable-policy/assignment-workspace.png" lightbox="media/vminsights-enable-policy/assignment-workspace.png" alt-text="Screenshot that shows a workspace.":::
> [!NOTE] > If you select a workspace that's not within the scope of the assignment, grant *Log Analytics Contributor* permissions to the policy assignment's principal ID. Otherwise, you might get a deployment failure like:
To see how many virtual machines exist in each of the management groups or subsc
:::image type="content" source="media/vminsights-enable-policy/other-onboarding-options.png" lightbox="media/vminsights-enable-policy/other-onboarding-options.png" alt-text="Screenshot that shows other onboarding options page of VM insights with the Enable using policy option."::: The **Azure Monitor for VMs Policy Coverage** page appears.-
- [![Screenshot that shows the VM insights Azure Monitor for VMs Policy Coverage page.](media/vminsights-enable-policy/manage-policy-page-01.png)](media/vminsights-enable-policy/manage-policy-page-01.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/vminsights-enable-policy/manage-policy-page-01.png" lightbox="media/vminsights-enable-policy/manage-policy-page-01.png" alt-text="Screenshot that shows the VM insights Azure Monitor for VMs Policy Coverage page." border="false":::
The following table describes the compliance information presented on the **Azure Monitor for VMs Policy Coverage** page.
To see how many virtual machines exist in each of the management groups or subsc
| **Compliance State** | **Compliant**: All VMs in the scope have Azure Monitor Agent or the Log Analytics agent and Dependency agent deployed to them, or any new VMs in the scope haven't yet been evaluated.<br>**Noncompliant**: There are VMs that aren't enabled and might need remediation.<br>**Not Started**: A new assignment was added.<br>**Lock**: You don't have sufficient privileges to the management group.<br>**Blank**: No policy assigned. | 1. Select the ellipsis (**...**) > **View Compliance**.-
- [![Screenshot that shows View Compliance.](media/vminsights-enable-policy/view-compliance.png)](media/vminsights-enable-policy/view-compliance.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/vminsights-enable-policy/view-compliance.png" lightbox="media/vminsights-enable-policy/view-compliance.png" alt-text="Screenshot that shows View Compliance." border="false":::
The **Compliance** page appears. It lists assignments that match the specified filter and indicates whether they're compliant.
- [![Screenshot that shows Policy compliance for Azure VMs.](./media/vminsights-enable-policy/policy-view-compliance.png)](./media/vminsights-enable-policy/policy-view-compliance.png#lightbox)
+ :::image type="content" source="./media/vminsights-enable-policy/policy-view-compliance.png" lightbox="./media/vminsights-enable-policy/policy-view-compliance.png" alt-text="Screenshot that shows Policy compliance for Azure VMs.":::
1. Select an assignment to view its details. The **Initiative compliance** page appears. It lists the policy definitions in the initiative and whether each is in compliance.
- [![Screenshot that shows Compliance details.](media/vminsights-enable-policy/compliance-details.png)](media/vminsights-enable-policy/compliance-details.png#lightbox)
+ :::image type="content" source="media/vminsights-enable-policy/compliance-details.png" lightbox="media/vminsights-enable-policy/compliance-details.png" alt-text="Screenshot that shows Compliance details.":::
Policy definitions are considered noncompliant if:
To create a remediation task:
1. On the **Initiative compliance** page, select **Create Remediation Task**.
- [![Screenshot that shows Policy compliance details.](media/vminsights-enable-policy/policy-compliance-details.png)](media/vminsights-enable-policy/policy-compliance-details.png#lightbox)
+ :::image type="content" source="media/vminsights-enable-policy/policy-compliance-details.png" lightbox="media/vminsights-enable-policy/policy-compliance-details.png" alt-text="Screenshot that shows Policy compliance details.":::
The **New remediation task** page appears.
- [![Screenshot that shows the New remediation task page.](media/vminsights-enable-policy/new-remediation-task.png)](media/vminsights-enable-policy/new-remediation-task.png#lightbox)
+ :::image type="content" source="media/vminsights-enable-policy/new-remediation-task.png" lightbox="media/vminsights-enable-policy/new-remediation-task.png" alt-text="Screenshot that shows the New remediation task page.":::
1. Review **Remediation settings** and **Resources to remediate** and modify as necessary. Then select **Remediate** to create the task.
To create a remediation task:
To track the progress of remediation tasks, on the **Policy** menu, select **Remediation** and select the **Remediation tasks** tab.
-[![Screenshot that shows the Policy Remediation page for Monitor | Virtual Machines.](media/vminsights-enable-policy/remediation.png)](media/vminsights-enable-policy/remediation.png#lightbox)
## Next steps
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
The Map feature visualizes the VM dependencies by discovering running processes
Expand a VM to show process details and only those processes that communicate with the VM. The client group shows the count of front-end clients that connect into the VM. The server-port groups show the count of back-end servers the VM connects to. Expand a server-port group to see the detailed list of servers that connect over that port. When you select the VM, the **Properties** pane shows the VM's properties. Properties include system information reported by the operating system, properties of the Azure VM, and a doughnut chart that summarizes the discovered connections.-
-![Screenshot that shows the Properties pane.](./media/vminsights-maps/properties-pane-01.png)
+<!-- convertborder later -->
On the right side of the pane, select **Log Events** to show a list of data that the VM has sent to Azure Monitor. This data is available for querying. Select any record type to open the **Logs** page, where you see the results for that record type. You also see a preconfigured query that's filtered against the VM.-
-![Screenshot that shows the Log Events pane.](./media/vminsights-maps/properties-pane-logs-01.png)
+<!-- convertborder later -->
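The article notes that this data is available for querying. As an aside (not part of the original walkthrough), here's a minimal sketch of running the same kind of query programmatically, assuming the `@azure/monitor-query` and `@azure/identity` packages and hypothetical workspace ID and computer name values:

```ts
import { DefaultAzureCredential } from "@azure/identity";
import { Durations, LogsQueryClient, LogsQueryResultStatus } from "@azure/monitor-query";

// Hypothetical values; substitute your Log Analytics workspace ID and the VM's computer name.
const workspaceId = "<log-analytics-workspace-id>";
const query = `Heartbeat | where Computer == "my-vm" | summarize LastHeartbeat = max(TimeGenerated) by Computer`;

async function main(): Promise<void> {
  const client = new LogsQueryClient(new DefaultAzureCredential());
  const result = await client.queryWorkspace(workspaceId, query, { duration: Durations.oneHour });

  // Print the returned rows when the query succeeds.
  if (result.status === LogsQueryResultStatus.Success) {
    for (const table of result.tables) {
      console.log(table.name, table.rows);
    }
  }
}

main().catch(console.error);
```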
Close the **Logs** page and return to the **Properties** pane. There, select **Alerts** to view VM health-criteria alerts. The Map feature integrates with Azure alerts to show alerts for the selected server in the selected time range. The server displays an icon for current alerts, and the **Machine Alerts** pane lists the alerts.-
-![Screenshot that shows the Alerts pane.](./media/vminsights-maps/properties-pane-alerts-01.png)
+<!-- convertborder later -->
To make the Map feature display relevant alerts, create an alert rule that applies to a specific computer:
In the upper-right corner, the **Legend** option describes the symbols and roles
## Connection metrics

The **Connections** pane displays standard metrics for the selected connection from the VM over the TCP port. The metrics include response time, requests per minute, traffic throughput, and links.-
-![Screenshot that shows the Network connectivity charts on the Connections pane.](./media/vminsights-maps/map-group-network-conn-pane-01.png)
+<!-- convertborder later -->
### Failed connections

The map shows failed connections for processes and computers. A dashed red line indicates a client system is failing to reach a process or port. For systems that use the Dependency agent, the agent reports on failed connection attempts. The Map feature monitors a process by observing TCP sockets that fail to establish a connection. This failure could result from a firewall, a misconfiguration in the client or server, or an unavailable remote service.
-![Screenshot that shows a failed connection on the map.](./media/vminsights-maps/map-group-failed-connection-01.png)
Understanding failed connections can help you troubleshoot, validate migration, analyze security, and understand the overall architecture of the service. Failed connections are sometimes harmless, but they often point to a problem. Connections might fail, for example, when a failover environment suddenly becomes unreachable or when two application tiers can't communicate with each other after a cloud migration.

### Client groups

On the map, client groups represent client machines that connect to the mapped machine. A single client group represents the clients for an individual process or machine.
-![Screenshot that shows a client group on the map.](./media/vminsights-maps/map-group-client-groups-01.png)
To see the monitored clients and IP addresses of the systems in a client group, select the group. The contents of the group appear in the following image.
-![Screenshot that shows a client group's list of IP addresses on the map.](./media/vminsights-maps/map-group-client-group-iplist-01.png)
If the group includes monitored and unmonitored clients, you can select the appropriate section of the group's doughnut chart to filter the clients.

### Server-port groups

Server-port groups represent ports on servers that have inbound connections from the mapped machine. The group contains the server port and a count of the number of servers that have connections to that port. Select the group to see the individual servers and connections.
-![Screenshot that shows a server-port group on the map.](./media/vminsights-maps/map-group-server-port-groups-01.png)
If the group includes monitored and unmonitored servers, you can select the appropriate section of the group's doughnut chart to filter the servers.
To access VM insights directly from a VM:
The map visualizes the VM's dependencies by discovering running process groups and processes that have active network connections over a specified time range. By default, the map shows the last 30 minutes. If you want to see how dependencies looked in the past, you can query for historical time ranges of up to one hour. To run the query, use the **TimeRange** selector in the upper-left corner. You might run a query, for example, during an incident or to see the status before a change.-
-![Screenshot that shows the Map tab in the Monitoring Insights section of the Azure portal showing a diagram of the dependencies between virtual machines.](./media/vminsights-maps/map-direct-vm-01.png)
+<!-- convertborder later -->
## View a map from a virtual machine scale set
The map visualizes all instances in the scale set as a group node along with the
To load a map for a specific instance, first select that instance on the map. Then select the **ellipsis** button **(...**) and select **Load Server Map**. In the map that appears, you see process groups and processes that have active network connections over a specified time range. By default, the map shows the last 30 minutes. If you want to see how dependencies looked in the past, you can query for historical time ranges of up to one hour. To run the query, use the **TimeRange** selector. You might run a query, for example, during an incident or to see the status before a change.-
-![Screenshot that shows the Map tab in the Monitoring Insights section of the Azure portal showing a diagram of dependencies between virtual machine scale sets.](./media/vminsights-maps/map-direct-vmss-01.png)
+<!-- convertborder later -->
>[!NOTE]
>You can also access a map for a specific instance from the **Instances** view for your virtual machine scale set. In the **Settings** section, go to **Instances** > **Insights**.
In Azure Monitor, the Map feature provides a global view of your VMs and their d
1. In the Azure portal, select **Monitor**.
1. In the **Insights** section, select **Virtual Machines**.
1. Select the **Map** tab.-
- ![Screenshot that shows an Azure Monitor overview map of multiple VMs.](./media/vminsights-maps/map-multivm-azure-monitor-01.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/vminsights-maps/map-multivm-azure-monitor-01.png" lightbox="./media/vminsights-maps/map-multivm-azure-monitor-01.png" alt-text="Screenshot that shows an Azure Monitor overview map of multiple VMs." border="false":::
Choose a workspace by using the **Workspace** selector at the top of the page. If you have more than one Log Analytics workspace, choose the workspace that's enabled with the solution and that has VMs reporting to it.
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
VM insights supports Windows and Linux operating systems on:
VM insights provides a set of predefined workbooks that allow you to view trending of [collected performance data](vminsights-log-query.md#performance-records) over time. You can view this data in a single VM from the virtual machine directly, or you can use Azure Monitor to deliver an aggregated view of multiple VMs.
-![Screenshot that shows the VM insights perspective in the Azure portal.](media/vminsights-overview/vminsights-azmon-directvm.png)
## Pricing
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
To access from Azure Monitor:
1. In the Azure portal, select **Monitor**.
1. In the **Solutions** section, select **Virtual Machines**.
1. Select the **Performance** tab.-
-![Screenshot that shows a VM insights Performance Top N List view.](media/vminsights-performance/vminsights-performance-aggview-01.png)
+<!-- convertborder later -->
On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Health or Map.
Five capacity utilization charts are shown on the page:
Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the correct scope and view. Select the icon to the left of the pushpin icon on a chart to open the **Top N List** view. This list view shows the resource utilization for a performance metric by individual VM. It also shows which machine is trending the highest.-
-![Screenshot that shows a Top N List view for a selected performance metric.](media/vminsights-performance/vminsights-performance-topnlist-01.png)
+<!-- convertborder later -->
When you select the virtual machine, the **Properties** pane opens on the right side. It shows properties like system information reported by the operating system and the properties of the Azure VM. Selecting an option under the **Quick Links** section redirects you to that feature directly from the selected VM.-
-![Screenshot that shows a virtual machine Properties pane.](./media/vminsights-performance/vminsights-properties-pane-01.png)
+<!-- convertborder later -->
You can switch to the **Aggregated Charts** tab to view the performance metrics filtered by average or percentiles measured.-
-![Screenshot that shows a VM insights Performance Aggregate view.](./media/vminsights-performance/vminsights-performance-aggview-02.png)
+<!-- convertborder later -->
The following capacity utilization charts are provided:
To view the resource utilization by individual VM and see which machine is trend
>[!NOTE]
>The list can't show more than 500 machines at a time.
>-
-![Screenshot that shows a Top N List page example.](./media/vminsights-performance/vminsights-performance-topnlist-01.png)
+<!-- convertborder later -->
To filter the results on a specific virtual machine in the list, enter its computer name in the **Search by name** text box.
The following capacity utilization charts are provided:
* **Bytes Receive Rate**: Defaults show the average bytes received.

Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the performance detail view for the VM.-
-![Screenshot that shows VM insights Performance directly from the VM view.](./media/vminsights-performance/vminsights-performance-directvm-01.png)
+<!-- convertborder later -->
## View performance directly from an Azure virtual machine scale set
This page loads the Azure Monitor performance view scoped to the selected scale
Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the performance detail view for the VM.
-![Screenshot that shows VM insights Performance directly from the virtual machine scale set view.](./media/vminsights-performance/vminsights-performance-directvmss-01.png)
>[!NOTE]
>You can also access a detailed performance view for a specific instance from the **Instances** view for your scale set. Under the **Settings** section, go to **Instances** and select **Insights**.
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
If you don't see both the extensions for your operating system in the list of in
### Do you have connectivity issues?

For Windows machines, you can use the TestCloudConnectivity tool to identify connectivity issues. This tool is installed by default with the agent in the folder *%SystemDrive%\Program Files\Microsoft Monitoring Agent\Agent*. Run the tool from an elevated command prompt. It returns results and highlights where the test fails.-
-![Screenshot that shows the TestCloudConnectivity tool.](media/vminsights-troubleshoot/test-cloud-connectivity.png)
+<!-- convertborder later -->
### More agent troubleshooting
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-workbooks.md
A workbook is made up of sections that consist of independently editable charts,
1. Select a VM.
1. On the VM insights page, select the **Performance** or **Map** tab and then select **View Workbooks** from the link on the page. From the dropdown list, select **Go To Gallery**.-
- :::image type="content" source="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" lightbox="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" alt-text="Screenshot that shows a workbook dropdown list in V M insights.":::
+ <!-- convertborder later -->
+ :::image type="content" source="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" lightbox="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" alt-text="Screenshot that shows a workbook dropdown list in V M insights." border="false":::
The workbook gallery opens with prebuilt workbooks to help you get started.
A workbook is made up of sections that consist of independently editable charts,
## Edit workbook sections

Workbooks have two modes: editing and reading. A new workbook opens in editing mode. This mode shows all the content of the workbook, including any steps and parameters that are otherwise hidden. Reading mode presents a simplified report-style view. Reading mode allows you to abstract away the complexity that went into creating a report while still having the underlying mechanics only a few clicks away when needed for modification.-
-![Screenshot that shows the Virtual Machines Workbook section in Azure Monitor showing a new workbook in editing mode with editing controls highlighted.](media/vminsights-workbooks/workbook-new-workbook-editor-01.png)
+<!-- convertborder later -->
1. After you finish editing a section, select **Done Editing** in the lower-left corner of the section.
Adding headings, explanations, and commentary to your workbooks helps turn a set
To add a text section to your workbook, select **Add text** in the lower left of the workbook or section.

## Add query sections-
-![Screenshot that shows the Query section in workbooks.](media/vminsights-workbooks/005-workbook-query-section.png)
+<!-- convertborder later -->
To add a query section to your workbook, select **Add query** in the lower left of the workbook or section.
To include data from other Log Analytics workspaces or from a specific Applicati
### Advanced analytic query settings
-Each section has its own advanced settings, which are accessible via the settings ![Workbooks section editing controls](media/vminsights-workbooks/006-settings.png) icon located to the right of **Add parameters**.
-
-![Screenshot that shows the Advanced Settings dialog with the icon highlighted in the Virtual Machines Workbook section of Azure Monitor.](media/vminsights-workbooks/007-settings-expanded.png)
+Each section has its own advanced settings, which are accessible via the settings :::image type="content" source="media/vminsights-workbooks/006-settings.png" alt-text="Workbooks section editing controls"::: icon located to the right of **Add parameters**.
+<!-- convertborder later -->
| Setting | Description |
| - |:--|
Most of these settings are fairly intuitive, but to understand **Export a parame
One of the prebuilt workbooks, **TCP Traffic**, provides information on connection metrics from a VM. The first section of the workbook is based on log query data. The second section is also based on log query data, but selecting a row in the first table interactively updates the contents of the charts.-
-![Screenshot that shows the Virtual Machines section in Azure Monitor showing the prebuilt workbook TCP Traffic.](media/vminsights-workbooks/008-workbook-tcp-traffic.png)
+<!-- convertborder later -->
The behavior is possible through use of the **When an item is selected, export a parameter** advanced settings, which are enabled in the table's log query.-
-![Screenshot that shows the Advanced Settings dialog for a Virtual Machines workbook with the "When an item is selected, export a parameter" option checked.](media/vminsights-workbooks/009-settings-export.png)
+<!-- convertborder later -->
The second log query then utilizes the exported values when a row is selected to create a set of values that are used by the section heading and charts. If no row is selected, it hides the section heading and charts.
VMConnection
Metrics sections give you full access to incorporate Azure Monitor metrics data into your interactive reports. In VM insights, the prebuilt workbooks typically contain analytic query data rather than metric data. You can create workbooks with metric data, which allows you to take full advantage of the best of both features all in one place. You also have the ability to pull in metric data from resources in any of the subscriptions to which you have access. Here's an example of VM data being pulled into a workbook to provide a grid visualization of CPU performance.-
-![Screenshot that shows the metrics section of a virtual machine workbook in Azure Monitor. The C P U performance for each virtual machine is shown graphically.](media/vminsights-workbooks/010-metrics-grid.png)
+<!-- convertborder later -->
## Add parameter sections
The dropdown is populated by a log query or JSON. If the query returns one colum
If the column is a string type, null/empty string is considered false. Any other value is considered true. For single-selection dropdowns, the first value with a true value is used as the default selection. For multiple-selection dropdowns, all values with a true value are used as the default selected set. The items in the dropdown are shown in whatever order the query returned rows. Let's look at the parameters present in the Connections Overview report. Select the edit symbol next to **Direction**.-
-![Screenshot that shows the section for adding and editing report parameters in Azure Monitor. The Edit icon for the Direction parameter is selected.](media/vminsights-workbooks/011-workbook-using-dropdown.png)
+<!-- convertborder later -->
This action opens the **Edit Parameter** pane.-
-![Screenshot that shows the Edit Parameter pane. The Parameter name is Direction, the Parameter type is Drop down, and Get data from JSON is selected.](media/vminsights-workbooks/012-workbook-edit-parameter.png)
+<!-- convertborder later -->
The JSON lets you generate an arbitrary table populated with content. For example, the following JSON generates two values in the dropdown:
Perf
```
The query shows the following results:-
-![Screenshot that shows the Perf counter dropdown.](media/vminsights-workbooks/013-workbook-edit-parameter-perf-counters.png)
+<!-- convertborder later -->
Dropdown lists are powerful tools you can use to customize and create interactive reports.
Time range parameter types have 15 default ranges that go from five minutes to t
### Resource picker

The resource picker parameter type gives you the ability to scope your report to certain types of resources. An example of a prebuilt workbook that uses the resource picker type is the **Performance** workbook.-
-![Screenshot that shows the Workspaces dropdown.](media/vminsights-workbooks/014-workbook-edit-parameter-workspaces.png)
+<!-- convertborder later -->
## Save and share workbooks with your team
communication-services Send Email Smtp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-smtp/send-email-smtp.md
+
+ Title: How to use SMTP to send an email with Azure Communication Services.
+
+description: Learn about how to use SMTP to send emails to Email Communication Services.
+++ Last updated : 10/18/2023++
+zone_pivot_groups: acs-smtp-sending-method
+
+# Quickstart: Send email with SMTP
+
+In this quickstart, you learn how to send email by using SMTP.
++
communication-services Smtp Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-smtp/smtp-authentication.md
+
+ Title: How to create authentication credentials for sending emails using SMTP
+
+description: Learn about how to use a service principal to create authentication credentials for sending emails using SMTP.
+++ Last updated : 10/18/2023++++
+# Quickstart: How to create authentication credentials for sending emails using SMTP
+In this quickstart, you learn how to use a Microsoft Entra application to create the authentication credentials for sending email over SMTP with Azure Communication Services.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Communication Email Resource created and ready with a provisioned domain [Get started with Creating Email Communication Resource](../create-email-communication-resource.md)
+- An active Azure Communication Services Resource connected with Email Domain and a Connection String. [Get started by Connecting Email Resource with a Communication Resource](../connect-email-communication-resource.md)
+- An Entra application with access to the Azure Communication Services Resource. [Register an application with Microsoft Entra ID and create a service principal](../../../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-microsoft-entra-id-and-create-a-service-principal)
+- A client secret for the Entra application with access to the Azure Communication Service Resource. [Create a new client secret](../../../../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-client-secret)
+
+## Using a Microsoft Entra application with access to the Azure Communication Services Resource for SMTP
+
+Application developers who build apps that send email over SMTP need to implement secure, modern authentication. Azure Communication Services supports this by using Microsoft Entra application service principals. By combining the Azure Communication Services resource information with the Entra application's service principal, the SMTP service authenticates with Microsoft Entra ID on the user's behalf to ensure secure and seamless email transmission.
+
+### Creating a custom email role for the Entra application
+
+The Entra application must be assigned a role with both the **Microsoft.Communication/CommunicationServices/Read** and the **Microsoft.Communication/EmailServices/write** permissions on the Azure Communication Service Resource. This can be done either by using the **Contributor** role, or by creating a **custom role**. Follow these steps to create a custom role by cloning an existing role.
+
+1. In the portal, navigate to the subscription, resource group, or Azure Communication Services resource where you want the custom role to be assignable, and then open **Access control (IAM)**.
+ :::image type="content" source="../media/smtp-custom-role-iam.png" alt-text="Screenshot that shows Access control.":::
+1. Click the **Roles** tab to see a list of all the built-in and custom roles.
+1. Search for a role you want to clone such as the Reader role.
+1. At the end of the row, click the ellipsis (...) and then click **Clone**.
+ :::image type="content" source="../media/smtp-custom-role-clone.png" alt-text="Screenshot that shows cloning a role.":::
+1. Click the **Basics** tab and give a name to the new role.
+ :::image type="content" source="../media/smtp-custom-role-basics.png" alt-text="Screenshot that shows creating a name for a new custom role.":::
+1. Click the **Permissions** tab and click **Add permissions**. Search for **Microsoft.Communication** and select **Azure Communication Services**
+ :::image type="content" source="../media/smtp-custom-role-permissions.png" alt-text="Screenshot that shows adding permissions for a new custom role.":::
+1. Select the **Microsoft.Communication/CommunicationServices** **Read** and the **Microsoft.Communication/EmailServices** **Write** permissions. Click **Add**.
+ :::image type="content" source="../media/smtp-custom-role-add-permissions.png" alt-text="Screenshot that shows adding Azure Communication Services' permissions.":::
+1. Review the permissions for the new role. Click **Review + create** and then **Create** on the next page.
+ :::image type="content" source="../media/smtp-custom-role-review.png" alt-text="Screenshot that shows reviewing the new custom role.":::
+
+When assigning the Entra application a role for the Azure Communication Services Resource, the new custom role will be available. For more information on creating custom roles, see [Create or update Azure custom roles using the Azure portal](../../../../role-based-access-control/custom-roles-portal.md)
+
+### Assigning the custom email role to the Entra application
+1. In the portal, navigate to the subscription, resource group, or Azure Communication Service Resource where you want the custom role to be assignable and then open **Access control (IAM)**.
+ :::image type="content" source="../media/smtp-custom-role-iam.png" alt-text="Screenshot that shows Access control.":::
+1. Click **+Add** and then select **Add role assignment**.
+ :::image type="content" source="../media/email-smtp-add-role-assignment.png" alt-text="Screenshot that shows selecting Add role assignment.":::
+1. On the **Role** tab, select the custom role created for sending emails using SMTP and click **Next**.
+ :::image type="content" source="../media/email-smtp-select-custom-role.png" alt-text="Screenshot that shows selecting the custom role.":::
+1. On the **Members** tab, choose **User, group, or service principal** and then click **+Select members**.
+ :::image type="content" source="../media/email-smtp-select-members.png" alt-text="Screenshot that shows choosing select members.":::
+1. Use the search box to find the **Entra** application that you'll use for authentication and select it. Then click **Select**.
+ :::image type="content" source="../media/email-smtp-select-entra.png" alt-text="Screenshot that shows selecting the Entra application.":::
+1. After confirming the selection, click **Next**.
+ :::image type="content" source="../media/email-smtp-select-review.png" alt-text="Screenshot that shows reviewing the assignment.":::
+1. After confirming the scope and members, click **Review + assign**.
+ :::image type="content" source="../media/email-smtp-select-assign.png" alt-text="Screenshot that shows assigning the custom role.":::
+
+### Creating the SMTP credentials from the Entra application information
+#### SMTP Authentication Username
+Azure Communication Services allows the credentials for an Entra application to be used as the SMTP username and password. The username consists of three pipe-delimited parts.
+1. The Azure Communication Service Resource name.
+ :::image type="content" source="../media/email-smtp-resource-name.png" alt-text="Screenshot that shows finding the resource name.":::
+1. The Entra Application ID.
+ :::image type="content" source="../media/email-smtp-entra-details.png" alt-text="Screenshot that shows finding the Entra Application ID.":::
+1. The Entra Tenant ID.
+ :::image type="content" source="../media/email-smtp-entra-tenant.png" alt-text="Screenshot that shows finding the Entra Tenant ID.":::
+
+**Format:**
+```
+username: <Azure Communication Services Resource name>|<Entra Application ID>|<Entra Tenant ID>
+```
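+
+For illustration only, the following TypeScript sketch assembles the username from the three values described above. The resource name, application ID, and tenant ID shown are hypothetical placeholders.
+
+```ts
+// Hypothetical placeholder values; substitute your own resource name, Entra application (client) ID, and tenant ID.
+const resourceName = "contoso-acs-resource";
+const entraApplicationId = "00000000-0000-0000-0000-000000000000";
+const entraTenantId = "11111111-1111-1111-1111-111111111111";
+
+// The SMTP username is the three values joined with the pipe character, matching the format above.
+const smtpUsername = [resourceName, entraApplicationId, entraTenantId].join("|");
+console.log(smtpUsername);
+```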
+#### SMTP Authentication Password
+The password is one of the Entra application's client secrets.
+ :::image type="content" source="../media/email-smtp-entra-secret.png" alt-text="Screenshot that shows finding the Entra client secret.":::
+
+### Requirements for SMTP AUTH client submission
+
+- **Authentication**: Username and password authentication is supported using Entra application details as the credentials. The Azure Communication Services SMTP service will use the Entra application details to get an access token on behalf of the user and use that to submit the email. Because the Entra token isn't cached, access can be revoked immediately by either changing the Entra application client secret or by changing the access controls for the Azure Communication Services Resource.
+- **Azure Communication Service**: An Azure Communication Services Resource with a connected Azure Communication Email Resource and domain is required.
+- **Transport Layer Security (TLS)**: Your device must be able to use TLS version 1.2 and above.
+- **Port**: Port 587 (recommended) or port 25 is required and must be unblocked on your network. Some network firewalls or ISPs block ports, especially port 25, because that's the port that email servers use to send mail.
+- **DNS**: Use the DNS name smtp.azurecomm.net. Don't use an IP address for the server, because IP addresses aren't supported.
+
+### How to set up SMTP AUTH client submission
+
+Enter the following settings directly on your device or in the application as its guide instructs (it might use different terminology than this article). Provided your scenario aligns with the prerequisites for SMTP AUTH client submission, these settings allow you to send emails from your device or application by using SMTP commands. An illustrative sketch follows the settings table.
+
+| Device or Application setting | Value |
+|--|--|
+|Server / smart host | smtp.azurecomm.net |
+|Port |Port 587 (recommended) or port 25|
+|TLS / StartTLS | Enabled|
+|Username and password | Enter the Entra application credentials from an application with access to the Azure Communication Services Resource |
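+
+The following sketch illustrates these settings in code. It assumes the third-party `nodemailer` package, which this article doesn't prescribe, and hypothetical credential and domain values; treat it as an example under those assumptions rather than a required implementation.
+
+```ts
+import nodemailer from "nodemailer";
+
+// Hypothetical values; build the username from your resource name, Entra application ID, and tenant ID.
+const smtpUsername = "<resource-name>|<entra-application-id>|<entra-tenant-id>";
+const smtpPassword = "<entra-application-client-secret>";
+
+// Port 587 with StartTLS, matching the settings in the table above.
+const transporter = nodemailer.createTransport({
+  host: "smtp.azurecomm.net",
+  port: 587,
+  secure: false,      // StartTLS is negotiated after the connection is established
+  requireTLS: true,
+  auth: { user: smtpUsername, pass: smtpPassword },
+});
+
+async function main(): Promise<void> {
+  await transporter.sendMail({
+    from: "donotreply@<your-provisioned-domain>", // hypothetical sender address from your verified domain
+    to: "recipient@example.com",
+    subject: "Test message over Azure Communication Services SMTP",
+    text: "Sent using SMTP AUTH client submission.",
+  });
+}
+
+main().catch(console.error);
+```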
communication-services Calling Widget Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-overview.md
- Title: Get started with a click to call experience using Azure Communication Services-
-description: Learn how to create a Calling Widget widget experience with the Azure Communication Services CallComposite to facilitate click to call.
----- Previously updated : 06/05/2023----
-# Get started with a click to call experience using Azure Communication Services
--
-![Home page of Calling Widget sample app](../media/calling-widget/sample-app-splash-widget-open.png)
-
-This project aims to guide developers on creating a seamless click to call experience using the Azure Communication UI Library.
-
-As per your requirements, you may need to offer your customers an easy way to reach out to you without any complex setup.
-
-Click to call is a simple yet effective concept that facilitates instant interaction with customer support, financial advisors, and other customer-facing teams. The goal of this tutorial is to assist you in making interactions with your customers just a click away.
-
-If you wish to try it out, you can download the code from [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-click-to-call).
-
-Following this tutorial will:
--- Allow you to control your customers' audio and video experience depending on your customer scenario-- Move your customers' call into a new window so they can continue browsing while on the call--
-This tutorial is broken down into three parts:
--- Creating your widget-- Using post messaging to start a calling experience in a new window-- Embedding your calling experience-
-## Prerequisites
--- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).-- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.--
-### Set up the project
-
-Only use this step if you are creating a new application.
-
-To set up the React app, we use the `create-react-app` command line tool. This tool
-creates an easy-to-run, simple React application powered by TypeScript.
-
-```bash
-# Create an Azure Communication Services App powered by React.
-npx create-react-app ui-library-click-to-call-app --template typescript
-
-# Change to the directory of the newly created App.
-cd ui-library-click-to-call-app
-```
-
-### Get your dependencies
-
-Then you need to update the dependency array in the `package.json` to include some beta and alpha packages from Azure Communication Services for this to work:
-```json
-"@azure/communication-calling": "1.14.1-beta.1",
-"@azure/communication-chat": "1.3.2-beta.2",
-"@azure/communication-react": "1.7.0-beta.1",
-"@azure/communication-calling-effects": "1.0.1",
-"@fluentui/react-icons": "~2.0.203",
-"@fluentui/react": "~8.98.3",
-```
-
-Once you run these commands, you're all set to start working on your new project. In this tutorial, we are modifying the files in the `src` directory.
--
-## Initial app setup
-
-To get started, we replace the provided `App.tsx` content with a main page that will:
--- Store all of the Azure Communication information that we need to create a CallAdapter to power our Calling experience-- Control the different pages of our application-- Register the different fluent icons we use in the UI library and some new ones for our purposes-
-`src/App.tsx`
-
-```ts
-// imports needed
-import { CallAdapterLocator } from '@azure/communication-react';
-import './App.css';
-import { useEffect, useMemo, useState } from 'react';
-import { CommunicationIdentifier, CommunicationUserIdentifier } from '@azure/communication-common';
-import { Spinner, Stack, initializeIcons, registerIcons } from '@fluentui/react';
-import { CallAdd20Regular, Dismiss20Regular } from '@fluentui/react-icons';
-```
-
-```ts
-type AppPages = "calling-widget" | "new-window-call";
-
-registerIcons({
- icons: { dismiss: <Dismiss20Regular />, callAdd: <CallAdd20Regular /> },
-});
-initializeIcons();
-function App() {
- const [page, setPage] = useState<AppPages>("calling-widget");
-
- /**
- * Token for local user.
- */
- const token = "<Enter your Azure Communication Services token here>";
-
- /**
- * User identifier for local user.
- */
- const userId: CommunicationIdentifier = {
- communicationUserId: "<Enter your user Id>",
- };
-
- /**
- * This decides where the call will be going. This supports many different calling modalities in the Call Composite.
- *
- * - Teams meeting locator: {meetingLink: 'url to join link for a meeting'}
- * - Azure Communication Services group call: {groupId: 'GUID that defines the call'}
- * - Azure Communication Services Rooms call: {roomId: 'guid that represents a rooms call'}
- * - Teams adhoc, Azure communications 1:n, PSTN calls all take a participants locator: {participantIds: ['Array of participant id's to call']}
- *
- * You can call teams voice apps like a Call queue with the participants locator.
- */
- const locator: CallAdapterLocator = {
- participantIds: ["<Enter Participant Id's here>"],
- };
-
- /**
- * The phone number needed from your Azure Communication Services resource to start a PSTN call. Can be created under the phone numbers.
- *
- * For more information on phone numbers and Azure Communication Services go to this link: https://learn.microsoft.com/en-us/azure/communication-services/concepts/telephony/plan-solution
- *
- * This can be left alone if not making a PSTN call.
- */
- const alternateCallerId = "<Enter your alternate CallerId here>";
-
- switch (page) {
- case "calling-widget": {
- return (
- <Stack verticalAlign='center' style={{ height: "100%", width: "100%" }}>
- <Spinner
- label={"Getting user credentials from server"}
- ariaLive="assertive"
- labelPosition="top"
- />
- </Stack>
- );
- }
- case "new-window-call": {
- return (
- <Stack verticalAlign='center' style={{ height: "100%", width: "100%" }}>
- <Spinner
- label={"Getting user credentials from server"}
- ariaLive="assertive"
- labelPosition="top"
- />
- </Stack>
- );
- }
- default: {
- return <>Something went wrong!</>
- }
- }
-}
-
-export default App;
-```
-In this snippet we register two new icons `<Dismiss20Regular/>` and `<CallAdd20Regular>`. These new icons are used inside the widget component that we are creating later.
-
-### Running the app
-
-We can then test to see that the basic application is working by running:
-
-```bash
-# Install the new dependencies
-npm install
-
-# run the React app
-npm run start
-```
-
-Once the app is running, you can see it at `http://localhost:3000` in your browser. You should see a little spinner with the label `Getting user credentials from server` as a test message.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Part 1: Creating your widget](./calling-widget-tutorial-part-1-creating-your-widget.md)
communication-services Calling Widget Tutorial Part 1 Creating Your Widget https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial-part-1-creating-your-widget.md
- Title: Part 1 creating your widget-
-description: Learn how to construct your own custom widget for your click to call experience - Part 1.
----- Previously updated : 06/05/2023----
-# Part 1 creating your widget
--
-To begin, we're going to make a new component. This component will serve as the widget for initiating the click to call experience.
-
-We are using our own widget setup for this tutorial but you can expand the functionality to suit your needs. For us, we have the widget perform the following actions:
-- Display a custom logo. This can be replaced with another image or branding of your choosing. Feel free to download the image from the code if you would like to use our image.-- Let the user decide if they want to include video in the call.-- Obtain the user's consent regarding the possibility of the call being recorded.-
-First step will be to create a new directory called `src/components`. Within this directory, we're going to create a new file named `CallingWidgetComponent.tsx`. We'll then proceed to set up the widget component with the following imports:
-
-`CallingWidgetComponent.tsx`
-```ts
-// imports needed
-import { IconButton, PrimaryButton, Stack, TextField, useTheme, Checkbox, Icon } from '@fluentui/react';
-import React, { useEffect, useState } from 'react';
-```
-
-Now let's introduce an interface containing the props that the component uses.
-
-`CallingWidgetComponent.tsx`
-```ts
-export interface clickToCallComponentProps {
- /**
- * Handler to start a new call.
- */
- onRenderStartCall: () => void;
- /**
- * Custom render function for displaying logo.
- * @returns
- */
- onRenderLogo?: () => JSX.Element;
- /**
- * Handler to set displayName for the user in the call.
- * @param displayName
- * @returns
- */
- onSetDisplayName?: (displayName: string | undefined) => void;
- /**
- * Handler to set whether to use video in the call.
- */
- onSetUseVideo?: (useVideo: boolean) => void;
-}
-```
-
-Each callback controls different behaviors for the calling experience.
--- `onRenderStartCall` - This callback is used to trigger any handlers in your app to do things like create a new window for your click to call experience.-- `onRenderLogo` - This is used as a rendering callback to have a custom logo or image render inside the widget when getting user information.-- `onSetDisplayName` - We use this callback to set the `displayName` of the participant when they're calling your support center.-- `onSetUseVideo` - Finally, this callback is used to control for our tutorial whether the user will have camera and screen sharing controls (more on that later).-
-Finally, we add the body of the component.
-
-`src/views/CallingWidgetComponent.tsx`
-```ts
-/**
- * Widget for Calling Widget
- * @param props
- */
-export const CallingWidgetComponent = (
- props: clickToCallComponentProps
-): JSX.Element => {
- const { onRenderStartCall, onRenderLogo, onSetDisplayName, onSetUseVideo } =
- props;
-
- const [widgetState, setWidgetState] = useState<"new" | "setup">();
- const [displayName, setDisplayName] = useState<string>();
- const [consentToData, setConsentToData] = useState<boolean>(false);
-
- const theme = useTheme();
-
- useEffect(() => {
- if (widgetState === "new" && onSetUseVideo) {
- onSetUseVideo(false);
- }
- }, [widgetState, onSetUseVideo]);
-
- /** widget template for when widget is open, put any fields here for user information desired */
- if (widgetState === "setup" && onSetDisplayName && onSetUseVideo) {
- return (
- <Stack
- styles={clicktoCallSetupContainerStyles(theme)}
- tokens={{ childrenGap: "1rem" }}
- >
- <IconButton
- styles={collapseButtonStyles}
- iconProps={{ iconName: "Dismiss" }}
- onClick={() => setWidgetState("new")}
- />
- <Stack tokens={{ childrenGap: "1rem" }} styles={logoContainerStyles}>
- <Stack style={{ transform: "scale(1.8)" }}>
- {onRenderLogo && onRenderLogo()}
- </Stack>
- </Stack>
- <TextField
- label={"Name"}
- required={true}
- placeholder={"Enter your name"}
- onChange={(_, newValue) => {
- setDisplayName(newValue);
- }}
- />
- <Checkbox
- styles={checkboxStyles(theme)}
- label={
- "Use video - Checking this box will enable camera controls and screen sharing"
- }
- onChange={(_, checked?: boolean | undefined) => {
- onSetUseVideo(!!checked);
- }}
- ></Checkbox>
- <Checkbox
- required={true}
- styles={checkboxStyles(theme)}
- label={
- "By checking this box you are consenting that we collect data from the call for customer support reasons"
- }
- onChange={(_, checked?: boolean | undefined) => {
- setConsentToData(!!checked);
- }}
- ></Checkbox>
- <PrimaryButton
- styles={startCallButtonStyles(theme)}
- onClick={() => {
- if (displayName && consentToData) {
- onSetDisplayName(displayName);
- onRenderStartCall();
- }
- }}
- >
- StartCall
- </PrimaryButton>
- </Stack>
- );
- }
-
- /** default waiting state for the widget */
- return (
- <Stack
- horizontalAlign="center"
- verticalAlign="center"
- styles={clickToCallContainerStyles(theme)}
- onClick={() => {
- setWidgetState("setup");
- }}
- >
- <Stack
- horizontalAlign="center"
- verticalAlign="center"
- style={{
- height: "4rem",
- width: "4rem",
- borderRadius: "50%",
- background: theme.palette.themePrimary,
- }}
- >
- <Icon iconName="callAdd" styles={callIconStyles(theme)} />
- </Stack>
- </Stack>
- );
-};
-```
-
-### Time for some styles
-
-Once you have your component, you need some styles to give it a visually appealing look. For this, we'll create a new folder named `src/styles`. Within this folder we'll create a new file called `CallingWidgetComponent.styles.ts` and add the following styles.
-
-`src/styles/CallingWidgetComponent.styles.ts`
-
-```ts
-// needed imports
-import { IButtonStyles, ICheckboxStyles, IIconStyles, IStackStyles, Theme } from '@fluentui/react';
-```
-`CallingWidgetComponent.styles.ts`
-```ts
-export const checkboxStyles = (theme: Theme): ICheckboxStyles => {
- return {
- label: {
- color: theme.palette.neutralPrimary,
- },
- };
-};
-
-export const clickToCallContainerStyles = (theme: Theme): IStackStyles => {
- return {
- root: {
- width: "5rem",
- height: "5rem",
- padding: "0.5rem",
- boxShadow: theme.effects.elevation16,
- borderRadius: "50%",
- bottom: "1rem",
- right: "1rem",
- position: "absolute",
- overflow: "hidden",
- cursor: "pointer",
- ":hover": {
- boxShadow: theme.effects.elevation64,
- },
- },
- };
-};
-
-export const clicktoCallSetupContainerStyles = (theme: Theme): IStackStyles => {
- return {
- root: {
- width: "18rem",
- minHeight: "20rem",
- maxHeight: "25rem",
- padding: "0.5rem",
- boxShadow: theme.effects.elevation16,
- borderRadius: theme.effects.roundedCorner6,
- bottom: 0,
- right: "1rem",
- position: "absolute",
- overflow: "hidden",
- cursor: "pointer",
- },
- };
-};
-
-export const callIconStyles = (theme: Theme): IIconStyles => {
- return {
- root: {
- paddingTop: "0.2rem",
- color: theme.palette.white,
- transform: "scale(1.6)",
- },
- };
-};
-
-export const startCallButtonStyles = (theme: Theme): IButtonStyles => {
- return {
- root: {
- background: theme.palette.themePrimary,
- borderRadius: theme.effects.roundedCorner6,
- borderColor: theme.palette.themePrimary,
- },
- textContainer: {
- color: theme.palette.white,
- },
- };
-};
-
-export const logoContainerStyles: IStackStyles = {
- root: {
- margin: "auto",
- padding: "0.2rem",
- height: "5rem",
- width: "10rem",
- zIndex: 0,
- },
-};
-
-export const collapseButtonStyles: IButtonStyles = {
- root: {
- position: "absolute",
- top: "0.2rem",
- right: "0.2rem",
- zIndex: 1,
- },
-};
-```
-
-These styles should already be added to the widget as seen in the snippet earlier. If you added the snippet as is, these styles just need importing into the `CallingWidgetComponent.tsx` file.
-
-`CallingWidgetComponent.tsx`
-```ts
-
-// add to other imports
-import {
- clicktoCallSetupContainerStyles,
- checkboxStyles,
- startCallButtonStyles,
- clickToCallContainerStyles,
- callIconStyles,
- logoContainerStyles,
- collapseButtonStyles
-} from '../styles/CallingWidgetComponent.styles';
-
-```
-
-### Adding the widget to the app
-
-Now we create a new folder `src/views` and add a new file for one of our pages `CallingWidgetScreen.tsx`. This screen acts as our home page for the app where the user can start a new call.
-
-We want to add the following props to the page:
-
-`CallingWidgetScreen.tsx`
-
-```ts
-export interface CallingWidgetPageProps {
- token: string;
- userId:
- | CommunicationUserIdentifier
- | MicrosoftTeamsUserIdentifier;
- callLocator: CallAdapterLocator;
- alternateCallerId?: string;
-}
-```
-
-These properties are fed by the values that we set in `App.tsx`. We'll use these props to make post messages to the app when we want to start a call in a new window (More on this later).
-
-Next, lets add the page content:
-
-`CallingWidgetScreen.tsx`
-```ts
-// imports needed
-import { CommunicationUserIdentifier, MicrosoftTeamsUserIdentifier } from '@azure/communication-common';
-import { Stack, Text } from '@fluentui/react';
-import React, { useCallback, useEffect, useMemo, useState } from 'react';
-import { CallingWidgetComponent } from '../components/CallingWidgetComponent';
-import { CallAdapterLocator } from '@azure/communication-react';
-import hero from '../hero.svg';
-```
-```ts
-export const CallingWidgetScreen = (props: CallingWidgetPageProps): JSX.Element => {
- const { token, userId, callLocator, alternateCallerId } = props;
-
- const [userDisplayName, setUserDisplayName] = useState<string>();
- const [useVideo, setUseVideo] = useState<boolean>(false);
- // we also want to make this memoized version of the args for the new window.
- const adapterParams = useMemo(() => {
- const args = {
- userId: userId as CommunicationUserIdentifier,
- displayName: userDisplayName ?? "",
- token,
- locator: callLocator,
- alternateCallerId,
- };
- return args;
- }, [userId, userDisplayName, token, callLocator, alternateCallerId]);
-
- return (
- <Stack
- style={{ height: "100%", width: "100%", padding: "3rem" }}
- tokens={{ childrenGap: "1.5rem" }}
- >
- <Stack style={{ margin: "auto" }}>
- <Stack
- style={{ padding: "3rem" }}
- horizontal
- tokens={{ childrenGap: "2rem" }}
- >
- <Text style={{ marginTop: "auto" }} variant="xLarge">
- Welcome to a Calling Widget sample
- </Text>
- <img
- style={{ width: "7rem", height: "auto" }}
- src={hero}
- alt="kcup logo"
- />
- </Stack>
-
- <Text>
- Welcome to a Calling Widget sample for the Azure Communication Services UI
- Library. Sample has the ability to:
- </Text>
- <ul>
- <li>
- Adhoc call teams users with a tenant set that allows for external
- calls
- </li>
- <li>Joining Teams interop meetings as an Azure Communication Services user</li>
- <li>Make a calling Widget PSTN call to a help phone line</li>
- <li>Join an Azure Communication Services group call</li>
- </ul>
- <Text>
- As a user all you need to do is click the widget below, enter your
- display name for the call - this will act as your caller id, and
- action the <b>start call</b> button.
- </Text>
- </Stack>
- <Stack
- horizontal
- tokens={{ childrenGap: "1.5rem" }}
- style={{ overflow: "hidden", margin: "auto" }}
- >
- <CallingWidgetComponent
- onRenderStartCall={() => {}}
- onRenderLogo={() => {
- return (
- <img
- style={{ height: "4rem", width: "4rem", margin: "auto" }}
- src={hero}
- alt="logo"
- />
- );
- }}
- onSetDisplayName={setUserDisplayName}
- onSetUseVideo={setUseVideo}
- />
- </Stack>
- </Stack>
- );
-};
-```
-This page provides general information on the current capabilities of our calling experiences, along with the addition of our previously created widget component.
-
-To integrate the widget screen, we simply update the existing `'calling-widget'` case in the root of the app `App.tsx`, by adding the new view.
-
-`App.tsx`
-```ts
-// add this with the other imports
-
-import { CallingWidgetScreen } from './views/CallingWidgetScreen';
-
-```
-
-```ts
-
- case 'calling-widget': {
- if (!token || !userId || !locator) {
- return (
- <Stack verticalAlign='center' style={{height: '100%', width: '100%'}}>
- <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" />;
- </Stack>
- )
- }
- return <CallingWidgetScreen token={token} userId={userId} callLocator={locator} alternateCallerId={alternateCallerId}/>;
-}
-
-```
-
-Once you have set the arguments defined in `App.tsx`, run the app with `npm run start` to see the changes:
-
-![Screenshot of calling widget sample app home page widget closed](../media/calling-widget/sample-app-splash-widget-closed.png)
-
-Then when you action the widget button, you should see:
-
-![Screenshot of calling widget sample app home page widget open](../media/calling-widget/sample-app-splash-widget-open.png)
-
-Yay! We have made the control surface for the widget! Next, we'll discuss what we need to add to make this widget start a call in a new window.
-
-> [!div class="nextstepaction"]
-> [Part 2: Creating a new window calling experience](./calling-widget-tutorial-part-2-creating-new-window-experience.md)
communication-services Calling Widget Tutorial Part 2 Creating New Window Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial-part-2-creating-new-window-experience.md
- Title: Part 2 creating a new window calling experience-
-description: Learn how to deal with post messaging and React to create a new window calling experience with the CallComposite - Part 2.
----- Previously updated : 06/05/2023----
-# Part 2 creating a new window calling experience
--
-Now that we have a running application with our widget on the home page, we'll talk about starting the calling experience for your users in a new window. This scenario lets your customers keep browsing while still seeing your call in a new window, which can be useful, for example, when your users use video and screen sharing.
-
-To begin, we'll create a new view in the `src/views` folder called `NewWindowCallScreen.tsx`. This new screen is used by the `App.tsx` file to start a new call with the arguments provided to it by using our `CallComposite`. If desired, the `CallComposite` can be swapped with a stateful client and UI component experience, but that isn't covered in this tutorial. For more information, see our [storybook documentation](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-statefulcallclient--page) about the stateful client.
-
-`src/views/NewWindowCallScreen.tsx`
-```ts
-// imports needed
-import { CommunicationUserIdentifier, AzureCommunicationTokenCredential } from '@azure/communication-common';
-import {
- CallAdapter,
- CallAdapterLocator,
- CallComposite,
- useAzureCommunicationCallAdapter
-} from '@azure/communication-react';
-import { Spinner, Stack } from '@fluentui/react';
-import React, { useMemo } from 'react';
-```
-```ts
-export const NewWindowCallScreen = (props: {
- adapterArgs: {
- userId: CommunicationUserIdentifier;
- displayName: string;
- token: string;
- locator: CallAdapterLocator;
- alternateCallerId?: string;
- };
- useVideo: boolean;
-}): JSX.Element => {
- const { adapterArgs, useVideo } = props;
-
- const credential = useMemo(() => {
- try {
- return new AzureCommunicationTokenCredential(adapterArgs.token);
- } catch {
- console.error("Failed to construct token credential");
- return undefined;
- }
- }, [adapterArgs.token]);
-
- const args = useMemo(() => {
- return {
- userId: adapterArgs.userId,
- displayName: adapterArgs.displayName,
- credential,
- token: adapterArgs.token,
- locator: adapterArgs.locator,
- alternateCallerId: adapterArgs.alternateCallerId,
- };
- }, [
- adapterArgs.userId,
- adapterArgs.displayName,
- credential,
- adapterArgs.token,
- adapterArgs.locator,
- adapterArgs.alternateCallerId,
- ]);
--
- const afterCreate = (adapter: CallAdapter): Promise<CallAdapter> => {
- adapter.on("callEnded", () => {
- adapter.dispose();
- window.close();
- });
- adapter.joinCall(true);
- return new Promise((resolve, reject) => resolve(adapter));
- };
-
- const adapter = useAzureCommunicationCallAdapter(args, afterCreate);
-
- if (!adapter) {
- return (
- <Stack
- verticalAlign="center"
- styles={{ root: { height: "100vh", width: "100vw" } }}
- >
- <Spinner
- label={"Creating adapter"}
- ariaLive="assertive"
- labelPosition="top"
- />
- </Stack>
- );
- }
- return (
- <Stack styles={{ root: { height: "100vh", width: "100vw" } }}>
- <CallComposite
- options={{
- callControls: {
- cameraButton: useVideo,
- screenShareButton: useVideo,
- moreButton: false,
- peopleButton: false,
- displayType: "compact",
- },
- localVideoTileOptions: {
- position: !useVideo ? "hidden" : "floating",
- },
- }}
- adapter={adapter}
- />
- </Stack>
- );
-};
-```
-
-To configure our `CallComposite` to fit in the Calling Widget, we need to make some changes. Depending on your use case, we have a number of customizations that can change the user experience. This sample chooses to hide the local video tile, camera, and screen sharing controls if the user opts out of video for their call. In addition to these configurations on the `CallComposite`, we use the `afterCreate` function defined in the snippet to automatically join the call. This bypasses the configuration screen, drops the user into the call with their mic live, and automatically closes the window when the call ends. Just remove the call to `adapter.joinCall(true);` from the `afterCreate` function and the configuration screen shows as normal. Next, let's talk about how to get this screen the information it needs once we have our `CallComposite` configured.
-
-To make sure we are passing around data correctly, let's create some handlers to send post messages between the parent window and child window to signal that we want some information. See diagram:
-
-![Diagram illustrating the flow of data between windows](../media/calling-widget/mermaid-charts-window-messaging.png)
-
-This flow illustrates that once the child window has spawned, it needs to ask the parent for the arguments. This behavior is a consequence of React: if the parent window sends a message immediately after creating the child, the call adapter arguments are lost before the child application mounts, because the child's message listener isn't registered until after its first render pass completes. A condensed sketch of this exchange follows; the full handlers are built in the next steps.
-
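-Put together, a condensed sketch of this handshake looks like the following. Both handlers are shown side by side only for illustration (in the app, the first lives in the child window and the second in the parent window), and `dataForChild` is just a placeholder for the object built in the real handler below:
-
-```ts
-// Child window (after its first render): ask the opener for the adapter arguments.
-if (window.opener) {
-  window.opener.postMessage('args please', window.opener.origin);
-}
-
-// Parent window: when the child asks, reply with the arguments it needs.
-window.addEventListener('message', (event) => {
-  if (event.origin !== window.origin) {
-    return; // ignore messages from other origins
-  }
-  if (event.data === 'args please') {
-    newWindowRef.current?.postMessage(dataForChild, window.origin);
-  }
-});
-```
-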
-Now we want to update the splash screen we created earlier. First we add a reference to the new child window that we create.
-
-`CallingWidgetScreen.tsx`
-
-```ts
-
- const [userDisplayName, setUserDisplayName] = useState<string>();
- const newWindowRef = useRef<Window | null>(null);
- const [useVideo, setUseVideo] = useState<boolean>(false);
-
-```
-
-Next, we create a handler to pass to our widget. This handler opens a new window, which starts the process of sending the post messages.
-
-`CallingWidgetScreen.tsx`
-```ts
-
- const startNewWindow = useCallback(() => {
- const startNewSessionString = 'newSession=true';
- newWindowRef.current = window.open(
- window.origin + `/?${startNewSessionString}`,
- 'call screen',
- 'width=500, height=450'
- );
- }, []);
-
-```
-
-This handler opens a new window at a set position and places a new query argument in the window URL so that the main application knows it's time to start a new call. The path that you give the window can be a different path in your application where your calling experience exists. For us this is the `NewWindowCallScreen.tsx` file, but it can also be a standalone React app, as sketched below.
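-
-For example, if your calling experience lives on its own route, a sketch of the same handler might look like this; the `/call` path is a hypothetical route that isn't part of this sample:
-
-```ts
-  const startNewWindow = useCallback(() => {
-    // Hypothetical dedicated route that hosts only the call screen.
-    newWindowRef.current = window.open(
-      window.origin + '/call?newSession=true',
-      'call screen',
-      'width=500, height=450'
-    );
-  }, []);
-```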
-
-Next, we add a `useEffect` hook that creates an event handler listening for new post messages from the child window.
-
-`CallingWidgetScreen.tsx`
-```ts
-
- useEffect(() => {
- window.addEventListener('message', (event) => {
- if (event.origin !== window.origin) {
- return;
- }
- if (event.data === 'args please') {
- const data = {
- userId: adapterParams.userId,
- displayName: adapterParams.displayName,
- token: adapterParams.token,
- locator: adapterParams.locator,
- alternateCallerId: adapterParams.alternateCallerId,
- useVideo: useVideo
- };
- console.log(data);
- newWindowRef.current?.postMessage(data, window.origin);
- }
- });
- }, [adapterParams, adapterParams.locator, adapterParams.displayName, useVideo]);
-
-```
-
-This handler listens for events from the child window. (**NOTE: if the origin of the message is not from your app, return without responding.**) If the child window asks for arguments, we reply with the arguments needed to construct an `AzureCommunicationCallAdapter`.
-
-Finally on this screen, let's add the `startNewWindow` handler to the widget so that it knows to create the new window. We do this by adding the property to the template of the widget screen like below.
-
-`CallingWidgetScreen.tsx`
-```ts
-
- <Stack horizontal tokens={{ childrenGap: '1.5rem' }} style={{ overflow: 'hidden', margin: 'auto' }}>
- <CallingWidgetComponent
- onRenderStartCall={startNewWindow}
- onRenderLogo={() => {
- return (
- <img
- style={{ height: '4rem', width: '4rem', margin: 'auto' }}
- src={hero}
- alt="logo"
- />
- );
- }}
- onSetDisplayName={setUserDisplayName}
- onSetUseVideo={setUseVideo}
- />
- </Stack>
-
-```
-
-Next, we need to make sure that our application can listen for, and ask for, messages from what would be the parent window. To start, you might recall that we added a new query parameter, `newSession=true`, to the application URL. To use it and have our app look for it in the URL, we need to create a utility function that parses out that parameter. Once we do that, we'll use it to make our application behave differently when it's received.
-
-To do that, let's add a new folder `src/utils` and in this folder, we add the file `AppUtils.ts`. In this file let's put the following function:
-
-`AppUtils.ts`
-```ts
-/**
- * get go ahead to request for adapter args from url
- * @returns
- */
-export const getStartSessionFromURL = (): boolean | undefined => {
- const urlParams = new URLSearchParams(window.location.search);
- return urlParams.get("newSession") === "true";
-};
-```
-
-This function looks at our application's URL and checks whether the parameters we're looking for are there. If desired, you can also add other parameters there to extend other functionality for your application.
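-
-For example, a hypothetical helper that reads a `theme` parameter the same way might look like this sketch (it isn't part of the sample and is shown only to illustrate extending `AppUtils.ts`):
-
-```ts
-/**
- * Example only: read an optional theme name from the URL.
- */
-export const getThemeFromURL = (): string | undefined => {
-  const urlParams = new URLSearchParams(window.location.search);
-  return urlParams.get('theme') ?? undefined;
-};
-```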
-
-We also want to add a new type here to track the different pieces needed to create an `AzureCommunicationCallAdapter`. This type can be simplified if you are using our calling stateful client, but that approach isn't covered in this tutorial.
-
-`AppUtils.ts`
-```ts
-/**
- * Properties needed to create a call screen for a Azure Communication Services CallComposite.
- */
-export type AdapterArgs = {
- token: string;
- userId: CommunicationIdentifier;
- locator: CallAdapterLocator;
- displayName?: string;
- alternateCallerId?: string;
-};
-```
-
-Once we have added these two things, we can go back to the `App.tsx` file to make some more updates.
-
-The first thing we want to do is update `App.tsx` to use the new utility function that we created in `AppUtils.ts`. We use a `useMemo` hook for the `startSession` parameter so that it's fetched exactly once and not on every render. The fetch of `startSession` looks like this:
-
-`App.tsx`
-```ts
-// you will need to add these imports
-import { useMemo } from 'react';
-import { AdapterArgs, getStartSessionFromURL } from './utils/AppUtils';
-
-```
-
-```ts
-
- const startSession = useMemo(() => {
- return getStartSessionFromURL();
- }, []);
-
-```
-
-Following this, we want to add some state to track the new arguments for the adapter. We pass these arguments to the `NewWindowCallScreen.tsx` view that we made earlier so it can construct an adapter. We also add state to track whether the user wants to use video controls.
-
-`App.tsx`
-```ts
-/**
- * Properties needed to start an Azure Communication Services CallAdapter. When these are set the app will go to the Call screen for the
- * click to call scenario. Call screen should create the credential that will be used in the call for the user.
- */
- const [adapterArgs, setAdapterArgs] = useState<AdapterArgs | undefined>();
- const [useVideo, setUseVideo] = useState<boolean>(false);
-```
-
-We now want to add an event listener to `App.tsx` to listen for post messages. Insert a `useEffect` hook with an empty dependency array so that we add the listener only once on the initial render.
-
-`App.tsx`
-```ts
-import { CallAdapterLocator } from "@azure/communication-react";
-import { CommunicationIdentifier } from '@azure/communication-common';
-```
-```ts
-
- useEffect(() => {
- window.addEventListener('message', (event) => {
- if (event.origin !== window.location.origin) {
- return;
- }
-
- if ((event.data as AdapterArgs).userId && (event.data as AdapterArgs).displayName !== '') {
- console.log(event.data);
- setAdapterArgs({
- userId: (event.data as AdapterArgs).userId as CommunicationUserIdentifier,
- displayName: (event.data as AdapterArgs).displayName,
- token: (event.data as AdapterArgs).token,
- locator: (event.data as AdapterArgs).locator,
- alternateCallerId: (event.data as AdapterArgs).alternateCallerId
- });
- setUseVideo(!!event.data.useVideo);
- }
- });
- }, []);
-
-```
-Next, we want to add two more `useEffect` hooks to `App.tsx`. These two hooks will:
-- Ask the parent window of the application for arguments for the `AzureCommunicationCallAdapter`. We use the `window.opener` reference provided, since this hook checks to see if it's the child window.
-- Check to see if we have the arguments appropriately set from the event listener fetching the arguments from the post message, then start a call and change the app page to the call screen.
-
-`App.tsx`
-```ts
-
- useEffect(() => {
- if (startSession) {
- console.log('asking for args');
- if (window.opener) {
- window.opener.postMessage('args please', window.opener.origin);
- }
- }
- }, [startSession]);
-
- useEffect(() => {
- if (adapterArgs) {
- console.log('starting session');
- setPage('new-window-call');
- }
- }, [adapterArgs]);
-
-```
-Finally, once we have done that, we want to add the new screen that we made earlier to the template as well. We also want to make sure that we do not show the Calling widget screen if the `startSession` parameter is found. Using this parameter this way avoids a flash for the user.
-
-`App.tsx`
-```ts
-// add with other imports
-
-import { NewWindowCallScreen } from './views/NewWindowCallScreen';
-
-```
-
-```ts
-
- switch (page) {
- case 'calling-widget': {
- if (!token || !userId || !locator || startSession !== false) {
- return (
- <Stack verticalAlign='center' style={{height: '100%', width: '100%'}}>
- <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" />;
- </Stack>
- )
-
- }
- return <CallingWidgetScreen token={token} userId={userId} callLocator={locator} alternateCallerId={alternateCallerId}/>;
- }
- case 'new-window-call': {
- if (!adapterArgs) {
- return (
- <Stack verticalAlign='center' style={{ height: '100%', width: '100%' }}>
- <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" />;
- </Stack>
- )
- }
- return (
- <NewWindowCallScreen
- adapterArgs={{
- userId: adapterArgs.userId as CommunicationUserIdentifier,
- displayName: adapterArgs.displayName ?? '',
- token: adapterArgs.token,
- locator: adapterArgs.locator,
- alternateCallerId: adapterArgs.alternateCallerId
- }}
- useVideo={useVideo}
- />
- );
- }
- }
-
-```
-Now, when the application runs in a new window, it sees that it's supposed to start a call, so it will:
-- Ask for the adapter arguments from the parent window
-- Make sure that the adapter arguments are set appropriately and start a call
-
-Now when you pass in the arguments, set your `displayName`, and click `Start Call` you should see the following screens:
-
-![Screenshot of click to call sample app home page with calling experience in new window](../media/calling-widget/calling-screen-new-window.png)
-
-With this new window experience, your users are able to:
-- continue using other tabs in their browser or other applications and still be able to see your call
-- resize the window to fit their viewing needs such as increasing the size to better see a screen share
-
-This concludes the tutorial for click to call with a new window experience. Next is an optional step to embed the calling surface into the widget itself, keeping your users on their current page.
-
-If you would like to learn more about the Azure Communication Services UI library, check out our [storybook documentation](https://azure.github.io/communication-ui-library/?path=/story/overview--page).
-
-> [!div class="nextstepaction"]
-> [Part 3: Embedding your calling experience](./calling-widget-tutorial-part-3-embedding-your-calling-experience.md)
communication-services Calling Widget Tutorial Part 3 Embedding Your Calling Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial-part-3-embedding-your-calling-experience.md
- Title: Part 3 (optional) embedding your calling experience-
-description: Learn how to embed a calling experience inside your new widget - Part 3.
----- Previously updated : 06/05/2023-----
-# Part 3 (optional) embedding your calling experience
--
-Finally, in this optional section of the tutorial, we'll talk about making an embedded version of the Calling surface. We'll continue from where we left off in the last section and make some modifications to our existing screens.
-
-To start, let's take a look at the props for `CallingWidgetComponent.tsx`. These need to be updated so the widget can hold the Calling surface. We'll make two changes.
-- Add a new prop for the adapter arguments needed for the `AzureCommunicationCallAdapter`; we'll call it `adapterArgs`.
-- Make `onRenderStartCall` optional; this makes it easier to come back to using a new window in the future.
-
-`CallingWidgetComponent.tsx`
-
-```ts
-export interface CallingWidgetComponentProps {
- /**
- * arguments for creating an AzureCommunicationCallAdapter for your Calling experience
- */
- adapterArgs: AdapterArgs;
- /**
- * if provided, will be used to create a new window for call experience. if not provided
- * will use the current window.
- */
- onRenderStartCall?: () => void;
- /**
- * Custom render function for displaying logo.
- * @returns
- */
- onRenderLogo?: () => JSX.Element;
- /**
- * Handler to set displayName for the user in the call.
- * @param displayName
- * @returns
- */
- onSetDisplayName?: (displayName: string | undefined) => void;
- /**
- * Handler to set whether to use video in the call.
- */
- onSetUseVideo?: (useVideo: boolean) => void;
-}
-```
-
-Now, we need to introduce some logic that uses these arguments to make sure we're starting a call appropriately. This includes adding state to create an `AzureCommunicationCallAdapter` inside the widget itself, so it looks a lot like the logic in `NewWindowCallScreen.tsx`. Adding the adapter to the widget looks something like this:
-
-`CallingWidgetComponent.tsx`
-```ts
-// add this to the other imports
-
-import { CommunicationUserIdentifier, AzureCommunicationTokenCredential } from '@azure/communication-common';
-import {
- CallAdapter,
- CallAdapterLocator,
- CallComposite,
- useAzureCommunicationCallAdapter,
- AzureCommunicationCallAdapterArgs
-} from '@azure/communication-react';
-import { AdapterArgs } from '../utils/AppUtils';
-// lets update our react imports as well
-import React, { useCallback, useEffect, useMemo, useState } from 'react';
-
-```
-```ts
-
- const credential = useMemo(() => {
- try {
- return new AzureCommunicationTokenCredential(adapterArgs.token);
- } catch {
- console.error('Failed to construct token credential');
- return undefined;
- }
- }, [adapterArgs.token]);
-
- const callAdapterArgs = useMemo(() => {
- return {
- userId:adapterArgs.userId,
- credential: credential,
- locator: adapterArgs.locator,
- displayName: displayName,
- alternateCallerId: adapterArgs.alternateCallerId
- }
- },[adapterArgs.locator, adapterArgs.userId, credential, displayName])
-
- const adapter = useAzureCommunicationCallAdapter(callAdapterArgs as AzureCommunicationCallAdapterArgs);
-
-```
-
-Let's also add an `afterCreate` function like before, to do a few things with our adapter once it's constructed. Since we're now interacting with state in the widget, we want to use a React `useCallback` to make sure we're not redefining this function on every render pass. In our case, our function resets the widget to the `'new'` state when the call ends and clears the user's `displayName` so they can start a new session. You can, however, return it to the `'setup'` state with the old `displayName` so that the app can easily call again, as sketched after the next snippet.
-
-`CallingWidgetComponent.tsx`
-```ts
-
- const afterCreate = useCallback(async (adapter: CallAdapter): Promise<CallAdapter> => {
- adapter.on('callEnded',() => {
- setDisplayName(undefined);
- setWidgetState('new');
- adapter.dispose();
- });
- return adapter;
- },[])
-
- const adapter = useAzureCommunicationCallAdapter(callAdapterArgs as AzureCommunicationCallAdapterArgs, afterCreate);
-
-```
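-
-As mentioned above, returning the widget to the `'setup'` state instead is a small change. Here's a sketch of that variant, assuming you keep the user's `displayName` and the existing adapter so they can call again right away:
-
-```ts
-
-  const afterCreate = useCallback(async (adapter: CallAdapter): Promise<CallAdapter> => {
-    adapter.on('callEnded', () => {
-      // Keep displayName and the adapter so the user can start another call
-      // from the setup screen without re-entering their details.
-      setWidgetState('setup');
-    });
-    return adapter;
-  }, []);
-
-```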
-
-Once we have an adapter again, we need to update the template to account for a new widget state, which means adding to the different modes the widget itself can hold. We'll add a new `'inCall'` state like so:
-
-`CallingWidgetComponent.tsx`
-```ts
-
-const [widgetState, setWidgetState] = useState<'new' | 'setup' | 'inCall'>('new');
-
-```
-
-Next, we need to add new logic to the Start Call button in the widget that checks which mode to start the call in: new window or embedded. That logic is as follows:
-
-`CallingWidgetComponent.tsx`
-```ts
-
- <PrimaryButton
- styles={startCallButtonStyles(theme)}
- onClick={() => {
- if (displayName && consentToData && onRenderStartCall) {
- onSetDisplayName(displayName);
- onRenderStartCall();
- } else if (displayName && consentToData && adapter) {
- setWidgetState('inCall');
- adapter?.joinCall();
- }
- }}
- >
- StartCall
- </PrimaryButton>
-
-```
-
-We'll also want to introduce some internal state to the widget about the local user's video controls.
-
-`CallingWidgetComponent.tsx`
-```ts
-const [useLocalVideo, setUseLocalVideo] = useState<boolean>(false);
-```
-
-Next, let's go back to our style sheet for the widget. We need to add new styles to allow the `CallComposite` to grow to its minimum size.
-
-`CallingWidgetComponent.styles.ts`
-```ts
-export const clickToCallInCallContainerStyles = (theme: Theme): IStackStyles => {
- return {
- root: {
- width: '35rem',
- height: '25rem',
- padding: '0.5rem',
- boxShadow: theme.effects.elevation16,
- borderRadius: theme.effects.roundedCorner6,
- bottom: 0,
- right: '1rem',
- position: 'absolute',
- overflow: 'hidden',
- cursor: 'pointer',
- background: theme.semanticColors.bodyBackground
- }
- }
-}
-```
-
-Finally, in the widget we need to add a section to the template for the `'inCall'` state that we added earlier. Our template should now look as follows:
-
-`CallingWidgetComponent.tsx`
-```ts
-if (widgetState === 'setup' && onSetDisplayName && onSetUseVideo) {
- return (
- <Stack styles={clicktoCallSetupContainerStyles(theme)} tokens={{ childrenGap: '1rem' }}>
- <IconButton
- styles={collapseButtonStyles}
- iconProps={{ iconName: 'Dismiss' }}
- onClick={() => setWidgetState('new')}
- />
- <Stack tokens={{ childrenGap: '1rem' }} styles={logoContainerStyles}>
- <Stack style={{ transform: 'scale(1.8)' }}>{onRenderLogo && onRenderLogo()}</Stack>
- </Stack>
- <TextField
- label={'Name'}
- required={true}
- placeholder={'Enter your name'}
- onChange={(_, newValue) => {
- setDisplayName(newValue);
- }}
- />
- <Checkbox
- styles={checkboxStyles(theme)}
- label={'Use video - Checking this box will enable camera controls and screen sharing'}
- onChange={(_, checked?: boolean | undefined) => {
- onSetUseVideo(!!checked);
-        setUseLocalVideo(!!checked);
- }}
- ></Checkbox>
- <Checkbox
- required={true}
- styles={checkboxStyles(theme)}
- label={
-          'By checking this box, you are consenting that we will collect data from the call for customer support reasons'
- }
- onChange={(_, checked?: boolean | undefined) => {
- setConsentToData(!!checked);
- }}
- ></Checkbox>
- <PrimaryButton
- styles={startCallButtonStyles(theme)}
- onClick={() => {
- if (displayName && consentToData && onRenderStartCall) {
- onSetDisplayName(displayName);
- onRenderStartCall();
- } else if (displayName && consentToData && adapter) {
- setWidgetState('inCall');
- adapter?.joinCall();
- }
- }}
- >
- StartCall
- </PrimaryButton>
- </Stack>
- );
- }
-
- if(widgetState === 'inCall' && adapter){
- return(
- <Stack styles={clickToCallInCallContainerStyles(theme)}>
- <CallComposite adapter={adapter} options={{
- callControls: {
- cameraButton: useLocalVideo,
- screenShareButton: useLocalVideo,
- moreButton: false,
- peopleButton: false,
- displayType: 'compact'
- },
- localVideoTileOptions: { position: !useLocalVideo ? 'hidden' : 'floating' }
- }}></CallComposite>
- </Stack>
- )
- }
-
- return (
- <Stack
- horizontalAlign="center"
- verticalAlign="center"
- styles={clickToCallContainerStyles(theme)}
- onClick={() => {
- setWidgetState('setup');
- }}
- >
- <Stack
- horizontalAlign="center"
- verticalAlign="center"
- style={{ height: '4rem', width: '4rem', borderRadius: '50%', background: theme.palette.themePrimary }}
- >
- <Icon iconName="callAdd" styles={callIconStyles(theme)} />
- </Stack>
- </Stack>
- );
-```
-Now that we've updated our widget to be more versatile, let's take another look at `CallingWidgetScreen.tsx` and make some adjustments to how we're calling the widget. To turn on the new embedded experience, we do two things:
-- Remove the start call handler that we provided earlier.
-- Provide the adapter arguments to the widget that we would normally be emitting through our post messages.
-
-That looks like this:
-
-`CallingWidgetScreen.tsx`
-```ts
-
- <Stack horizontal tokens={{ childrenGap: '1.5rem' }} style={{ overflow: 'hidden', margin: 'auto' }}>
- <CallingWidgetComponent
- adapterArgs={adapterParams}
- onRenderLogo={() => {
- return (
- <img
- style={{ height: '4rem', width: '4rem', margin: 'auto' }}
- src={hero}
- alt="logo"
- />
- );
- }}
- onSetDisplayName={setUserDisplayName}
- onSetUseVideo={setUseVideo}
- />
- </Stack>
-
-```
-Now that we've made these changes, we can start our app again (if it's shut down) with `npm run start`. If we go through the start call process like we did before, we should see the following when starting the call:
-
-![Screenshot of click to call sample app home page with calling experience embedded in widget](../media/calling-widget/calling-widget-embedded-start.png)
-
-Like before, this is a call starting with the video controls enabled.
-
-Thanks for following the different tutorials here. This concludes the quickstart guide for click to call with the Azure Communication Services UI Library.
communication-services Calling Widget Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial.md
+
+ Title: Get started with Azure Communication Services UI library calling to Teams Call Queue and Auto Attendant
+
+description: Learn how to create a Calling Widget widget experience with the Azure Communication Services CallComposite to facilitate calling a Teams Call Queue or Auto Attendant.
+++++ Last updated : 06/05/2023+++++
+# Get started with Azure Communication Services UI library calling to Teams Voice Apps
++
+![Home page of Calling Widget sample app](../media/calling-widget/sample-app-splash-widget-open.png)
+
+This project guides developers to initiate a call from the Azure Communication Services Calling Web SDK to a Teams Call Queue or Auto Attendant by using the Azure Communication Services UI Library.
+
+As per your requirements, you might need to offer your customers an easy way to reach out to you without any complex setup.
+
+Calling a Teams Call Queue or Auto Attendant is a simple yet effective concept that facilitates instant interaction with customer support, financial advisors, and other customer-facing teams. The goal of this tutorial is to assist you in initiating interactions with your customers when they click a button on the web.
+
+If you wish to try it out, you can download the code from [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-click-to-call).
+
+Following this tutorial will:
+
+- Allow you to control your customers' audio and video experience depending on your customer scenario
+- Teach you how to build a simple widget for starting calls on your webapp using the UI library.
+
+## Prerequisites
+
+- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions; [Node 18 LTS](https://nodejs.org/en) is recommended. Use the `node --version` command to check your version.
+- An Azure Communication Services resource. [Create a Communications Resource](../../quickstarts/create-communication-resource.md)
+- Complete the Teams tenant setup in [Teams calling and chat interoperability](../../concepts/interop/calling-chat.md)
+- Working with [Teams Call Queues](../../quickstarts/voice-video-calling/get-started-teams-call-queue.md) and Azure Communication Services.
+- Working with [Teams Auto Attendants](../../quickstarts/voice-video-calling/get-started-teams-auto-attendant.md) and Azure Communication Services.
+
+### Set up the project
+
+Only use this step if you are creating a new application.
+
+To set up the React app, we use the `create-react-app` command-line tool. This tool
+creates an easy-to-run TypeScript application powered by React.
+
+```bash
+# Create an Azure Communication Services App powered by React.
+npx create-react-app ui-library-calling-widget-app --template typescript
+
+# Change to the directory of the newly created App.
+cd ui-library-calling-widget-app
+```
+
+### Get your dependencies
+
+Then, you need to update the `dependencies` section of `package.json` to include the following packages from Azure Communication Services so that the widget experience we're going to build works:
+
+```json
+"@azure/communication-calling": "1.19.1-beta.2",
+"@azure/communication-chat": "1.4.0-beta.1",
+"@azure/communication-react": "1.10.0-beta.1",
+"@azure/communication-calling-effects": "1.0.1",
+"@azure/communication-common": "2.3.0",
+"@fluentui/react-icons": "~2.0.203",
+"@fluentui/react": "~8.98.3",
+```
+
+Once you add these packages to your `package.json`, you're all set to start working on your new project. In this tutorial, we modify the files in the `src` directory.
+
+## Initial app setup
+
+To get started, we replace the provided `App.tsx` content with a main page that will:
+
+- Store all of the Azure Communication information that we need to create a CallAdapter to power our Calling experience
+- Display our widget that is exposed to the end user.
+
+Your `App.tsx` file should look like this:
+
+`src/App.tsx`
+
+```ts
+
+import './App.css';
+import { CommunicationIdentifier, MicrosoftTeamsAppIdentifier } from '@azure/communication-common';
+import { Spinner, Stack, initializeIcons, registerIcons, Text } from '@fluentui/react';
+import { CallAdd20Regular, Dismiss20Regular } from '@fluentui/react-icons';
+import logo from './logo.svg';
+
+import { CallingWidgetComponent } from './components/CallingWidgetComponent';
+
+registerIcons({
+ icons: { dismiss: <Dismiss20Regular />, callAdd: <CallAdd20Regular /> },
+});
+initializeIcons();
+function App() {
+ /**
+ * Token for local user.
+ */
+ const token = "<Enter your ACS token here>";
+
+ /**
+ * User identifier for local user.
+ */
+ const userId: CommunicationIdentifier = {
+ communicationUserId: "<Enter your ACS ID here>",
+ };
+
+ /**
+ * Enter your Teams voice app identifier from the Teams admin center here
+ */
+ const teamsAppIdentifier: MicrosoftTeamsAppIdentifier = {
+ teamsAppId: '<Enter your teams voice app ID here>', cloud: 'public'
+ }
+
+ const widgetParams = {
+ userId,
+ token,
+ teamsAppIdentifier,
+ };
++
+ if (!token || !userId || !teamsAppIdentifier) {
+ return (
+ <Stack verticalAlign='center' style={{ height: '100%', width: '100%' }}>
+ <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" />;
+ </Stack>
+ )
+
+ }
++
+ return (
+ <Stack
+ style={{ height: "100%", width: "100%", padding: "3rem" }}
+ tokens={{ childrenGap: "1.5rem" }}
+ >
+ <Stack tokens={{ childrenGap: '1rem' }} style={{ margin: "auto" }}>
+ <Stack
+ style={{ padding: "3rem" }}
+ horizontal
+ tokens={{ childrenGap: "2rem" }}
+ >
+ <Text style={{ marginTop: "auto" }} variant="xLarge">
+ Welcome to a Calling Widget sample
+ </Text>
+ <img
+ style={{ width: "7rem", height: "auto" }}
+ src={logo}
+ alt="logo"
+ />
+ </Stack>
+
+ <Text>
+ Welcome to a Calling Widget sample for the Azure Communication Services UI
+        Library. This sample can connect you through Teams voice apps to an agent to help you.
+ </Text>
+ <Text>
+        As a user, all you need to do is click the widget below, enter your
+        display name for the call - this will act as your caller ID - and
+        select the <b>start call</b> button.
+ </Text>
+ </Stack>
+ <Stack horizontal tokens={{ childrenGap: '1.5rem' }} style={{ overflow: 'hidden', margin: 'auto' }}>
+ <CallingWidgetComponent
+ widgetAdapterArgs={widgetParams}
+ onRenderLogo={() => {
+ return (
+ <img
+ style={{ height: '4rem', width: '4rem', margin: 'auto' }}
+ src={logo}
+ alt="logo"
+ />
+ );
+ }}
+ />
+ </Stack>
+ </Stack>
+ );
+}
+
+export default App;
+
+```
+
+In this snippet, we register two new icons, `<Dismiss20Regular/>` and `<CallAdd20Regular/>`. These new icons are used inside the widget component that we create in the next section.
+
+### Create the widget
+
+Now we need to make a widget that can show in three different modes:
+- Waiting: This is the widget state before and after a call is made.
+- Setup: This state is when the widget asks for information from the user, like their name.
+- In a call: The widget is replaced with the UI library `CallComposite`. This is the mode when the user is calling the voice app or talking with an agent.
+
+Let's create a folder called `src/components`. In this folder, make a new file called `CallingWidgetComponent.tsx`. This file should look like the following snippet:
+
+`CallingWidgetComponent.tsx`
+
+```ts
+import { IconButton, PrimaryButton, Stack, TextField, useTheme, Checkbox, Icon } from '@fluentui/react';
+import React, { useState } from 'react';
+import {
+ callingWidgetSetupContainerStyles,
+ checkboxStyles,
+ startCallButtonStyles,
+ callingWidgetContainerStyles,
+ callIconStyles,
+ logoContainerStyles,
+ collapseButtonStyles,
+ callingWidgetInCallContainerStyles
+} from '../styles/CallingWidgetComponent.styles';
+import { AzureCommunicationTokenCredential, CommunicationIdentifier, MicrosoftTeamsAppIdentifier } from '@azure/communication-common';
+import {
+ CallAdapter,
+ CallComposite,
+ useAzureCommunicationCallAdapter,
+ AzureCommunicationCallAdapterArgs
+} from '@azure/communication-react';
+import { useCallback, useMemo } from 'react';
+
+/**
+ * Properties needed for our widget to start a call.
+ */
+export type WidgetAdapterArgs = {
+ token: string;
+ userId: CommunicationIdentifier;
+ teamsAppIdentifier: MicrosoftTeamsAppIdentifier;
+};
+
+export interface CallingWidgetComponentProps {
+ /**
+ * arguments for creating an AzureCommunicationCallAdapter for your Calling experience
+ */
+ widgetAdapterArgs: WidgetAdapterArgs;
+ /**
+ * Custom render function for displaying logo.
+ * @returns
+ */
+ onRenderLogo?: () => JSX.Element;
+}
+
+/**
+ * Widget for Calling Widget
+ * @param props
+ */
+export const CallingWidgetComponent = (
+ props: CallingWidgetComponentProps
+): JSX.Element => {
+ const { onRenderLogo, widgetAdapterArgs } = props;
+
+ const [widgetState, setWidgetState] = useState<'new' | 'setup' | 'inCall'>('new');
+ const [displayName, setDisplayName] = useState<string>();
+ const [consentToData, setConsentToData] = useState<boolean>(false);
+ const [useLocalVideo, setUseLocalVideo] = useState<boolean>(false);
+
+ const theme = useTheme();
+
+ const credential = useMemo(() => {
+ try {
+ return new AzureCommunicationTokenCredential(widgetAdapterArgs.token);
+ } catch {
+ console.error('Failed to construct token credential');
+ return undefined;
+ }
+ }, [widgetAdapterArgs.token]);
+
+ const callAdapterArgs = useMemo(() => {
+ return {
+ userId: widgetAdapterArgs.userId,
+ credential: credential,
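+      // Build the locator for the Teams voice app (Call Queue or Auto Attendant) from its app ID.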
+ locator: {participantIds: [`28:orgid:${widgetAdapterArgs.teamsAppIdentifier.teamsAppId}`]},
+ displayName: displayName
+ }
+ }, [widgetAdapterArgs.userId, widgetAdapterArgs.teamsAppIdentifier.teamsAppId, credential, displayName]);
+
+ const afterCreate = useCallback(async (adapter: CallAdapter): Promise<CallAdapter> => {
+ adapter.on('callEnded', () => {
+ setDisplayName(undefined);
+ setWidgetState('new');
+ });
+ return adapter;
+ }, [])
+
+ const adapter = useAzureCommunicationCallAdapter(callAdapterArgs as AzureCommunicationCallAdapterArgs, afterCreate);
+
+ // Widget template for when widget is open, put any fields here for user information desired
+ if (widgetState === 'setup' ) {
+ return (
+ <Stack styles={callingWidgetSetupContainerStyles(theme)} tokens={{ childrenGap: '1rem' }}>
+ <IconButton
+ styles={collapseButtonStyles}
+ iconProps={{ iconName: 'Dismiss' }}
+ onClick={() => setWidgetState('new')} />
+ <Stack tokens={{ childrenGap: '1rem' }} styles={logoContainerStyles}>
+ <Stack style={{ transform: 'scale(1.8)' }}>{onRenderLogo && onRenderLogo()}</Stack>
+ </Stack>
+ <TextField
+ label={'Name'}
+ required={true}
+ placeholder={'Enter your name'}
+ onChange={(_, newValue) => {
+ setDisplayName(newValue);
+ }} />
+ <Checkbox
+ styles={checkboxStyles(theme)}
+ label={'Use video - Checking this box will enable camera controls and screen sharing'}
+ onChange={(_, checked?: boolean | undefined) => {
+          setUseLocalVideo(!!checked);
+ }}
+ ></Checkbox>
+ <Checkbox
+ required={true}
+ styles={checkboxStyles(theme)}
+ label={'By checking this box, you are consenting that we will collect data from the call for customer support reasons'}
+ onChange={(_, checked?: boolean | undefined) => {
+ setConsentToData(!!checked);
+ }}
+ ></Checkbox>
+ <PrimaryButton
+ styles={startCallButtonStyles(theme)}
+ onClick={() => {
+ if (displayName && consentToData && adapter && widgetAdapterArgs.teamsAppIdentifier) {
+ setWidgetState('inCall');
+ adapter.startCall([widgetAdapterArgs.teamsAppIdentifier]);
+ }
+ }}
+ >
+ StartCall
+ </PrimaryButton>
+ </Stack>
+ );
+ }
+
+ if (widgetState === 'inCall' && adapter) {
+ return (
+ <Stack styles={callingWidgetInCallContainerStyles(theme)}>
+ <CallComposite
+ adapter={adapter}
+ options={{
+ callControls: {
+ cameraButton: useLocalVideo,
+ screenShareButton: useLocalVideo,
+ moreButton: false,
+ peopleButton: false,
+ displayType: 'compact'
+ },
+ localVideoTile: !useLocalVideo ? false : { position: 'floating' }
+ }}/>
+ </Stack>
+ )
+ }
+
+ return (
+ <Stack
+ horizontalAlign="center"
+ verticalAlign="center"
+ styles={callingWidgetContainerStyles(theme)}
+ onClick={() => {
+ setWidgetState('setup');
+ }}
+ >
+ <Stack
+ horizontalAlign="center"
+ verticalAlign="center"
+ style={{ height: '4rem', width: '4rem', borderRadius: '50%', background: theme.palette.themePrimary }}
+ >
+ <Icon iconName="callAdd" styles={callIconStyles(theme)} />
+ </Stack>
+ </Stack>
+ );
+};
+```
+
+#### Style the widget
+
+We need to write some styles to make sure the widget looks appropriate and can hold our call composite. These styles are already used in the widget if you copied the snippet above.
+
+Let's make a new folder called `src/styles`. In this folder, create a file called `CallingWidgetComponent.styles.ts`. The file should look like the following snippet:
+
+```ts
+import { IButtonStyles, ICheckboxStyles, IIconStyles, IStackStyles, Theme } from '@fluentui/react';
+
+export const checkboxStyles = (theme: Theme): ICheckboxStyles => {
+ return {
+ label: {
+ color: theme.palette.neutralPrimary,
+ },
+ };
+};
+
+export const callingWidgetContainerStyles = (theme: Theme): IStackStyles => {
+ return {
+ root: {
+ width: "5rem",
+ height: "5rem",
+ padding: "0.5rem",
+ boxShadow: theme.effects.elevation16,
+ borderRadius: "50%",
+ bottom: "1rem",
+ right: "1rem",
+ position: "absolute",
+ overflow: "hidden",
+ cursor: "pointer",
+ ":hover": {
+ boxShadow: theme.effects.elevation64,
+ },
+ },
+ };
+};
+
+export const callingWidgetSetupContainerStyles = (theme: Theme): IStackStyles => {
+ return {
+ root: {
+ width: "18rem",
+ minHeight: "20rem",
+ maxHeight: "25rem",
+ padding: "0.5rem",
+ boxShadow: theme.effects.elevation16,
+ borderRadius: theme.effects.roundedCorner6,
+ bottom: 0,
+ right: "1rem",
+ position: "absolute",
+ overflow: "hidden",
+ cursor: "pointer",
+ background: theme.palette.white
+ },
+ };
+};
+
+export const callIconStyles = (theme: Theme): IIconStyles => {
+ return {
+ root: {
+ paddingTop: "0.2rem",
+ color: theme.palette.white,
+ transform: "scale(1.6)",
+ },
+ };
+};
+
+export const startCallButtonStyles = (theme: Theme): IButtonStyles => {
+ return {
+ root: {
+ background: theme.palette.themePrimary,
+ borderRadius: theme.effects.roundedCorner6,
+ borderColor: theme.palette.themePrimary,
+ },
+ textContainer: {
+ color: theme.palette.white,
+ },
+ };
+};
+
+export const logoContainerStyles: IStackStyles = {
+ root: {
+ margin: "auto",
+ padding: "0.2rem",
+ height: "5rem",
+ width: "10rem",
+ zIndex: 0,
+ },
+};
+
+export const collapseButtonStyles: IButtonStyles = {
+ root: {
+ position: "absolute",
+ top: "0.2rem",
+ right: "0.2rem",
+ zIndex: 1,
+ },
+};
+
+export const callingWidgetInCallContainerStyles = (theme: Theme): IStackStyles => {
+ return {
+ root: {
+ width: '35rem',
+ height: '25rem',
+ padding: '0.5rem',
+ boxShadow: theme.effects.elevation16,
+ borderRadius: theme.effects.roundedCorner6,
+ bottom: 0,
+ right: '1rem',
+ position: 'absolute',
+ overflow: 'hidden',
+ cursor: 'pointer',
+ background: theme.semanticColors.bodyBackground
+ }
+ }
+}
+```
+
+### Run the app
+
+Finally, we can run the application to make our calls! Run the following commands to install our dependencies and run our app.
+
+```bash
+# Install the new dependencies
+npm install
+
+# run the React app
+npm run start
+```
+
+Once the app is running, you can see it on `http://localhost:3000` in your browser. You should see the following splash screen:
+
+![Screenshot of calling widget sample app home page widget closed.](../media/calling-widget/sample-app-splash-widget-closed.png)
+
+Then, when you select the widget button, you should see a little menu:
+
+![Screenshot of calling widget sample app home page widget open.](../media/calling-widget/sample-app-splash-widget-open.png)
+
+After you fill out your name and click start call, the call should begin. The widget should look like this after starting a call:
+
+![Screenshot of click to call sample app home page with calling experience embedded in widget.](../media/calling-widget/calling-widget-embedded-start.png)
+
+## Next steps
+
+If you haven't had the chance, check out our documentation on Teams auto attendants and Teams call queues.
+
+> [!div class="nextstepaction"]
+
+> [Quickstart: Join your calling app to a Teams call queue](../../quickstarts/voice-video-calling/get-started-teams-call-queue.md)
+
+> [Quickstart: Join your calling app to a Teams Auto Attendant](../../quickstarts/voice-video-calling/get-started-teams-auto-attendant.md)
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
The following tables describe how to configure a collection of NSG allow rules.
| Protocol | Source | Source ports | Destination | Destination ports | Description | |--|--|--|--|--|--|
-| TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `443`, `30,000-32,676`<sup>2</sup> | Allow your Client IPs to access Azure Container Apps. |
-| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30,000-32,676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
+| TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `80`, `31080` | Allow your Client IPs to access Azure Container Apps when using HTTP. |
+| TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `443`, `31443` | Allow your Client IPs to access Azure Container Apps when using HTTPS. |
+| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30000-32676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
# [Consumption only environment](#tab/consumption-only) | Protocol | Source | Source ports | Destination | Destination ports | Description | |--|--|--|--|--|--|
-| TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `443` | Allow your Client IPs to access Azure Container Apps. |
-| TCP | Your client IPs | \* | The `staticIP` of your container app environment | `443` | Allow your Client IPs to access Azure Container Apps. |
-| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30,000-32,676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
+| TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `80`, `443` | Allow your Client IPs to access Azure Container Apps. Use port `80` for HTTP and `443` for HTTPS. |
+| TCP | Your client IPs | \* | The `staticIP` of your container app environment | `80`, `443` | Allow your Client IPs to access Azure Container Apps. Use port `80` for HTTP and `443` for HTTPS. |
+| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30000-32676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
| TCP | Your container app's subnet | \* | Your container app's subnet | \* | Required to allow the container app envoy sidecar to connect to envoy service. |
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
Change feed is available for partition key ranges of an Azure Cosmos DB containe
### Sort order of items in change feed
-Change feed items come in the order of their modification time. This sort order is guaranteed per physical partition, and there's no guaranteed order across the partition key values.
+Change feed items come in the order of their modification time. This sort order is guaranteed per partition key, and there's no guaranteed order across the partition key values.
### Change feed in multi-region Azure Cosmos DB accounts
cosmos-db Database Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-encryption-at-rest.md
Title: Encryption at rest in Azure Cosmos DB
-description: Learn how Azure Cosmos DB provides encryption of data at rest and how it is implemented.
+description: Learn how Azure Cosmos DB provides encryption of data at rest and how it's implemented.
Last updated 10/26/2021
-# Data encryption in Azure Cosmos DB
+# Data encryption in Azure Cosmos DB
+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Encryption at rest is a phrase that commonly refers to the encryption of data on nonvolatile storage devices, such as solid state drives (SSDs) and hard disk drives (HDDs). Azure Cosmos DB stores its primary databases on SSDs. Its media attachments and backups are stored in Azure Blob storage, which is generally backed up by HDDs. With the release of encryption at rest for Azure Cosmos DB, all your databases, media attachments, and backups are encrypted. Your data is now encrypted in transit (over the network) and at rest (nonvolatile storage), giving you end-to-end encryption.
+"Encryption at rest" is a phrase that commonly refers to the encryption of data on nonvolatile storage devices, such as solid-state drives (SSDs) and hard-disk drives (HDDs). Azure Cosmos DB stores its primary databases on SSDs. Its media attachments and backups are stored in Azure Blob Storage, which are generally backed up by HDDs. With the release of encryption at rest for Azure Cosmos DB, all your databases, media attachments, and backups are encrypted. Your data is now encrypted in transit (over the network) and at rest (nonvolatile storage), giving you end-to-end encryption.
+
+As a platform as a service (PaaS), Azure Cosmos DB is easy to use. Because all user data stored in Azure Cosmos DB is encrypted at rest and in transport, you don't have to take any action. In other words, encryption at rest is "on" by default. There are no controls to turn it off or on. Azure Cosmos DB uses AES-256 encryption on all regions where the account is running.
-As a PaaS service, Azure Cosmos DB is very easy to use. Because all user data stored in Azure Cosmos DB is encrypted at rest and in transport, you don't have to take any action. Another way to put this is that encryption at rest is "on" by default. There are no controls to turn it off or on. Azure Cosmos DB uses AES-256 encryption on all regions where the account is running. We provide this feature while we continue to meet our [availability and performance SLAs](https://azure.microsoft.com/support/legal/sl) article.
+We provide this feature while we continue to meet our [availability and performance service-level agreements (SLAs)](https://azure.microsoft.com/support/legal/sl).
## Implementation of encryption at rest for Azure Cosmos DB
-Encryption at rest is implemented by using a number of security technologies, including secure key storage systems, encrypted networks, and cryptographic APIs. Systems that decrypt and process data have to communicate with systems that manage keys. The diagram shows how storage of encrypted data and the management of keys is separated.
+Encryption at rest is implemented by using several security technologies, including secure key storage systems, encrypted networks, and cryptographic APIs. Systems that decrypt and process data have to communicate with systems that manage keys. The diagram shows how storage of encrypted data and the management of keys is separated.
+
+The basic flow of a user request is:
-The basic flow of a user request is as follows:
- The user database account is made ready, and storage keys are retrieved via a request to the Management Service Resource Provider. - A user creates a connection to Azure Cosmos DB via HTTPS/secure transport. (The SDKs abstract the details.) - The user sends a JSON document to be stored over the previously created secure connection.
The basic flow of a user request is as follows:
## Frequently asked questions
-### Q: How much more does Azure Storage cost if Storage Service Encryption is enabled?
-A: There is no additional cost.
+Find answers to commonly asked questions about encryption.
+
+### How much more does Azure Storage cost if Storage Service Encryption is enabled?
+
+There's no extra cost.
+
+### Who manages the encryption keys?
-### Q: Who manages the encryption keys?
-A: Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage using [customer-managed keys or CMK](how-to-setup-cmk.md).
+Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft by using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage by using [customer-managed keys](how-to-setup-cmk.md).
-### Q: How often are encryption keys rotated?
-A: Microsoft has a set of internal guidelines for encryption key rotation, which Azure Cosmos DB follows. The specific guidelines are not published. Microsoft does publish the [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl/default.aspx), which is seen as a subset of internal guidance and has useful best practices for developers.
+### How often are encryption keys rotated?
-### Q: Can I use my own encryption keys?
-A: Yes, this feature is now available for new Azure Cosmos DB accounts and this should be done at the time of account creation. Please go through [Customer-managed Keys](./how-to-setup-cmk.md) document for more information.
+Microsoft has a set of internal guidelines for encryption key rotation, which Azure Cosmos DB follows. The specific guidelines aren't published. Microsoft does publish the [Security Development Lifecycle](https://www.microsoft.com/sdl/default.aspx), which is seen as a subset of internal guidance and has useful best practices for developers.
+
+### Can I use my own encryption keys?
+
+Yes, this feature is available for new Azure Cosmos DB accounts. It should be deployed at the time of account creation. For more information, see the [customer-managed keys](./how-to-setup-cmk.md) document.
> [!WARNING]
-> The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys:
+> The following field names are reserved on Cassandra API tables in accounts that use customer-managed keys:
> > - `id` > - `ttl`
A: Yes, this feature is now available for new Azure Cosmos DB accounts and this
> - `_attachments` > - `_epk` >
-> When Customer-managed Keys are not enabled, only field names beginning with `__sys_` are reserved.
+> When customer-managed keys aren't enabled, only field names beginning with `__sys_` are reserved.
+
+### What regions have encryption turned on?
+
+All Azure Cosmos DB regions have encryption turned on for all user data.
+
+### Does encryption affect the performance latency and throughput SLAs?
-### Q: What regions have encryption turned on?
-A: All Azure Cosmos DB regions have encryption turned on for all user data.
+There's no effect or changes to the performance SLAs because encryption at rest is now enabled for all existing and new accounts. To see the latest guarantees, see [SLA for Azure Cosmos DB](https://azure.microsoft.com/support/legal/sla/cosmos-db).
-### Q: Does encryption affect the performance latency and throughput SLAs?
-A: There is no impact or changes to the performance SLAs now that encryption at rest is enabled for all existing and new accounts. You can read more on the [SLA for Azure Cosmos DB](https://azure.microsoft.com/support/legal/sla/cosmos-db) page to see the latest guarantees.
+### Does the local emulator support encryption at rest?
-### Q: Does the local emulator support encryption at rest?
-A: The emulator is a standalone dev/test tool and does not use the key management services that the managed Azure Cosmos DB service uses. Our recommendation is to enable BitLocker on drives where you are storing sensitive emulator test data. The [emulator supports changing the default data directory](emulator.md) as well as using a well-known location.
+The emulator is a standalone dev/test tool and doesn't use the key management services that the managed Azure Cosmos DB service uses. We recommend that you enable BitLocker on drives where you're storing sensitive emulator test data. The [emulator supports changing the default data directory](emulator.md) and using a well-known location.
## Next steps
-* You can choose to add a second layer of encryption with your own keys, to learn more, see the [customer-managed keys](how-to-setup-cmk.md) article.
+* To learn more about adding a second layer of encryption with your own keys, see the [customer-managed keys](how-to-setup-cmk.md) article.
* For an overview of Azure Cosmos DB security and the latest improvements, see [Azure Cosmos DB database security](database-security.md). * For more information about Microsoft certifications, see the [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
cosmos-db Database Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-security.md
Last updated 11/21/2022
This article discusses database security best practices and key features offered by Azure Cosmos DB to help you prevent, detect, and respond to database breaches.
-## What's new in Azure Cosmos DB security
+## What's new in Azure Cosmos DB security?
-Encryption at rest is now available for documents and backups stored in Azure Cosmos DB in all Azure regions. Encryption at rest is applied automatically for both new and existing customers in these regions. There's no need to configure anything. You get the same great latency, throughput, availability, and functionality as before with the benefit of knowing your data is safe and secure with encryption at rest. Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage using [customer-managed keys or CMK](how-to-setup-cmk.md).
+Encryption at rest is now available for documents and backups stored in Azure Cosmos DB in all Azure regions. Encryption at rest is applied automatically for both new and existing customers in these regions. There's no need to configure anything. You get the same great latency, throughput, availability, and functionality as before with the benefit of knowing your data is safe and secure with encryption at rest. Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage by using [customer-managed keys or CMK](how-to-setup-cmk.md).
-## How do I secure my database
+## How do I secure my database?
-Data security is a shared responsibility between you, the customer, and your database provider. Depending on the database provider you choose, the amount of responsibility you carry can vary. If you choose an on-premises solution, you need to provide everything from end-point protection to physical security of your hardware - which is no easy task. If you choose a PaaS cloud database provider such as Azure Cosmos DB, your area of concern shrinks considerably. The following image, borrowed from Microsoft's [Shared Responsibilities for Cloud Computing](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/) white paper, shows how your responsibility decreases with a PaaS provider like Azure Cosmos DB.
+Data security is a shared responsibility between you, the customer, and your database provider. Depending on the database provider you choose, the amount of responsibility you carry can vary. If you choose an on-premises solution, you need to provide everything from endpoint protection to physical security of your hardware, which is no easy task. If you choose a platform as a service (PaaS) cloud database provider, such as Azure Cosmos DB, your area of concern shrinks considerably. The following image, borrowed from Microsoft's [Shared Responsibilities for Cloud Computing](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/) white paper, shows how your responsibility decreases with a PaaS provider like Azure Cosmos DB.
-The preceding diagram shows high-level cloud security components, but what items do you need to worry about specifically for your database solution? And how can you compare solutions to each other?
+The preceding diagram shows high-level cloud security components, but what items do you need to worry about specifically for your database solution? How can you compare solutions to each other?
We recommend the following checklist of requirements on which to compare database systems: - Network security and firewall settings-- User authentication and fine grained user controls
+- User authentication and fine-grained user controls
- Ability to replicate data globally for regional failures-- Ability to fail over from one data center to another-- Local data replication within a data center
+- Ability to fail over from one datacenter to another
+- Local data replication within a datacenter
- Automatic data backups - Restoration of deleted data from backups - Protect and isolate sensitive data - Monitoring for attacks - Responding to attacks - Ability to geo-fence data to adhere to data governance restrictions-- Physical protection of servers in protected data centers
+- Physical protection of servers in protected datacenters
- Certifications
-And although it may seem obvious, recent [large-scale database breaches](https://thehackernews.com/2017/01/mongodb-database-security.html) remind us of the simple but critical importance of the following requirements:
+Although it might seem obvious, recent [large-scale database breaches](https://thehackernews.com/2017/01/mongodb-database-security.html) remind us of the simple but critical importance of the following requirements:
-- Patched servers that are kept up-to-date
+- Patched servers that are kept up to date
- HTTPS by default/TLS encryption - Administrative accounts with strong passwords
-## How does Azure Cosmos DB secure my database
+## How does Azure Cosmos DB secure my database?
-Let's look back at the preceding list - how many of those security requirements does Azure Cosmos DB provide? Every single one.
+Let's look back at the preceding list. How many of those security requirements does Azure Cosmos DB provide? Every single one.
-Let's dig into each one in detail.
+Let's explore each one in detail.
|Security requirement|Azure Cosmos DB's security approach| |||
-|Network security|Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB supports policy driven IP-based access controls for inbound firewall support. The IP-based access controls are similar to the firewall rules used by traditional database systems. However, they're expanded so that an Azure Cosmos DB database account is only accessible from an approved set of machines or cloud services. Learn more in [Azure Cosmos DB firewall support](how-to-configure-firewall.md) article.<br><br>Azure Cosmos DB enables you to enable a specific IP address (168.61.48.0), an IP range (168.61.48.0/8), and combinations of IPs and ranges. <br><br>All requests originating from machines outside this allowed list are blocked by Azure Cosmos DB. Requests from approved machines and cloud services then must complete the authentication process to be given access control to the resources.<br><br> You can use [virtual network service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure Cosmos DB resources from the general Internet. Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, AzureCosmosDB) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service.|
-|Authorization|Azure Cosmos DB uses hash-based message authentication code (HMAC) for authorization. <br><br>Each request is hashed using the secret account key, and the subsequent base-64 encoded hash is sent with each call to Azure Cosmos DB. To validate the request, the Azure Cosmos DB service uses the correct secret key and properties to generate a hash, then it compares the value with the one in the request. If the two values match, the operation is authorized successfully, and the request is processed. If they don't match, there's an authorization failure and the request is rejected.<br><br>You can use either a [primary key](#primary-keys), or a [resource token](secure-access-to-data.md#resource-tokens) allowing fine-grained access to a resource such as a document.<br><br>Learn more in [Securing access to Azure Cosmos DB resources](secure-access-to-data.md).|
-|Users and permissions|Using the primary key for the account, you can create user resources and permission resources per database. A resource token is associated with a permission in a database and determines whether the user has access (read-write, read-only, or no access) to an application resource in the database. Application resources include container, documents, attachments, stored procedures, triggers, and UDFs. The resource token is then used during authentication to provide or deny access to the resource.<br><br>Learn more in [Securing access to Azure Cosmos DB resources](secure-access-to-data.md).|
-|Active directory integration (Azure role-based access control)| You can also provide or restrict access to the Azure Cosmos DB account, database, container, and offers (throughput) using Access control (IAM) in the Azure portal. IAM provides role-based access control and integrates with Active Directory. You can use built in roles or custom roles for individuals and groups. For more information, see [Active Directory integration](role-based-access-control.md).|
-|Global replication|Azure Cosmos DB offers turnkey global distribution, which enables you to replicate your data to any one of Azure's world-wide datacenters in a turnkey way. Global replication lets you scale globally and provide low-latency access to your data around the world.<br><br>In the context of security, global replication ensures data protection against regional failures.<br><br>Learn more in [Distribute data globally](distribute-data-globally.md).|
-|Regional failovers|If you've replicated your data in more than one data center, Azure Cosmos DB automatically rolls over your operations should a regional data center go offline. You can create a prioritized list of failover regions using the regions in which your data is replicated. <br><br>Learn more in [Regional Failovers in Azure Cosmos DB](high-availability.md).|
-|Local replication|Even within a single data center, Azure Cosmos DB automatically replicates data for high availability giving you the choice of [consistency levels](consistency-levels.md). This replication guarantees a 99.99% [availability SLA](https://azure.microsoft.com/support/legal/sla/cosmos-db) for all single region accounts and all multi-region accounts with relaxed consistency, and 99.999% read availability on all multi-region database accounts.|
-|Automated online backups|Azure Cosmos DB databases are backed up regularly and stored in a geo redundant store. <br><br>Learn more in [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md).|
-|Restore deleted data|The automated online backups can be used to recover data you may have accidentally deleted up to ~30 days after the event. <br><br>Learn more in [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md)|
-|Protect and isolate sensitive data|All data in the regions listed in What's new? is now encrypted at rest.<br><br>Personal data and other confidential data can be isolated to specific container and read-write, or read-only access can be limited to specific users.|
-|Monitor for attacks|By using [audit logging and activity logs](./monitor.md), you can monitor your account for normal and abnormal activity. You can view what operations were performed on your resources. This data includes; who initiated the operation, when the operation occurred, the status of the operation, and much more.|
-|Respond to attacks|Once you have contacted Azure support to report a potential attack, a five-step incident response process is kicked off. The goal of the five-step process is to restore normal service security and operations. The five-step process restores services as quickly as possible after an issue is detected and an investigation is started.<br><br>Learn more in [Microsoft Azure Security Response in the Cloud](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/).|
-|Geo-fencing|Azure Cosmos DB ensures data governance for sovereign regions (for example, Germany, China, US Gov).|
-|Protected facilities|Data in Azure Cosmos DB is stored on SSDs in Azure's protected data centers.<br><br>Learn more in [Microsoft global datacenters](https://www.microsoft.com/en-us/cloud-platform/global-datacenters)|
-|HTTPS/SSL/TLS encryption|All connections to Azure Cosmos DB support HTTPS. Azure Cosmos DB supports TLS levels up to 1.2 (included).<br>It's possible to enforce a minimum TLS level on server-side. To do so, refer to self service guide [Self-serve minimum TLS version enforcement in Azure Cosmos DB](./self-serve-minimum-tls-enforcement.md).|
-|Encryption at rest|All data stored into Azure Cosmos DB is encrypted at rest. Learn more in [Azure Cosmos DB encryption at rest](./database-encryption-at-rest.md)|
-|Patched servers|As a managed database, Azure Cosmos DB eliminates the need to manage and patch servers, that's done for you, automatically.|
-|Administrative accounts with strong passwords|It's hard to believe we even need to mention this requirement, but unlike some of our competitors, it's impossible to have an administrative account with no password in Azure Cosmos DB.<br><br> Security via TLS and HMAC secret based authentication is baked in by default.|
-|Security and data protection certifications| For the most up-to-date list of certifications, see [Azure compliance](https://www.microsoft.com/en-us/trustcenter/compliance/complianceofferings) and the latest [Azure compliance document](https://azure.microsoft.com/mediahandler/files/resourcefiles/microsoft-azure-compliance-offerings/Microsoft%20Azure%20Compliance%20Offerings.pdf) with all Azure certifications including Azure Cosmos DB.
-
-The following screenshot shows how you can use audit logging and activity logs to monitor your account:
+|Network security|Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB supports policy-driven IP-based access controls for inbound firewall support. The IP-based access controls are similar to the firewall rules used by traditional database systems. However, they're expanded so that an Azure Cosmos DB database account is only accessible from an approved set of machines or cloud services. To learn more, see [Azure Cosmos DB firewall support](how-to-configure-firewall.md).<br><br>With Azure Cosmos DB, you can enable a specific IP address (168.61.48.0), an IP range (168.61.48.0/8), and combinations of IPs and ranges. <br><br>Azure Cosmos DB blocks all requests that originate from machines outside this allowed list. Requests from approved machines and cloud services then must complete the authentication process to be given access control to the resources.<br><br> You can use [virtual network service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure Cosmos DB resources from the general internet. Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, `AzureCosmosDB`) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service.|
+|Authorization|Azure Cosmos DB uses hash-based message authentication code (HMAC) for authorization. <br><br>Each request is hashed by using the secret account key, and the subsequent base-64 encoded hash is sent with each call to Azure Cosmos DB. To validate the request, Azure Cosmos DB uses the correct secret key and properties to generate a hash, and then it compares the value with the one in the request. If the two values match, the operation is authorized successfully and the request is processed. If they don't match, there's an authorization failure and the request is rejected.<br><br>You can use either a [primary key](#primary-keys) or a [resource token](secure-access-to-data.md#resource-tokens), allowing fine-grained access to a resource such as a document.<br><br>To learn more, see [Secure access to Azure Cosmos DB resources](secure-access-to-data.md).|
+|Users and permissions|By using the primary key for the account, you can create user resources and permission resources per database. A resource token is associated with a permission in a database and determines whether the user has access (read-write, read-only, or no access) to an application resource in the database. Application resources include containers, documents, attachments, stored procedures, triggers, and UDFs. The resource token is then used during authentication to provide or deny access to the resource.<br><br>To learn more, see [Secure access to Azure Cosmos DB resources](secure-access-to-data.md).|
+|Active Directory integration (Azure role-based access control)| You can also provide or restrict access to the Azure Cosmos DB account, database, container, and offers (throughput) by using access control (IAM) in the Azure portal. IAM provides role-based access control and integrates with Active Directory. You can use built-in roles or custom roles for individuals and groups. To learn more, see [Active Directory integration](role-based-access-control.md).|
+|Global replication|Azure Cosmos DB offers turnkey global distribution, which enables you to replicate your data to any one of Azure's worldwide datacenters in a turnkey way. Global replication lets you scale globally and provide low-latency access to your data around the world.<br><br>In the context of security, global replication ensures data protection against regional failures.<br><br>To learn more, see [Distribute data globally](distribute-data-globally.md).|
+|Regional failovers|If you've replicated your data in more than one datacenter, Azure Cosmos DB automatically rolls over your operations if a regional datacenter goes offline. You can create a prioritized list of failover regions by using the regions in which your data is replicated. <br><br>To learn more, see [Regional failovers in Azure Cosmos DB](high-availability.md).|
+|Local replication|Even within a single datacenter, Azure Cosmos DB automatically replicates data for high availability, giving you the choice of [consistency levels](consistency-levels.md). This replication guarantees a 99.99% [availability SLA](https://azure.microsoft.com/support/legal/sla/cosmos-db) for all single-region accounts and all multi-region accounts with relaxed consistency, and 99.999% read availability on all multi-region database accounts.|
+|Automated online backups|Azure Cosmos DB databases are backed up regularly and stored in a geo-redundant store. <br><br>To learn more, see [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md).|
+|Restore deleted data|You can use the automated online backups to recover data you might have accidentally deleted up to ~30 days after the event. <br><br>To learn more, see [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md).|
+|Protect and isolate sensitive data|All data in the regions listed in What's new? is now encrypted at rest.<br><br>Personal data and other confidential data can be isolated to specific containers, and read-write or read-only access can be limited to specific users.|
+|Monitor for attacks|By using [audit logging and activity logs](./monitor.md), you can monitor your account for normal and abnormal activity. You can view what operations were performed on your resources. This data includes who initiated the operation, when the operation occurred, the status of the operation, and much more.|
+|Respond to attacks|After you've contacted Azure support to report a potential attack, a five-step incident response process begins. The goal is to restore normal service security and operations. The process restores services as quickly as possible after an issue is detected and an investigation is started.<br><br>To learn more, see [Microsoft Azure security response in the cloud](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/).|
+|Geo-fencing|Azure Cosmos DB ensures data governance for sovereign regions (for example, Germany, China, and US Government).|
+|Protected facilities|Data in Azure Cosmos DB is stored on solid-state drives in Azure's protected datacenters.<br><br>To learn more, see [Microsoft global datacenters](https://www.microsoft.com/en-us/cloud-platform/global-datacenters).|
+|HTTPS/SSL/TLS encryption|All connections to Azure Cosmos DB support HTTPS. Azure Cosmos DB supports TLS levels up to and including 1.2.<br>It's possible to enforce a minimum TLS level on the server side. To do so, see the self-service guide [Self-serve minimum TLS version enforcement in Azure Cosmos DB](./self-serve-minimum-tls-enforcement.md).|
+|Encryption at rest|All data stored in Azure Cosmos DB is encrypted at rest. To learn more, see [Azure Cosmos DB encryption at rest](./database-encryption-at-rest.md).|
+|Patched servers|As a managed database, Azure Cosmos DB eliminates the need to manage and patch servers because it's done for you automatically.|
+|Administrative accounts with strong passwords|It's impossible to have an administrative account with no password in Azure Cosmos DB.<br><br> Security via TLS and HMAC secret-based authentication is baked in by default.|
+|Security and data protection certifications| For the most up-to-date list of certifications, see [Azure compliance](https://www.microsoft.com/en-us/trustcenter/compliance/complianceofferings) and the latest [Azure compliance document](https://azure.microsoft.com/mediahandler/files/resourcefiles/microsoft-azure-compliance-offerings/Microsoft%20Azure%20Compliance%20Offerings.pdf) with all Azure certifications, including Azure Cosmos DB.|
+
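To illustrate the authorization row in the preceding table, the following sketch shows roughly how an HMAC-SHA256 signature for a REST request can be built from the account key. The helper name and the placeholder values are illustrative only; the official SDKs construct this `Authorization` header for you, and the exact format is defined in the REST access-control documentation.

```csharp
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

// Illustrative only: the placeholder values below mimic a GET of a single document.
string authHeader = GenerateMasterKeyAuthSignature(
    verb: "GET",
    resourceType: "docs",
    resourceLink: "dbs/MyDatabase/colls/MyContainer/docs/MyDoc",
    utcDate: DateTime.UtcNow.ToString("r"),
    key: "<primary-or-secondary-key>");
Console.WriteLine(authHeader);

static string GenerateMasterKeyAuthSignature(string verb, string resourceType, string resourceLink, string utcDate, string key)
{
    // The signed payload is the lowercase verb, resource type, and date, plus the resource link,
    // each terminated by a newline; the trailing empty line is part of the format.
    string payload = $"{verb.ToLowerInvariant()}\n{resourceType.ToLowerInvariant()}\n{resourceLink}\n{utcDate.ToLowerInvariant()}\n\n";

    using var hmac = new HMACSHA256(Convert.FromBase64String(key));
    string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));

    // The URL-encoded result is sent in the Authorization header of the REST request.
    return WebUtility.UrlEncode($"type=master&ver=1.0&sig={signature}");
}
```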
+The following screenshot shows how you can use audit logging and activity logs to monitor your account.
<a id="primary-keys"></a>
The following screenshot shows how you can use audit logging and activity logs t
Primary/secondary keys provide access to all the administrative resources for the database account. Primary/secondary keys: -- Provide access to accounts, databases, users, and permissions.
+- Provide access to accounts, databases, users, and permissions.
- Can't be used to provide granular access to containers and documents. - Are created during the creation of an account. - Can be regenerated at any time.
-Each account consists of two keys: a primary key and secondary key. The purpose of dual keys is so that you can regenerate, or roll keys, providing continuous access to your account and data.
+Each account consists of two keys: a primary key and a secondary key. The purpose of dual keys is so that you can regenerate, or roll, keys, providing continuous access to your account and data.
-Primary/secondary keys come in two versions: read-write and read-only. The read-only keys only allow read operations on the account, but don't provide access to read permissions resources.
+Primary/secondary keys come in two versions: read-write and read-only. The read-only keys only allow read operations on the account. They don't provide access to read permissions resources.
### <a id="key-rotation"></a> Key rotation and regeneration
-The process of key rotation and regeneration is simple. First, make sure that **your application is consistently using either the primary key or the secondary key** to access your Azure Cosmos DB account. Then, follow the steps outlined below. To monitor your account for key updates and key regeneration, see [monitor key updates with metrics and alerts](monitor-account-key-updates.md) article.
+The process of key rotation and regeneration is simple. First, make sure that *your application is consistently using either the primary key or the secondary key* to access your Azure Cosmos DB account. Then, follow the steps in the next section. To monitor your account for key updates and key regeneration, see [Monitor key updates with metrics and alerts](monitor-account-key-updates.md).
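The procedures in each of the following tabs ask you to validate that the regenerated key works consistently against your account. One lightweight way to do that from application code is a sketch like the following, which assumes the .NET SDK (`Microsoft.Azure.Cosmos`) and placeholder endpoint and key values:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Placeholder values: substitute your account endpoint and the key you just regenerated.
await ValidateKeyAsync("https://<your-account>.documents.azure.com:443/", "<regenerated-key>");

static async Task ValidateKeyAsync(string endpoint, string candidateKey)
{
    // Create a client with the candidate key and issue a lightweight read against the account.
    using CosmosClient client = new CosmosClient(endpoint, candidateKey);
    AccountProperties account = await client.ReadAccountAsync();
    Console.WriteLine($"Successfully authenticated against account '{account.Id}'.");
}
```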
# [API for NoSQL](#tab/sql-api) #### If your application is currently using the primary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
+1. Select **Keys** from the left menu, and then select **Regenerate Secondary Key** from the ellipsis (**...**) on the right of your secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot showing how to regenerate the secondary key in the Azure portal when used with the NoSQL API." border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot showing how to regenerate the primary key in the Azure portal when used with the NoSQL API." border="true":::
#### If your application is currently using the secondary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
+1. Select **Keys** from the left menu, and then select **Regenerate Primary Key** from the ellipsis (**...**) on the right of your primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot that shows how to regenerate the primary key in the Azure portal when used with the NoSQL API." border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot that shows how to regenerate the secondary key in the Azure portal when used with the NoSQL API." border="true":::
# [Azure Cosmos DB for MongoDB](#tab/mongo-api) #### If your application is currently using the primary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Connection String** from the left menu, then select **Regenerate Password** from the ellipsis on the right of your secondary password.
+1. Select **Connection String** from the left menu, and then select **Regenerate Password** from the ellipsis (**...**) on the right of your secondary password.
- :::image type="content" source="./media/database-security/regenerate-secondary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-mongo.png" alt-text="Screenshot showing how to regenerate the secondary key in the Azure portal when used with MongoDB." border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key-mongo.png" alt-text="Screenshot showing how to regenerate the primary key in the Azure portal when used with MongoDB." border="true":::
#### If your application is currently using the secondary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Connection String** from the left menu, then select **Regenerate Password** from the ellipsis on the right of your primary password.
+1. Select **Connection String** from the left menu, and then select **Regenerate Password** from the ellipsis (**...**) on the right of your primary password.
- :::image type="content" source="./media/database-security/regenerate-primary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key-mongo.png" alt-text="Screenshot that shows how to regenerate the primary key in the Azure portal when used with MongoDB." border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-mongo.png" alt-text="Screenshot that shows how to regenerate the secondary key in the Azure portal when used with MongoDB." border="true":::
# [API for Cassandra](#tab/cassandra-api) #### If your application is currently using the primary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Connection String** from the left menu, then select **Regenerate Secondary Read-Write Password** from the ellipsis on the right of your secondary password.
+1. Select **Connection String** from the left menu, and then select **Regenerate Secondary Read-Write Password** from the ellipsis (**...**) on the right of your secondary password.
- :::image type="content" source="./media/database-security/regenerate-secondary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-cassandra.png" alt-text="Screenshot showing how to regenerate the secondary key in the Azure portal when used with Cassandra." border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key-cassandra.png" alt-text="Screenshot showing how to regenerate the primary key in the Azure portal when used with Cassandra." border="true":::
#### If your application is currently using the secondary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Connection String** from the left menu, then select **Regenerate Primary Read-Write Password** from the ellipsis on the right of your primary password.
+1. Select **Connection String** from the left menu, and then select **Regenerate Primary Read-Write Password** from the ellipsis (**...**) on the right of your primary password.
- :::image type="content" source="./media/database-security/regenerate-primary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key-cassandra.png" alt-text="Screenshot that shows how to regenerate the primary key in the Azure portal when used with Cassandra." border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-cassandra.png" alt-text="Screenshot that shows how to regenerate the secondary key in the Azure portal when used with Cassandra." border="true":::
# [API for Gremlin](#tab/gremlin-api) #### If your application is currently using the primary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
+1. Select **Keys** from the left menu, and then select **Regenerate Secondary Key** from the ellipsis (**...**) on the right of your secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-gremlin.png" alt-text="Screenshot showing how to regenerate the secondary key in the Azure portal when used with the Gremlin API." border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key-gremlin.png" alt-text="Screenshot showing how to regenerate the primary key in the Azure portal when used with the Gremlin API." border="true":::
#### If your application is currently using the secondary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
+1. Select **Keys** from the left menu, and then select **Regenerate Primary Key** from the ellipsis (**...**) on the right of your primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key-gremlin.png" alt-text="Screenshot that shows how to regenerate the primary key in the Azure portal when used with the Gremlin API." border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-gremlin.png" alt-text="Screenshot that shows how to regenerate the secondary key in the Azure portal when used with the Gremlin API." border="true":::
# [API for Table](#tab/table-api) #### If your application is currently using the primary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Connection String** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
+1. Select **Connection String** from the left menu, and then select **Regenerate Secondary Key** from the ellipsis (**...**) on the right of your secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-table.png" alt-text="Screenshot showing how to regenerate the secondary key in the Azure portal when used with the Table API." border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key-table.png" alt-text="Screenshot showing how to regenerate the primary key in the Azure portal when used with the Table API." border="true":::
#### If your application is currently using the secondary key
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Connection String** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
+1. Select **Connection String** from the left menu, and then select **Regenerate Primary Key** from the ellipsis (**...**) on the right of your primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key-table.png" alt-text="Screenshot that shows how to regenerate the primary key in the Azure portal when used with the Table API." border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-table.png" alt-text="Screenshot that shows how to regenerate the secondary key in the Azure portal when used with the Table API." border="true":::
## Track the status of key regeneration
-After you rotate or regenerate a key, you can track its status from the Activity log. Use the following steps to track the status:
+After you rotate or regenerate a key, you can track its status from the activity log. Use the following steps to track the status.
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account.
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your Azure Cosmos DB account.
1. Select **Keys** from the left menu. You should see the last key regeneration date below each key.
- :::image type="content" source="./media/database-security/track-key-regeneration-status.png" alt-text="Screenshot of status of key regeneration from Activity log." border="true":::
+ :::image type="content" source="./media/database-security/track-key-regeneration-status.png" alt-text="Screenshot that shows status of key regeneration from the activity log." border="true":::
- Microsoft recommends regenerating the keys at least once every 60 days. If your last regeneration was more than 60 days ago, you will see a warning icon. Also, you could see that your key was not recorded. If this is the case, your account was created before 2022-06-18 and the dates were not registered. However, you should be able to regenerate and see your new last regeneration date for the new key.
+   We recommend that you regenerate the keys at least once every 60 days. If your last regeneration was more than 60 days ago, you see a warning icon. You might also see that your key wasn't recorded. If so, your account was created before June 18, 2022, and the dates weren't registered. However, you should be able to regenerate the key and see the new last regeneration date.
-1. You should see the key regeneration events along with its status, time at which the operation was issued, details of the user who initiated key regeneration. The key generation operation initiates with **Accepted** status, it then changes to **Started** and then to **Succeeded** when the operation completes.
+1. You should see the key regeneration events, along with their status, the time at which each operation was issued, and details of the user who initiated the key regeneration. The key regeneration operation starts with an **Accepted** status. It changes to **Started** and then to **Succeeded** when the operation is finished.
## Next steps
-For more information about primary keys and resource tokens, see [Securing access to Azure Cosmos DB data](secure-access-to-data.md).
-
-For more information about audit logging, see [Azure Cosmos DB diagnostic logging](./monitor.md).
-
-For more information about Microsoft certifications, see [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
+* For more information about primary keys and resource tokens, see [Secure access to Azure Cosmos DB data](secure-access-to-data.md).
+* For more information about audit logging, see [Azure Cosmos DB diagnostic logging](./monitor.md).
+* For more information about Microsoft certifications, see [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
cosmos-db Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/time-to-live.md
Title: Expire data in Azure Cosmos DB with Time to Live
-description: With TTL, Microsoft Azure Cosmos DB provides the ability to have documents automatically purged from the system after a period of time.
+description: With TTL, Microsoft Azure Cosmos DB automatically purges documents from the system after a period of time.
Previously updated : 09/16/2021 Last updated : 11/03/2023 # Time to Live (TTL) in Azure Cosmos DB [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-With **Time to Live** or TTL, Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at a container or at an item level, Azure Cosmos DB will automatically remove these items after the time period, since the time they were last modified. Time to live value is configured in seconds. When you configure TTL, the system will automatically delete the expired items based on the TTL value, without needing a delete operation that is explicitly issued by the client application. The maximum value for TTL is 2147483647 seconds, the approximate equivalent of 24,855 days or 68 years.
+With **Time to Live** or TTL, Azure Cosmos DB deletes items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at a container or at an item level, Azure Cosmos DB automatically removes these items after the time period has elapsed since they were last modified. The time to live value is configured in seconds. When you configure TTL, the system automatically deletes the expired items based on the TTL value, without needing a delete operation explicitly issued by the client application. The maximum value for TTL is 2147483647 seconds, the approximate equivalent of 24,855 days or 68 years.
-Deletion of expired items is a background task that consumes left-over [Request Units](../request-units.md), that is Request Units that haven't been consumed by user requests. Even after the TTL has expired, if the container is overloaded with requests and if there aren't enough RU's available, the data deletion is delayed. Data is deleted once there are enough RUs available to perform the delete operation. Though the data deletion is delayed, data is not returned by any queries (by any API) after the TTL has expired.
+Deletion of expired items is a background task that consumes leftover [Request Units](../request-units.md) that haven't been consumed by user requests. Even after the TTL expires, if the container is overloaded with requests and if there aren't enough RUs available, the data deletion is delayed. Data is deleted when there are enough RUs available to perform the delete operation. Though the data deletion is delayed, data isn't returned by any queries (by any API) after the TTL expires.
> [!NOTE] > This content is related to Azure Cosmos DB transactional store TTL. If you are looking for analytical store TTL, that enables NoETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), please click [here](../analytical-store-introduction.md#analytical-ttl). ## Time to live for containers and items
-The time to live value is set in seconds, and it is interpreted as a delta from the time that an item was last modified. You can set time to live on a container or an item within the container:
+The time to live value is set in seconds, and is interpreted as a delta from the time that an item was last modified. You can set time to live on a container or an item within the container:
1. **Time to Live on a container** (set using `DefaultTimeToLive`):
- - If missing (or set to null), items are not expired automatically.
+   - If missing (or set to null), items don't expire automatically.
- - If present and the value is set to "-1", it is equal to infinity, and items donΓÇÖt expire by default.
+   - If present and the value is set to "-1", it's equal to infinity, and items don't expire by default.
- - If present and the value is set to some *non-zero* number *"n"* ΓÇô items will expire *"n"* seconds after their last modified time.
+   - If present and the value is set to some *nonzero* number *"n"*, items will expire *"n"* seconds after their last modified time.
2. **Time to Live on an item** (set using `ttl`):
- - This Property is applicable only if `DefaultTimeToLive` is present and it is not set to null for the parent container.
+ - This Property is applicable only if `DefaultTimeToLive` is present and it isn't set to null for the parent container.
- If present, it overrides the `DefaultTimeToLive` value of the parent container. ## Time to Live configurations -- If TTL is set to *"n"* on a container, then the items in that container will expire after *n* seconds. If there are items in the same container that have their own time to live, set to -1 (indicating they do not expire) or if some items have overridden the time to live setting with a different number, these items expire based on their own configured TTL value.
+- If TTL is set to *"n"* on a container, then the items in that container expire after *n* seconds. Items in the same container that have their own time to live set to -1 don't expire. Items that override the time to live setting with a different number expire based on their own configured TTL value (see the sketch after this list).
-- If TTL is not set on a container, then the time to live on an item in this container has no effect.
+- If TTL isn't set on a container, then the time to live on an item in this container has no effect.
-- If TTL on a container is set to -1, an item in this container that has the time to live set to n, will expire after n seconds, and remaining items will not expire.
+- If TTL on a container is set to -1, an item in this container that has the time to live set to *n* will expire after *n* seconds, and the remaining items won't expire.
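Assuming the .NET SDK (`Microsoft.Azure.Cosmos`) and placeholder account, database, container, and item names, a minimal sketch of the container-level and item-level settings described above looks like this:

```csharp
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;

// Placeholder endpoint and key values.
CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
Database database = await client.CreateDatabaseIfNotExistsAsync("ttlDemo");

// Container-level TTL: items expire 1000 seconds after their last modification unless overridden.
Container container = await database.CreateContainerIfNotExistsAsync(
    new ContainerProperties("orders", "/id") { DefaultTimeToLive = 1000 });

// Item-level TTL: this item overrides the container default and expires after 2000 seconds.
await container.CreateItemAsync(new Order { Id = "order-1", TimeToLive = 2000 });

public class Order
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    // Serialized as "ttl"; leave it null (omitted) to fall back to the container's default TTL.
    [JsonProperty(PropertyName = "ttl", NullValueHandling = NullValueHandling.Ignore)]
    public int? TimeToLive { get; set; }
}
```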
## Examples This section shows some examples with different time to live values assigned to container and items:
+> [!NOTE]
+> Setting TTL to null on an item isn't supported. The item TTL value must be a nonzero positive integer less than or equal to 2147483647, or -1, which means the item never expires. To use the default TTL on an item, ensure the TTL property isn't present.
+ ### Example 1 TTL on container is set to null (DefaultTimeToLive = null) |TTL on item| Result| |||
-|ttl = null|TTL is disabled. The item will never expire (default).|
+|ttl property missing|TTL is disabled. The item will never expire (default).|
|ttl = -1|TTL is disabled. The item will never expire.| |ttl = 2000|TTL is disabled. The item will never expire.|
TTL on container is set to -1 (DefaultTimeToLive = -1)
|TTL on item| Result| |||
-|ttl = null|TTL is enabled. The item will never expire (default).|
+|ttl property missing|TTL is enabled. The item will never expire (default).|
|ttl = -1|TTL is enabled. The item will never expire.| |ttl = 2000|TTL is enabled. The item will expire after 2000 seconds.|
TTL on container is set to 1000 (DefaultTimeToLive = 1000)
|TTL on item| Result| |||
-|ttl = null|TTL is enabled. The item will expire after 1000 seconds (default).|
+|ttl property missing|TTL is enabled. The item will expire after 1000 seconds (default).|
|ttl = -1|TTL is enabled. The item will never expire.| |ttl = 2000|TTL is enabled. The item will expire after 2000 seconds.|
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
Azure Cosmos DB provides three ways to control access to your data.
| Access control type | Characteristics | ||| | [Primary/secondary keys](#primary-keys) | Shared secret allowing any management or data operation. It comes in both read-write and read-only variants. |
-| [Role-based access control](#rbac) | Fine-grained, role-based permission model using Microsoft Entra identities for authentication. |
+| [Role-based access control (RBAC)](#rbac) | Fine-grained, role-based permission model using Microsoft Entra identities for authentication. |
| [Resource tokens](#resource-tokens)| Fine-grained permission model based on native Azure Cosmos DB users and permissions. | ## <a id="primary-keys"></a> Primary/secondary keys
-Primary/secondary keys provide access to all the administrative resources for the database account. Each account consists of two keys: a primary key and secondary key. The purpose of dual keys is to let you regenerate, or roll keys, providing continuous access to your account and data. To learn more about primary/secondary keys, see the [Database security](database-security.md#primary-keys) article.
+Primary/secondary keys provide access to all the administrative resources for the database account. Each account consists of two keys: a primary key and secondary key. The purpose of dual keys is to let you regenerate, or roll, keys, providing continuous access to your account and data. To learn more about primary/secondary keys, see [Overview of database security in Azure Cosmos DB](database-security.md#primary-keys).
-To see your account keys, navigate to Keys from the left menu. Then, click on the ΓÇ£viewΓÇ¥ icon at the right of each key. Click on the copy button to copy the selected key. You can hide them afterwards by clicking the same icon per key, which will be updated as a ΓÇ£hideΓÇ¥ button.
+To see your account keys, on the left menu select **Keys**. Then, select the **View** icon at the right of each key. Select the **Copy** button to copy the selected key. You can hide them afterwards by selecting the same icon per key, which updates the icon to a **Hide** button.
### <a id="key-rotation"></a> Key rotation and regeneration > [!NOTE] > The following section describes the steps to rotate and regenerate keys for the API for NoSQL. If you're using a different API, see the [API for MongoDB](database-security.md?tabs=mongo-api#key-rotation), [API for Cassandra](database-security.md?tabs=cassandra-api#key-rotation), [API for Gremlin](database-security.md?tabs=gremlin-api#key-rotation), or [API for Table](database-security.md?tabs=table-api#key-rotation) sections. >
-> To monitor your account for key updates and key regeneration, see [monitor key updates with metrics and alerts](monitor-account-key-updates.md) article.
+> To monitor your account for key updates and key regeneration, see [Monitor your Azure Cosmos DB account for key updates and key regeneration](monitor-account-key-updates.md).
-The process of key rotation and regeneration is simple. First, make sure that **your application is consistently using either the primary key or the secondary key** to access your Azure Cosmos DB account. Then, follow the steps outlined below.
+The process of key rotation and regeneration is simple. First, make sure that *your application is consistently using either the primary key or the secondary key* to access your Azure Cosmos DB account. Then, follow the steps in the next section.
# [If your application is currently using the primary key](#tab/using-primary-key)
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
+1. Select **Keys** on the left menu and then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+    :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot that shows how to regenerate the secondary key in the Azure portal." border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+    :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot that shows how to regenerate the primary key in the Azure portal." border="true":::
# [If your application is currently using the secondary key](#tab/using-secondary-key)
-1. Navigate to your Azure Cosmos DB account on the Azure portal.
+1. Go to your Azure Cosmos DB account in the Azure portal.
-1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
+1. Select **Keys** on the left menu and then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
+    :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot that shows how to regenerate the primary key in the Azure portal." border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
+    :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot that shows how to regenerate the secondary key in the Azure portal." border="true":::
### Code sample to use a primary key
-The following code sample illustrates how to use an Azure Cosmos DB account endpoint and primary key to instantiate a CosmosClient:
+The following code sample illustrates how to use an Azure Cosmos DB account endpoint and primary key to instantiate a `CosmosClient`:
```csharp // Read the Azure Cosmos DB endpointUrl and authorization keys from config.
CosmosClient client = new CosmosClient(endpointUrl, authorizationKey);
## <a id="rbac"></a> Role-based access control
-Azure Cosmos DB exposes a built-in role-based access control (RBAC) system that lets you:
+Azure Cosmos DB exposes a built-in RBAC system that lets you:
- Authenticate your data requests with a Microsoft Entra identity. - Authorize your data requests with a fine-grained, role-based permission model. Azure Cosmos DB RBAC is the ideal access control method in situations where: -- You don't want to use a shared secret like the primary key, and prefer to rely on a token-based authentication mechanism,-- You want to use Microsoft Entra identities to authenticate your requests,-- You need a fine-grained permission model to tightly restrict which database operations your identities are allowed to perform,-- You wish to materialize your access control policies as "roles" that you can assign to multiple identities.
+- You don't want to use a shared secret like the primary key and prefer to rely on a token-based authentication mechanism.
+- You want to use Microsoft Entra identities to authenticate your requests.
+- You need a fine-grained permission model to tightly restrict which database operations your identities are allowed to perform.
+- You want to materialize your access control policies as "roles" that you can assign to multiple identities.
-See [Configure role-based access control for your Azure Cosmos DB account](how-to-setup-rbac.md) to learn more about Azure Cosmos DB RBAC.
+To learn more about Azure Cosmos DB RBAC, see [Configure role-based access control for your Azure Cosmos DB account](how-to-setup-rbac.md).
For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md).
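For the API for NoSQL, a minimal sketch of authenticating with a Microsoft Entra identity instead of an account key might look like the following. It assumes the `Azure.Identity` package and an existing data-plane role assignment for the identity that `DefaultAzureCredential` resolves; the endpoint and names are placeholders.

```csharp
using Azure.Identity;
using Microsoft.Azure.Cosmos;

// Connect with a Microsoft Entra identity instead of an account key.
// DefaultAzureCredential resolves an identity from the environment (managed identity, Azure CLI, and so on);
// that identity must hold an Azure Cosmos DB data-plane role assignment on the account.
CosmosClient client = new CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    new DefaultAzureCredential());

Container container = client.GetContainer("<database-id>", "<container-id>");
```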
For information and sample code to configure RBAC for the Azure Cosmos DB for Mo
Resource tokens provide access to the application resources within a database. Resource tokens: -- Provide access to specific containers, partition keys, documents, attachments.
+- Provide access to specific containers, partition keys, documents, and attachments.
- Are created when a [user](#users) is granted [permissions](#permissions) to a specific resource.-- Are recreated when a permission resource is acted upon on by POST, GET, or PUT call.
+- Are re-created when a permission resource is acted upon by a POST, GET, or PUT call.
- Use a hash resource token specifically constructed for the user, resource, and permission.-- Are time bound with a customizable validity period. The default valid time span is one hour. Token lifetime, however, may be explicitly specified, up to a maximum of 24 hours.
+- Are time bound with a customizable validity period. The default valid time span is one hour. Token lifetime, however, might be explicitly specified, up to a maximum of 24 hours.
- Provide a safe alternative to giving out the primary key.-- Enable clients to read, write, and delete resources in the Azure Cosmos DB account according to the permissions they've been granted.
+- Enable clients to read, write, and delete resources in the Azure Cosmos DB account according to the permissions they were granted.
-You can use a resource token (by creating Azure Cosmos DB users and permissions) when you want to provide access to resources in your Azure Cosmos DB account to a client that cannot be trusted with the primary key.
+You can use a resource token (by creating Azure Cosmos DB users and permissions) when you want to provide access to resources in your Azure Cosmos DB account to a client that can't be trusted with the primary key.
-Azure Cosmos DB resource tokens provide a safe alternative that enables clients to read, write, and delete resources in your Azure Cosmos DB account according to the permissions you've granted, and without need for either a primary or read only key.
+Azure Cosmos DB resource tokens provide a safe alternative that enables clients to read, write, and delete resources in your Azure Cosmos DB account according to the permissions you've granted, without the need for either a primary or read-only key.
-Here is a typical design pattern whereby resource tokens may be requested, generated, and delivered to clients:
+Here's a typical design pattern whereby resource tokens can be requested, generated, and delivered to clients:
1. A mid-tier service is set up to serve a mobile application to share user photos.
-2. The mid-tier service possesses the primary key of the Azure Cosmos DB account.
-3. The photo app is installed on end-user mobile devices.
-4. On login, the photo app establishes the identity of the user with the mid-tier service. This mechanism of identity establishment is purely up to the application.
-5. Once the identity is established, the mid-tier service requests permissions based on the identity.
-6. The mid-tier service sends a resource token back to the phone app.
-7. The phone app can continue to use the resource token to directly access Azure Cosmos DB resources with the permissions defined by the resource token and for the interval allowed by the resource token.
-8. When the resource token expires, subsequent requests receive a 401 unauthorized exception. At this point, the phone app re-establishes the identity and requests a new resource token.
+1. The mid-tier service possesses the primary key of the Azure Cosmos DB account.
+1. The photo app is installed on users' mobile devices.
+1. On sign-in, the photo app establishes the identity of the user with the mid-tier service. This mechanism of identity establishment is purely up to the application.
+1. After the identity is established, the mid-tier service requests permissions based on the identity.
+1. The mid-tier service sends a resource token back to the phone app.
+1. The phone app can continue to use the resource token to directly access Azure Cosmos DB resources with the permissions defined by the resource token and for the interval allowed by the resource token.
+1. When the resource token expires, subsequent requests receive a 401 unauthorized exception. At this point, the phone app reestablishes the identity and requests a new resource token.
- :::image type="content" source="./media/secure-access-to-data/resourcekeyworkflow.png" alt-text="Azure Cosmos DB resource tokens workflow" border="false":::
+ :::image type="content" source="./media/secure-access-to-data/resourcekeyworkflow.png" alt-text="Screenshot that shows an Azure Cosmos DB resource tokens workflow." border="false":::
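The token expiry step (step 8) can be sketched from the client's side as follows. This is a minimal illustration only: the `GetResourceTokenFromMidTierAsync` helper, the `PhotosDB` and `Photos` names, and the partition key value are assumptions, not part of this article's sample app.

```csharp
// Requires: using System.Net; using Microsoft.Azure.Cosmos;
// Step 8 sketch: a request made with an expired resource token fails with 401 (Unauthorized).
// The app then re-establishes identity with the mid-tier service and rebuilds its client.
try
{
    Container photos = client.GetContainer("PhotosDB", "Photos");
    await photos.ReadItemAsync<dynamic>("photo1", new PartitionKey("user1"));
}
catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.Unauthorized)
{
    string freshToken = await GetResourceTokenFromMidTierAsync(); // hypothetical call back to the mid-tier service
    client = new CosmosClient(accountEndpoint, freshToken);      // re-create the client with the new resource token
}
```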
-Resource token generation and management are handled by the native Azure Cosmos DB client libraries; however, if you use REST you must construct the request/authentication headers. For more information on creating authentication headers for REST, see [Access Control on Azure Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources) or the source code for our [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos/src/Authorization/AuthorizationHelper.cs) or [Node.js SDK](https://github.com/Azure/azure-cosmos-js/blob/master/src/auth.ts).
+Resource token generation and management are handled by the native Azure Cosmos DB client libraries. However, if you use REST, you must construct the request/authentication headers. For more information on creating authentication headers for REST, see [Access control on Azure Cosmos DB resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources) or the source code for our [.NET SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos/src/Authorization/AuthorizationHelper.cs) or [Node.js SDK](https://github.com/Azure/azure-cosmos-js/blob/master/src/auth.ts).
-For an example of a middle tier service used to generate or broker resource tokens, see the [ResourceTokenBroker app](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/xamarin/UserItems/ResourceTokenBroker/ResourceTokenBroker/Controllers).
+For an example of a middle-tier service used to generate or broker resource tokens, see the [ResourceTokenBroker app](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/xamarin/UserItems/ResourceTokenBroker/ResourceTokenBroker/Controllers).
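A rough outline of the broker logic (steps 4 through 6 of the pattern above) might look like the following sketch. The database name, user ID, permission ID, and token lifetime are assumptions for illustration; the linked ResourceTokenBroker app remains the reference implementation.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ResourceTokenBroker
{
    // Minimal mid-tier sketch: the service holds the primary key, maps the signed-in identity to an
    // Azure Cosmos DB user, reads that user's permission, and returns only the resource token to the app.
    public static async Task<string> GetResourceTokenAsync(CosmosClient serviceClient, string appUserId)
    {
        Database database = serviceClient.GetDatabase("PhotosDB");   // assumed database name
        User user = database.GetUser(appUserId);

        // Reading the permission returns a token that is valid for the requested lifetime (one hour here).
        PermissionResponse permission = await user
            .GetPermission("photosPermission")                       // assumed permission ID
            .ReadAsync(tokenExpiryInSeconds: 3600);

        return permission.Resource.Token;
    }
}
```

The phone app can then pass the returned token as the key when it constructs its own `CosmosClient`.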
### Users<a id="users"></a>
-Azure Cosmos DB users are associated with an Azure Cosmos DB database. Each database can contain zero or more Azure Cosmos DB users. The following code sample shows how to create an Azure Cosmos DB user using the [Azure Cosmos DB .NET SDK v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement).
+Azure Cosmos DB users are associated with an Azure Cosmos DB database. Each database can contain zero or more Azure Cosmos DB users. The following code sample shows how to create an Azure Cosmos DB user by using the [Azure Cosmos DB .NET SDK v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement).
```csharp // Create a user.
User user = await database.CreateUserAsync("User 1");
``` > [!NOTE]
-> Each Azure Cosmos DB user has a ReadAsync() method that can be used to retrieve the list of [permissions](#permissions) associated with the user.
+> Each Azure Cosmos DB user has a `ReadAsync()` method that you can use to retrieve the list of [permissions](#permissions) associated with the user.
### Permissions<a id="permissions"></a>
-A permission resource is associated with a user and assigned to a specific resource. Each user may contain zero or more permissions. A permission resource provides access to a security token that the user needs when trying to access a specific container or data in a specific partition key. There are two available access levels that may be provided by a permission resource:
+A permission resource is associated with a user and assigned to a specific resource. Each user can contain zero or more permissions. A permission resource provides access to a security token that the user needs when trying to access a specific container or data in a specific partition key. There are two available access levels that can be provided by a permission resource:
-- All: The user has full permission on the resource.-- Read: The user can only read the contents of the resource but cannot perform write, update, or delete operations on the resource.
+- **All**: The user has full permission on the resource.
+- **Read**: The user can only read the contents of the resource but can't perform write, update, or delete operations on the resource.
> [!NOTE]
-> In order to run stored procedures the user must have the All permission on the container in which the stored procedure will be run.
+> To run stored procedures, the user must have the **All** permission on the container in which the stored procedure will be run.
If you enable the [diagnostic logs on data-plane requests](monitor-resource-logs.md), the following two properties corresponding to the permission are logged:
-* **resourceTokenPermissionId** - This property indicates the resource token permission ID that you have specified.
+* **resourceTokenPermissionId**: This property indicates the resource token permission ID that you specified.
-* **resourceTokenPermissionMode** - This property indicates the permission mode that you have set when creating the resource token. The permission mode can have values such as "all" or "read".
+* **resourceTokenPermissionMode**: This property indicates the permission mode that you set when you created the resource token. The permission mode can have values such as **All** or **Read**.
#### Code sample to create permission
-The following code sample shows how to create a permission resource, read the resource token of the permission resource, and associate the permissions with the [user](#users) created above.
+The following code sample shows how to create a permission resource, read the resource token of the permission resource, and associate the permissions with the [user](#users) you just created.
```csharp // Create a permission on a container and specific partition key value
await user.CreatePermissionAsync(
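    // Illustrative continuation: the argument values below are assumptions, not this article's exact sample.
    // PermissionMode.Read grants read-only access; PermissionMode.All also allows writes and deletes.
    new PermissionProperties(
        "permissionUser1Container",      // ID of the permission resource (assumed)
        PermissionMode.Read,
        container,                       // an existing Container reference in this database
        new PartitionKey("user1")));     // scopes the token to a single partition key value (assumed)
// The returned PermissionResponse exposes the resource token through its Resource.Token property.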
#### Code sample to read permission for user
-The following code snippet shows how to retrieve the permission associated with the user created above and instantiate a new CosmosClient on behalf of the user, scoped to a single partition key.
+The following code snippet shows how to retrieve the permission associated with the user you created and instantiate a new `CosmosClient` for the user, scoped to a single partition key.
```csharp // Read a permission, create user client session.
CosmosClient client = new CosmosClient(accountEndpoint: "MyEndpoint", authKeyOrR
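// Illustrative note (assumed identifiers): the value passed as authKeyOrResourceToken is the token read
// back from the permission, for example:
//   PermissionResponse permissionResponse = await user.GetPermission("permissionUser1Container").ReadAsync();
//   string resourceToken = permissionResponse.Resource.Token;
// The resulting client can only perform the operations granted by that permission, on the scoped
// container and partition key.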
| Subject | RBAC | Resource tokens | |--|--|--|
-| Authentication | With Microsoft Entra ID. | Based on the native Azure Cosmos DB users<br>Integrating resource tokens with Microsoft Entra ID requires extra work to bridge Microsoft Entra identities and Azure Cosmos DB users. |
-| Authorization | Role-based: role definitions map allowed actions and can be assigned to multiple identities. | Permission-based: for each Azure Cosmos DB user, you need to assign data access permissions. |
-| Token scope | A Microsoft Entra token carries the identity of the requester. This identity is matched against all assigned role definitions to perform authorization. | A resource token carries the permission granted to a specific Azure Cosmos DB user on a specific Azure Cosmos DB resource. Authorization requests on different resources may require different tokens. |
-| Token refresh | The Microsoft Entra token is automatically refreshed by the Azure Cosmos DB SDKs when it expires. | Resource token refresh is not supported. When a resource token expires, a new one needs to be issued. |
+| Authentication | With Microsoft Entra ID. | Based on the native Azure Cosmos DB users.<br>Integrating resource tokens with Microsoft Entra ID requires extra work to bridge Microsoft Entra identities and Azure Cosmos DB users. |
+| Authorization | Role-based: Role definitions map allowed actions and can be assigned to multiple identities. | Permission-based: For each Azure Cosmos DB user, you need to assign data access permissions. |
+| Token scope | A Microsoft Entra token carries the identity of the requester. This identity is matched against all assigned role definitions to perform authorization. | A resource token carries the permission granted to a specific Azure Cosmos DB user on a specific Azure Cosmos DB resource. Authorization requests on different resources might require different tokens. |
+| Token refresh | The Microsoft Entra token is automatically refreshed by the Azure Cosmos DB SDKs when it expires. | Resource token refresh isn't supported. When a resource token expires, a new one needs to be issued. |
## Add users and assign roles To add Azure Cosmos DB account reader access to your user account, have a subscription owner perform the following steps in the Azure portal.
-1. Open the Azure portal, and select your Azure Cosmos DB account.
+1. Open the Azure portal and select your Azure Cosmos DB account.
1. Select **Access control (IAM)**. 1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. Assign the following role. For detailed steps, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.md).
| Setting | Value | | | |
- | Role | Cosmos DB Account Reader |
- | Assign access to | User, group, or service principal |
- | Members | The user, group, or application in your directory to which you wish to grant access. |
+ | Role | Cosmos DB Account Reader. |
+ | Assign access to | User, group, or service principal. |
+ | Members | The user, group, or application in your directory to which you want to grant access. |
- ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows the Add role assignment page in the Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
The entity can now read Azure Cosmos DB resources. ## Delete or export user data
-As a database service, Azure Cosmos DB enables you to search, select, modify and delete any data located in your database or containers. It is however your responsibility to use the provided APIs and define logic required to find and erase any personal data if needed. Each multi-model API (SQL, MongoDB, Gremlin, Cassandra, Table) provides different language SDKs that contain methods to search and delete data based on custom predicates. You can also enable the [time to live (TTL)](time-to-live.md) feature to delete data automatically after a specified period, without incurring any additional cost.
+As a database service, Azure Cosmos DB enables you to search, select, modify, and delete any data located in your database or containers. It's your responsibility to use the provided APIs and define logic required to find and erase any personal data if needed.
+
+Each multi-model API (SQL, MongoDB, Gremlin, Cassandra, or Table) provides different language SDKs that contain methods to search and delete data based on custom predicates. You can also enable the [time to live (TTL)](time-to-live.md) feature to delete data automatically after a specified period, without incurring any more cost.
[!INCLUDE [GDPR-related guidance](../../includes/gdpr-dsr-and-stp-note.md)] ## Next steps -- To learn more about Azure Cosmos DB database security, see [Azure Cosmos DB Database security](database-security.md).-- To learn how to construct Azure Cosmos DB authorization tokens, see [Access Control on Azure Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources).-- For user management samples with users and permissions, see [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs)-- For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md)
+- To learn more about Azure Cosmos DB database security, see [Azure Cosmos DB database security](database-security.md).
+- To learn how to construct Azure Cosmos DB authorization tokens, see [Access control on Azure Cosmos DB resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources).
+- For user management samples with users and permissions, see [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs).
+- For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md).
defender-for-cloud Agentless Container Registry Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md
# Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management Vulnerability assessment for Azure, powered by Microsoft Defender Vulnerability Management (MDVM), is an out-of-box solution that empowers security teams to easily discover and remediate vulnerabilities in Linux container images, with zero configuration for onboarding, and without deployment of any agents.
-r
+ > [!NOTE] > This feature supports scanning of images in the Azure Container Registry (ACR) only. Images that are stored in other container registries should be imported into ACR for coverage. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images).
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Attack path display name | Attack path description | |--|--|
-| Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM (Preview) | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| SQL on VM has a user account with commonly used username and allows code execution on the VM (Preview) | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
-| SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
-| Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+| Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
+| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
+| SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
+| SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
+| Managed database with excessive internet exposure allows basic (local user/password) authentication | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
| Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. | | Internet exposed managed database with sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from specific IPs or IP ranges and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. | | Internet exposed VM has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.|
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Attack path display name | Attack path description | |--|--| | Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
-|Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute. <br/> Prerequisite:ΓÇ»[Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
-|Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite:ΓÇ»[Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-|SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite:ΓÇ»[Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) |SQL on EC2 instance [EC2Name] has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite:ΓÇ»[Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+|Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute | Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
+|Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
+|SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
+| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities |SQL on EC2 instance [EC2Name] has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
+| Managed database with excessive internet exposure allows basic (local user/password) authentication | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
| Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks.| |Internet exposed managed database with sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from specific IPs or IP ranges and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. | |Internet exposed EC2 instance has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.|
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
To connect your AWS to Defender for Cloud by using a native connector:
:::image type="content" source="media/quickstart-onboard-aws/add-aws-account-details.png" alt-text="Screenshot that shows the tab for entering account details for an AWS account." lightbox="media/quickstart-onboard-aws/add-aws-account-details.png":::
- Optionally, select **Management account** to create a connector to a management account. Connectors are created for each member account discovered under the provided management account. Auto-provisioning is enabled for all of the newly onboarded accounts.
+ (Optional) Select **Management account** to create a connector to a management account. Connectors are then created for each member account discovered under the provided management account. Auto-provisioning is also enabled for all of the newly onboarded accounts.
+
+ (Optional) Use the AWS regions dropdown menu to select specific AWS regions to be scanned. All regions are selected by default.
## Select Defender plans
In this section of the wizard, you select the Defender for Cloud plans that you
1. By default, the **Databases** plan is set to **On**. This setting is necessary to extend coverage of Defender for SQL to AWS EC2 and RDS Custom for SQL Server.
- Optionally, select **Configure** to edit the configuration as required. We recommend that you leave it set to the default configuration.
+ (Optional) Select **Configure** to edit the configuration as required. We recommend that you leave it set to the default configuration.
-1. Select **Next: Configure access**.
+1. Select **Configure access** and then make the following selections:
-1. On the **Configure access** tab, select **Click to download the CloudFormation template** to download the CloudFormation template.
-
- :::image type="content" source="media/quickstart-onboard-aws/download-cloudformation-template.png" alt-text="Screenshot that shows the button to download the CloudFormation template." lightbox="media/quickstart-onboard-aws/download-cloudformation-template.png":::
-
-1. Continue to configure access by making the following selections:
-
- a. Choose a deployment type:
+ a. Select a deployment type:
- **Default access**: Allows Defender for Cloud to scan your resources and automatically include future capabilities. - **Least privilege access**: Grants Defender for Cloud access only to the current permissions needed for the selected plans. If you select the least privileged permissions, you'll receive notifications on any new roles and permissions that are required to get full functionality for connector health.
- b. Choose a deployment method: **AWS CloudFormation** or **Terraform**.
+ b. Select a deployment method: **AWS CloudFormation** or **Terraform**.
- :::image type="content" source="media/quickstart-onboard-aws/aws-configure-access.png" alt-text="Screenshot that shows deployment options and instructions for configuring access.":::
+ :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-configure-access.png" alt-text="Screenshot that shows deployment options and instructions for configuring access." lightbox="media/quickstart-onboard-aws/add-aws-account-configure-access.png":::
> [!NOTE] > If you select **Management account** to create a connector to a management account, then the tab to onboard with Terraform is not visible in the UI, but you can still onboard using Terraform, similar to what's covered at [Onboarding your AWS/GCP environment to Microsoft Defender for Cloud with Terraform - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/onboarding-your-aws-gcp-environment-to-microsoft-defender-for/ba-p/3798664).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To connect your GCP project to Defender for Cloud by using a native connector:
1. Select **Add environment** > **Google Cloud Platform**.
- :::image type="content" source="media/quickstart-onboard-gcp/google-cloud.png" alt-text="Screenshot that shows selections for adding Google Cloud Platform as a connector." lightbox="media/quickstart-onboard-gcp/google-cloud.png":::
+ :::image type="content" source="media/quickstart-onboard-gcp/add-gcp-project-environment-settings.png" alt-text="Screenshot that shows selections for adding Google Cloud Platform as a connector." lightbox="media/quickstart-onboard-gcp/add-gcp-project-environment-settings.png":::
1. Enter all relevant information.
- :::image type="content" source="media/quickstart-onboard-gcp/create-connector.png" alt-text="Screenshot of the pane for creating a GCP connector." lightbox="media/quickstart-onboard-gcp/create-connector.png":::
+ :::image type="content" source="media/quickstart-onboard-gcp/add-gcp-project-details.png" alt-text="Screenshot of the pane for creating a GCP connector." lightbox="media/quickstart-onboard-gcp/add-gcp-project-details.png":::
Optionally, if you select **Organization**, a management project and an organization custom role are created on your GCP project for the onboarding process. Autoprovisioning is enabled for the onboarding of new projects.
In this section of the wizard, you select the Defender for Cloud plans that you
1. For the plans that you want to connect, turn the toggle to **On**. By default, all necessary prerequisites and components are provisioned. [Learn how to configure each plan](#optional-configure-selected-plans).
+ :::image type="content" source="media/quickstart-onboard-gcp/add-gcp-project-plans-selection.png" alt-text="Screenshot that shows the tab for selecting plans for a GCP project." lightbox="media/quickstart-onboard-gcp/add-gcp-project-plans-selection.png":::
+ If you choose to turn on the Microsoft Defender for Containers plan, ensure that you meet the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for it.
-1. Select **Next: Configure access**.
+1. Select **Configure access** and make the following selections:
- 1. Choose the deployment type:
+ 1. Select the deployment type:
- **Default access**: Allows Defender for Cloud to scan your resources and automatically include future capabilities. - **Least privilege access**: Grants Defender for Cloud access to only the current permissions needed for the selected plans. If you select the least privileged permissions, you'll receive notifications on any new roles and permissions that are required to get full functionality for connector health.
- 1. Choose the deployment method: **GCP Cloud Shell** or **Terraform**.
-
-1. Select **Copy**.
+ 1. Select the deployment method: **GCP Cloud Shell** or **Terraform**.
- :::image type="content" source="media/quickstart-onboard-gcp/copy-button.png" alt-text="Screenshot that shows the location of the copy button.":::
+ :::image type="content" source="media/quickstart-onboard-gcp/add-gcp-project-configure-access.png" alt-text="Screenshot that shows deployment options and instructions for configuring access.":::
- > [!NOTE]
- > For the discovery of GCP resources and for the authentication process, you must enable the following APIs: `iam.googleapis.com`, `sts.googleapis.com`, `cloudresourcemanager.googleapis.com`, `iamcredentials.googleapis.com`, and `compute.googleapis.com`. If you don't enable these APIs, we'll enable them during the onboarding process by running the GCloud script.
+1. Follow the on-screen instructions for the selected deployment method to complete the required dependencies on GCP.
-1. Select **GCP Cloud Shell >**. The GCP Cloud Shell opens.
+1. Select **Next: Review and generate**.
-1. Paste the script into the GCP Cloud Shell terminal and run it.
+1. Select **Create**.
-1. Ensure that you created the following resources for Microsoft Defender Cloud Security Posture Management (CSPM) and Defender for Containers:
-
- | CSPM | Defender for Containers|
- |--|--|
- | CSPM service account reader role <br><br> Microsoft Defender for Cloud identity federation <br><br> CSPM identity pool <br><br>Microsoft Defender for Servers service account (when the servers plan is enabled) <br><br>*Azure Arc for servers onboarding* service account (when Azure Arc for servers autoprovisioning is enabled) | Microsoft Defender for Containers service account role <br><br> Microsoft Defender Data Collector service account role <br><br> Microsoft Defender for Cloud identity pool |
+ > [!NOTE]
+ > The following APIs must be enabled in order to discover your GCP resources and allow the authentication process to occur:
+ > - `iam.googleapis.com`
+ > - `sts.googleapis.com`
+ > - `cloudresourcemanager.googleapis.com`
+ > - `iamcredentials.googleapis.com`
+ > - `compute.googleapis.com`
+ > If you don't enable these APIs at this time, you can enable them during the onboarding process by running the GCloud script.
After you create the connector, a scan starts on your GCP environment. New recommendations appear in Defender for Cloud after up to 6 hours. If you enabled autoprovisioning, Azure Arc and any enabled extensions are installed automatically for each newly detected resource.
Connecting your GCP project is part of the multicloud experience available in Mi
- [Protect all of your resources with Defender for Cloud](enable-all-plans.md). - Set up your [on-premises machines](quickstart-onboard-machines.md) and [AWS account](quickstart-onboard-aws.md). - [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector).-- Get answers to [common questions](faq-general.yml) about connecting your GCP project.
+- Get answers to [common questions](faq-general.yml) about connecting your GCP project.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 10/30/2023 Last updated : 11/05/2023 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
|Date |Update | |-|-|
-| October 30 | [Changing adaptive application control’s security alert's severity](#changing-adaptive-application-controls-security-alerts-severity)
+| October 30 | [Changing adaptive application control’s security alert's severity](#changing-adaptive-application-controls-security-alerts-severity) |
| October 25 | [Offline Azure API Management revisions removed from Defender for APIs](#offline-azure-api-management-revisions-removed-from-defender-for-apis) | | October 19 |[DevOps security posture management recommendations available in public preview](#devops-security-posture-management-recommendations-available-in-public-preview) | October 18 | [Releasing CIS Azure Foundations Benchmark v2.0.0 in Regulatory Compliance dashboard](#releasing-cis-azure-foundations-benchmark-v200-in-regulatory-compliance-dashboard) |
-## Changing adaptive application controls security alert's severity
+### Changing adaptive application controls security alert's severity
Announcement date: October 30, 2023
To keep viewing this alert in the “Security alerts” blade in the Microsoft D
:::image type="content" source="media/release-notes/add-informational-severity.png" alt-text="Screenshot that shows you where to add the informational severity for alerts." lightbox="media/release-notes/add-informational-severity.png":::
-## Offline Azure API Management revisions removed from Defender for APIs
+### Offline Azure API Management revisions removed from Defender for APIs
October 25, 2023 Defender for APIs has updated its support for Azure API Management API revisions. Offline revisions no longer appear in the onboarded Defender for APIs inventory and no longer appear to be onboarded to Defender for APIs. Offline revisions don't allow any traffic to be sent to them and pose no risk from a security perspective.
-## DevOps security posture management recommendations available in public preview
+### DevOps security posture management recommendations available in public preview
October 19, 2023
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
description: In this quickstart, you create and test a private DNS resolver in A
Previously updated : 10/20/2023 Last updated : 11/03/2023
Azure DNS Private Resolver enables you to query Azure DNS private zones from an
## In this article: - Two VNets are created: **myvnet** and **myvnet2**.-- An Azure DNS Private Resolver is created in the first VNet with an inbound endpoint at **10.10.0.4**.
+- An Azure DNS Private Resolver is created in the first VNet with an inbound endpoint at **10.0.0.4**.
- A DNS forwarding ruleset is created for the private resolver. - The DNS forwarding ruleset is linked to the second VNet. - Example rules are added to the DNS forwarding ruleset.
An Azure subscription is required.
Before you can use **Microsoft.Network** services with your Azure subscription, you must register the **Microsoft.Network** namespace:
-1. Select the **Subscription** blade in the Azure portal, and then choose your subscription by clicking on it.
+1. Select the **Subscription** blade in the Azure portal, and then choose your subscription by selecting it.
2. Under **Settings** select **Resource Providers**. 3. Select **Microsoft.Network** and then select **Register**.
First, create or choose an existing resource group to host the resources for you
Next, add a virtual network to the resource group that you created, and configure subnets.
-1. Select the resource group you created, select **Create**, select **Networking** from the list of categories, and then next to **Virtual network**, select **Create**.
-2. On the **Basics** tab, enter a name for the new virtual network and select the **Region** that is the same as your resource group.
-3. On the **IP Addresses** tab, modify the **IPv4 address space** to be 10.0.0.0/8.
-4. Select **Add subnet** and enter the subnet name and address range:
- - Subnet name: snet-inbound
- - Subnet address range: 10.0.0.0/28
- - Select **Add** to add the new subnet.
-5. Select **Add subnet** and configure the outbound endpoint subnet:
- - Subnet name: snet-outbound
- - Subnet address range: 10.1.1.0/28
- - Select **Add** to add this subnet.
-6. Select **Review + create** and then select **Create**.
+1. In the Azure portal, search for and select **Virtual networks**.
+2. On the **Virtual networks** page, select **Create**.
+3. On the **Basics** tab, select the resource group you just created, enter **myvnet** for the virtual network name, and select the **Region** that is the same as your resource group.
+4. Select the **IP Addresses** tab and enter an **IPv4 address space** of 10.0.0.0/16. This address range might be entered by default.
+5. Select the **default** subnet.
+6. Enter the following values on the **Edit subnet** page:
+ - Name: snet-inbound
+ - IPv4 address range: 10.0.0.0/16
+ - Starting address: 10.0.0.0
+ - Size: /28 (16 IP addresses)
+ - Select **Save**
+7. Select **Add a subnet** and enter the following values on the **Add a subnet** page:
+ - Subnet purpose: Default
+ - Name: snet-outbound
+ - IPv4 address range: 10.0.0.0/16
+ - Starting address: 10.0.1.0
+ - Size: /28 (16 IP addresses)
+ - Select **Add**
+8. Select the **Review + create** tab and then select **Create**.
![create virtual network](./media/dns-resolver-getstarted-portal/virtual-network.png) ## Create a DNS resolver inside the virtual network
-1. Open the Azure portal and search for **DNS Private Resolvers**.
+1. In the Azure portal, search for **DNS Private Resolvers**.
2. Select **DNS Private Resolvers**, select **Create**, and then on the **Basics** tab for **Create a DNS Private Resolver** enter the following: - Subscription: Choose the subscription name you're using. - Resource group: Choose the name of the resource group that you created.
Next, add a virtual network to the resource group that you created, and configur
![create resolver - basics](./media/dns-resolver-getstarted-portal/dns-resolver.png) 3. Select the **Inbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myinboundendpoint).
-4. Next to **Subnet**, select the inbound endpoint subnet you created (ex: snet-inbound, 10.0.0.0/28) and then select **Save**.
+4. Next to **Subnet**, select the inbound endpoint subnet you created (ex: snet-inbound, 10.0.0.0/28).
+5. Next to **IP address assignment**, select **Static**.
+6. Next to **IP address**, enter **10.0.0.4**, and then select **Save**.
-> [!NOTE]
-> You can choose a static or dynamic IP address for the inbound endpoint. A dynamic IP address is used by default. Typically the first available [non-reserved](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) IP address is assigned (example: 10.0.0.4). This dynamic IP address does not change unless the endpoint is deleted and reprovisioned (for example using a different subnet). To specify a static address, select **Static** and enter a non-reserved IP address in the subnet.
+ > [!NOTE]
+ > You can choose a static or dynamic IP address for the inbound endpoint. A dynamic IP address is used by default. Typically the first available [non-reserved](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) IP address is assigned (example: 10.0.0.4). This dynamic IP address does not change unless the endpoint is deleted and reprovisioned (for example using a different subnet). In this example **Static** is selected and the first available IP address is entered.
5. Select the **Outbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myoutboundendpoint). 6. Next to **Subnet**, select the outbound endpoint subnet you created (ex: snet-outbound, 10.1.1.0/28) and then select **Save**. 7. Select the **Ruleset** tab, select **Add a ruleset**, and enter the following: - Ruleset name: Enter a name for your ruleset (ex: **myruleset**).
- - Endpoints: Select the outbound endpoint that you created (ex: myoutboundendpoint).
+ - Endpoints: Select the outbound endpoint that you created (ex: myoutboundendpoint).
8. Under **Rules**, select **Add** and enter your conditional DNS forwarding rules. For example: - Rule name: Enter a rule name (ex: contosocom). - Domain Name: Enter a domain name with a trailing dot (ex: contoso.com.). - Rule State: Choose **Enabled** or **Disabled**. The default is enabled.
- - Select **Add a destination** and enter a desired destination IPv4 address (ex: 11.0.1.4).
+ - Under **Destination** enter a desired destination IPv4 address (ex: 11.0.1.4).
- If desired, select **Add a destination** again to add another destination IPv4 address (ex: 11.0.1.5). - When you're finished adding destination IP addresses, select **Add**. 9. Select **Review and Create**, and then select **Create**. ![create resolver - ruleset](./media/dns-resolver-getstarted-portal/resolver-ruleset.png)
- This example has only one conditional forwarding rule, but you can create many. Edit the rules to enable or disable them as needed.
-
- ![create resolver - review](./media/dns-resolver-getstarted-portal/resolver-review.png)
+ This example has only one conditional forwarding rule, but you can create many. Edit the rules to enable or disable them as needed. You can also add or edit rules and rulesets at any time after deployment.
- After selecting **Create**, the new DNS resolver will begin deployment. This process might take a minute or two. The status of each component is displayed during deployment.
+ After selecting **Create**, the new DNS resolver begins deployment. This process might take a minute or two. The status of each component is displayed during deployment.
![create resolver - status](./media/dns-resolver-getstarted-portal/resolver-status.png)
Create a second virtual network to simulate an on-premises or other environment.
2. Select **Create**, and then on the **Basics** tab select your subscription and choose the same resource group that you have been using in this guide (ex: myresourcegroup). 3. Next to **Name**, enter a name for the new virtual network (ex: myvnet2). 4. Verify that the **Region** selected is the same region used previously in this guide (ex: West Central US).
-5. Select the **IP Addresses** tab and edit the default IP address space. Replace the address space with a simulated on-premises address space (ex: 12.0.0.0/8).
-6. Select **Add subnet** and enter the following:
- - Subnet name: backendsubnet
- - Subnet address range: 12.2.0.0/24
-7. Select **Add**, select **Review + create**, and then select **Create**.
-
- ![second vnet review](./media/dns-resolver-getstarted-portal/vnet-review.png)
+5. Select the **IP Addresses** tab and edit the default IP address space. Replace the address space with a simulated on-premises address space (ex: 10.1.0.0/16).
+6. Select and edit the **default** subnet:
+ - Subnet purpose: Default
+ - Name: backendsubnet
+ - Subnet address range: 10.1.0.0/16
+ - Starting address: 10.1.0.0
+ - Size: /24 (256 addresses)
+7. Select **Save**, select **Review + create**, and then select **Create**.
![second vnet create](./media/dns-resolver-getstarted-portal/vnet-create.png) ## Link your forwarding ruleset to the second virtual network
+> [!NOTE]
+> In this procedure, a forwarding ruleset is linked to a VNet that was created earlier to simulate an on-premises environment. It is not possible to create a ruleset link to non-Azure resources. The purpose of the following procedure is only to demonstrate how ruleset links can be added or deleted. To understand how a private resolver can be used to resolve on-premises names, see [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md).
+ To apply your forwarding ruleset to the second virtual network, you must create a virtual link. 1. Search for **DNS forwarding rulesets** in the Azure services list and select your ruleset (ex: **myruleset**).
-2. Select **Virtual Network Links**, select **Add**, choose **myvnet2** and use the default Link Name **myvnet2-link**.
+2. Under **Settings**, select **Virtual Network Links**.
+ - The link **myvnet-link** is already present. This was created automatically when the ruleset was provisioned.
+3. Select **Add**, choose **myvnet2** from the **Virtual Network** drop-down list. Use the default **Link Name** of **myvnet2-link**.
3. Select **Add** and verify that the link was added successfully. You might need to refresh the page. ![Screenshot of ruleset virtual network links.](./media/dns-resolver-getstarted-portal/ruleset-links.png)
Add or remove specific rules in your DNS forwarding ruleset as desired, such as:
Individual rules can be deleted or disabled. In this example, a rule is deleted.
-1. Search for **Dns Forwarding Rulesets** in the Azure Services list and select it.
-2. Select the ruleset you previously configured (ex: **myruleset**) and then select **Rules**.
+1. Search for **DNS Forwarding Rulesets** in the Azure Services list and select it.
+2. Select the ruleset you previously configured (ex: **myruleset**) and then under **Settings** select **Rules**.
3. Select the **contosocom** sample rule that you previously configured, select **Delete**, and then select **OK**. ### Add rules to the forwarding ruleset Add three new conditional forwarding rules to the ruleset.
-1. On the **myruleset | Rules** page, click **Add**, and enter the following rule data:
+1. On the **myruleset | Rules** page, select **Add**, and enter the following rule data:
- Rule Name: **AzurePrivate** - Domain Name: **azure.contoso.com.** - Rule State: **Enabled**
-2. Under **Destination IP address** enter 10.0.0.4, and then click **Add**.
-3. On the **myruleset | Rules** page, click **Add**, and enter the following rule data:
+2. Under **Destination IP address** enter 10.0.0.4, and then select **Add**.
+3. On the **myruleset | Rules** page, select **Add**, and enter the following rule data:
- Rule Name: **Internal** - Domain Name: **internal.contoso.com.** - Rule State: **Enabled**
-4. Under **Destination IP address** enter 192.168.1.2, and then click **Add**.
-5. On the **myruleset | Rules** page, click **Add**, and enter the following rule data:
+4. Under **Destination IP address** enter 10.1.0.5, and then select **Add**.
+5. On the **myruleset | Rules** page, select **Add**, and enter the following rule data:
- Rule Name: **Wildcard** - Domain Name: **.** (enter only a dot) - Rule State: **Enabled**
-6. Under **Destination IP address** enter 10.5.5.5, and then click **Add**.
+6. Under **Destination IP address** enter 10.5.5.5, and then select **Add**.
![Screenshot of a forwarding ruleset example.](./media/dns-resolver-getstarted-portal/ruleset.png) In this example: - 10.0.0.4 is the resolver's inbound endpoint. -- 192.168.1.2 is an on-premises DNS server.
+- 10.1.0.5 is an on-premises DNS server.
- 10.5.5.5 is a protective DNS service. ## Test the private resolver
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 07/19/2023 Last updated : 11/03/2023
Connect PowerShell to Azure cloud.
Connect-AzAccount -Environment AzureCloud ```
-If multiple subscriptions are present, the first subscription ID will be used. To specify a different subscription ID, use the following command.
+If multiple subscriptions are present, the first subscription ID is used. To specify a different subscription ID, use the following command.
```Azure PowerShell Select-AzSubscription -SubscriptionObject (Get-AzSubscription -SubscriptionId <your-sub-id>)
New-AzResourceGroup -Name myresourcegroup -Location westcentralus
Create a virtual network in the resource group that you created. ```Azure PowerShell
-New-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "10.0.0.0/8"
+New-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "10.0.0.0/16"
``` Create a DNS resolver in the virtual network that you created.
Create an inbound endpoint to enable name resolution from on-premises or another
> [!TIP] > Using PowerShell, you can specify the inbound endpoint IP address to be dynamic or static.<br>
-> If the endpoint IP address is specified as dynamic, the address does not change unless the endpoint is deleted and reprovisioned. Typically the same IP address will be assigned again during reprovisioning.<br>
+> If the endpoint IP address is specified as dynamic, the address does not change unless the endpoint is deleted and reprovisioned. Typically the same IP address is assigned again during reprovisioning.<br>
> If the endpoint IP address is static, it can be specified and reused if the endpoint is reprovisioned. The IP address that you choose can't be a [reserved IP address in the subnet](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
+#### Dynamic IP address
+ The following commands provision a dynamic IP address: ```Azure PowerShell $ipconfig = New-AzDnsResolverIPConfigurationObject -PrivateIPAllocationMethod Dynamic -SubnetId /subscriptions/<your sub id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/snet-inbound New-AzDnsResolverInboundEndpoint -DnsResolverName mydnsresolver -Name myinboundendpoint -ResourceGroupName myresourcegroup -Location westcentralus -IpConfiguration $ipconfig ```
+#### Static IP address
+ Use the following commands to specify a static IP address. Do not use both the dynamic and static sets of commands. You must specify an IP address in the subnet that was created previously. The IP address that you choose can't be a [reserved IP address in the subnet](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
$virtualNetworkLink.ToJsonString()
Create a second virtual network to simulate an on-premises or other environment. ```Azure PowerShell
-$vnet2 = New-AzVirtualNetwork -Name myvnet2 -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "12.0.0.0/8"
+$vnet2 = New-AzVirtualNetwork -Name myvnet2 -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "10.1.0.0/16"
$vnetlink2 = New-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName $dnsForwardingRuleset.Name -ResourceGroupName myresourcegroup -VirtualNetworkLinkName "vnetlink2" -VirtualNetworkId $vnet2.Id -SubscriptionId <your sub id> ```
dns Private Resolver Hybrid Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-hybrid-dns.md
In this article, the private zone **azure.contoso.com** and the resource record
[ ![View resource records](./media/private-resolver-hybrid-dns/private-zone-records-small.png) ](./media/private-resolver-hybrid-dns/private-zone-records.png#lightbox)
-**Requirement**: You must create a virtual network link in the zone to the virtual network where you deploy your Azure DNS Private Resolver. In the following example, the private zone is linked to two vnets: **myeastvnet** and **mywestvnet**. At least one link is required.
+**Requirement**: You must create a virtual network link in the zone to the virtual network where you deploy your Azure DNS Private Resolver. In the following example, the private zone is linked to two VNets: **myeastvnet** and **mywestvnet**. At least one link is required.
[ ![View zone links](./media/private-resolver-hybrid-dns/private-zone-links-small.png) ](./media/private-resolver-hybrid-dns/private-zone-links.png#lightbox)
healthcare-apis Dicomweb Standard Apis C Sharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-c-sharp.md
Title: Use C# and DICOMweb Standard APIs in Azure Health Data Services
description: Learn how to use C# and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. -+ Last updated 10/18/2023
The filename, studyUID, seriesUID, and instanceUID of the sample DICOM files are
## Prerequisites
-To use the DICOMweb Standard APIs, you need an instance of the DICOM service deployed. If you haven't already deployed an instance of the DICOM service, see [Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md).
+To use the DICOMweb Standard APIs, you need an instance of the DICOM service deployed. For more information, see [Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md).
After you deploy an instance of the DICOM service, retrieve the URL for your app service:
In your application, install the following NuGet packages:
## Create a DicomWebClient
-After you deploy your DICOM service, you create a DicomWebClient. Run the code snippet to create DicomWebClient, which you use for the rest of this tutorial. Ensure you have both NuGet packages installed. If you haven't already obtained a token, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).
+After you deploy your DICOM service, you create a DicomWebClient. Run the code snippet to create DicomWebClient, which you use for the rest of this tutorial. Ensure you have both NuGet packages installed. For more information, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).
```c# string webServerUrl ="{Your DicomWeb Server URL}"
With the DicomWebClient, we can now perform the Store, Retrieve, Search, and Del
## Store DICOM instances (STOW)
-Using the DicomWebClient that we've created, we can now store DICOM files.
+By using the DicomWebClient, we can now store DICOM files.
### Store single instance
_Details:_
DicomWebResponse response = await client.RetrieveStudyAsync(studyInstanceUid); ```
-All three of the dcm files that we've uploaded previously are part of the same study, so the response should return all three instances. Validate that the response has a status code of OK and that all three instances are returned.
+All three of the dcm files that you uploaded previously are part of the same study, so the response should return all three instances. Validate that the response has a status code of OK and that all three instances are returned.
### Use the retrieved instances
_Details:_
DicomWebResponse response = await client.RetrieveStudyMetadataAsync(studyInstanceUid); ```
-All three of the dcm files that we've uploaded previously are part of the same study, so the response should return the metadata for all three instances. Validate that the response has a status code of OK and that all the metadata is returned.
+All three of the dcm files that we uploaded previously are part of the same study, so the response should return the metadata for all three instances. Validate that the response has a status code of OK and that all the metadata is returned.
### Retrieve all instances within a series
healthcare-apis Dicomweb Standard Apis Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-curl.md
Title: Use cURL and DICOMweb Standard APIs in Azure Health Data Services
description: Use cURL and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. -+ Last updated 10/18/2023
The filename, studyUID, seriesUID, and instanceUID of the sample DICOM files are
## Prerequisites
-To use the DICOM Standard APIs, you must have an instance of the DICOM service deployed. If you haven't already deployed an instance of the DICOM service, see [Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md).
+To use the DICOM Standard APIs, you must have an instance of the DICOM service deployed. For more information, see [Deploy the DICOM service using the Azure portal](deploy-dicom-services-in-azure.md).
-Once you've deployed an instance of the DICOM service, retrieve the URL for your App service:
+After you deploy an instance of the DICOM service, retrieve the URL for your App service.
1. Sign in to the [Azure portal](https://portal.azure.com).
2. Search **Recent resources** and select your DICOM service instance.
3. Copy the **Service URL** of your DICOM service.
-4. If you haven't already obtained a token, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).
+4. If you need an access token, see [Get access token for the DICOM service](dicom-get-access-token-azure-cli.md).
For this code, we access a Public Preview Azure service. It's important that you don't upload any protected health information (PHI).
_Details:_
* Body:
  * Content-Type: application/dicom for each file uploaded, separated by a boundary value
-Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those tools, you might need to use a slightly modified Content-Type header. The following have been used successfully.
+Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those tools, you might need to use a slightly modified Content-Type header. The following Content-Type headers have been used successfully.
* Content-Type: multipart/related; type="application/dicom"; boundary=ABCD1234
* Content-Type: multipart/related; boundary=ABCD1234
* Content-Type: multipart/related
_Details:_
* Body:
  * Content-Type: application/dicom for each file uploaded, separated by a boundary value
-Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those languages and tools, you might need to use a slightly modified Content-Type header. The following have been used successfully.
+Some programming languages and tools behave differently. For instance, some require you to define your own boundary. For those languages and tools, you might need to use a slightly modified Content-Type header. The following Content-Type headers have been used successfully.
* Content-Type: multipart/related; type="application/dicom"; boundary=ABCD1234
* Content-Type: multipart/related; boundary=ABCD1234
curl --request GET "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.498.1
This request deletes a single instance within a single study and single series.
-Delete isn't part of the DICOM standard, but it's been added for convenience.
+Delete isn't part of the DICOM standard, but it's added for convenience.
_Details:_ * Path: ../studies/{study}/series/{series}/instances/{instance}
curl --request DELETE "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.49
This request deletes a single series (and all child instances) within a single study.
-Delete isn't part of the DICOM standard, but it's been added for convenience.
+Delete isn't part of the DICOM standard, but it's added for convenience.
_Details:_ * Path: ../studies/{study}/series/{series}
curl --request DELETE "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.49
This request deletes a single study (and all child series and instances).
-Delete isn't part of the DICOM standard, but it has been added for convenience.
+Delete isn't part of the DICOM standard, but it's added for convenience.
_Details:_ * Path: ../studies/{study}
healthcare-apis Dicomweb Standard Apis Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-python.md
Title: Use Python and DICOMweb Standard APIs in Azure Health Data Services
description: Use Python and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. -+ Last updated 02/15/2022
The filename, studyUID, seriesUID, and instanceUID of the sample DICOM files are
## Prerequisites
-To use the DICOMweb Standard APIs, you must have an instance of the DICOM service deployed. If you haven't already deployed the DICOM service, see [Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md).
+To use the DICOMweb Standard APIs, you must have an instance of the DICOM service deployed. For more information, see [Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md).
After you deploy an instance of the DICOM service, retrieve the URL for your App service:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search **Recent resources** and select your DICOM service instance.
1. Copy the **Service URL** of your DICOM service.
-2. If you haven't already obtained a token, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).
+2. If you don't have a token, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).
For this code, you access a Public Preview Azure service. It's important that you don't upload any protected health information (PHI).
The DICOMweb Standard makes heavy use of `multipart/related` HTTP requests combi
First, import the necessary Python libraries.
-We've chosen to implement this example using the synchronous `requests` library. For asynchronous support, consider using `httpx` or another async library. Additionally, we're importing two supporting functions from `urllib3` to support working with `multipart/related` requests.
+We implement this example by using the synchronous `requests` library. For asynchronous support, consider using `httpx` or another async library. Additionally, we're importing two supporting functions from `urllib3` to support working with `multipart/related` requests.
Additionally, we're importing `DefaultAzureCredential` to log into Azure and get a token.
instance_uid = "1.2.826.0.1.3680043.8.498.47359123102728459884412887463296905395
### Authenticate to Azure and get a token
-`DefaultAzureCredential` allows us to use various ways to get tokens to log into the service. In this example, use the `AzureCliCredential` to get a token to log into the service. There are other credential providers such as `ManagedIdentityCredential` and `EnvironmentCredential` that are also possible to use. In order to use the AzureCliCredential, you must have logged into Azure from the CLI prior to running this code. (For more information, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).) Alternatively, you can copy and paste the token retrieved while logging in from the CLI.
+`DefaultAzureCredential` supports several ways to get tokens to sign in to the service. In this example, use `AzureCliCredential` to get a token to sign in to the service. You can also use other credential providers, such as `ManagedIdentityCredential` and `EnvironmentCredential`. To use `AzureCliCredential`, you need to sign in to Azure from the CLI before running this code. For more information, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md). Alternatively, copy and paste the token retrieved while signing in from the CLI.
> [!NOTE]
> `DefaultAzureCredential` returns several different Credential objects. We reference the `AzureCliCredential` as the 5th item in the returned collection. This may not be consistent. If so, uncomment the `print(credential.credential)` line. This will list all the items. Find the correct index, recalling that Python uses zero-based indexing.
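For illustration, here's a minimal sketch of getting a token with `AzureCliCredential` and building an authorization header. The token scope shown is an assumption; verify the correct audience for your DICOM service.

```python
from azure.identity import AzureCliCredential

# Assumes you've already signed in with `az login`.
credential = AzureCliCredential()

# The scope below is an assumption -- confirm the audience for your DICOM service.
token = credential.get_token("https://dicom.healthcareapis.azure.com/.default")

# Bearer header you can reuse for subsequent DICOMweb requests.
headers = {"Authorization": f"Bearer {token.token}"}
```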
_Details:_
* Body:
  * Content-Type: application/dicom for each file uploaded, separated by a boundary value
-Some programming languages and tools behave differently. For example, some require you to define your own boundary. For those languages and tools, you might need to use a slightly modified Content-Type header. The following have been used successfully.
+Some programming languages and tools behave differently. For example, some require you to define your own boundary. For those languages and tools, you might need to use a slightly modified Content-Type header. The following Content-Type headers have been used successfully; a request sketch that uses an explicit boundary follows the list.
* Content-Type: multipart/related; type="application/dicom"; boundary=ABCD1234
* Content-Type: multipart/related; boundary=ABCD1234
* Content-Type: multipart/related
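As a rough illustration of the explicit-boundary variant in the list above, this sketch assembles a `multipart/related` body by hand and posts it with `requests`. The service URL, token, and file name are placeholders.

```python
import requests

# Placeholder values -- substitute your service URL, access token, and DICOM file.
base_url = "{Service URL}/v2"
token = "{access token}"
boundary = "ABCD1234"

with open("sample.dcm", "rb") as f:
    dicom_bytes = f.read()

# Wrap the DICOM bytes in a multipart/related body that uses the explicit boundary.
body = (
    f"--{boundary}\r\nContent-Type: application/dicom\r\n\r\n".encode()
    + dicom_bytes
    + f"\r\n--{boundary}--".encode()
)

headers = {
    "Authorization": f"Bearer {token}",
    "Accept": "application/dicom+json",
    "Content-Type": f'multipart/related; type="application/dicom"; boundary={boundary}',
}

response = requests.post(f"{base_url}/studies", data=body, headers=headers)
print(response.status_code)
```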
response = client.get(url, headers=headers, params=params) #, verify=False)
> [!NOTE]
> Delete is not part of the DICOM standard, but it has been added for convenience.
-A 204 response code is returned when the deletion is successful. A 404 response code is returned if the item(s) has never existed or it's already been deleted.
+A 204 response code is returned when the deletion is successful. A 404 response code is returned if the items never existed or were already deleted.
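As a quick sketch of checking those status codes, the following deletes a single instance with `requests`; the URL components and token are placeholders.

```python
import requests

# Placeholder values -- substitute your service URL, access token, and UIDs.
base_url = "{Service URL}/v2"
headers = {"Authorization": "Bearer {access token}"}
study_uid = "{study UID}"
series_uid = "{series UID}"
instance_uid = "{instance UID}"

url = f"{base_url}/studies/{study_uid}/series/{series_uid}/instances/{instance_uid}"
response = requests.delete(url, headers=headers)

if response.status_code == 204:
    print("Deleted successfully.")
elif response.status_code == 404:
    print("Not found: the item never existed or was already deleted.")
```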
### Delete a specific instance within a study and series
healthcare-apis Dicomweb Standard Apis With Dicom Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md
Title: Use DICOMweb Standard APIs with the DICOM service in Azure Health Data S
description: This tutorial describes how to use DICOMweb Standard APIs with the DICOM service. -+ Last updated 10/13/2022
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/enable-diagnostic-logging.md
Title: Enable diagnostic logging in the DICOM service - Azure Health Data Servic
description: This article explains how to enable diagnostic logging in the DICOM service. -+ Last updated 10/13/2023
In this article, you'll learn how to enable diagnostic logging in DICOM service
## Enable logs
1. To enable logging for the DICOM service, select your DICOM service in the Azure portal.
-2. Select the **Activity log** blade, and then select **Diagnostic settings**.
+2. Select the **Activity log** on the left pane, and then select **Diagnostic settings**.
[ ![Screenshot of Azure activity log.](media/dicom-activity-log.png) ](media/dicom-activity-log.png#lightbox)
In this article, you'll learn how to enable diagnostic logging in DICOM service
5. Select the **Category** and **Destination** details for accessing the diagnostic logs.
- * **Send to Log Analytics workspace** in the Azure Monitor. You'll need to create your Logs Analytics workspace before you can select this option. For more information about the platform logs, see [Overview of Azure platform logs](../../azure-monitor/essentials/platform-logs-overview.md).
+ * **Send to Log Analytics workspace** in the Azure Monitor. You need to create your Logs Analytics workspace before you can select this option. For more information about the platform logs, see [Overview of Azure platform logs](../../azure-monitor/essentials/platform-logs-overview.md).
* **Archive to a storage account** for auditing or manual inspection. The storage account you want to use needs to be already created.
- * **Stream to an event hub** for ingestion by a third-party service or custom analytic solution. You'll need to create an event hub namespace and event hub policy before you can configure this step.
+ * **Stream to an event hub** for ingestion by a third-party service or custom analytic solution. You need to create an event hub namespace and event hub policy before you can configure this step.
* **Send to partner solution** that you're working with as a partner organization in Azure. For information about potential partner integrations, see [Azure partner solutions documentation](../../partner-solutions/overview.md). For information about supported metrics, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md).
In this article, you'll learn how to enable diagnostic logging in DICOM service
For information on how to work with diagnostic logs, see [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md).
## Log details
-The log schema used differs based on the destination. Log Analytics has a schema that will differ from other destinations. Each log type will also have a schema that differs.
+The log schema differs based on the destination. Log Analytics uses a schema that differs from other destinations, and each log type also has its own schema.
### Audit log details
The DICOM service returns the following fields in the audit log as seen when str
#### Log Analytics logs
-The DICOM service returns the following fields in the audit log in Log Analytics:
+The DICOM service returns the following fields in the audit log in Log Analytics:
|Field Name |Type |Notes |
||||
The DICOM service returns the following fields in the audit log as seen when str
|correlationId|String|Correlation ID
|operationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.)
|time|DateTime|Date and time of the event.
-|resultDescription|String|Description of the log entry. An example here is a diagnostic log with a validation warning message when storing a file.
+|resultDescription|String|Description of the log entry. An example is a diagnostic log with a validation warning message when storing a file.
|resourceId|String| Azure path to the resource.
|identity|Dynamic|A generic property bag containing identity information (currently doesn't apply to DICOM).
|location|String|The location of the server that processed the request.
The DICOM service returns the following fields in the audit log as seen when str
#### Log Analytics logs
-The DICOM service returns the following fields in the audit log in Log Analytics:
+The DICOM service returns the following fields in the audit log in Log Analytics:
|Field Name |Type |Notes |
||||
|CorrelationId|String|Correlation ID
|OperationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.)
|TimeGenerated|DateTime|Date and time of the event.
-|Message|String|Description of the log entry. An example here is a diagnostic log with a validation warning message when storing a file.
+|Message|String|Description of the log entry. An example is a diagnostic log with a validation warning message when storing a file.
|Location|String|The location of the server that processed the request.
|Properties|String|Additional information about the event in JSON array format. Examples include DICOM identifiers present in the request.
|LogLevel|String|Log level (Informational, Error).
## Sample Log Analytics queries
-Below are a few basic Application Insights queries you can use to explore your log data.
+Here are a few basic Log Analytics queries you can use to explore your log data.
Run the following query to see the **100 most recent** logs:
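If you prefer to run queries from code rather than the portal, here's a hedged sketch that uses the `azure-monitor-query` library to pull the most recent entries. The workspace ID and table name are assumptions; check your Log Analytics workspace for the actual table name.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder workspace ID and an assumed table name -- replace with your own values.
workspace_id = "<log-analytics-workspace-id>"
query = "AHDSDicomAuditLogs | sort by TimeGenerated desc | take 100"

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))

# Print every returned row.
for table in result.tables:
    for row in table.rows:
        print(row)
```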
healthcare-apis Export Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-files.md
# Export DICOM files
-The DICOM&reg; service provides the ability to easily export DICOM data in a file format. The service simplifies the process of using medical imaging in external workflows, such as AI and machine learning. You can use the export API to export DICOM studies, series, and instances in bulk to an [Azure Blob Storage account](../../storage/blobs/storage-blobs-introduction.md). DICOM data that's exported to a storage account is exported as a `.dcm` file in a folder structure that organizes instances by `StudyInstanceID` and `SeriesInstanceID`.
+The DICOM&reg; service allows you to export DICOM data in a file format. The service simplifies the process of using medical imaging in external workflows, such as AI and machine learning. You can use the export API to export DICOM studies, series, and instances in bulk to an [Azure Blob Storage account](../../storage/blobs/storage-blobs-introduction.md). DICOM data exported to a storage account is exported as a `.dcm` file in a folder structure that organizes instances by `StudyInstanceID` and `SeriesInstanceID`.
There are three steps to exporting data from the DICOM service:
The export API exposes one `POST` endpoint for exporting data.
POST <dicom-service-url>/<version>/export ```
-Given a *source*, the set of data to be exported, and a *destination*, the location to which data will be exported, the endpoint returns a reference to a new, long-running export operation. The duration of this operation depends on the volume of data to be exported. For more information about monitoring progress of export operations, see the [Operation status](#operation-status) section.
+Given a *source*, the set of data to be exported, and a *destination*, the location to which data is exported, the endpoint returns a reference to a new, long-running export operation. The duration of this operation depends on the volume of data to be exported. For more information about monitoring progress of export operations, see the [Operation status](#operation-status) section.
Any errors encountered while you attempt to export are recorded in an error log. For more information, see the [Errors](#errors) section.
Content-Type: application/json
#### Operation status
-Poll the preceding `href` URL for the current status of the export operation until completion. After the job has reached a terminal state, the API returns a 200 status code instead of 202. The value of its status property is updated accordingly.
+Poll the preceding `href` URL for the current status of the export operation until completion. After the job reaches a terminal state, the API returns a 200 status code instead of 202. The value of its status property is updated accordingly.
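A minimal polling sketch, assuming the operation `href` returned by the export request and a valid access token (both values are placeholders):

```python
import time

import requests

# Placeholder values -- use the href returned by the export request and your own token.
operation_url = "<dicom-service-url>/<version>/operations/<operation-id>"
headers = {"Authorization": "Bearer <access token>"}

while True:
    response = requests.get(operation_url, headers=headers)
    # A 200 response means the operation reached a terminal state; 202 means it's still running.
    if response.status_code == 200:
        print("Terminal status:", response.json().get("status"))
        break
    response.raise_for_status()
    time.sleep(10)
```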
```http HTTP/1.1 200 OK
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-access-token.md
Title: Get an access token for the DICOM service in Azure Health Data Services
description: Find out how to secure your access to the DICOM service with a token. Use the Azure command-line tool and unique identifiers to manage your medical images. + Last updated 10/13/2023
healthcare-apis Get Started With Analytics Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-analytics-dicom.md
description: This article demonstrates how to use Azure Data Factory and Microso
-+ Last updated 10/13/2023
This article describes how to get started by using DICOM&reg; data in analytics
## Prerequisites
-Before you get started, ensure that you've done the following steps:
+Before you get started, complete these steps:
* Deploy an instance of the [DICOM service](deploy-dicom-services-in-azure.md).
* Create a [storage account with Azure Data Lake Storage Gen2 capabilities](../../storage/blobs/create-data-lake-storage-account.md) by enabling a hierarchical namespace:
You can monitor trigger runs and their associated pipeline runs on the **Monitor
1. Repeat steps 2 to 9 to add the remaining shortcuts to the other Delta tables in the storage account (for example, `series` and `study`).
-After you've created the shortcuts, expand a table to show the names and types of the columns.
+After you create the shortcuts, expand a table to show the names and types of the columns.
:::image type="content" source="media/fabric-shortcut-schema.png" alt-text="Screenshot that shows the table columns listed in the Explorer view." lightbox="media/fabric-shortcut-schema.png":::
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
Title: Get started with the DICOM service - Azure Health Data Services
description: This document describes how to get started with the DICOM service in Azure Health Data Services. -+ Last updated 06/03/2022
Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) a
## Access the DICOM service
-The DICOM service is secured by a Microsoft Entra ID that can't be disabled. To access the service API, you must create a client application that's also referred to as a service principal in Microsoft Entra ID and grant it with the right permissions.
+The DICOM service is secured by Microsoft Entra ID, which can't be disabled. To access the service API, you must create a client application, also referred to as a service principal, in Microsoft Entra ID and grant it the right permissions.
### Register a client application
You can create or register a client application from the [Azure portal](dicom-re
If the client application is created with a certificate or client secret, ensure that you renew the certificate or client secret before expiration and replace the client credentials in your applications.
-You can delete a client application. Before doing that, ensure that it's not used in production, dev, test, or quality assurance environments.
+You can delete a client application. Before doing that, ensure the application isn't used in production, dev, test, or quality assurance environments.
### Grant access permissions
healthcare-apis Import Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/import-files.md
Title: Import DICOM files into the DICOM service
description: Learn how to import DICOM files by using bulk import in Azure Health Data Services. -+ Last updated 10/05/2023
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/overview.md
Title: Overview of the DICOM service in Azure Health Data Services
description: The DICOM service is a cloud-based solution for storing, managing, and exchanging medical imaging data securely and efficiently with any DICOMweb™-enabled systems or applications. Learn more about its benefits and use cases. + Last updated 10/13/2023
FHIR&reg; supports integration of other types of data directly, or through refer
- **Image exchange and collaboration**. Share an image, a subset of images, or an entire image library instantly with or without related EHR data. -- **Create cohorts for research**. To find the right patients for clinical trials, researchers need to query for patients that match data in both clinical and imaging systems. The service allows researchers to use natural language to query across systems. For example, "Give me all the medications prescribed with all the CT scan documents and their associated radiology reports for any patient older than 45 that has had a diagnosis of osteosarcoma over the last two years."
+- **Create cohorts for research**. To find the right patients for clinical trials, researchers need to query for patients that match data in both clinical and imaging systems. The service allows researchers to use natural language to query across systems. For example, "Give me all the medications prescribed with all the CT scan documents and their associated radiology reports for any patient older than 45 who has had a diagnosis of osteosarcoma in the last two years."
- **Plan treatment based on similar patients**. When presented with a patient diagnosis, a physician can identify patient outcomes and treatment plans for past patients with a similar diagnosis even when these include imaging data.
healthcare-apis Pull Dicom Changes From Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/pull-dicom-changes-from-change-feed.md
Title: Pull DICOM changes using the Change Feed
description: This how-to guide explains how to pull DICOM changes using DICOM Change Feed for Azure Health Data Services. -+ Last updated 10/13/2023
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
Title: References for DICOM service - Azure Health Data Services
description: This reference provides related resources for the DICOM service. -+ Last updated 06/03/2022
This article describes our open-source projects on GitHub that provide source co
### Using the DICOM service with the OHIF viewer
-* [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, non-diagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides the guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service.
+* [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, nondiagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides the guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service.
### Medical imaging network demo environment
* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-premises radiology infrastructure can take the first steps to intelligently move their data to the cloud, without disruptions to the current workflow.
machine-learning Azure Machine Learning Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-glossary.md
monikerRange: 'azureml-api-2'
# Azure Machine Learning glossary
-The Azure Machine Learning glossary is a short dictionary of terminology for the Azure Machine Learning platform. For the general Azure terminology, see also:
+The Azure Machine Learning glossary is a short dictionary of terminology for the Machine Learning platform. For general Azure terminology, see also:
* [Microsoft Azure glossary: A dictionary of cloud terminology on the Azure platform](../azure-glossary-cloud-terminology.md)
-* [Cloud computing terms](https://azure.microsoft.com/overview/cloud-computing-dictionary/) - General industry cloud terms.
-* [Azure fundamental concepts](/azure/cloud-adoption-framework/ready/considerations/fundamental-concepts) - Microsoft Cloud Adoption Framework for Azure.
+* [Cloud computing terms](https://azure.microsoft.com/overview/cloud-computing-dictionary/): General industry cloud terms
+* [Azure fundamental concepts](/azure/cloud-adoption-framework/ready/considerations/fundamental-concepts): Microsoft Cloud Adoption Framework for Azure
## Component
-An Azure Machine Learning [component](concept-component.md) is a self-contained piece of code that does one step in a machine learning pipeline. Components are the building blocks of advanced machine learning pipelines. Components can do tasks such as data processing, model training, model scoring, and so on. A component is analogous to a function - it has a name, parameters, expects input, and returns output.
-
+A Machine Learning [component](concept-component.md) is a self-contained piece of code that does one step in a machine learning pipeline. Components are the building blocks of advanced machine learning pipelines. Components can do tasks such as data processing, model training, and model scoring. A component is analogous to a function. It has a name and parameters, expects input, and returns output.
## Compute
-A compute is a designated compute resource where you run your job or host your endpoint. Azure Machine Learning supports the following types of compute:
+A compute is a designated compute resource where you run your job or host your endpoint. Machine Learning supports the following types of compute:
-* **Compute cluster** - a managed-compute infrastructure that allows you to easily create a cluster of CPU or GPU compute nodes in the cloud.
+* **Compute cluster**: A managed-compute infrastructure that you can use to easily create a cluster of CPU or GPU compute nodes in the cloud.
[!INCLUDE [serverless compute](./includes/serverless-compute.md)]
-* **Compute instance** - a fully configured and managed development environment in the cloud. You can use the instance as a training or inference compute for development and testing. It's similar to a virtual machine on the cloud.
-* **Kubernetes cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure Machine Learning workspace, or attach an existing AKS cluster.
-* **Attached compute** - You can attach your own compute resources to your workspace and use them for training and inference.
-
+* **Compute instance**: A fully configured and managed development environment in the cloud. You can use the instance as a training or inference compute for development and testing. It's similar to a virtual machine in the cloud.
+* **Kubernetes cluster**: Used to deploy trained machine learning models to Azure Kubernetes Service (AKS). You can create an AKS cluster from your Machine Learning workspace or attach an existing AKS cluster.
+* **Attached compute**: You can attach your own compute resources to your workspace and use them for training and inference.
## Data
-Azure Machine Learning allows you to work with different types of data:
+Machine Learning allows you to work with different types of data:
-* URIs (a location in local/cloud storage)
+* URIs (a location in local or cloud storage):
  * `uri_folder`
  * `uri_file`
-* Tables (a tabular data abstraction)
+* Tables (a tabular data abstraction):
* `mltable`
-* Primitives
+* Primitives:
  * `string`
  * `boolean`
  * `number`
-For most scenarios, you'll use URIs (`uri_folder` and `uri_file`) - a location in storage that can be easily mapped to the filesystem of a compute node in a job by either mounting or downloading the storage to the node.
-
-`mltable` is an abstraction for tabular data that is to be used for AutoML Jobs, Parallel Jobs, and some advanced scenarios. If you're just starting to use Azure Machine Learning and aren't using AutoML, we strongly encourage you to begin with URIs.
+For most scenarios, you use URIs (`uri_folder` and `uri_file`) to identify a location in storage that can be easily mapped to the file system of a compute node in a job by either mounting or downloading the storage to the node.
+The `mltable` type is an abstraction for tabular data that's used for automated machine learning (AutoML) jobs, parallel jobs, and some advanced scenarios. If you're starting to use Machine Learning and aren't using AutoML, we strongly encourage you to begin with URIs.
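For illustration, here's a hedged sketch (Azure Machine Learning Python SDK v2) that passes a `uri_file` input to a command job. The data path, script, environment, and compute names are placeholders.

```python
from azure.ai.ml import Input, command
from azure.ai.ml.constants import AssetTypes

# Placeholder script, environment, and compute names -- replace with your own.
job = command(
    code="./src",
    command="python train.py --data ${{inputs.training_data}}",
    inputs={
        "training_data": Input(
            type=AssetTypes.URI_FILE,
            # Any local path, https URL, or azureml:// URI can be used here.
            path="https://example.blob.core.windows.net/datasets/iris.csv",
        )
    },
    environment="<environment-name>@latest",
    compute="<compute-cluster-name>",
)
```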
## Datastore
-Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. You can register and create a datastore to easily connect to your storage account, and access the data in your underlying storage service. The CLI v2 and SDK v2 support the following types of cloud-based storage
+Machine Learning datastores securely keep the connection information to your data storage on Azure so that you don't have to code it in your scripts. You can register and create a datastore to easily connect to your storage account and access the data in your underlying storage service (see the example after this list). The Azure Machine Learning CLI v2 and SDK v2 support the following types of cloud-based storage:
-* Azure Blob Container
-* Azure File Share
-* Azure Data Lake
-* Azure Data Lake Gen2
+* Azure Blob Storage container
+* Azure Files share
+* Azure Data Lake Storage
+* Azure Data Lake Storage Gen2
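The registration example mentioned above, sketched with the Python SDK v2; the workspace and storage names are placeholders.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AzureBlobDatastore
from azure.identity import DefaultAzureCredential

# Placeholder workspace details -- replace with your own.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Register a datastore that points at an existing Blob Storage container.
blob_datastore = AzureBlobDatastore(
    name="my_blob_datastore",
    account_name="<storage-account-name>",
    container_name="<container-name>",
)
ml_client.datastores.create_or_update(blob_datastore)
```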
## Environment
-Azure Machine Learning environments are an encapsulation of the environment where your machine learning task happens. They specify the software packages, environment variables, and software settings around your training and scoring scripts. The environments are managed and versioned entities within your Machine Learning workspace. Environments enable reproducible, auditable, and portable machine learning workflows across various computes.
+Machine Learning environments are an encapsulation of the environment where your machine learning task happens. They specify the software packages, environment variables, and software settings around your training and scoring scripts. The environments are managed and versioned entities within your Machine Learning workspace. Environments enable reproducible, auditable, and portable machine learning workflows across various computes.
### Types of environment
-Azure Machine Learning supports two types of environments: curated and custom.
+Machine Learning supports two types of environments: curated and custom.
-Curated environments are provided by Azure Machine Learning and are available in your workspace by default. Intended to be used as is, they contain collections of Python packages and settings to help you get started with various machine learning frameworks. These pre-created environments also allow for faster deployment time. For a full list, see the [curated environments article](resource-curated-environments.md).
+Curated environments are provided by Machine Learning and are available in your workspace by default. They're intended to be used as is. They contain collections of Python packages and settings to help you get started with various machine learning frameworks. These precreated environments also allow for faster deployment time. For a full list, see [Azure Machine Learning curated environments](resource-curated-environments.md).
-In custom environments, you're responsible for setting up your environment. Make sure to install the packages and any other dependencies that your training or scoring script needs on the compute. Azure Machine Learning allows you to create your own environment using
+In custom environments, you're responsible for setting up your environment. Make sure to install the packages and any other dependencies that your training or scoring script needs on the compute. Machine Learning allows you to create your own environment by using:
-* A docker image
-* A base docker image with a conda YAML to customize further
-* A docker build context
+* A Docker image.
+* A base Docker image with a conda YAML to customize further.
+* A Docker build context.
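As a rough sketch of the second option (a base Docker image plus a conda YAML), using the Python SDK v2; the image tag and conda file path are placeholders.

```python
from azure.ai.ml.entities import Environment

# Placeholder image and conda file -- replace with your own.
env = Environment(
    name="my-custom-environment",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    conda_file="./environment/conda.yaml",
    description="Base image customized with a conda specification.",
)

# Register it with an authenticated MLClient, for example:
# ml_client.environments.create_or_update(env)
```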
## Model
-Azure machine learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations `https`, `wasbs` and `azureml` locations are supported. The created model will be tracked in the workspace under the specified name and version. Azure Machine Learning supports three types of storage format for models:
+Machine Learning models consist of the binary files that represent a machine learning model and any corresponding metadata. You can create models from a local or remote file or directory. For remote locations, `https`, `wasbs`, and `azureml` locations are supported. The created model is tracked in the workspace under the specified name and version. Machine Learning supports three types of storage format for models:
* `custom_model`
* `mlflow_model`
Azure machine learning models consist of the binary file(s) that represent a mac
## Workspace
-The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all jobs, including logs, metrics, output, and a snapshot of your scripts. The workspace stores references to resources like datastores and compute. It also holds all assets like models, environments, components and data asset.
+The workspace is the top-level resource for Machine Learning. It provides a centralized place to work with all the artifacts you create when you use Machine Learning. The workspace keeps a history of all jobs, including logs, metrics, output, and a snapshot of your scripts. The workspace stores references to resources like datastores and compute. It also holds all assets like models, environments, components, and data assets.
## Next steps
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
adobe-target: true
# What is Azure Machine Learning?
-Azure Machine Learning is a cloud service for accelerating and managing the machine learning project lifecycle. Machine learning professionals, data scientists, and engineers can use it in their day-to-day workflows: Train and deploy models, and manage MLOps.
+Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle. ML professionals, data scientists, and engineers can use it in their day-to-day workflows to train and deploy models and manage machine learning operations (MLOps).
-You can create a model in Azure Machine Learning or use a model built from an open-source platform, such as Pytorch, TensorFlow, or scikit-learn. MLOps tools help you monitor, retrain, and redeploy models.
+You can create a model in Machine Learning or use a model built from an open-source platform, such as PyTorch, TensorFlow, or scikit-learn. MLOps tools help you monitor, retrain, and redeploy models.
> [!Tip]
-> **Free trial!** If you don't have an Azure subscription, create a free account before you begin. [Try the free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/machine-learning/search/). You get credits to spend on Azure services. After they're used up, you can keep the account and use [free Azure services](https://azure.microsoft.com/free/). Your credit card is never charged unless you explicitly change your settings and ask to be charged.
+> **Free trial!** If you don't have an Azure subscription, create a free account before you begin. [Try the free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/machine-learning/search/). You get credits to spend on Azure services. After they're used up, you can keep the account and use [free Azure services](https://azure.microsoft.com/free/). Your credit card is never charged unless you explicitly change your settings and ask to be charged.
## Who is Azure Machine Learning for?
-Azure Machine Learning is for individuals and teams implementing MLOps within their organization to bring machine learning models into production in a secure and auditable production environment.
+Machine Learning is for individuals and teams implementing MLOps within their organization to bring ML models into production in a secure and auditable production environment.
-Data scientists and ML engineers will find tools to accelerate and automate their day-to-day workflows. Application developers will find tools for integrating models into applications or services. Platform developers will find a robust set of tools, backed by durable Azure Resource Manager APIs, for building advanced ML tooling.
+Data scientists and ML engineers can use tools to accelerate and automate their day-to-day workflows. Application developers can use tools for integrating models into applications or services. Platform developers can use a robust set of tools, backed by durable Azure Resource Manager APIs, for building advanced ML tooling.
-Enterprises working in the Microsoft Azure cloud will find familiar security and role-based access control (RBAC) for infrastructure. You can set up a project to deny access to protected data and select operations.
+Enterprises working in the Microsoft Azure cloud can use familiar security and role-based access control for infrastructure. You can set up a project to deny access to protected data and select operations.
## Productivity for everyone on the team
-Machine learning projects often require a team with varied skill set to build and maintain. Azure Machine Learning has tools that help enable you to:
+ML projects often require a team with a varied skill set to build and maintain. Machine Learning has tools that help enable you to:
-* Collaborate with your team via shared notebooks, compute resources, [serverless compute (preview)](how-to-use-serverless-compute.md), data, and environments
-
-* Develop models for fairness and explainability, tracking and auditability to fulfill lineage and audit compliance requirements
-
-* Deploy ML models quickly and easily at scale, and manage and govern them efficiently with MLOps
-
-* Run machine learning workloads anywhere with built-in governance, security, and compliance
+* Collaborate with your team via shared notebooks, compute resources, [serverless compute (preview)](how-to-use-serverless-compute.md), data, and environments.
+* Develop models for fairness and explainability and tracking and auditability to fulfill lineage and audit compliance requirements.
+* Deploy ML models quickly and easily at scale, and manage and govern them efficiently with MLOps.
+* Run ML workloads anywhere with built-in governance, security, and compliance.
### Cross-compatible platform tools that meet your needs
Anyone on an ML team can use their preferred tools to get the job done. Whether
* [Azure Machine Learning studio](https://ml.azure.com)
* [Python SDK (v2)](https://aka.ms/sdk-v2-install)
-* [CLI (v2)](how-to-configure-cli.md))
-* [Azure Resource Manager REST APIs ](/rest/api/azureml/)
+* [Azure CLI (v2)](how-to-configure-cli.md)
+* [Azure Resource Manager REST APIs](/rest/api/azureml/)
-As you're refining the model and collaborating with others throughout the rest of Machine Learning development cycle, you can share and find assets, resources, and metrics for your projects on the Azure Machine Learning studio UI.
+As you're refining the model and collaborating with others throughout the rest of the Machine Learning development cycle, you can share and find assets, resources, and metrics for your projects on the Machine Learning studio UI.
### Studio
-The [Azure Machine Learning studio](https://ml.azure.com) offers multiple authoring experiences depending on the type of project and the level of your past ML experience, without having to install anything.
-
-* Notebooks: write and run your own code in managed Jupyter Notebook servers that are directly integrated in the studio.
-
-* Visualize run metrics: analyze and optimize your experiments with visualization.
-
- :::image type="content" source="media/overview-what-is-azure-machine-learning/metrics.png" alt-text="Screenshot of metrics for a training run.":::
+[Machine Learning studio](https://ml.azure.com) offers multiple authoring experiences depending on the type of project and the level of your past ML experience, without having to install anything.
-* Azure Machine Learning designer: use the designer to train and deploy machine learning models without writing any code. Drag and drop datasets and components to create ML pipelines.
+* **Notebooks**: Write and run your own code in managed Jupyter Notebook servers that are directly integrated in the studio.
+* **Visualize run metrics**: Analyze and optimize your experiments with visualization.
-* Automated machine learning UI: Learn how to create [automated ML experiments](tutorial-first-experiment-automated-ml.md) with an easy-to-use interface.
-
-* Data labeling: Use Azure Machine Learning data labeling to efficiently coordinate [image labeling](how-to-create-image-labeling-projects.md) or [text labeling](how-to-create-text-labeling-projects.md) projects.
+ :::image type="content" source="media/how-to-log-view-metrics/metrics.png" alt-text="Screenshot that shows metrics for a training run.":::
+* **Azure Machine Learning designer**: Use the designer to train and deploy ML models without writing any code. Drag and drop datasets and components to create ML pipelines.
+* **Automated machine learning UI**: Learn how to create [automated ML experiments](tutorial-first-experiment-automated-ml.md) with an easy-to-use interface.
+* **Data labeling**: Use Machine Learning data labeling to efficiently coordinate [image labeling](how-to-create-image-labeling-projects.md) or [text labeling](how-to-create-text-labeling-projects.md) projects.
## Enterprise-readiness and security
-Azure Machine Learning integrates with the Azure cloud platform to add security to ML projects.
+Machine Learning integrates with the Azure cloud platform to add security to ML projects.
Security integrations include:
-* Azure Virtual Networks (VNets) with network security groups
-* Azure Key Vault where you can save security secrets, such as access information for storage accounts
-* Azure Container Registry set up behind a VNet
+* Azure Virtual Networks with network security groups.
+* Azure Key Vault, where you can save security secrets, such as access information for storage accounts.
+* Azure Container Registry set up behind a virtual network.
-See [Tutorial: Set up a secure workspace](tutorial-create-secure-workspace.md).
+For more information, see [Tutorial: Set up a secure workspace](tutorial-create-secure-workspace.md).
## Azure integrations for complete solutions
-Other integrations with Azure services support a machine learning project from end-to-end. They include:
+Other integrations with Azure services support an ML project from end to end. They include:
-* Azure Synapse Analytics to process and stream data with Spark
-* Azure Arc, where you can run Azure services in a Kubernetes environment
-* Storage and database options, such as Azure SQL Database, Azure Storage Blobs, and so on
-* Azure App Service allowing you to deploy and manage ML-powered apps
-* [Microsoft Purview allows you to discover and catalog data assets across your organization](../purview/register-scan-azure-machine-learning.md)
+* Azure Synapse Analytics, which is used to process and stream data with Spark.
+* Azure Arc, where you can run Azure services in a Kubernetes environment.
+* Storage and database options, such as Azure SQL Database and Azure Blob Storage.
+* Azure App Service, which you can use to deploy and manage ML-powered apps.
+* [Microsoft Purview, which allows you to discover and catalog data assets across your organization](../purview/register-scan-azure-machine-learning.md).
> [!Important]
-> Azure Machine Learning doesn't store or process your data outside of the region where you deploy.
->
+> Machine Learning doesn't store or process your data outside of the region where you deploy.
## Machine learning project workflow
-Typically models are developed as part of a project with an objective and goals. Projects often involve more than one person. When experimenting with data, algorithms, and models, development is iterative.
+Typically, models are developed as part of a project with an objective and goals. Projects often involve more than one person. When you experiment with data, algorithms, and models, development is iterative.
### Project lifecycle
-While the project lifecycle can vary by project, it will often look like this:
+The project lifecycle can vary by project, but it often looks like this diagram.
-![Machine learning project lifecycle diagram](./media/overview-what-is-azure-machine-learning/overview-ml-development-lifecycle.png)
+![Diagram that shows the machine learning project lifecycle.](./media/overview-what-is-azure-machine-learning/overview-ml-development-lifecycle.png)
-A workspace organizes a project and allows for collaboration for many users all working toward a common objective. Users in a workspace can easily share the results of their runs from experimentation in the studio user interface or use versioned assets for jobs like environments and storage references.
+A workspace organizes a project and allows for collaboration for many users all working toward a common objective. Users in a workspace can easily share the results of their runs from experimentation in the studio user interface. Or they can use versioned assets for jobs like environments and storage references.
For more information, see [Manage Azure Machine Learning workspaces](how-to-manage-workspace.md?tabs=python).
-When a project is ready for operationalization, users' work can be automated in a machine learning pipeline and triggered on a schedule or HTTPS request.
+When a project is ready for operationalization, users' work can be automated in an ML pipeline and triggered on a schedule or HTTPS request.
-Models can be deployed to the managed inferencing solution, for both real-time and batch deployments, abstracting away the infrastructure management typically required for deploying models.
+You can deploy models to the managed inferencing solution, for both real-time and batch deployments, abstracting away the infrastructure management typically required for deploying models.
## Train models
-In Azure Machine Learning, you can run your training script in the cloud or build a model from scratch. Customers often bring models they've built and trained in open-source frameworks, so they can operationalize them in the cloud.
+In Machine Learning, you can run your training script in the cloud or build a model from scratch. Customers often bring models they've built and trained in open-source frameworks so that they can operationalize them in the cloud.
### Open and interoperable
-Data scientists can use models in Azure Machine Learning that they've created in common Python frameworks, such as:
+Data scientists can use models in Machine Learning that they've created in common Python frameworks, such as:
* PyTorch
* TensorFlow
Data scientists can use models in Azure Machine Learning that they've created in
* XGBoost
* LightGBM
-Other languages and frameworks are supported as well, including:
+Other languages and frameworks are also supported:
+ * R * .NET
-See [Open-source integration with Azure Machine Learning](concept-open-source.md).
+For more information, see [Open-source integration with Azure Machine Learning](concept-open-source.md).
-### Automated featurization and algorithm selection (AutoML)
+### Automated featurization and algorithm selection
-In a repetitive, time-consuming process, in classical machine learning data scientists use prior experience and intuition to select the right data featurization and algorithm for training. Automated ML (AutoML) speeds this process and can be used through the studio UI or Python SDK.
+In classical ML, data scientists use prior experience and intuition to select the right data featurization and algorithm for training, which is a repetitive, time-consuming process. Automated ML (AutoML) speeds up this process. You can use it through the Machine Learning studio UI or the Python SDK.
-See [What is automated machine learning?](concept-automated-ml.md)
+For more information, see [What is automated machine learning?](concept-automated-ml.md).
### Hyperparameter optimization
-Hyperparameter optimization, or hyperparameter tuning, can be a tedious task. Azure Machine Learning can automate this task for arbitrary parameterized commands with little modification to your job definition. Results are visualized in the studio.
+Hyperparameter optimization, or hyperparameter tuning, can be a tedious task. Machine Learning can automate this task for arbitrary parameterized commands with little modification to your job definition. Results are visualized in the studio.
-See [How to tune hyperparameters](how-to-tune-hyperparameters.md).
+For more information, see [Tune hyperparameters](how-to-tune-hyperparameters.md).
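A hedged sketch of the pattern with the Python SDK v2: define a parameterized command job, then sweep it over a search space. The script, metric, environment, and compute names are placeholders.

```python
from azure.ai.ml import command
from azure.ai.ml.sweep import Choice, Uniform

# Placeholder training command -- the script and parameter names are illustrative.
command_job = command(
    code="./src",
    command="python train.py --learning_rate ${{inputs.learning_rate}} --batch_size ${{inputs.batch_size}}",
    inputs={"learning_rate": 0.01, "batch_size": 32},
    environment="<environment-name>@latest",
    compute="<compute-cluster-name>",
)

# Override the fixed inputs with a search space, then turn the command into a sweep job.
command_job_for_sweep = command_job(
    learning_rate=Uniform(min_value=0.001, max_value=0.1),
    batch_size=Choice(values=[16, 32, 64]),
)
sweep_job = command_job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="validation_accuracy",  # Must match a metric your script logs.
    goal="Maximize",
)
```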
### Multinode distributed training
-Efficiency of training for deep learning and sometimes classical machine learning training jobs can be drastically improved via multinode distributed training. Azure Machine Learning compute clusters and [serverless compute (preview)](how-to-use-serverless-compute.md) offer the latest GPU options.
+Efficiency of training for deep learning and sometimes classical ML training jobs can be drastically improved via multinode distributed training. Machine Learning compute clusters and [serverless compute (preview)](how-to-use-serverless-compute.md) offer the latest GPU options.
-Supported via Azure Machine Learning Kubernetes, Azure Machine Learning compute clusters, and [serverless compute (preview)](how-to-use-serverless-compute.md):
+Frameworks supported via Azure Machine Learning Kubernetes, Machine Learning compute clusters, and [serverless compute (preview)](how-to-use-serverless-compute.md) include:
* PyTorch
* TensorFlow
* MPI
-The MPI distribution can be used for Horovod or custom multinode logic. Additionally, Apache Spark is supported via [serverless Spark compute and attached Synapse Spark pool](apache-spark-azure-ml-concepts.md) that leverage Azure Synapse Analytics Spark clusters.
+You can use MPI distribution for Horovod or custom multinode logic. Apache Spark is supported via [serverless Spark compute and attached Synapse Spark pool](apache-spark-azure-ml-concepts.md) that use Azure Synapse Analytics Spark clusters.
-See [Distributed training with Azure Machine Learning](concept-distributed-training.md).
+For more information, see [Distributed training with Azure Machine Learning](concept-distributed-training.md).
### Embarrassingly parallel training
-Scaling a machine learning project may require scaling embarrassingly parallel model training. This pattern is common for scenarios like forecasting demand, where a model may be trained for many stores.
+Scaling an ML project might require scaling embarrassingly parallel model training. This pattern is common for scenarios like forecasting demand, where a model might be trained for many stores.
## Deploy models
-To bring a model into production, it's deployed. Azure Machine Learning's managed endpoints abstract the required infrastructure for both batch or real-time (online) model scoring (inferencing).
+To bring a model into production, you deploy it. The Machine Learning managed endpoints abstract the required infrastructure for both batch and real-time (online) model scoring (inferencing).
### Real-time and batch scoring (inferencing)

*Batch scoring*, or *batch inferencing*, involves invoking an endpoint with a reference to data. The batch endpoint runs jobs asynchronously to process data in parallel on compute clusters and store the data for further analysis.
-*Real-time scoring*, or *online inferencing*, involves invoking an endpoint with one or more model deployments and receiving a response in near-real-time via HTTPs. Traffic can be split across multiple deployments, allowing for testing new model versions by diverting some amount of traffic initially and increasing once confidence in the new model is established.
+*Real-time scoring*, or *online inferencing*, involves invoking an endpoint with one or more model deployments and receiving a response in near real time via HTTPS. Traffic can be split across multiple deployments, allowing for testing new model versions by diverting some amount of traffic initially and increasing after confidence in the new model is established.
-See:
- * [Deploy a model with a real-time managed endpoint](how-to-deploy-online-endpoints.md)
- * [Use batch endpoints for scoring](batch-inference/how-to-use-batch-endpoint.md)
+For more information, see:
+ * [Deploy a model with a real-time managed endpoint](how-to-deploy-online-endpoints.md)
+ * [Use batch endpoints for scoring](batch-inference/how-to-use-batch-endpoint.md)
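A minimal sketch of invoking both kinds of endpoints described above with the Python SDK v2 follows; the endpoint names, deployment name, request file, and data path are assumptions.

```python
# Sketch: score against a managed online endpoint and trigger a batch endpoint job (SDK v2).
from azure.ai.ml import Input, MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace>"
)

# Real-time (online) scoring: send a small JSON payload, get a near real-time response.
response = ml_client.online_endpoints.invoke(
    endpoint_name="my-online-endpoint",
    request_file="./sample-request.json",
    deployment_name="blue",              # optional; otherwise traffic rules decide
)
print(response)

# Batch scoring: point the endpoint at a folder of data; the job runs asynchronously.
batch_job = ml_client.batch_endpoints.invoke(
    endpoint_name="my-batch-endpoint",
    input=Input(type="uri_folder", path="azureml://datastores/workspaceblobstore/paths/batch-data/"),
)
print(batch_job.name)
```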
-## MLOps: DevOps for machine learning
+## MLOps: DevOps for machine learning
-DevOps for machine learning models, often called MLOps, is a process for developing models for production. A model's lifecycle from training to deployment must be auditable if not reproducible.
+DevOps for ML models, often called MLOps, is a process for developing models for production. A model's lifecycle from training to deployment must be auditable if not reproducible.
-### ML model lifecycle
+### ML model lifecycle
-![Machine learning model lifecycle * MLOps](./media/overview-what-is-azure-machine-learning/model-lifecycle.png)
+![Diagram that shows the machine learning model lifecycle * MLOps.](./media/overview-what-is-azure-machine-learning/model-lifecycle.png)
Learn more about [MLOps in Azure Machine Learning](concept-model-management-and-deployment.md).

### Integrations enabling MLOps
-Azure Machine Learning is built with the model lifecycle in mind. You can audit the model lifecycle down to a specific commit and environment.
+Machine Learning is built with the model lifecycle in mind. You can audit the model lifecycle down to a specific commit and environment.
Some key features enabling MLOps include:
-* `git` integration
-* MLflow integration
-* Machine learning pipeline scheduling
-* Azure Event Grid integration for custom triggers
-* Easy to use with CI/CD tools like GitHub Actions or Azure DevOps
+* `git` integration.
+* MLflow integration.
+* Machine learning pipeline scheduling (see the sketch after this list).
+* Azure Event Grid integration for custom triggers.
+* Ease of use with CI/CD tools like GitHub Actions or Azure DevOps.
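As a sketch of the pipeline scheduling item above, the SDK v2 can attach a recurrence trigger to an existing pipeline job definition; the `pipeline.yml` file and the schedule details are assumptions.

```python
# Sketch: run an existing pipeline definition every Monday at 09:00.
from azure.ai.ml import MLClient, load_job
from azure.ai.ml.entities import JobSchedule, RecurrencePattern, RecurrenceTrigger
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace>"
)

pipeline_job = load_job("./pipeline.yml")   # assumed pipeline job definition

trigger = RecurrenceTrigger(
    frequency="week",
    interval=1,
    schedule=RecurrencePattern(hours=9, minutes=0, week_days=["monday"]),
)

schedule = JobSchedule(name="weekly-training", trigger=trigger, create_job=pipeline_job)
ml_client.schedules.begin_create_or_update(schedule).result()
```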
+
+Machine Learning also includes features for monitoring and auditing:
-Also, Azure Machine Learning includes features for monitoring and auditing:
-* Job artifacts, such as code snapshots, logs, and other outputs
-* Lineage between jobs and assets, such as containers, data, and compute resources
+* Job artifacts, such as code snapshots, logs, and other outputs.
+* Lineage between jobs and assets, such as containers, data, and compute resources.
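A minimal sketch of retrieving those artifacts programmatically with the SDK v2 follows; the job name and the git-commit property key are assumptions.

```python
# Sketch: inspect a completed job and download its outputs and logs for auditing.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace>"
)

job_name = "<completed-job-name>"            # placeholder
job = ml_client.jobs.get(job_name)
print(job.status, job.properties.get("azureml.git.commit", "no git info recorded"))

# Download the job's default outputs and logs to a local folder.
ml_client.jobs.download(name=job_name, download_path="./job-artifacts")
```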
## Next steps

Start using Azure Machine Learning:
+
- [Set up an Azure Machine Learning workspace](quickstart-create-resources.md)
- [Tutorial: Build a first machine learning project](tutorial-1st-experiment-hello-world.md)
-- [How to run training jobs](how-to-train-model.md)
+- [Run training jobs](how-to-train-model.md)
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-decide-on-right-migration-tools.md
To help you select the right tools for migrating to Azure Database for MySQL, co
| Migration Scenario | Tool(s) | Details | More information |
|--||||
-| Single to Flexible Server (Azure portal) | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) | Recommended |
-| Single to Flexible Server (Azure CLI) | [Custom shell script](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate) | [Migrate from Azure Database for MySQL - Single Server to Flexible Server in five easy steps!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057) | The [script](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate) also moves other server components such as security settings and server parameter configurations. |
+| Single to Flexible Server (Azure portal) | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) | Suitable for < 1TB workloads; cross-region, cross-storage type and cross-version migrations. |
+| Single to Flexible Server (Azure CLI) | Azure MySQL Import CLI | [Tutorial: Azure MySQL Import](../migrate/migrate-single-flexible-mysql-import-cli.md) | **Recommended** - Suitable for all sizes of workloads, extremely performant for > 500 GB workloads.|
| MySQL databases (>= 1 TB) to Azure Database for MySQL | Dump and Restore using **MyDumper/MyLoader** + High Compute VM | [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md) | [Best Practices for migrating large databases to Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699) |
-| MySQL databases (< 1 TB) to Azure Database for MySQL | Database Migration Service (classic) and the Azure portal | [Migrate MySQL databases to Azure Database for MySQL using DMS (classic)](../../dms/tutorial-mysql-azure-mysql-offline-portal.md) | If network bandwidth between source and target is good (e.g: High-speed express route), use Azure DMS (database migration service) |
-| Amazon RDS for MySQL databases (< 1 TB) to Azure Database for MySQL | MySQL Workbench | [Migrate Amazon RDS for MySQL databases ( < 1 TB) to Azure Database for MySQL using MySQL Workbench](../single-server/how-to-migrate-rds-mysql-workbench.md) | If you have low network bandwidth between source and Azure, use **Mydumper/Myloader + High compute VM** to take advantage of compression settings to efficiently move data over low speed networks |
-| Import and export MySQL databases (< 1 TB) in Azure Database for MySQL | mysqldump or MySQL Workbench Import/Export utility | [Import and export - Azure Database for MySQL](../single-server/concepts-migrate-import-export.md) | Use the **mysqldump** and **MySQL Workbench Export/Import** utility tool to perform offline migrations for smaller databases. |
### Online
To help you select the right tools for migrating to Azure Database for MySQL - F
| Migration Scenario | Tool(s) | Details | More information | |--|||| | Single to Flexible Server (Azure portal) | Database Migration Service (classic) | [Tutorial: DMS (classic) with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) | Recommended |
-| Single to Flexible Server | Mydumper/Myloader with Data-in replication | [Migrate Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server with open-source tools](how-to-migrate-single-flexible-minimum-downtime.md) | N/A |
+| Single to Flexible Server | Mydumper/Myloader with Data-in replication | [Migrate Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server with open-source tools](how-to-migrate-single-flexible-minimum-downtime.md) | N/A |
| Azure Database for MySQL Flexible Server Data-in replication | **Mydumper/Myloader with Data-in replication** | [Configure Data-in replication - Azure Database for MySQL Flexible Server](../flexible-server/how-to-data-in-replication.md) | N/A |

## Next steps
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
Following described are the ways to review your migration schedule once you have
* The Single Server instance should be in **ready state** and should not be in stopped state during the planned maintenance window for automigration to take place.
* For a Single Server instance with **SSL enabled**, ensure you have all three certificates (**[BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [DigiCertGlobalRootG2 Root CA](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and [DigiCertGlobalRootCA Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)**) available in the trusted root store. Additionally, if you have the certificate pinned to the connection string, create a combined CA certificate with all three certificates before the scheduled auto-migration to ensure business continuity post-migration.
* The MySQL engine doesn't guarantee any sort order if there is no 'SORT' clause present in queries. Post in-place automigration, you may observe a change in the sort order. If preserving sort order is crucial, ensure your queries are updated to include the 'SORT' clause before the scheduled in-place automigration.
+* If your source Azure Database for MySQL Single Server runs engine version v8.x, make sure to upgrade your source server's .NET client driver to version 8.0.32 to avoid encoding incompatibilities after migration to Flexible Server.
## How is the target MySQL Flexible Server auto-provisioned?
mysql Migrate Single Flexible Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md
az account set --subscription <subscription id>
## Limitations and pre-requisites
+- If your source Azure Database for MySQL Single Server runs engine version v8.x, make sure to upgrade your source server's .NET client driver to version 8.0.32 to avoid encoding incompatibilities after migration to Flexible Server.
- The source Azure Database for MySQL - Single Server and the target Azure Database for MySQL - Flexible Server must be in the same subscription, resource group, region, and on the same MySQL version. MySQL Import across subscriptions, resource groups, regions, and versions isn't possible.
- MySQL versions supported by Azure MySQL Import are 5.7 and 8.0. If you are on a different major MySQL version on Single Server, make sure to upgrade your version on your Single Server instance before triggering the import command.
- If the Azure Database for MySQL - Single Server instance has server parameter 'lower_case_table_names' set to 2 and your application used partition tables, MySQL Import will result in corrupted partition tables. The recommendation is to set 'lower_case_table_names' to 1 for your Azure Database for MySQL - Single Server instance in order to proceed with corruption-free MySQL Import operation.
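A small sketch for checking the 'lower_case_table_names' setting on the source server before you trigger the import; the connection details are placeholders, and the `mysql-connector-python` package is an assumption (any MySQL client works).

```python
# Sketch: read lower_case_table_names from the source Single Server.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="<single-server-name>.mysql.database.azure.com",
    user="<admin-user>@<single-server-name>",
    password="<password>",
)
cursor = conn.cursor()
cursor.execute("SHOW VARIABLES LIKE 'lower_case_table_names'")
name, value = cursor.fetchone()
print(f"{name} = {value}")   # a value of 2 combined with partition tables can corrupt the import
cursor.close()
conn.close()
```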
network-watcher Migrate To Connection Monitor From Connection Monitor Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md
The following table list common errors that you might encounter during the migra
## Related content

-- [Migrate from Network Performance Monitor to Connection Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md).
-- [Create Connection Monitor by using the Azure portal](connection-monitor-create-using-portal.md).
+- [Migrate from Network performance monitor to Connection monitor](migrate-to-connection-monitor-from-network-performance-monitor.md).
+- [Create a connection monitor using the Azure portal](connection-monitor-create-using-portal.md).
network-watcher Migrate To Connection Monitor From Network Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md
Title: Migrate to Connection Monitor from Network Performance Monitor
+ Title: Migrate to Connection monitor from Network performance monitor
-description: Learn how to migrate to Connection Monitor from Network Performance Monitor.
+description: Learn how to migrate your tests from Network performance monitor to the new Connection monitor in Azure Network Watcher.
- Previously updated : 06/30/2021
+ Last updated : 11/03/2023
-#Customer intent: I need to migrate from Network Performance Monitor to Connection Monitor.
+#CustomerIntent: As an Azure administrator, I want to migrate my tests from Network performance monitor to the new Connection monitor so I avoid service disruption.
-# Migrate to Connection Monitor from Network Performance Monitor
-> [!IMPORTANT]
-> Starting 1 July 2021, you'll not be able to add new tests in an existing workspace or enable a new workspace with Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, migrate your tests from Network Performance Monitor to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+# Migrate to Connection monitor from Network performance monitor
-You can migrate tests from Network Performance Monitor to new, improved Connection Monitor with a single click and with zero downtime. To learn more about the benefits, see [Connection Monitor](./connection-monitor-overview.md).
+> [!IMPORTANT]
+> Starting July 1, 2021, you won't be able to add new tests in an existing workspace or enable a new workspace with Network performance monitor. You can continue to use the tests created prior to July 1, 2021. To minimize service disruption to your current workloads, migrate your tests from Network performance monitor to the new Connection monitor in Azure Network Watcher before February 29, 2024.
+You can migrate existing tests from Network performance monitor to the new, improved Connection monitor with a single click and with zero downtime. To learn more about the benefits of the new Connection monitor, see [Connection monitor overview](connection-monitor-overview.md).
## Key points to note

The migration helps produce the following results:
-* On-premises agents and firewall settings work as is. No changes are required. Log Analytics agents that are installed on Azure virtual machines need to be replaced with the [Network Watcher extension](../virtual-machines/extensions/network-watcher-windows.md).
-* Existing tests are mapped to Connection Monitor > Test Group > Test format. By selecting **Edit**, you can view and modify the properties of the new Connection Monitor, download a template to make changes to it, and submit the template via Azure Resource Manager.
-* Agents send data to both the Log Analytics workspace and the metrics.
-* Data monitoring:
- * **Data in Log Analytics**: Before migration, the data remains in the workspace in which Network Performance Monitor is configured in the NetworkMonitoring table. After the migration, the data goes to the NetworkMonitoring table, NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table in the same workspace. After the tests are disabled in Network Performance Monitor, the data is stored only in the NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table.
- * **Log-based alerts, dashboards, and integrations**: You must manually edit the queries based on the new NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table. To re-create the alerts in metrics, see [Network connectivity monitoring with Connection Monitor](./connection-monitor-overview.md#metrics-in-azure-monitor).
-* For ExpressRoute Monitoring:
- * **End to end loss and latency**: Connection Monitor will power this, and it will be easier than Network Performance Monitor, as users don't need to configure which circuits and peerings to monitor. Circuits in the path will automatically be discovered, data will be available in metrics (faster than LA, which was where Network Performance Monitor stored the results). Topology will work as is as well.
- * **Bandwidth measurements**: With the launch of bandwidth related metrics, Network Performance MonitorΓÇÖs log analytics based approach wasn't effective in bandwidth monitoring for ExpressRoute customers. This capability is now not available in Connection Monitor.
+- On-premises agents and firewall settings work as is. No changes are required. Log Analytics agents that are installed on Azure virtual machines need to be replaced with the [Network Watcher extension](../virtual-machines/extensions/network-watcher-windows.md?toc=/azure/network-watcher/toc.json).
+- Existing tests are mapped to Connection monitor > Test group > Test format. By selecting **Edit**, you can view and modify the properties of the new Connection monitor, download a template to make changes to it, and submit the template via Azure Resource Manager.
+- Agents send data to both the Log Analytics workspace and the metrics.
+- Data monitoring:
+ - **Data in Log Analytics**: Before migration, the data remains in the workspace in which Network performance monitor is configured in the NetworkMonitoring table. After the migration, the data goes to the NetworkMonitoring table, NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table in the same workspace. After the tests are disabled in Network performance monitor, the data is stored only in the NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table.
+ - **Log-based alerts, dashboards, and integrations**: You must manually edit the queries based on the new NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table. To re-create the alerts in metrics, see [Metrics in Azure Monitor](connection-monitor-overview.md#metrics-in-azure-monitor).
+- For ExpressRoute monitoring:
+ - **End to end loss and latency**: This is easier in Connection monitor than in Network performance monitor, as you don't need to configure which circuits and peerings to monitor. Circuits in the path are automatically discovered, and data is available in metrics (faster than Log Analytics, which was where Network performance monitor stored the results).
+ - **Bandwidth measurements**: With the launch of bandwidth related metrics, Network performance monitor's log analytics based approach wasn't effective for bandwidth monitoring for ExpressRoute customers. This capability isn't available in Connection monitor.
## Prerequisites
-* Ensure that Network Watcher is enabled in your subscription and the region of the Log Analytics workspace. If not done, you'll see an error stating "Before you attempt migrate, enable Network watcher extension in selection subscription and location of LA workspace selected."
-* In case Azure VM belonging to a different region/subscription than that of Log Analytics workspace is used as an endpoint, make sure Network Watcher is enabled for that subscription and region.
-* Azure virtual machines with Log Analytics agents installed must be enabled with the Network Watcher extension.
+- Ensure that Network Watcher is enabled in the subscription and region of the Log Analytics workspace. If not done, you see an error stating "Before you attempt to migrate, enable Network watcher extension in subscription and location of LA workspace selected."
+- If an Azure virtual machine (VM) that belongs to a different region or subscription than the Log Analytics workspace is used as an endpoint, make sure Network Watcher is enabled for that subscription and region.
+- Azure virtual machines with Log Analytics agents installed must be enabled with the Network Watcher extension.
## Migrate the tests
-To migrate the tests from Network Performance Monitor to Connection Monitor, do the following:
+To migrate the tests from Network performance monitor to Connection monitor, follow these steps:
1. In Network Watcher, select **Connection Monitor**, and then select the **Import tests from NPM** tab.
- :::image type="content" source="./media/connection-monitor-2-preview/migrate-netpm-to-cm-preview.png" alt-text="Migrate tests from Network Performance Monitor to Connection Monitor" lightbox="./media/connection-monitor-2-preview/migrate-netpm-to-cm-preview.png":::
+ :::image type="content" source="./media/migrate-to-connection-monitor-from-network-performance-monitor/migrate-from-network-performance-monitor.png" alt-text="Migrate tests from Network performance monitor to Connection monitor" lightbox="./media/migrate-to-connection-monitor-from-network-performance-monitor/migrate-from-network-performance-monitor.png":::
-1. In the drop-down lists, select your subscription and workspace, and then select the Network Performance Monitor feature you want to migrate.
+1. In the drop-down lists, select your subscription and workspace, and then select the Network performance monitor feature you want to migrate.
1. Select **Import** to migrate the tests.
-* If Network Performance Monitor isn't enabled on the workspace, you'll see an error stating "No valid NPM config found".
-* If no tests exist in the feature you chose in step2, you'll see an error stating "Workspace selected doesn't have \<feature\> config".
-* If there are no valid tests, you'll see an error stating "Workspace selected does not have valid tests"
-* Your tests may contain agents that are no longer active, but may have been active in the past. You'll see an error stating "Few tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection Monitor. Select continue to migrate the tests that do not contain agents that are not active."
+ - If Network performance monitor isn't enabled on the workspace, you see an error stating "No valid NPM config found".
+ - If no tests exist in the feature you chose in step 2, you see an error stating "Workspace selected doesn't have \<feature\> config".
+ - If there are no valid tests, you see an error stating "Workspace selected does not have valid tests".
+ - Your tests might contain agents that are no longer active, but have been active in the past. You see an error stating "Few tests contain agents that are no longer active. These agents might be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection monitor. Select continue to migrate the tests that do not contain agents that are not active."
After the migration begins, the following changes take place:
-* A new connection monitor resource is created.
- * One connection monitor per region and subscription is created. For tests with on-premises agents, the new connection monitor name is formatted as `<workspaceName>_"workspace_region_name"`. For tests with Azure agents, the new connection monitor name is formatted as `<workspaceName>_<Azure_region_name>`.
- * Monitoring data is now stored in the same Log Analytics workspace in which Network Performance Monitor is enabled, in new tables called NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table.
- * The test name is carried forward as the test group name. The test description isn't migrated.
- * Source and destination endpoints are created and used in the new test group. For on-premises agents, the endpoints are formatted as `<workspaceName>_<FQDN of on-premises machine>`. The Agent description isn't migrated.
- * Destination port and probing interval are moved to a test configuration called `TC_<protocol>_<port>` and `TC_<protocol>_<port>_AppThresholds`. The protocol is set based on the port values. For ICMP, the test configurations are named as `TC_<protocol>` and `TC_<protocol>_AppThresholds`. Success thresholds and other optional properties if set, are migrated, otherwise are left blank.
- * If the migrating tests contain agents that aren't running, you need to enable the agents and migrate again.
-* Network Performance Monitor isn't disabled, so the migrated tests can continue to send data to the NetworkMonitoring table, NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table. This approach ensures that existing log-based alerts and integrations are unaffected.
-* The newly created connection monitor is visible in Connection Monitor.
+- A new connection monitor resource is created.
+ - One connection monitor per region and subscription is created. For tests with on-premises agents, the new connection monitor name is formatted as `<workspaceName>_<workspace_region_name>`. For tests with Azure agents, the new connection monitor name is formatted as `<workspaceName>_<Azure_region_name>`.
+ - Monitoring data is now stored in the same Log Analytics workspace in which Network performance monitor is enabled, in new tables called NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table.
+ - The test name is carried forward as the test group name. The test description isn't migrated.
+ - Source and destination endpoints are created and used in the new test group. For on-premises agents, the endpoints are formatted as `<workspaceName>_<FQDN of on-premises machine>`. The Agent description isn't migrated.
+ - Destination port and probing interval are moved to a test configuration called `TC_<protocol>_<port>` and `TC_<protocol>_<port>_AppThresholds`. The protocol is set based on the port values. For ICMP, the test configurations are named as `TC_<protocol>` and `TC_<protocol>_AppThresholds`. Success thresholds and other optional properties, if set, are migrated; otherwise, they're left blank.
+ - If the migrating tests contain agents that aren't running, you need to enable the agents and migrate again.
+- Network performance monitor isn't disabled, so the migrated tests can continue to send data to the NetworkMonitoring table, NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table. This approach ensures that existing log-based alerts and integrations are unaffected.
+- The newly created connection monitor is visible in Connection monitor.
After the migration, be sure to:
-* Manually disable the tests in Network Performance Monitor. Until you do so, you'll continue to be charged for them.
-* While you're disabling Network Performance Monitor, re-create your alerts on the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables or use metrics.
-* Migrate any external integrations to the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables. Examples of external integrations are dashboards in Power BI and Grafana, and integrations with Security Information and Event Management (SIEM) systems.
+- Manually disable the tests in Network performance monitor. Until you do so, you'll continue to be charged for them.
+- While you're disabling Network performance monitor, re-create your alerts on the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables or use metrics (see the query sketch after this list).
+- Migrate any external integrations to the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables. Examples of external integrations are dashboards in Power BI and Grafana, and integrations with Security Information and Event Management (SIEM) systems.
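A minimal sketch of querying the new tables from Python with the Azure Monitor query library; the workspace ID is a placeholder, and the column names (`TestGroupName`, `ChecksTotal`, `ChecksFailed`) are assumptions to verify against the table schema in your workspace.

```python
# Sketch: summarize failed checks per test group from NWConnectionMonitorTestResult.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())
query = """
NWConnectionMonitorTestResult
| summarize FailedChecks = sum(ChecksFailed), TotalChecks = sum(ChecksTotal) by TestGroupName
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-guid>",
    query=query,
    timespan=timedelta(hours=1),
)
if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
```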
## Common Errors Encountered
-Below are some common errors faced during the migration:
-
-| Error | Reason |
-|||
-| No valid NPM config found. Go to NPM UI to check config | This error occurs when User is selecting Import Tests from Network Performance Monitor to migrate the tests but Network Performance Monitor isn't enabled in the workspace. |
-|Workspace selected does not have 'Service Connectivity Monitor' config | This error occurs when User is migrating tests from Network Performance MonitorΓÇÖs Service Connectivity Monitor to Connection Monitor but there are no tests configured in Service Connectivity Monitor. |
-|Workspace selected does not have 'ExpressRoute Monitor' config | This error occurs when User is migrating tests from Network Performance MonitorΓÇÖs ExpressRoute Monitor to Connection Monitor but there are no tests configured in ExpressRoute Monitor. |
-|Workspace selected does not have 'Performance Monitor' config | This error occurs when User is migrating tests from Network Performance MonitorΓÇÖs Performance Monitor to Connection Monitor but there are no tests configured in Performance Monitor. |
-|Workspace selected does not have valid '{0}' tests | This error occurs when User is migrating tests from Network Performance Monitor to Connection Monitor but there are no valid tests present in the feature chosen by User to migrate. |
-|Before you attempt migrate, enable Network watcher extension in selection subscription and location of LA workspace selected | This error occurs when User is migrating tests from Network Performance Monitor to Connection Monitor and Network Watcher Extension isn't enabled in the LA workspace selected. User needs to enable NW Extension before migrating tests. |
-|Few {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection Monitor. Select continue to migrate the tests that do not contain agents that are not active. | This error occurs when User is migrating tests from Network Performance Monitor to Connection Monitor and some selected tests contain inactive Network Watcher Agents or such NW Agents, which are no longer active but used to be active in the past and have been shut down. User can deselect these tests and continue to select and migrate the tests, which don't contain any such inactive agents. |
-|Your {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection Monitor | This error occurs when User is migrating tests from Network Performance Monitor to Connection Monitor and selected tests contain inactive Network Watcher Agents or such NW Agents, which are no longer active but used to be active in the past and have been shut down. User needs to enable the agents and then continue to migrate these tests to Connection Monitor. |
-|An error occurred while importing tests to connection monitor | This error occurs when the User is trying to migrate tests from Network Performance Monitor to CM but the migration isn't successful due to errors. |
-
+The following table lists common errors that you might encounter during the migration:
+| Error | Reason |
+| -- | |
+| No valid NPM config found. Go to NPM UI to check config | This error occurs when you select **Import tests from NPM** to migrate the tests but Network performance monitor isn't enabled in the workspace. |
+| Workspace selected does not have 'Service Connectivity Monitor' config | This error occurs when you migrate tests from Network performance monitor's Service Connectivity Monitor to Connection monitor but no tests are configured in Service Connectivity Monitor. |
+| Workspace selected does not have 'ExpressRoute Monitor' config | This error occurs when you migrate tests from Network performance monitor's ExpressRoute Monitor to Connection monitor but no tests are configured in ExpressRoute Monitor. |
+| Workspace selected does not have 'Performance Monitor' config | This error occurs when you migrate tests from Network performance monitor's Performance Monitor to Connection monitor but no tests are configured in Performance Monitor. |
+| Workspace selected does not have valid '{0}' tests | This error occurs when you migrate tests from Network performance monitor to Connection monitor but the feature you chose to migrate has no valid tests. |
+| Before you attempt to migrate, enable Network watcher extension in selection subscription and location of LA workspace selected | This error occurs when you migrate tests from Network performance monitor to Connection monitor and the Network Watcher extension isn't enabled in the selected Log Analytics workspace. Enable the extension before migrating the tests. |
+| Few {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents might be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection monitor. Select continue to migrate the tests that do not contain agents that are not active. | This error occurs when some of the selected tests contain Network Watcher agents that are no longer active (they were active in the past but have since been shut down). You can deselect these tests and continue migrating only the tests that don't contain inactive agents. |
+| Your {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents might be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection monitor | This error occurs when the selected tests contain Network Watcher agents that are no longer active (they were active in the past but have since been shut down). Enable the agents, and then migrate these tests to Connection monitor. |
+| An error occurred while importing tests to connection monitor | This error occurs when the migration of tests from Network performance monitor to Connection monitor fails due to errors. |
-## Next steps
+## Related content
-To learn more about Connection Monitor, see:
-* [Migrate from Connection Monitor (classic) to Connection Monitor](./migrate-to-connection-monitor-from-connection-monitor-classic.md)
-* [Create Connection Monitor by using the Azure portal](./connection-monitor-create-using-portal.md)
+- [Migrate from Connection monitor (classic) to Connection monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md).
+- [Create a connection monitor using the Azure portal](connection-monitor-create-using-portal.md).
postgresql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-auto-grow-storage-portal.md
Last updated 06/24/2022
This article describes how you can configure an Azure Database for PostgreSQL server storage to grow without impacting the workload.
-or servers with more than 1 TiB of provisioned storage, the storage autogrow mechanism activates when the available space falls to less than 10% of the total capacity or 64 GiB of free space, whichever of the two values is smaller. Conversely, for servers with storage under 1 TB, this threshold is adjusted to 20% of the available free space or 64 GiB, depending on which of these values is smaller.
+For servers with more than 1 TiB of provisioned storage, the storage autogrow mechanism activates when the available space falls to less than 10% of the total capacity or 64 GiB of free space, whichever of the two values is smaller. Conversely, for servers with storage under 1 TB, this threshold is adjusted to 20% of the available free space or 64 GiB, depending on which of these values is smaller.
As an illustration, take a server with a storage capacity of 2 TiB (greater than 1 TiB). In this case, the autogrow limit is set at 64 GiB. This choice is made because 64 GiB is the smaller value when compared to 10% of 2 TiB, which is roughly 204.8 GiB. In contrast, for a server with a storage size of 128 GiB (less than 1 TiB), the autogrow feature activates when there's only 25.8 GiB of space left. This activation is based on the 20% threshold of the total allocated storage (128 GiB), which is smaller than 64 GiB.
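The same threshold arithmetic, expressed as a small worked example (not an Azure API call):

```python
# Sketch: compute the free-space threshold at which storage autogrow activates.
def autogrow_threshold_gib(provisioned_gib: float) -> float:
    percent = 0.10 if provisioned_gib > 1024 else 0.20   # 10% above 1 TiB, otherwise 20%
    return min(percent * provisioned_gib, 64)             # capped at 64 GiB

print(autogrow_threshold_gib(2048))   # 2 TiB server -> 64 GiB (64 < ~204.8)
print(autogrow_threshold_gib(128))    # 128 GiB server -> 25.6 GiB (20% of 128 GiB)
```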
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
An inventory job can take a longer amount of time in these cases:
The inventory run might take longer time to run as compared to the subsequent inventory runs.

-- In inventory run is processing a large amount of data in hierarchical namespace enabled accounts
+- An inventory run is processing a large amount of data in hierarchical namespace enabled accounts
An inventory job might take more than one day to complete for hierarchical namespace enabled accounts that have hundreds of millions of blobs. Sometimes the inventory job fails and doesn't create an inventory file. If a job doesn't complete successfully, check subsequent jobs to see if they're complete before contacting support.
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
The built-in lookup table is small, but on Unix, it's augmented by the local sys
On Windows, MIME types are extracted from the registry. This feature can be turned off with the help of a flag. Refer to the flag section.
-If you set an environment variable by using the command line, that variable will be readable in your command line history. Consider clearing variables that contain credentials from your command line history. To keep variables from appearing in your history, you can use a script to prompt the user for their credentials, and to set the environment variable.
+If you set an environment variable by using the command line, that variable is readable in your command line history. Consider clearing variables that contain credentials from your command line history. To keep variables from appearing in your history, you can use a script to prompt the user for their credentials, and to set the environment variable.
```azcopy azcopy copy [source] [destination] [flags]
Upload files and directories to Azure Storage account and set the query-string e
- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax:
  - `azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"`
- Keys and values are URL encoded and the key-value pairs are separated by an ampersand('&')
-- While setting tags on the blobs, there are more permissions('t' for tags) in SAS without which the service will give authorization error back.
+- While setting tags on the blobs, there are more permissions('t' for tags) in SAS without which the service gives authorization error back.
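A small helper sketch for producing that URL-encoded `--blob-tags` value from ordinary key-value pairs:

```python
# Sketch: URL-encode blob tag keys and values and join the pairs with '&'.
from urllib.parse import quote, urlencode

tags = {"bla bla": "foo", "bla bla 2": "bar"}
blob_tags = urlencode(tags, quote_via=quote)   # spaces become %20
print(blob_tags)                               # bla%20bla=foo&bla%20bla%202=bar
```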
Download a single file by using OAuth authentication. If you haven't yet logged into AzCopy, run the azcopy login command before you run the following command.
Download a subset of containers within a storage account by using a wildcard sym
`azcopy cp "https://[srcaccount].blob.core.windows.net/[container*name]" "/path/to/dir" --recursive`
-Download all the versions of a blob from Azure Storage to local directory. Ensure that source is a valid blob, destination is a local folder and `versionidsFile` which takes in a path to the file where each version is written on a separate line. All the specified versions will get downloaded in the destination folder specified.
+Download all the versions of a blob from Azure Storage to local directory. Ensure that source is a valid blob, destination is a local folder and `versionidsFile` which takes in a path to the file where each version is written on a separate line. All the specified versions get downloaded in the destination folder specified.
`azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]/[blobname]" "/path/to/dir" --list-of-versions="/another/path/to/dir/[versionidsFile]"`
Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from
`--preserve-smb-permissions` will still preserve ACLs but Owner and Group is based on the user running AzCopy (default true)
-`--preserve-permissions` False by default. Preserves ACLs between aware resources (Windows and Azure Files, or Azure Data Lake Storage Gen2 to Azure Data Lake Storage Gen2). For Hierarchical Namespace accounts, you'll need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you'll also need the `--backup` flag to restore permissions where the new Owner won't be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern).
+`--preserve-permissions` False by default. Preserves ACLs between aware resources (Windows and Azure Files, or Azure Data Lake Storage Gen2 to Azure Data Lake Storage Gen2). For accounts that have a hierarchical namespace, your security principal must be the owning user of the target container or it must be assigned the Storage Blob Data Owner role, scoped to the target container, storage account, parent resource group, or subscription. For downloads, you'll also need the `--backup` flag to restore permissions where the new Owner won't be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern).
`--preserve-smb-info` For SMB-aware locations, flag is set to true by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). Only the attribute bits supported by Azure Files are transferred; any others are ignored. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for `Last Write Time` which is never preserved for folders. (default true)
virtual-machines Disks Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-metrics.md
Azure offers metrics in the Azure portal that provide insight on how your virtual machines (VM) and disks perform. The metrics can also be retrieved through an API call. This article is broken into 3 subsections:

-- **Disk IO, throughput and queue depth metrics** - These metrics allow you to see the storage performance from the perspective of a disk and a virtual machine.
+- **Disk IO, throughput, queue depth and latency metrics** - These metrics allow you to see the storage performance from the perspective of a disk and a virtual machine.
- **Disk bursting metrics** - These metrics provide observability into our [bursting](disk-bursting.md) feature on our premium disks.
- **Storage IO utilization metrics** - These metrics help diagnose bottlenecks in your storage performance with disks.

All metrics are emitted every minute, except for the bursting credit percentage metric, which is emitted every 5 minutes.
-## Disk IO, throughput and queue depth metrics
-The following metrics are available to get insight on VM and Disk IO, throughput, and queue depth performance:
+## Disk IO, throughput, queue depth and latency metrics
+The following metrics are available to get insight on VM and disk IO, throughput, queue depth, and latency performance:
+- **OS Disk Latency (Preview)**: The average time to complete IOs during the monitoring period for the OS disk. Values are in milliseconds.
- **OS Disk Queue Depth**: The number of current outstanding IO requests that are waiting to be read from or written to the OS disk.
- **OS Disk Read Bytes/Sec**: The number of bytes that are read in a second from the OS disk. If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of bytes read from the cache.
- **OS Disk Read Operations/Sec**: The number of input operations that are read in a second from the OS disk. If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of IOPs read from the cache.
- **OS Disk Write Bytes/Sec**: The number of bytes that are written in a second from the OS disk.
- **OS Disk Write Operations/Sec**: The number of output operations that are written in a second from the OS disk.
+- **Data Disk Latency (Preview)**: The average time to complete IOs during the monitoring period for the data disk. Values are in milliseconds.
- **Data Disk Queue Depth**: The number of current outstanding IO requests that are waiting to be read from or written to the data disk(s).
- **Data Disk Read Bytes/Sec**: The number of bytes that are read in a second from the data disk(s). If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of bytes read from the cache.
- **Data Disk Read Operations/Sec**: The number of input operations that are read in a second from data disk(s). If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of IOPs read from the cache.
The following metrics are available to get insight on VM and Disk IO, throughput
- **Disk Read Operations/Sec**: The number of input operations that are read in a second from all disks attached to a VM. If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of IOPs read from the cache.
- **Disk Write Bytes**: The number of bytes that are written in a minute from all disks attached to a VM.
- **Disk Write Operations/Sec**: The number of output operations that are written in a second from all disks attached to a VM.
+- **Temp Disk Latency (Preview)**: The average time to complete IOs during the monitoring period for the temporary disk. Values are in milliseconds.
+- **Temp Disk Queue Depth**: The number of current outstanding IO requests that are waiting to be read from or written to the temporary disk.
+- **Temp Disk Read Bytes/Sec**: The number of bytes that are read in a second from the temporary disk.
+- **Temp Disk Read Operations/Sec**: The number of input operations that are read in a second from the temporary disk.
+- **Temp Disk Write Bytes/Sec**: The number of bytes that are written in a second from the temporary disk.
+- **Temp Disk Write Operations/Sec**: The number of output operations that are written in a second from the temporary disk.
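A minimal sketch of retrieving some of these metrics through the Azure Monitor query library follows; the resource ID is a placeholder, and the metric names are assumptions to check against the list above (the latency metrics are in preview and might not be available for every VM yet).

```python
# Sketch: query per-minute disk metrics for a VM over the last hour.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())
vm_resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

response = client.query_resource(
    vm_resource_id,
    metric_names=["OS Disk Queue Depth", "Data Disk Read Bytes/sec"],   # assumed metric names
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=1),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average)
```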
## Bursting metrics

The following metrics help with observability into our [bursting](disk-bursting.md) feature on our premium disks:
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md
IPv6 for Azure Virtual Network includes the following capabilities:
- Instance-level public IP provides IPv6 Internet connectivity directly to individual VMs.

-- [Add IPv6 to Existing IPv4-only deployments](../../load-balancer/ipv6-add-to-existing-vnet-powershell.md)- this feature enables you to easily add IPv6 connectivity to existing IPv4-only deployments without the need to recreate deployments. The IPv4 network traffic is unaffected during this process so depending on your application and OS you may be able to add IPv6 even to live services.
+- [Add IPv6 to Existing IPv4-only deployments](../../load-balancer/ipv6-add-to-existing-vnet-powershell.md)- this feature enables you to easily add IPv6 connectivity to existing IPv4-only deployments without the need to recreate deployments. The IPv4 network traffic is unaffected during this process so depending on your application and OS you might be able to add IPv6 even to live services.
- Let Internet clients seamlessly access your dual stack application using their protocol of choice with Azure DNS support for IPv6 (AAAA) records.
The current IPv6 for Azure Virtual Network release has the following limitations
- Dual-stack configurations that use floating IP can only be used with public load balancers, not internal load balancers.

-- Application Gateway v2 doesn't currently support IPv6. It can operate in a dual stack virtual network using only IPv4, but the gateway subnet must be IPv4-only. Application Gateway v1 doesn't support dual stack virtual networks.
+- Support for IPv6 in Application Gateway v2 is currently in public preview. For more information, see the [How to configure IPv6 Application Gateway](../../application-gateway/ipv6-application-gateway-portal.md) guides. Application Gateway v1 doesn't support a dual stack frontend.
- The Azure platform (AKS, etc.) doesn't support IPv6 communication for Containers.